category stringclasses 191 values | search_query stringclasses 434 values | search_type stringclasses 2 values | search_engine_input stringclasses 748 values | url stringlengths 22–468 | title stringlengths 1–77 | text_raw stringlengths 1.17k–459k | text_window stringlengths 545–2.63k | stance stringclasses 2 values |
---|---|---|---|---|---|---|---|---|
Classical Music
|
Was Bach blind when he composed his final pieces?
|
yes_statement
|
"bach" was "blind" when he "composed" his "final" "pieces".. "bach" "composed" his "final" "pieces" while being "blind".
|
https://ifes.smsd.us/faculty_directory/de_leigh_wilson/composer_of_the_month/johann_sebastian_bach
|
Johann Sebastian Bach - Iron Forge Elementary School
|
Johann Sebastian Bach was born in Eisenach, Germany, on March 21, 1685. He came from a family of musicians, stretching back several generations. His father, Johann Ambrosius, worked as the town musician in Eisenach, and it is believed that he taught young Johann to play the violin.
At the age of seven, Bach went to school where he received religious instruction and studied Latin and other subjects. His Lutheran faith would influence his later musical works. By the time he turned 10, Bach found himself an orphan after the death of both of his parents. His older brother Johann Christoph, a church organist, took him in. Johann Christoph provided some further musical instruction for his younger brother. Bach stayed with his brother until he was 15.
Bach had a beautiful soprano singing voice, which helped him land a place at a school in Lüneburg. Sometime after his arrival, his voice changed and Bach switched to playing the violin and the harpsichord. Bach was greatly influenced by a local organist named George Böhm. In 1703, he landed his first job as a musician at the court of Duke Johann Ernst in Weimar. There he was a jack-of-all-trades, serving as a violinist and at times, filling in for the official organist. He was also becoming an expert at repairing organs.
Early Career
Bach had a growing reputation as a great performer, and it was his great technical skill that landed him the position of organist at the New Church in Arnstadt. He was responsible for providing music for religious services and special events as well as giving music instruction. Bach officially received a few weeks' leave from the church when he decided to walk over 200 miles to Lübeck to hear famed organist Dietrich Buxtehude and extended his stay without informing anyone back in Arnstadt.
In 1707, Bach was glad to leave Arnstadt for an organist position at the Church of St. Blaise in Mühlhausen. This move, however, did not turn out as well as he had planned. Bach's musical style clashed with the church's pastor. Bach created complex arrangements and had a fondness for weaving together different melodic lines. His pastor believed that church music needed to be simple.
Working for Royalty
After a year in Mühlhausen, Bach won the post of organist at the court of the Duke Wilhelm Ernst in Weimar. He wrote many church cantatas and some of his best compositions for the organ while working for the duke. During his time at Weimar, Bach wrote "Toccata and Fugue in D Minor," one of his most popular pieces for the organ. He also composed the cantata "Herz und Mund und Tat," or Heart and Mouth and Deed. One section of this cantata, called "Jesu, Joy of Man's Desiring" in English, is especially famous.
In 1717, Bach accepted a position with Prince Leopold of Anhalt-Cöthen. But Duke Wilhelm Ernst had no interest in letting Bach go and even imprisoned him for a month when he tried to leave! In early December, Bach was released and allowed to go to Cöthen. Prince Leopold had a passion for music. He played the violin and often bought musical scores while traveling abroad.
While at Cöthen, Bach devoted much of his time to instrumental music, composing concertos for orchestras, dance suites and sonatas for multiple instruments. He also wrote pieces for solo instruments, including some of his finest violin works. His secular compositions still reflected his deep commitment to his faith with Bach often writing the initials I.N.J. for the Latin In Nomine Jesu, or "in the name of Jesus," on his sheet music.
In tribute to the Duke of Brandenburg, Bach created a series of orchestra concertos, which became known as the "Brandenburg Concertos" in 1721. These concertos are considered to be some of Bach's greatest works. That same year, Prince Leopold got married, and his new bride discouraged the prince's interest in music. Bach completed the first book of "The Well-Tempered Clavier" around this time. With students in mind, he put together this collection of keyboard pieces to help them learn certain techniques and methods. Bach had to turn his attentions to finding work when the prince dissolved his orchestra in 1723.
Later Works in Leipzig
After auditioning for a new position in Leipzig, Bach signed a contract to become the new organist and teacher at St. Thomas Church. He was required to teach at the Thomas School as a part of his position as well. With new music needed for services each week, Bach threw himself into writing cantatas. The "Christmas Oratorio," for example, is a series of six cantatas that reflect on the holiday.
One of his later religious masterworks is "Mass in B minor." He had developed sections of it, known as Kyrie and Gloria, in 1733, which were presented to the Elector of Saxony. Bach did not finish the composition, a musical version of a traditional Latin mass, until 1749. The complete work was not performed during his lifetime.
Final Years
By 1740, Bach was struggling with his eyesight, but he continued to work despite his vision problems. He was even well enough to travel and perform, visiting Frederick the Great, the king of Prussia in 1747. By 1749, Bach started a new composition called "The Art of Fugue," but he did not complete it. He tried to fix his failing sight by having surgery the following year, but the operation ended up leaving him completely blind. Later that year, Bach suffered a stroke. He died in Leipzig on July 28, 1750.
During his lifetime, Bach was better known as an organist than a composer. Few of his works were even published during his lifetime. Still, Bach's musical compositions were admired by those who followed in his footsteps, including Amadeus Mozart and Ludwig van Beethoven. His reputation received a substantial boost in 1829 when German composer Felix Mendelssohn reintroduced Bach's "Passion According to St. Matthew." Bach is considered to be the best composer of the Baroque era, and one of the most important figures in classical music in general.
Personal Life
Little personal correspondence has survived to provide a full picture of Bach as a person. But the records do shed some light on his character. Bach was devoted to his family. In 1706, he married his cousin Maria Barbara Bach. The couple had seven children together, some of whom died as infants. Maria died in 1720 while Bach was traveling with Prince Leopold. The following year, Bach married a singer named Anna Magdalena Wülcken. They had thirteen children, more than half of whom died in childhood.
Bach clearly shared his love of music with his children. From his first marriage, Wilhelm Friedemann Bach and Carl Philipp Emanuel Bach became composers and musicians. Johann Christoph Friedrich Bach and Johann Christian Bach, sons from his second marriage, also enjoyed musical success.
|
With new music needed for services each week, Bach threw himself into writing cantatas. The "Christmas Oratorio," for example, is a series of six cantatas that reflect on the holiday.
One of his later religious masterworks is "Mass in B minor." He had developed sections of it, known as Kyrie and Gloria, in 1733, which were presented to the Elector of Saxony. Bach did not finish the composition, a musical version of a traditional Latin mass, until 1749. The complete work was not performed during his lifetime.
Final Years
By 1740, Bach was struggling with his eyesight, but he continued to work despite his vision problems. He was even well enough to travel and perform, visiting Frederick the Great, the king of Prussia in 1747. By 1749, Bach started a new composition called "The Art of Fugue," but he did not complete it. He tried to fix his failing sight by having surgery the following year, but the operation ended up leaving him completely blind. Later that year, Bach suffered a stroke. He died in Leipzig on July 28, 1750.
During his lifetime, Bach was better known as an organist than a composer. Few of his works were even published during his lifetime. Still, Bach's musical compositions were admired by those who followed in his footsteps, including Amadeus Mozart and Ludwig van Beethoven. His reputation received a substantial boost in 1829 when German composer Felix Mendelssohn reintroduced Bach's "Passion According to St. Matthew." Bach is considered to be the best composer of the Baroque era, and one of the most important figures in classical music in general.
Personal Life
Little personal correspondence has survived to provide a full picture of Bach as a person. But the records do shed some light on his character. Bach was devoted to his family. In 1706, he married his cousin Maria Barbara Bach. The couple had seven children together, some of whom died as infants. Maria died in 1720 while Bach was traveling with Prince Leopold.
|
yes
|
Classical Music
|
Was Bach blind when he composed his final pieces?
|
yes_statement
|
"bach" was "blind" when he "composed" his "final" "pieces".. "bach" "composed" his "final" "pieces" while being "blind".
|
https://www.kentuckybachchoir.org/about-j-s-bach
|
J.S. Bach | kybach
|
BIOGRAPHY
A magnificent baroque-era composer, Johann Sebastian Bach is revered through the ages for his work's musical complexities and stylistic innovations.
Born on March 31, 1685 (N.S.), in Eisenach, Thuringia, Germany, Johann Sebastian Bach had a prestigious musical lineage and took on various organist positions during the early 18th century, creating famous compositions like "Toccata and Fugue in D minor." Some of his best-known compositions are the "Mass in B Minor," the "Brandenburg Concertos" and "The Well-Tempered Clavier." Bach died in Leipzig, Germany, on July 28, 1750. Today, he is considered one of the greatest Western composers of all time.
Childhood
Born in Eisenach, Thuringia, Germany, on March 31, 1685 (N.S.) / March 21, 1685 (O.S.), Johann Sebastian Bach came from a family of musicians, stretching back several generations. His father, Johann Ambrosius, worked as the town musician in Eisenach, and it is believed that he taught young Johann to play the violin.
At the age of seven, Bach went to school where he received religious instruction and studied Latin and other subjects. His Lutheran faith would influence his later musical works. By the time he turned 10, Bach found himself an orphan after the death of both of his parents. His older brother Johann Christoph, a church organist in Ohrdruf, took him in. Johann Christoph provided some further musical instruction for his younger brother and enrolled him in a local school. Bach stayed with his brother's family until he was 15.
Bach had a beautiful soprano singing voice, which helped him land a place at a school in Lüneburg. Sometime after his arrival, his voice changed and Bach switched to playing the violin and the harpsichord. Bach was greatly influenced by a local organist named George Böhm. In 1703, he landed his first job as a musician at the court of Duke Johann Ernst in Weimar. There he was a jack-of-all-trades, serving as a violinist and at times, filling in for the official organist.
Early Career
Bach had a growing reputation as a great performer, and it was his great technical skill that landed him the position of organist at the New Church in Arnstadt. He was responsible for providing music for religious services and special events as well as giving music instruction. An independent and sometimes arrogant young man, Bach did not get along well with his students and was scolded by church officials for not rehearsing them frequently enough.
Bach did not help his situation when he disappeared for several months in 1705. While he only officially received a few weeks' leave from the church, he traveled to Lübeck to hear famed organist Dietrich Buxtehude and extended his stay without informing anyone back in Arnstadt.
In 1707, Bach was glad to leave Arnstadt for an organist position at the Church of St. Blaise in Mühlhausen. This move, however, did not turn out as well as he had planned. Bach's musical style clashed with the church's pastor. Bach created complex arrangements and had a fondness for weaving together different melodic lines. His pastor believed that church music needed to be simple. One of Bach's most famous works from this time is the cantata "Gottes Zeit ist die allerbeste Zeit," also known as "Actus Tragicus."
Working for Royalty
After a year in Mühlhausen, Bach won the post of organist at the court of the Duke Wilhelm Ernst in Weimar. He wrote many church cantatas and some of his best compositions for the organ while working for the duke. During his time at Weimar, Bach wrote "Toccata and Fugue in D Minor," one of his most popular pieces for the organ. He also composed the cantata "Herz und Mund und Tat," or Heart and Mouth and Deed. One section of this cantata, called "Jesu, Joy of Man's Desiring" in English, is especially famous.
In 1717, Bach accepted a position with Prince Leopold of Anhalt-Cöthen. But Duke Wilhelm Ernst had no interest in letting Bach go and even imprisoned him for several weeks when he tried to leave. In early December, Bach was released and allowed to go to Cöthen. Prince Leopold had a passion for music. He played the violin and often bought musical scores while traveling abroad.
While at Cöthen, Bach devoted much of his time to instrumental music, composing concertos for orchestras, dance suites and sonatas for multiple instruments. He also wrote pieces for solo instruments, including some of his finest violin works. His secular compositions still reflected his deep commitment to his faith with Bach often writing the initials I.N.J. for the Latin In Nomine Jesu, or "in the name of Jesus," on his sheet music.
In tribute to the Duke of Brandenburg, Bach created a series of orchestra concertos, which became known as the "Brandenburg Concertos," in 1721. These concertos are considered to be some of Bach's greatest works. That same year, Prince Leopold got married, and his new bride discouraged the prince's interest in music. Bach completed the first book of "The Well-Tempered Clavier" around this time. With students in mind, he put together this collection of keyboard pieces to help them learn certain techniques and methods. Bach had to turn his attentions to finding work when the prince dissolved his orchestra in 1723.
Later Works in Leipzig
After auditioning for a new position in Leipzig, Bach signed a contract to become the new organist and teacher at St. Thomas Church. He was required to teach at the Thomas School as a part of his position as well. With new music needed for services each week, Bach threw himself into writing cantatas. The "Christmas Oratorio," for example, is a series of six cantatas that reflect on the holiday.
Bach also created musical interpretations of the Bible using choruses, arias and recitatives. These works are referred to as his "Passions," the most famous of which is "Passion According to St. Matthew." This musical composition, written in 1727 or 1729, tells the story of chapters 26 and 27 of the Gospel of Matthew. The piece was performed as part of a Good Friday service.
One of his later religious masterworks is "Mass in B minor." He had developed sections of it, known as Kyrie and Gloria, in 1733, which were presented to the Elector of Saxony. Bach did not finish the composition, a musical version of a traditional Latin mass, until 1749. The complete work was not performed during his lifetime.
Final Years
By 1740, Bach was struggling with his eyesight, but he continued to work despite his vision problems. He was even well enough to travel and perform, visiting Frederick the Great, the king of Prussia in 1747. He played for the king, making up a new composition on the spot. Back in Leipzig, Bach refined the piece and gave Frederick a set of fugues called "Musical Offering."
In 1749, Bach started a new composition called "The Art of Fugue," but he did not complete it. He tried to fix his failing sight by having surgery the following year, but the operation ended up leaving him completely blind. Later that year, Bach suffered a stroke. He died in Leipzig on July 28, 1750.
During his lifetime, Bach was better known as an organist than a composer. Few of his works were even published during his lifetime. Still, Bach's musical compositions were admired by those who followed in his footsteps, including Amadeus Mozart and Ludwig van Beethoven. His reputation received a substantial boost in 1829 when German composer Felix Mendelssohn reintroduced Bach's "Passion According to St. Matthew."
Musically, Bach was a master at invoking and maintaining different emotions. He was an expert storyteller as well, often using melody to suggest actions or events. In his works, Bach drew from different music styles from across Europe, including French and Italian. He used counterpoint, the playing of multiple melodies simultaneously, and fugue, the repetition of a melody with slight variations, to create richly detailed compositions. He is considered to be the best composer of the Baroque era, and one of the most important figures in classical music in general.
Personal Life
Little personal correspondence has survived to provide a full picture of Bach as a person. But the records do shed some light on his character. Bach was devoted to his family. In 1706, he married his cousin Maria Barbara Bach. The couple had seven children together, some of whom died as infants. Maria died in 1720 while Bach was traveling with Prince Leopold. The following year, Bach married a singer named Anna Magdalena Wülcken. They had thirteen children, more than half of whom died in childhood.
Bach clearly shared his love of music with his children. From his first marriage, Wilhelm Friedemann Bach and Carl Philipp Emanuel Bach became composers and musicians. Johann Christoph Friedrich Bach and Johann Christian Bach, sons from his second marriage, also enjoyed musical success. [The above article is attributed to the editors of Biography.com.]
|
He had developed sections of it, known as Kyrie and Gloria, in 1733, which were presented to the Elector of Saxony. Bach did not finish the composition, a musical version of a traditional Latin mass, until 1749. The complete work was not performed during his lifetime.
Final Years
By 1740, Bach was struggling with his eyesight, but he continued to work despite his vision problems. He was even well enough to travel and perform, visiting Frederick the Great, the king of Prussia in 1747. He played for the king, making up a new composition on the spot. Back in Leipzig, Bach refined the piece and gave Frederick a set of fugues called "Musical Offering."
In 1749, Bach started a new composition called "The Art of Fugue," but he did not complete it. He tried to fix his failing sight by having surgery the following year, but the operation ended up leaving him completely blind. Later that year, Bach suffered a stroke. He died in Leipzig on July 28, 1750.
During his lifetime, Bach was better known as an organist than a composer. Few of his works were even published during his lifetime. Still, Bach's musical compositions were admired by those who followed in his footsteps, including Amadeus Mozart and Ludwig van Beethoven. His reputation received a substantial boost in 1829 when German composer Felix Mendelssohn reintroduced Bach's "Passion According to St. Matthew."
Musically, Bach was a master at invoking and maintaining different emotions. He was an expert storyteller as well, often using melody to suggest actions or events. In his works, Bach drew from different music styles from across Europe, including French and Italian. He used counterpoint, the playing of multiple melodies simultaneously, and fugue, the repetition of a melody with slight variations, to create richly detailed compositions. He is considered to be the best composer of the Baroque era, and one of the most important figures in classical music in general.
|
yes
|
Classical Music
|
Was Bach blind when he composed his final pieces?
|
yes_statement
|
"bach" was "blind" when he "composed" his "final" "pieces".. "bach" "composed" his "final" "pieces" while being "blind".
|
https://www.biography.com/musicians/johann-sebastian-bach
|
Johann Sebastian Bach - Facts, Children & Compositions
|
Johann Sebastian Bach
A magnificent baroque-era composer, Johann Sebastian Bach is revered through the ages for his work's musical complexities and stylistic innovations.
Updated: Sep 15, 2022
Stock Montage/Getty Images
(1685-1750)
Who Was Johann Sebastian Bach?
Johann Sebastian Bach had a prestigious musical lineage and took on various organist positions during the early 18th century, creating famous compositions like "Toccata and Fugue in D minor." Some of his best-known compositions are the "Mass in B Minor," the "Brandenburg Concertos" and "The Well-Tempered Clavier." Bach died in Leipzig, Germany, on July 28, 1750. Today, he is considered one of the greatest Western composers of all time.
Childhood
Born in Eisenach, Thuringia, Germany, on March 31, 1685 (N.S.) / March 21, 1685 (O.S.), Johann Sebastian Bach came from a family of musicians, stretching back several generations. His father, Johann Ambrosius, worked as the town musician in Eisenach, and it is believed that he taught young Johann to play the violin.
At the age of seven, Bach went to school where he received religious instruction and studied Latin and other subjects. His Lutheran faith would influence his later musical works. By the time he turned 10, Bach found himself an orphan after the death of both of his parents. His older brother Johann Christoph, a church organist in Ohrdruf, took him in. Johann Christoph provided some further musical instruction for his younger brother and enrolled him in a local school. Bach stayed with his brother's family until he was 15.
Bach had a beautiful soprano singing voice, which helped him land a place at a school in Lüneburg. Sometime after his arrival, his voice changed and Bach switched to playing the violin and the harpsichord. Bach was greatly influenced by a local organist named George Böhm. In 1703, he landed his first job as a musician at the court of Duke Johann Ernst in Weimar. There he was a jack-of-all-trades, serving as a violinist and at times, filling in for the official organist.
Early Career
Bach had a growing reputation as a great performer, and it was his great technical skill that landed him the position of organist at the New Church in Arnstadt. He was responsible for providing music for religious services and special events as well as giving music instruction. An independent and sometimes arrogant young man, Bach did not get along well with his students and was scolded by church officials for not rehearsing them frequently enough.
Bach did not help his situation when he disappeared for several months in 1705. While he only officially received a few weeks' leave from the church, he traveled to Lübeck to hear famed organist Dietrich Buxtehude and extended his stay without informing anyone back in Arnstadt.
In 1707, Bach was glad to leave Arnstadt for an organist position at the Church of St. Blaise in Mühlhausen. This move, however, did not turn out as well as he had planned. Bach's musical style clashed with the church's pastor. Bach created complex arrangements and had a fondness for weaving together different melodic lines. His pastor believed that church music needed to be simple. One of Bach's most famous works from this time is the cantata "Gottes Zeit ist die allerbeste Zeit," also known as "Actus Tragicus."
Working for Royalty
After a year in Mühlhausen, Bach won the post of organist at the court of the Duke Wilhelm Ernst in Weimar. He wrote many church cantatas and some of his best compositions for the organ while working for the duke. During his time at Weimar, Bach wrote "Toccata and Fugue in D Minor," one of his most popular pieces for the organ. He also composed the cantata "Herz und Mund und Tat," or Heart and Mouth and Deed. One section of this cantata, called "Jesu, Joy of Man's Desiring" in English, is especially famous.
In 1717, Bach accepted a position with Prince Leopold of Anhalt-Cöthen. But Duke Wilhelm Ernst had no interest in letting Bach go and even imprisoned him for several weeks when he tried to leave. In early December, Bach was released and allowed to go to Cöthen. Prince Leopold had a passion for music. He played the violin and often bought musical scores while traveling abroad.
While at Cöthen, Bach devoted much of his time to instrumental music, composing concertos for orchestras, dance suites and sonatas for multiple instruments. He also wrote pieces for solo instruments, including some of his finest violin works. His secular compositions still reflected his deep commitment to his faith with Bach often writing the initials I.N.J. for the Latin In Nomine Jesu, or "in the name of Jesus," on his sheet music.
In tribute to the Duke of Brandenburg, Bach created a series of orchestra concertos, which became known as the "Brandenburg Concertos," in 1721. These concertos are considered to be some of Bach's greatest works. That same year, Prince Leopold got married, and his new bride discouraged the prince's interest in music. Bach completed the first book of "The Well-Tempered Clavier" around this time. With students in mind, he put together this collection of keyboard pieces to help them learn certain techniques and methods. Bach had to turn his attentions to finding work when the prince dissolved his orchestra in 1723.
Later Works in Leipzig
After auditioning for a new position in Leipzig, Bach signed a contract to become the new organist and teacher at St. Thomas Church. He was required to teach at the Thomas School as a part of his position as well. With new music needed for services each week, Bach threw himself into writing cantatas. The "Christmas Oratorio," for example, is a series of six cantatas that reflect on the holiday.
Bach also created musical interpretations of the Bible using choruses, arias and recitatives. These works are referred to as his "Passions," the most famous of which is "Passion According to St. Matthew." This musical composition, written in 1727 or 1729, tells the story of chapters 26 and 27 of the Gospel of Matthew. The piece was performed as part of a Good Friday service.
One of his later religious masterworks is "Mass in B minor." He had developed sections of it, known as Kyrie and Gloria, in 1733, which were presented to the Elector of Saxony. Bach did not finish the composition, a musical version of a traditional Latin mass, until 1749. The complete work was not performed during his lifetime.
Final Years
By 1740, Bach was struggling with his eyesight, but he continued to work despite his vision problems. He was even well enough to travel and perform, visiting Frederick the Great, the king of Prussia in 1747. He played for the king, making up a new composition on the spot. Back in Leipzig, Bach refined the piece and gave Frederick a set of fugues called "Musical Offering."
In 1749, Bach started a new composition called "The Art of Fugue," but he did not complete it. He tried to fix his failing sight by having surgery the following year, but the operation ended up leaving him completely blind. Later that year, Bach suffered a stroke. He died in Leipzig on July 28, 1750.
During his lifetime, Bach was better known as an organist than a composer. Few of his works were even published during his lifetime. Still, Bach's musical compositions were admired by those who followed in his footsteps, including Amadeus Mozart and Ludwig van Beethoven. His reputation received a substantial boost in 1829 when German composer Felix Mendelssohn reintroduced Bach's "Passion According to St. Matthew."
Musically, Bach was a master at invoking and maintaining different emotions. He was an expert storyteller as well, often using melody to suggest actions or events. In his works, Bach drew from different music styles from across Europe, including French and Italian. He used counterpoint, the playing of multiple melodies simultaneously, and fugue, the repetition of a melody with slight variations, to create richly detailed compositions. He is considered to be the best composer of the Baroque era, and one of the most important figures in classical music in general.
Personal Life
Little personal correspondence has survived to provide a full picture of Bach as a person. But the records do shed some light on his character. Bach was devoted to his family. In 1706, he married his cousin Maria Barbara Bach. The couple had seven children together, some of whom died as infants. Maria died in 1720 while Bach was traveling with Prince Leopold. The following year, Bach married a singer named Anna Magdalena Wülcken. They had thirteen children, more than half of whom died in childhood.
Bach clearly shared his love of music with his children. From his first marriage, Wilhelm Friedemann Bach and Carl Philipp Emanuel Bach became composers and musicians. Johann Christoph Friedrich Bach and Johann Christian Bach, sons from his second marriage, also enjoyed musical success.
QUICK FACTS
Name: Johann Sebastian Bach
Birth Year: 1685
Birth date: March 31, 1685
Birth City: Eisenach, Thuringia
Birth Country: Germany
Gender: Male
Best Known For: A magnificent baroque-era composer, Johann Sebastian Bach is revered through the ages for his work's musical complexities and stylistic innovations.
Industries
Classical
Astrological Sign: Aries
Schools
St. Michael's School (Lüneburg, Germany)
Nationalities
German
Death Year: 1750
Death date: July 28, 1750
Death City: Leipzig
Death Country: Germany
|
Bach did not finish the composition, a musical version of a traditional Latin mass, until 1749. The complete work was not performed during his lifetime.
Final Years
By 1740, Bach was struggling with his eyesight, but he continued to work despite his vision problems. He was even well enough to travel and perform, visiting Frederick the Great, the king of Prussia in 1747. He played for the king, making up a new composition on the spot. Back in Leipzig, Bach refined the piece and gave Frederick a set of fugues called "Musical Offering."
In 1749, Bach started a new composition called "The Art of Fugue," but he did not complete it. He tried to fix his failing sight by having surgery the following year, but the operation ended up leaving him completely blind. Later that year, Bach suffered a stroke. He died in Leipzig on July 28, 1750.
During his lifetime, Bach was better known as an organist than a composer. Few of his works were even published during his lifetime. Still, Bach's musical compositions were admired by those who followed in his footsteps, including Amadeus Mozart and Ludwig van Beethoven. His reputation received a substantial boost in 1829 when German composer Felix Mendelssohn reintroduced Bach's "Passion According to St. Matthew."
Musically, Bach was a master at invoking and maintaining different emotions. He was an expert storyteller as well, often using melody to suggest actions or events. In his works, Bach drew from different music styles from across Europe, including French and Italian. He used counterpoint, the playing of multiple melodies simultaneously, and fugue, the repetition of a melody with slight variations, to create richly detailed compositions. He is considered to be the best composer of the Baroque era, and one of the most important figures in classical music in general.
Personal Life
Little personal correspondence has survived to provide a full picture of Bach as a person. But the records do shed some light on his character. Bach was devoted to his family.
|
yes
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://whc.unesco.org/en/list/1426/
|
Decorated Cave of Pont d'Arc, known as Grotte Chauvet-Pont d'Arc ...
|
Located in a limestone plateau of the Ardèche River in southern France, the property contains the earliest-known and best-preserved figurative drawings in the world, dating back as early as the Aurignacian period (30,000–32,000 BP), making it an exceptional testimony of prehistoric art. The cave was closed off by a rock fall approximately 20,000 years BP and remained sealed until its discovery in 1994, which helped to keep it in pristine condition. Over 1,000 images have so far been inventoried on its walls, combining a variety of anthropomorphic and animal motifs. Of exceptional aesthetic quality, they demonstrate a range of techniques including the skilful use of shading, combinations of paint and engraving, anatomical precision, three-dimensionality and movement. They include several dangerous animal species difficult to observe at that time, such as mammoth, bear, cave lion, rhino, bison and auroch, as well as 4,000 inventoried remains of prehistoric fauna and a variety of human footprints.
Outstanding Universal Value
The decorated cave of Pont d’Arc, known as Grotte Chauvet-Pont d’Arc is located in a limestone plateau of the meandering Ardèche River in southern France, and extends to an area of approximately 8,500 square meters. It contains the earliest known pictorial drawings, carbon-dated to as early as the Aurignacian period (30,000 to 32,000 BP). The cave was closed off by a rock fall approximately 20,000 years BP and remained sealed until its rediscovery in 1994. It contains more than 1,000 drawings, predominantly of animals, including several dangerous species, as well as a large number of archaeological and Palaeolithic vestiges.
The cave contains the best-preserved expressions of artistic creation of the Aurignacian people, constituting an exceptional testimony of prehistoric cave art. In addition to the anthropomorphic depictions, the zoomorphic drawings illustrate an unusual selection of animals, which were difficult to observe or approach at the time. Some of these are uniquely illustrated in Grotte Chauvet. As a result of the extremely stable interior climate over millennia, as well as the absence of natural damaging processes, the drawings and paintings have been preserved in a pristine state of conservation and in exceptional completeness.
Criterion (i): The decorated cave of Pont d’Arc, known as Grotte Chauvet-Pont d’Arc contains the first known expressions of human artistic genius and more than 1,000 drawings of anthropomorphic and zoomorphic motifs of exceptional aesthetic quality have been inventoried. These form a remarkable expression of early human artistic creation of grand excellence and variety, both in motifs and in techniques. The artistic quality is underlined by the skilful use of colours, combinations of paint and engravings, the precision in anatomical representation and the ability to give an impression of volumes and movements.
Criterion (iii): The decorated cave of Pont d’Arc, known as Grotte Chauvet-Pont d’Arc bears a unique and exceptionally well-preserved testimony to the cultural and artistic tradition of the Aurignacian people and to the early development of creative human activity in general. The cave’s seclusion for more than 20 millennia has transmitted an unparalleled testimony of early Aurignacian art, free of post-Aurignacian human intervention or disturbances. The archaeological and paleontological evidence in the cave illustrates like no other cave of the Early Upper Palaeolithic period, the frequentation of caves for cultural and ritual practices.
Integrity
The nominated property comprises the entire subterranean space of the cave of approximately 8,500 square meters and all structurally relevant parts of the limestone plateau above the cave as well as its entrance situation and immediate surroundings. These spaces contain all the attributes of Outstanding Universal Value and the property is of adequate size. Strict preventive conservation policies including access restrictions have allowed for the maintenance of an almost identical situation to the time of discovery. These access restrictions and the continuous monitoring of the climatic conditions will be key factors for the preservation of integrity of the property and for averting potential dangers of human impact.
Authenticity
The authenticity of the property can be demonstrated by its pristine condition and state of conservation, having been sealed off for 23,000 years and carefully treated and access-restricted since its rediscovery. The dating of the finds and drawings has been confirmed by C14 analysis as between 32,000 and 30,000 years BP, and the materials, designs, drawing techniques and traces of workmanship date back to this time. The rock art as well as the archaeological and paleontological vestiges are free of human impact or alterations. The only modification is the installation of completely-reversible, stainless steel bridging elements to allow for access to parts of the cave whilst preventing disturbance of floor traces or finds.
Protection and management requirements
The decorated cave of Pont d’Arc, known as Grotte Chauvet-Pont d’Arc is protected at the highest national level as a historic monument. Likewise, the buffer zone benefits from the highest level of national protection since early 2013. The buffer zone accordingly will not permit future development.
The focus of management is the implementation of a preventive conservation strategy based on constant monitoring and non-intervention. Several monitoring systems have been installed in the cave which form an integral part of these preventive conservation efforts. Any changes in relative humidity and/or the air composition inside the cave may have severe effects on the condition of the drawings and paintings. It is due to this risk that the cave will not be open to the general public, but also that future visits of experts, researchers and conservators will need to be restricted to the absolute minimum necessary. Despite the delicateness of paintings and drawings, no conservation activities have been carried out in the cave and it is intended to retain all paintings and drawings in the fragile but pristine condition in which they were discovered.
The management authorities have implemented a management plan (2012-16), based on strategic objectives, activity fields and concrete actions, which are planned with time frames, institutional responsibilities, budget requirements and quality assurance indicators. The latter will allow for full quality assurance after the cycle of implementation in 2016, following which the management plan will have to be revised for future management processes.
After it became clear that the cave would never be accessible to the general public, the idea of a facsimile reconstruction to provide interpretation and presentation facilities emerged. The Grand Projet Espace de Restitution de la Grotte Chauvet (ERGC) was established, with the aim of creating a facsimile reconstruction of the cave with its paintings and drawings, and a discovery and interpretation area to attract visitors.
|
Outstanding Universal Value
The decorated cave of Pont d’Arc, known as Grotte Chauvet-Pont d’Arc is located in a limestone plateau of the meandering Ardèche River in southern France, and extends to an area of approximately 8,500 square meters. It contains the earliest known pictorial drawings, carbon-dated to as early as the Aurignacian period (30,000 to 32,000 BP). The cave was closed off by a rock fall approximately 20,000 years BP and remained sealed until its rediscovery in 1994. It contains more than 1,000 drawings, predominantly of animals, including several dangerous species, as well as a large number of archaeological and Palaeolithic vestiges.
The cave contains the best-preserved expressions of artistic creation of the Aurignacian people, constituting an exceptional testimony of prehistoric cave art. In addition to the anthropomorphic depictions, the zoomorphic drawings illustrate an unusual selection of animals, which were difficult to observe or approach at the time. Some of these are uniquely illustrated in Grotte Chauvet. As a result of the extremely stable interior climate over millennia, as well as the absence of natural damaging processes, the drawings and paintings have been preserved in a pristine state of conservation and in exceptional completeness.
Criterion (i): The decorated cave of Pont d’Arc, known as Grotte Chauvet-Pont d’Arc contains the first known expressions of human artistic genius and more than 1,000 drawings of anthropomorphic and zoomorphic motifs of exceptional aesthetic quality have been inventoried. These form a remarkable expression of early human artistic creation of grand excellence and variety, both in motifs and in techniques. The artistic quality is underlined by the skilful use of colours, combinations of paint and engravings, the precision in anatomical representation and the ability to give an impression of volumes and movements.
|
yes
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://en.wikipedia.org/wiki/Cave_painting
|
Cave painting - Wikipedia
|
A 2018 study claimed an age of 64,000 years for the oldest examples of non-figurative cave art in the Iberian Peninsula. Represented by three red non-figurative symbols found in the caves of Maltravieso, Ardales and La Pasiega, Spain, these predate the appearance of modern humans in Europe by at least 20,000 years and thus must have been made by Neanderthals rather than modern humans.[8]
In November 2018, scientists reported the discovery of the then-oldest known figurative art painting, over 40,000 (perhaps as old as 52,000) years old, of an unknown animal, in the cave of Lubang Jeriji Saléh on the Indonesian island of Borneo.[9][10] Nevertheless, in December 2019, cave paintings portraying pig hunting within the Maros-Pangkep karst region in Sulawesi were discovered to be even older, with an estimated age of at least 43,900 years. This remarkable finding was recognized as "the oldest known depiction of storytelling and the earliest instance of figurative art in human history."[11][12]
Nearly 350 caves have now been discovered in France and Spain that contain art from prehistoric times. Initially, the age of the paintings had been a contentious issue, since methods like radiocarbon dating can produce misleading results if contaminated by other samples,[13] and caves and rocky overhangs (where parietal art is found) are typically littered with debris from many time periods. But subsequent technology has made it possible to date the paintings by sampling the pigment itself, torch marks on the walls,[14] or the formation of carbonate deposits on top of the paintings.[15] The subject matter can also indicate chronology: for instance, the reindeer depicted in the Spanish cave of Cueva de las Monedas places the drawings in the last Ice Age.
The earliest known European figurative cave paintings are those of Chauvet Cave in France, dating to earlier than 30,000 BC in the Upper Paleolithic according to radiocarbon dating.[17] Some researchers believe the drawings are too advanced for this era and question this age.[18] However, more than 80 radiocarbon dates had been obtained by 2011, with samples taken from torch marks and from the paintings themselves, as well as from animal bones and charcoal found on the cave floor. The radiocarbon dates from these samples show that there were two periods of creation in Chauvet: 35,000 years ago and 30,000 years ago.[19] One of the surprises was that many of the paintings were modified repeatedly over thousands of years, possibly explaining the confusion about finer paintings that seemed to date earlier than cruder ones.[citation needed]
An artistic depiction of a group of rhinoceros was completed in the Chauvet Cave 30,000 to 32,000 years ago.
In 2009, cavers discovered drawings in Coliboaia Cave in Romania, stylistically comparable to those at Chauvet.[20] An initial dating puts the age of an image in the same range as Chauvet: about 32,000 years old.[21]
In Australia, cave paintings have been found on the Arnhem Land plateau showing megafauna which are thought to have been extinct for over 40,000 years, making this site another candidate for oldest known painting; however, the proposed age is dependent on the estimate of the extinction of the species seemingly depicted.[22] Another Australian site, Nawarla Gabarnmang, has charcoal drawings that have been radiocarbon-dated to 28,000 years, making it the oldest site in Australia and among the oldest in the world for which reliable date evidence has been obtained.[23]
Other examples may date as late as the Early Bronze Age, but the well-known Magdalenian style seen at Lascaux in France (c. 15,000 BC) and Altamira in Spain died out about 10,000 BC, coinciding with the advent of the Neolithic period. Some caves probably continued to be painted over a period of several thousands of years.[24]
The next phase of surviving European prehistoric painting, the rock art of the Iberian Mediterranean Basin, was very different, concentrating on large assemblies of smaller and much less detailed figures, with at least as many humans as animals. This was created roughly between 10,000 and 5,500 years ago, and painted in rock shelters under cliffs or shallow caves, in contrast to the recesses of deep caves used in the earlier (and much colder) period. Although individual figures are less naturalistic, they are combined into coherent compositions to a much greater degree. Over this long period, cave art became less naturalistic, moving from detailed, lifelike animal drawings to simpler ones, and then to abstract shapes.
Cave artists use a variety of techniques such as finger tracing, modeling in clay, engravings, bas-relief sculpture, hand stencils, and paintings done in two or three colors. Scholars classify cave art as "Signs" or abstract marks.[25]
The most common subjects in cave paintings are large wild animals, such as bison, horses, aurochs, and deer, and tracings of human hands as well as abstract patterns, called finger flutings. The species found most often were suitable for hunting by humans, but were not necessarily the actual typical prey found in associated deposits of bones; for example, the painters of Lascaux have mainly left reindeer bones, but this species does not appear at all in the cave paintings, where equine species are the most common. Drawings of humans were rare and are usually schematic as opposed to the more detailed and naturalistic images of animal subjects. Kieran D. O'Hara, geologist, suggests in his book Cave Art and Climate Change that climate controlled the themes depicted.[26]
Pigments used include red and yellow ochre, hematite, manganese oxide and charcoal. Sometimes the silhouette of the animal was incised in the rock first, and in some caves all or many of the images are only engraved in this fashion,[citation needed] taking them somewhat out of a strict definition of "cave painting".
Similarly, large animals are also the most common subjects in the many small carved and engraved bone or ivory (less often stone) pieces dating from the same periods. But these include the group of Venus figurines, which with a few incomplete exceptions have no real equivalent in Paleolithic cave paintings.[27] One counterexample is a feminine figure in the Chauvet Cave, as described in an interview with Dominique Baffier in Cave of Forgotten Dreams.[28]
Hand stencils, formed by placing a hand against the wall and covering the surrounding area in pigment, result in the characteristic image of a roughly round area of solid pigment with the uncoloured shape of the hand in the centre; these may then be decorated with dots, dashes, and patterns. Often, these are found in the same caves as other paintings, or may be the only form of painting in a location. Some walls contain many hand stencils. Similar hands are also painted in the usual fashion. A number of hands show a finger wholly or partly missing, for which a number of explanations have been given. Hand images are found in similar forms in Europe, Eastern Asia, Australia, and South America.[29] One site in Baja California features handprints as a prominent motif in its rock art. Archaeological study of this site revealed that, based on the size of the handprints, they most likely belonged to the women of the community. In addition to this, they were likely used during initiation rituals in Chinigchinich religious practices, which were commonly practiced in the Luiseño territory where this site is located.[30]
Another theory, developed by David Lewis-Williams and broadly based on ethnographic studies of contemporary hunter-gatherer societies, is that the paintings were made by paleolithic shamans.[33] The shaman would retreat into the darkness of the caves, enter into a trance state, then paint images of their visions, perhaps with some notion of drawing out power from the cave walls themselves.
R. Dale Guthrie, who has studied both highly artistic and lower quality art and figurines, identifies a wide range of skill and age among the artists. He hypothesizes that the main themes in the paintings and other artifacts (powerful beasts, risky hunting scenes and the representation of women in the Venus figurines) are the work of adolescent males, who constituted a large part of the human population at the time.[34][verification needed] However, in analyzing hand prints and stencils in French and Spanish caves, Dean Snow of Pennsylvania State University has proposed that a proportion of them, including those around the spotted horses in Pech Merle, were of female hands.[35]
Analysis in 2022, led by Bennet Bacon, an amateur archaeologist, along with a team of professional archeologists and psychologists at the University of Durham, including Paul Pettitt and Robert William Kentridge,[36] suggested that lines and dots (and a commonly seen, if unusual, "Y" symbol, which was proposed to mean "to give birth") on upper palaeolithic cave paintings correlated with the mating cycle of animals in a lunar calendar, potentially making them the earliest known evidence of a proto-writing system and explaining one object of many cave paintings.[37]
Rock painting was also performed on cliff faces; but fewer of those have survived because of erosion. One example is the rock paintings of Astuvansalmi (3000–2500 BC) in the Saimaa area of Finland.
When Marcelino Sanz de Sautuola first encountered the Magdalenian paintings of the Cave of Altamira in Cantabria, Spain in 1879, the academics of the time considered them hoaxes. Recent reappraisals and numerous additional discoveries have since demonstrated their authenticity, while at the same time stimulating interest in the artistry and symbolism[41] of Upper Palaeolithic peoples.
In Indonesia the caves in the district of Maros in Sulawesi are famous for their hand prints. About 1,500 negative handprints have also been found in 30 painted caves in the Sangkulirang area of Kalimantan; preliminary dating analysis as of 2005 put their age in the range of 10,000 years old.[43] A 2014 study based on uranium–thorium dating dated a Maros hand stencil to a minimum age of 39,900 years. A painting of a babirusa was dated to at least 35.4 ka, placing it among the oldest known figurative depictions worldwide.[5]
And more recently, in 2021, archaeologists announced the discovery of cave art at least 45,500 years old in Leang Tedongnge cave, Indonesia. According to the journal Science Advances, the cave painting of a warty pig is the earliest evidence of human settlement of the region.[44][45] It has been reported that it is rapidly deteriorating as a result of climate change in the region.[46]
Originating in the Paleolithic period, the rock art found in Khoit Tsenkher Cave, Mongolia, includes symbols and animal forms painted from the walls up to the ceiling.[47] Stags, buffalo, oxen, ibex, lions, Argali sheep, antelopes, camels, elephants, ostriches, and other animal pictorials are present, often forming a palimpsest of overlapping images. The paintings appear brown or red in color, and are stylistically similar to other Paleolithic rock art from around the world but are unlike any other examples in Mongolia.
The Ambadevi rock shelters have the oldest cave paintings in India, dating back 25,000 years. The Bhimbetka rock shelters are dated to about 8,000 BC.[48][49][50][51][52] Similar paintings are found in other parts of India as well. In Tamil Nadu, ancient Paleolithic Cave paintings are found in Kombaikadu, Kilvalai, Settavarai and Nehanurpatti. In Odisha they are found in Yogimatha and Gudahandi. In Karnataka, these paintings are found in Hiregudda near Badami. The most recent paintings, consisting of geometric figures, date to the medieval period.
Executed mainly in red and white with the occasional use of green and yellow, the paintings depict the lives and times of the people who lived in the caves, including scenes of childbirth, communal dancing and drinking, religious rites and burials, as well as indigenous animals.[53]
In 2011, archaeologists found a small rock fragment at Blombos Cave, about 300 km (190 mi) east of Cape Town on the southern cape coastline in South Africa, among spear points and other excavated material. After extensive testing for seven years, it was revealed that the lines drawn on the rock were handmade and from an ochre crayon dating back 73,000 years. This makes it the oldest known rock painting.[55][56]
Significant early cave paintings, executed in ochre, have been found in Kimberley and Kakadu, Australia. Ochre is not an organic material, so carbon dating of these pictures is often impossible. The oldest so far dated at 17,300 years is an ochre painting of a kangaroo in the Kimberley region, which was dated by carbon dating wasp nest material underlying and overlying the painting.[57] Sometimes the approximate date, or at least, an epoch, can be surmised from the painting content, contextual artifacts, or organic material intentionally or inadvertently mixed with the inorganic ochre paint, including torch soot.[14]
A red ochre painting, discovered at the centre of the Arnhem Land Plateau, depicts two emu-like birds with their necks outstretched. They have been identified by a palaeontologist as depicting the megafauna species Genyornis, giant birds thought to have become extinct more than 40,000 years ago; however, this evidence is inconclusive for dating. It may suggest that Genyornis became extinct at a later date than previously determined.[22]
Rock art near Qohaito appears to indicate habitation in the area since the fifth millennium BC, while the town is known to have survived to the sixth century AD. Mount Emba Soira, Eritrea's highest mountain, lies near the site, as does a small successor village. Many of the rock art sites are found together with evidence of prehistoric stone tools, suggesting that the art could predate the widely presumed pastoralist and domestication events that occurred 5000–4000 years ago.[62][63]
In 2002, a French archaeological team discovered the Laas Geel cave paintings on the outskirts of Hargeisa in Somaliland. Dating back around 5,000 years, the paintings depict both wild animals and decorated cows. They also feature herders, who are believed to be the creators of the rock art.[64] In 2008, Somali archaeologists announced the discovery of other cave paintings in Dhambalin region, which the researchers suggest includes one of the earliest known depictions of a hunter on horseback. The rock art is dated to 1000 to 3000 BC.[65][66]
Additionally, between the towns of Las Khorey and El Ayo in Karinhegane is a site of numerous cave paintings of real and mythical animals. Each painting has an inscription below it; collectively, these have been estimated to be around 2,500 years old.[67][68] Karinhegane's rock art is in the same distinctive style as the Laas Geel and Dhambalin cave paintings.[69][70] Around 25 miles from Las Khorey is found Gelweita, another key rock art site.[68]
Many cave paintings are found in the Tassili n'Ajjer mountains in southeast Algeria. A UNESCO World Heritage Site, the rock art there was first discovered in 1933 and has since yielded 15,000 engravings and drawings that record the animal migrations, climatic shifts, and changes in human habitation patterns in this part of the Sahara from 6000 BC to the late classical period.[72] Other cave paintings are found at the Akakus, Mesak Settafet and Tadrart in Libya, and in other Saharan regions including the Aïr Mountains in Niger and the Tibesti Mountains in Chad.
In 2020, a limestone cave decorated with scenes of animals such as donkeys, camels, deer, mules and mountain goats was uncovered in the Wadi Al-Zulma area by an archaeological mission from the Tourism and Antiquities Ministry. The cave is 15 meters deep and 20 meters high.[73][74]
At uKhahlamba/Drakensberg Park, South Africa, paintings by the San people, who settled in the area some 8,000 years ago, are now thought to be about 3,000 years old. They depict animals and humans and are thought to represent religious beliefs. Human figures are much more common in the rock art of Africa than in that of Europe.[75]
Distinctive monochrome and polychrome cave paintings and murals exist in the mid-peninsula regions of southern Baja California and northern Baja California Sur, consisting of Pre-Columbian paintings of humans, land animals, sea creatures, and abstract designs. These paintings are mostly confined to the sierras of this region, but can also be found in outlying mesas and rock shelters. Recent radiocarbon studies of materials recovered from archaeological deposits in the rock shelters, and of materials in the paintings themselves, suggest that the Great Murals may extend as far back as 7,500 years ago.[76]
Native American tribes have contributed to California's cave art, in both Northern California and Baja California. The Chumash people made the paintings in Swordfish Cave, which takes its name from the swordfish painted on its walls and is a sacred site for the religious and cultural practices of the Chumash. When the cave was threatened with demolition, a conservation effort was begun in cooperation between Vandenberg Air Force Base and the Tribal Elders Council of the Santa Ynez Band of Chumash, and the two parties were able to stabilize and conserve the cave and its art. Earlier studies had reached many conclusions about how the paintings were made, but few about the symbolic value of the rock art and its meaning to the Chumash. Excavation inside the cave gave archaeologists and anthropologists, notably Clayton Lebow, Douglas Harrow, and Rebecca McKim, the opportunity to investigate the symbolic meaning of the art. Some of the tools used to make the pictographs were found at the site and were connected to two early occupations of the area, pushing back the known antiquity of rock art on California's Central Coast by more than 2,000 years.[78]
Mexico's National Institute of Anthropology and History (INAH) has recorded over 1,500 rock art related archaeological monuments in Baja California. A little under 300 of the sites are connected to Native American tribes. Of these roughly 300 sites, 65% have paintings, 24% have petroglyphs, 10% have both paintings and petroglyphs, and 1% have geoglyphs. Five of the sites in Baja California show hand designs or hand paintings, spread across the area: Milagro de Guadalupe (23 imprints), Corral de Queno (6 imprints), Rancho Viejo (1 drawing), Piedras Gordas (5 imprints), and Valle Seco (3 imprints).[79]
Serra da Capivara National Park is located in the northeastern Brazilian state of Piauí, between latitudes 8° 26' 50" and 8° 54' 23" south and longitudes 42° 19' 47" and 42° 45' 51" west. It falls within the municipal areas of São Raimundo Nonato, São João do Piauí, Coronel José Dias and Canto do Buriti, and covers an area of 1,291.4 square kilometres (319,000 acres). The area has the largest concentration of prehistoric sites in the Americas, and scientific studies confirm that the Capivara mountain range was densely populated in prehistoric periods.
The hand images are often negative (stencilled). Besides these there are also depictions of human beings, guanacos, rheas, felines and other animals, as well as geometric shapes, zigzag patterns, representations of the sun, and hunting scenes. Similar paintings, though in smaller numbers, can be found in nearby caves. There are also red dots on the ceilings, probably made by dipping hunting bolas in ink and throwing them upward. The colours of the paintings vary from red (made from hematite) to white, black or yellow. The negative hand impressions date to around 550 BC and the positive impressions to around 180 BC, while the hunting drawings are calculated to be more than 10,000 years old.[80] Most of the hands are left hands,[4][81] which suggests that painters held the spraying pipe with their right hand.[82][83][84]
There are rock paintings in caves in Thailand, Malaysia, Indonesia, and Burma. In Thailand, caves and scarps along the Thai-Burmese border, in the Petchabun Range of central Thailand, and overlooking the Mekong River in Nakorn Sawan Province all contain galleries of rock paintings. In Malaysia, the Tambun rock art is dated at 2,000 years old, and the paintings in the Painted Cave at Niah Caves National Park are 1,200 years old. The anthropologist Ivor Hugh Norman Evans visited Malaysia in the early 1920s and found that some of the tribes (especially Negritos) were still producing cave paintings and had added depictions of modern objects, including what are believed to be automobiles.[85] (See prehistoric Malaysia.)
In Indonesia, rock paintings can be found in Sumatra, Kalimantan, Sulawesi, Flores, Timor, Maluku and Papua.[86][87][88]
^ Aubert, M.; et al. (2014). "Pleistocene cave art from Sulawesi, Indonesia". Nature. 514 (7521): 223–227. Bibcode:2014Natur.514..223A. doi:10.1038/nature13422. PMID 25297435. S2CID 2725838. "Using uranium-series dating of coralloid speleothems directly associated with 12 human hand stencils and two figurative animal depictions from seven cave sites in the Maros karsts of Sulawesi, we show that rock art traditions on this Indonesian island are at least compatible in age with the oldest European art. The earliest dated image from Maros, with a minimum age of 39.9 kyr, is now the oldest known hand stencil in the world. In addition, a painting of a babirusa ('pig-deer') made at least 35.4 kyr ago is among the earliest dated figurative depictions worldwide, if not the earliest one. Among the implications, it can now be demonstrated that humans were producing rock art by ~40 kyr ago at opposite ends of the Pleistocene Eurasian world."
^ Jaroff, Leon (1997-06-02). "Etched in Stone". Time. Archived from the original on February 4, 2013. Retrieved 2008-10-07. "Wildlife and humans tend to get equal billing in African rock art. (In the caves of western Europe, by contrast, pictures of animals cover the walls and human figures are rare.) In southern Africa, home to the San, or Bushmen, many of the rock scenes depicting people interpret the rituals and hallucinations of the shamans who still dominate the San culture today. Among the most evocative images are those believed to represent shamans deep in trance: a reclining, antelope-headed man surrounded by imaginary beasts, for example, or an insect-like humanoid covered with wild decorations."
|
"[11][12]
Nearly 350 caves have now been discovered in France and Spain that contain art from prehistoric times. Initially, the age of the paintings had been a contentious issue, since methods like radiocarbon dating can produce misleading results if contaminated by other samples,[13] and caves and rocky overhangs (where parietal art is found) are typically littered with debris from many time periods. But subsequent technology has made it possible to date the paintings by sampling the pigment itself, torch marks on the walls,[14] or the formation of carbonate deposits on top of the paintings.[15] The subject matter can also indicate chronology: for instance, the reindeer depicted in the Spanish cave of Cueva de las Monedas places the drawings in the last Ice Age.
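As a rough editorial illustration (not part of the source article) of why contamination matters so much for very old samples: a conventional radiocarbon age is computed from the fraction F of carbon-14 remaining in a sample, using the Libby mean-life of 8,033 years. The 35,000-year age and 1% contamination below are hypothetical values chosen only for the sketch.

\[ t = -8033 \,\ln F \]
\[ F_{\text{true}} = e^{-35000/8033} \approx 0.0128, \qquad F_{\text{measured}} \approx 0.99(0.0128) + 0.01 \approx 0.0227, \qquad t_{\text{apparent}} = -8033 \,\ln(0.0227) \approx 30{,}400 \text{ years BP} \]

Under these assumptions, mixing just 1% modern carbon into a sample that is really 35,000 years old makes it appear roughly 4,600 years too young, which is why sampling pigment, torch marks, or overlying carbonate directly has to be done with such care.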
The earliest known European figurative cave paintings are those of Chauvet Cave in France, dating to earlier than 30,000 BC in the Upper Paleolithic according to radiocarbon dating.[17] Some researchers believe the drawings are too advanced for this era and question this age.[18] However, more than 80 radiocarbon dates had been obtained by 2011, with samples taken from torch marks and from the paintings themselves, as well as from animal bones and charcoal found on the cave floor. The radiocarbon dates from these samples show that there were two periods of creation in Chauvet: 35,000 years ago and 30,000 years ago.[19] One of the surprises was that many of the paintings were modified repeatedly over thousands of years, possibly explaining the confusion about finer paintings that seemed to date earlier than cruder ones.
An artistic depiction of a group of rhinoceros was completed in the Chauvet Cave 30,000 to 32,000 years ago.
|
yes
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://www.newyorker.com/magazine/2008/06/23/first-impressions
|
First Impressions | The New Yorker
|
First Impressions
A frieze of horses and rhinos near the Chauvet cave’s Megaloceros Gallery, where artists may have gathered to make charcoal for drawing. Chauvet contains the earliest known paintings, from at least thirty-two thousand years ago. Photograph by Jean Clottes / Chauvet Cave Scientific Team
During the Old Stone Age, between thirty-seven thousand and eleven thousand years ago, some of the most remarkable art ever conceived was etched or painted on the walls of caves in southern France and northern Spain. After a visit to Lascaux, in the Dordogne, which was discovered in 1940, Picasso reportedly said to his guide, “They’ve invented everything.” What those first artists invented was a language of signs for which there will never be a Rosetta stone; perspective, a technique that was not rediscovered until the Athenian Golden Age; and a bestiary of such vitality and finesse that, by the flicker of torchlight, the animals seem to surge from the walls, and move across them like figures in a magic-lantern show (in that sense, the artists invented animation). They also thought up the grease lamp—a lump of fat, with a plant wick, placed in a hollow stone—to light their workplace; scaffolds to reach high places; the principles of stencilling and Pointillism; powdered colors, brushes, and stumping cloths; and, more to the point of Picasso’s insight, the very concept of an image. A true artist reimagines that concept with every blank canvas—but not from a void.
Some caves have rock porches that were used for shelter, but there is no evidence of domestic life in their depths. Sizable groups may have visited the chambers closest to the entrance—perhaps for communal rites—and we know from the ubiquitous handprints that were stamped or airbrushed (using the mouth to blow pigment) on the walls that people of both sexes and all ages, even babies, participated in whatever activities took place. Only a few individuals ventured or were permitted into the furthest reaches of a cave—in some cases, walking or crawling for miles. Those intrepid spelunkers explored every surface. If they bypassed certain walls that to us seem just as suitable for decoration as ones they chose, the placement of the art apparently wasn’t capricious. In the course of some twenty-five thousand years, the same animals—primarily bison, stags, aurochs, ibex, horses, and mammoths—recur in similar poses, illustrating an immortal story. For a nomadic people, living at nature’s mercy, it must have been a powerful consolation to know that such a refuge from flux existed.
As the painters were learning to crush hematite, and to sharpen embers of Scotch pine for their charcoal (red and black were their primary colors), the last Neanderthals were still living on the vast steppe that was Europe in the Ice Age, which they’d had to themselves for two hundred millennia, while Homo sapiens were making their leisurely trek out of Africa. No one can say what the encounters between that low-browed, herculean species and their slighter but formidable successors were like. (Paleolithic artists, despite their penchant for naturalism, rarely chose to depict human beings, and then did so with a crudeness that smacks of mockery, leaving us a mirror but no self-reflection.) Their genomes are discrete, so it appears that either the two populations didn’t mate or they couldn’t conceive fertile offspring. In any case, they wouldn’t have needed to contest their boundless hunting grounds. They coexisted for some eight thousand years, until the Neanderthals withdrew or were forced, in dwindling numbers, toward the arid mountains of southern Spain, making Gibraltar a final redoubt. It isn’t known from whom or from what they were retreating (if “retreat” describes their migration), though along the way the arts of the newcomers must have impressed them. Later Neanderthal campsites have yielded some rings and awls carved from ivory, and painted or grooved bones and teeth (nothing of the like predates the arrival of Homo sapiens). The pathos of their workmanship—the attempt to copy something novel and marvellous by the dimming light of their existence—nearly makes you weep. And here, perhaps, the cruel notion that we call fashion, a coded expression of rivalry and desire, was born.
The cave artists were as tall as the average Southern European of today, and well nourished on the teeming game and fish they hunted with flint weapons. They are, genetically, our direct ancestors, although “direct” is a relative term. Since recorded history began, around 3200 B.C., with the invention of writing in the Middle East, there have been some two hundred human generations (if one reckons a new one every twenty-five years). Future discoveries may alter the math, but, as it now stands, forty-five hundred generations separate the earliest Homo sapiens from the earliest cave artists, and between the artists and us another fifteen hundred generations have descended the birth canal, learned to walk upright, mastered speech and the use of tools, reached puberty, reproduced, and died.
Early last April, I set off for the Ardèche, a mountainous region in south-central France where cave networks are a common geological phenomenon (hundreds are known, dozens with ancient artifacts). It was here, a week before Christmas in 1994, that three spelunkers exploring the limestone cliffs above the Pont d’Arc, a natural bridge of awesome beauty and scale which resembles a giant mammoth straddling the river gorge, unearthed a cave that made front-page news. It proved to contain the oldest known paintings in the world—some fifteen to eighteen thousand years older than the friezes at Lascaux and at Altamira, in the Spanish Basque country—and it was named for its chief discoverer, Jean-Marie Chauvet. Unlike the amateur adventurers or lucky bumblers (in the case of Lascaux, a posse of village urchins and their dog) who have fallen, sometimes literally, upon a cave where early Europeans left their cryptic signatures, Chauvet was a professional—a park ranger working for the Ministry of Culture, and the custodian of other prehistoric sites in the region. He and his partners, Christian Hillaire and Éliette Brunel, were aware of the irreparable damage that even a few indelicate footsteps can cause to an environment that has been sealed for eons—posterity has lost whatever precious relics and evidence that the carelessly trampled floors of Lascaux and Altamira, both now sealed to the public, might have yielded.
The cavers were natives of the Ardèche: three old friends with an interest in archeology. Brunel was the smallest, so when they felt an updraft of cool air coming from a recess near the cliff’s ledge—the potential sign of a cavity—they heaved some rocks out of the way, and she squeezed through a tight passage that led to the entrance of a deep shaft. The men followed, and, unfurling a chain ladder, the group descended thirty feet into a soaring grotto with a domed roof whose every surface was blistered or spiked with stalagmites. Where the uneven clay floor had receded, it was littered with calcite accretions—blocks and columns that had broken off—and, in photographs, the wrathful, baroque grandeur of the scene evokes some Biblical act of destruction wreaked upon a temple. As the explorers advanced, moving gingerly, in single file, Brunel suddenly let out a cry: “They have been here!”
The question of who “they” were speaks to a mystery that thinking people of every epoch and place have tried to fathom: who are we? In the century since the modern study of caves began, specialists from at least half a dozen disciplines—archeology, ethnology, ethology, genetics, anthropology, and art history—have tried (and competed) to understand the culture that produced them. The experts tend to fall into two camps: those who can’t resist advancing a theory about the art, and those who believe that there isn’t, and never will be, enough evidence to support one. Jean Clottes, the celebrated prehistorian and prolific author who assembled the Chauvet research team, in 1996, belongs to the first camp, and most of his colleagues to the second. Yet no one who studies the caves seems able to resist a yearning for communion with the artists. When you consider that their legacy may have been found by chance, but surely wasn’t left by chance, it, too, suggests a yearning for communion—with us, their descendants.
Two books published in the past few years, “The Cave Painters” (2006), by Gregory Curtis, and “The Nature of Paleolithic Art” (2005), by R. Dale Guthrie, approach the controversy generated by their subject from different perspectives. Guthrie is an encyclopedic polymath who believes he can “decode” prehistory. Curtis, a former editor of Texas Monthly, is a literary detective (his previous book, on the Venus de Milo, also concerned the obscure provenance of an archaic masterpiece), and in quietly enthralling prose, without hurry or flamboyance, he spins two narratives. (The shorter one, as he notes, covers a few million years, and the longer one, the past century.)
I packed both volumes, along with some hiking boots, protein bars, and other survival gear, all of it unnecessary, for my sojourn in the Ardèche. My destination was a Spartan summer camp—a concrete barracks in a valley near the Pont d’Arc. It is owned by the regional government, and normally houses groups of schoolchildren on subsidized holidays. But twice a year, for a couple of weeks in the spring and the autumn, the camp is a base for the Chauvet team. They, and only they, are admitted to the cave (and sometimes not even they: last October, the research session was cancelled because the climate hadn’t restabilized). Access is so strictly limited not only because traffic causes contamination but also because the French government has been embroiled for thirteen years in multimillion-dollar litigation with Jean-Marie Chauvet and his partners, as well as with the owners of the land on which they found the cave. (The finders are entitled to royalties from reproductions of the art, while the owners are entitled to compensation for a treasure that, at least technically, is their property—the Napoleonic laws, modified in the nineteen-fifties, that give the Republic authority to dispose of any minerals or metals beneath the soil do not apply to cave paintings. Had Chauvet been a gold mine, the suit couldn’t have been brought.)
By dusk on the first night, most of the researchers had assembled in the cafeteria for an excellent dinner of rabbit fricassée, served with a Côtes du Vivarais, and followed by a selection of local cheeses. (The Ardèche is a gourmet’s paradise, and the camp chef was a tough former sailor from Marseilles whose speech and cooking were equally pungent.) Among the senior team members, Evelyne Debard is a geologist, as is Norbert Aujoulat. He is a former director of research at Lascaux, and the author of a fine book on its art, who calls himself “an underground man.” Marc Azéma is a documentary filmmaker who specializes in archeology. Carole Fritz and Gilles Tosello, a husband and wife from Toulouse, are experts in parietal art, and Tosello is a graphic artist whose heroically patient, stroke-by-stroke tracings of the cave’s signs and images are essential to their study. Jean-Marc Elalouf, a geneticist, and the author of a poetic essay on Chauvet, has, with a team of graduate students, sequenced the mitochondrial DNA of the cave’s numerous bears. They pocked the floor with their hibernation burrows, and, in a space known as the Skull Chamber, a bear’s cranium sits on a flat, altar-like pedestal—perhaps enshrined there by the artists. The grotto is littered with other ursine remains, and some of the bones seem to have been planted in the sediment or stuck with intent into the fissured walls. (No human DNA has yet surfaced, and Elalouf doesn’t expect to find any.) Dominique Baffier, an official at the Ministry of Culture, is Chauvet’s curator. She coördinates the research and conservation. Jean-Michel Geneste, an archeologist, is the director of the project, a post he assumed in 2001, when Jean Clottes, at sixty-seven, took mandatory retirement.
Clottes is a hero of Gregory Curtis’s “The Cave Painters,” one of the “giants” in a line of willful, brilliant, and often eccentric personalities who have shaped a discipline that prides itself on scientific detachment but has been a battleground for the kind of turf wars that were absent from the caves themselves. No human conflict is recorded in cave art, although at three separate sites there are four ambiguous drawings of a creature with a man’s limbs and torso, pierced with spearlike lines. More pertinent, perhaps, is a famous vignette in the shaft at Lascaux. It depicts a rather comical stick figure with an avian beak or mask, a puny physique, and a long skinny penis. He and his erect member seem to have rigor mortis. He is flat on his back at the feet of an exquisitely realistic wounded bison, whose intestines are spilling out. The bison’s glance is turned away, but it might have an ironic smile. Could the subject be hubris? Whatever it represents, some mythic contest—and the struggle of prehistorians to interpret their subject is such a contest—has ended in a draw.
Curtis profiles a dynasty of interpreters, beginning with the Spanish nobleman Marcelino Sanz de Sautuola, who discovered Altamira in 1879—it was on his property. (Parts of Niaux and Mas d’Azil, two gigantic painted caves in the Pyrenees, had been known for centuries, but their decorations were regarded as graffiti made in historic times, perhaps by Roman legionaries.) He was accused of art forgery, and his scholarly papers on the paintings’ antiquity were ridiculed by two of the era’s greatest archeologists, Gabriel de Mortillet and Émile Cartailhac. Sautuola died before Cartailhac repented of his skepticism, in 1902. By then, the art at two important sites, Les Combarelles and Font-de-Gaume (which contains a ravishing portrait of two amorous reindeer), had come to light, and, in 1906, Cartailhac published a lavish compendium of cave painting that was subsidized by the Prince of Monaco. The book’s much admired illustrations of Altamira were the work of a young priest with a painterly eye, Henri Breuil, who, in the course of half a century, became known as the Pope of Prehistory. He divided the era into four periods, and dated the art by its style and appearance. Aurignacian, the oldest, was followed by Perigordian (later known as Gravettian), Solutrean, and Magdalenian. They were named for type-sites in France: Aurignac, La Gravette, Solutré, and La Madeleine. But Breuil’s theory about the art’s meaning—that it related to rituals of “hunting magic”—was discredited by subsequent studies.
During the Second World War, Max Raphael, a German art historian who had studied the caves of the Dordogne before fleeing the Nazis to New York, was looking for clues to the art’s meaning in its thematic unity. He concluded that the animals represented clan totems, and that the paintings depicted strife and alliances—an archaic saga. In 1951, the year before Raphael died, he sent an extract of his writings to Annette Laming-Emperaire, a young French archeologist who shared his conviction that “prehistory cannot be reconstructed with the aid of ethnography.” Beware, in other words, of analogue reasoning, because no one should presume to parse the icons and figures of a vanished society by comparing them with the art of hunter-gatherers from more recent eras. In 1962, she published a doctoral thesis that made her famous. “The Meaning of Paleolithic Rock Art” dismissed the various, too creative theories of its predecessors, and, with them, any residual nineteenth-century prejudice or romance about the “primitive” mind. Laming-Emperaire’s structuralist methodology is still in use, much facilitated by computer science. It involves compiling minutely detailed inventories and diagrams of the way that species are grouped on the cave walls; of their gender, frequency, and position; and of their relation to the signs and handprints that often appear close to them. In “Lascaux” (2005), Norbert Aujoulat explains how he and his colleagues added time to the equation. Analyzing the order of superimposed images, they determined that wherever horses, aurochs, and stags appear on the same panel, the horse is beneath, the aurochs in the middle, and the stag on top, and that the variations in their coats correspond to their respective mating seasons. The triad of “horse-aurochs-stag” links the fertility cycles of important, and perhaps sacred or symbolic, animals to the cosmic cycles, suggesting a great metaphor about creation.
Laming-Emperaire had an eminent thesis adviser, André Leroi-Gourhan, who revolutionized the practice of excavation by recognizing that a vertical dig destroys the context of a site. In twenty years (1964-84) of insanely painstaking labor—scraping the soil in small horizontal squares at Pincevent, a twelve-thousand-year-old campsite on the Seine—he and his disciples gave us one of the richest pictures to date of Paleolithic life as the Old Stone Age was ending.
A new age in the science of prehistory had begun in 1949, when radiocarbon dating was invented by Willard Libby, a chemist from Chicago. One of Libby’s first experiments was on a piece of charcoal from Lascaux. Breuil had, incorrectly, it turns out, classified the cave as Perigordian. (It is Magdalenian.) He had also made the Darwinian assumption that the most ancient art was the most primitive, and Leroi-Gourhan worked on the same premise. In that respect, Chauvet was a bombshell. It is Aurignacian, and its earliest paintings are at least thirty-two thousand years old, yet they are just as sophisticated as much later compositions. What emerged with that revelation was an image of Paleolithic artists transmitting their techniques from generation to generation for twenty-five millennia with almost no innovation or revolt. A profound conservatism in art, Curtis notes, is one of the hallmarks of a “classical civilization.” For the conventions of cave painting to have endured four times as long as recorded history, the culture it served, he concludes, must have been “deeply satisfying”—and stable to a degree it is hard for modern humans to imagine.
Jean Clottes is a tall, cordial man of seventy-four, who still attends the biannual sessions at Chauvet, conducting his own research (this April, he and Marc Azéma found a new panel of signs), while continuing to travel and lecture widely. The latest addition to his bibliography, “Cave Art,” a luxuriously illustrated “imaginary museum” of the Old Stone Age, is due out from Phaidon this summer.
Clottes’s eminence in his field was never preordained. He once taught high-school English in Foix, a city in the Pyrenees, near the Andorran border, which is an epicenter for decorated caves. He studied archeology in his spare time, and earned a doctorate at forty-one, when he quit teaching. He had been moonlighting in a job that gave him privileged access to new caves, and an impressive calling card—as the director of prehistory for the Midi-Pyrenees—but a nominal salary. The appointment was made official in 1971, and for the next two decades Clottes was usually the first responder at the scene of a new discovery. The most sensational find, before Chauvet, was Cosquer—a painted cave near Marseilles that could be reached only through a treacherous underwater tunnel, in which three divers had drowned. Like Altamira, Cosquer was, at first, attacked as a hoax, and some of the press coverage impeached Clottes’s integrity as its authenticator. He could judge its art only from photographs, but, in 1992, a year after Cosquer was revealed, carbon dating proved that the earliest paintings are at least twenty-seven thousand years old. That year, the Ministry of Culture elevated him to the rank of inspector general.
At the base camp, Clottes bunked down, as did everyone, in a dorm room, and braved the morning hoarfrost for a dash to the communal showers. There is a boyish quality to his energy and conviction. (At sixty-nine, he learned to scuba dive so that he could finally explore Cosquer himself.) One evening, he showed us a film about his “baptism,” in 2007, as an honorary Tuareg; the North African nomads crowned him with a turban steeped in indigo that stained his forehead, and he danced to their drums by a Saharan campfire. Among his own sometimes fractious tribesmen, Clottes also commands the respect due an unusually vigorous elder, and it was hard to keep pace with him as he scampered on his long legs up the steep cliff to Chauvet, talking with verve the entire way.
The path skirts a vineyard, then veers up into the woods, emerging onto a corniche—a natural terrace with a rocky overhang on one side, and a precipitous drop on the other. “En route to Chauvet, the painters might have sheltered here or prepared their pigments. Looking at the valley and the river gorge, they saw what we do,” Clottes said, indicating a magnificent view. “The topography hasn’t changed much, except that the Ice Age vegetation was much sparser: mostly evergreens, like fir and pine. Without all the greenery, the resemblance of the Pont d’Arc to a giant mammoth would have been even more dramatic. But nothing of the landscape—clouds, earth, sun, moon, rivers, or plant life, and, only rarely, a horizon—figures in cave art. It’s one among many striking omissions.”
Where the terrace ended, we plunged back into the underbrush, following a track obstructed by rocks and brambles, and, after about half an hour of climbing, we arrived at the entrance that Jean-Marie Chauvet and his partners discovered. (The prehistoric entrance has been plugged, for millennia, by a landslide.) A shallow cave at the trailhead has been fitted out as a storeroom for gear and supplies. From here, a wooden ramp guides one along a narrow ledge, shaped like a horseshoe, that was formed when the cliffs receded, to a massive metal door that’s as well defended—with voice alarms, video surveillance, and a double key system—as a bank vault. Some members of the team relaxed with a cigarette or a cold drink and a little academic gossip, but Clottes immediately changed into his spelunking overalls, donned a hard hat with a miner’s lamp, and disappeared into the underworld.
On a map, Chauvet resembles the British Isles, and, like an island with coves and promontories, its outline is irregular. The distance from the entrance to the deepest gallery is about eight hundred feet, and, at the northern end, the cave forks into two horn-shaped branches. In some places, like the grotto that Éliette Brunel first plumbed in 1994 (it is named for her), the terrain is rocky and chaotic, while in others, like the Chamber of the Bear Hollows, the walls and floor are relatively smooth. (In the nineteen-nineties, a metal catwalk was installed to protect the cave bed.) The ceilings of the principal galleries vary in height from about five to forty feet, but there are passages and alcoves where an adult has to kneel or crawl. Twenty-six thousand years ago (six millennia after the first paintings were created), a lone adolescent left his footprints and torch swipes in the furthest reaches of the western horn, the Gallery of the Crosshatching.
The Megaloceros Gallery—a funnel in the eastern horn named for the huge, elklike herbivores that mingle on the walls with rhinos, horses, bison, a glorious ibex, three abstract vulvas, and assorted geometric signs—is the narrowest part of the cave, and it seems to have been a gathering point or a staging area where the artists built hearths to produce their charcoal. Dominique Baffier, the curator, and Valérie Feruglio, a young archeologist who arrived at the base camp during my visit with her new baby, were moved to write in “Chauvet Cave” (2001), a book of essays and photography on the team’s research, “The freshness of these remains gives the impression that . . . we interrupted the Aurignacians in their task and caused them to flee abruptly.” They dropped an ivory projectile, which was found in the sediment.
From here, one emerges into the deepest recess of Chauvet, the End Chamber, a spectacular vaulted space that contains more than a third of the cave’s etchings and paintings—a few in ochre, most in charcoal, and all meticulously composed. A great frieze covers the back left wall: a pride of lions with Pointillist whiskers seems to be hunting a herd of bison, which appear to have stampeded a troop of rhinos, one of which looks as if it had fallen into, or is climbing out of, a cavity in the rock. As at many sites, the scratches made by a standing bear have been overlaid with a palimpsest of signs or drawings, and one has to wonder if cave art didn’t begin with a recognition that bear claws were an expressive tool for engraving a record—poignant and indelible—of a stressed creature’s passage through the dark.
To the far right of the frieze, on a separate wall, a huge, finely modelled bison stands alone, gazing stage left toward a pair of figures painted on a conical outcropping of rock that descends from the ceiling and comes to a point about four feet above the floor. The fleshy shape of this pendant is unmistakably phallic, and all of its sides are decorated, though only the front is clearly visible. The floor of the End Chamber is littered with relics. In order to preserve them, the catwalk stops close to the entrance, and the innermost alcove, known as the Sacristy, remains to be explored. But one of the team’s archeologists, Yanik Le Guillou, rigged a digital camera to a pole, and was able to photograph the pendant’s far side. Wrapped around, or, as it appears, straddling, the phallus is the bottom half of a woman’s body, with heavy thighs and bent knees that taper at the ankle. Her vulva is darkly shaded, and she has no feet. Hovering above her is a creature with a bison’s head and hump, and an aroused, white eye. But a line branching from its neck looks like a human arm with fingers. The relationship of these figures to each other, and to the frieze on the adjacent wall, is among the great enigmas in cave art. The woman’s posture suggests that she may be squatting in childbirth, and the animals, on a level with her loins, seem to be streaming away from her. Gregory Curtis, who fights and loses a valiant battle with his urge to speculate, admits in “The Cave Painters” that he can’t help reading a mythical narrative into the scene, one that relates to the Minotaur—the hybrid offspring of a mortal woman and a sacred bull “who lived in the Labyrinth, which is a kind of cave.” Art on the walls of Cretan palaces depicts the spectacle of youths leapfrogging a charging bull, and that public spectacle—in the guise of the bullfight—has, he points out, endured into modern times precisely in the regions where decorated caves are most concentrated. “European culture began somewhere,” he concludes. “Why not right here?”
In the course of a friendly correspondence, Yanik Le Guillou gave Curtis a warning about indulging his imagination. Perhaps that sin might be forgiven in an American journalist, but not in Jean Clottes. The book that sets forth his controversial theory about the art, “The Shamans of Prehistory,” co-written with the South African archeologist David Lewis-Williams, and published in 1996—the year Clottes took over at Chauvet—detonated a polemical fire-storm that hasn’t entirely subsided. Defying the prohibitions against importing evidence to the caves from external sources, the authors grounded their interpretation in Lewis-Williams’s studies of shamanism among hunter-gatherers, historical and contemporary, and of African rock art, specifically the paintings of a nomadic people, the San, whose shamans still serve as spiritual mediators with the powers of nature and with the dead. In an earlier article, “The Signs of All Times,” written with the anthropologist T. A. Dowson, Lewis-Williams had explored what he called “a neurological bridge” to the Old Stone Age. The authors cited laboratory experiments with subjects in an induced-trance state which suggested that the human optic system generates the same types of visual illusions, in the same three stages, differing only slightly by culture, whatever the stimulus: drugs, music, pain, fasting, repetitive movements, solitude, or high carbon-dioxide levels (a phenomenon that is common in close underground chambers). In the first stage, a subject sees a pattern of points, grids, zigzags, and other abstract forms (familiar from the caves); in the second stage, these forms morph into objects—the zigzags, for example, might become a serpent. In the third and deepest stage, a subject feels sucked into a dark vortex that generates intense hallucinations, often of monsters or animals, and feels his body and spirit merging with theirs.
Peoples who practice shamanism believe in a tiered cosmos: an upper world (the heavens); an underworld; and the mortal world. When Clottes joined forces with Lewis-Williams, he had come to believe that cave painting largely represents the experiences of shamans or initiates on a vision quest to the underworld, where spirits gathered. The caves served as a gateway, and their walls were considered porous. Where the artists or their entourage left handprints, they were palping a living rock in the hopes of reaching or summoning a force beyond it. They typically incorporated the rock’s contours and fissures into the outlines of their drawings—as a horn, a hump, or a haunch—so that a frieze becomes a bas-relief. But, in doing so, they were also locating the dwelling place of an animal from their visions, and bodying it forth.
This scenario has its loose ends, particularly in the art’s untrancelike fidelity to nature, but it fits the dreamlike suspension of the animals in a vacuum, and it helps to explain three of the most sensational figures in cave art. One is the bison-man at Chauvet; another is the bird-man at Lascaux; and the third, known as the Sorcerer, looks down from a perch close to the high ceiling at Les Trois Frères, a Magdalenian cave in the Pyrenees. He has the ears and antlers of a stag; handlike paws; athletic human legs and haunches; a horse’s tail; and a long, rather elegantly groomed wizard’s beard.
Clottes was hurt and outraged by the rancor of the attacks that greeted “The Shamans of Prehistory” (“psychedelic ravings,” one critic wrote), and the authors defended themselves in a subsequent edition. “You can advance a scientific hypothesis without claiming certainty,” Clottes told me one evening. “Everyone agrees that the paintings are, in some way, religious. I’m not a believer myself, and I’m certainly not a mystic. But Homo sapiens is Homo spiritualis. The ability to make tools defines us less than the need to create belief systems that influence nature. And shamanism is the most prevalent belief system of hunter-gatherers.”
Yet even members of the Chauvet team feel that Clottes’s theories on shamanism go too far. The divide seems, in part, to be generational. The strict purists tend to be younger, perhaps because they came of age with deconstruction, in a climate of political correctness, and are warier of their own baggage. “I don’t mind stating uncategorically that it’s impossible to know what the art means,” Carole Fritz said. Norbert Aujoulat tactfully told me, “We’re more reserved than Jean is. He may be right about the practice of shamanism in the caves, but many of us simply don’t want to interpret them.” He added with a laugh, “If I knew what the art meant, I’d be out of business. But in my own experience—I’ve inventoried five hundred caves—the more you look, the less you understand.”
For an older generation, on more intimate terms with mortality, it may be harder to accept the lack of resolution to a life’s work. Jean-Michel Geneste, a leonine man of fifty-nine with a silver mane, told me about an experiment that he had conducted at Lascaux in 1994. (In addition to directing the work at Chauvet, he is the curator of Lascaux, and last winter he had to deal with an invasion of fungus that was threatening the paintings there.) Geneste decided to invite four elders of an Aboriginal tribe, the Ngarinyins—hunter-gatherers from northwestern Australia—to visit the cave, and put them up at his house in the Dordogne. “I explained that I would be taking them to a place where ancients had, like their own ancestors, left marks and paintings on the walls, so that perhaps they could explain them,” he said. “ ‘They’re your ancestors?’ they asked. I said no, and that stupid reply made them afraid. If we weren’t visiting my ancestors, they wouldn’t enter their sanctuary, and risk the consequences. I was terribly disappointed, and finally, as good guests, they agreed to take a look. But first they had to purify themselves, so they built a fire, and pulled some of their underarm hair out and burned it. Their own rituals involve traversing a screen of smoke—passing into another zone. When they entered the cave, they took a while to get their bearings. Yes, they said, it was an initiation site. The geometric signs, in red and black, reminded them of their own clan insignia, the animals and engravings of figures from their creation myths.”
Geneste agrees with their reading, but he also believes that a cave like Lascaux or Chauvet served many purposes—“the way a twelfth-century church did. Everyone must have heard that these sanctuaries existed, and felt drawn to them. Look at the Pont d’Arc: it’s a great beacon in the landscape. And, like the art in a church, the richness of graphic expression in the caves was satisfying to lots of different people in different ways—familial, communal, and individual, across the millennia—so there is probably no one adequate explanation, no unified theory, for it.”
For the next week, I climbed the hill to Chauvet once a day. A guardian, Charles Chauveau, who, by law, has to be present when the scientists are underground, took me hiking, and we scaled the cliffs to sun our faces on a boulder, watching the first rafters of the season negotiate the river and pass under the Pont d’Arc. Only a few members of the team enter the cave at a time, each to pursue his or her research, though because of potential hazards, especially carbon-dioxide intoxication, no fewer than three can ever be alone there. “In the old days, when you sometimes had Chauvet to yourself, it was awesome and a little frightening,” the geologist Evelyne Debard said. But Aujoulat felt more intimidated at Lascaux. “I used to spend a solitary hour there once a week,” he said. “I rehearsed all my gestures, so I wouldn’t lose time. But after a while it became oppressive: those huge animals staring you down in a small space—trying, or so it feels, to dominate you.”
Those who have elected to stay behind spend the day in a prosaic annex next to the camp parking lot which was built to provide the team with office space and computer outlets. Marc Azéma, who has collaborated with Clottes on books about Chauvet’s lions (he also filmed the Tuareg baptism), gave me a virtual cave tour on a big monitor. Of necessity, Fritz and Tosello spend more time Photoshopping their research than conducting field work. (Henri Breuil made tracings directly from cave walls—an unthinkable sacrilege to modern archeologists.) They digitally photograph an image section by section, print the picture to scale, and take it back underground, where Tosello sets up a drawing board as close as possible to the area of study. The digital image is overlaid with a sheet of clear plastic, and he traces the image onto the sheet, referring constantly to the original painting as he does so. This dynamic act of translation gives him a deeper insight into the artists’ gestures and techniques than a mere reading would. He repeats the process on successive plastic sheets, each one focussed on a separate aspect of the composition, including the rock’s contours. Then he transfers the tracings (as many as a dozen layers) onto the computer, where they can be magnified and manipulated. Describing the detail in a monumental frieze of horses between the Megaloceros Chamber and the Skull Chamber, Fritz and Tosello wrote, in “Chauvet Cave”:
Once again, the surface was carefully scraped beneath the throat, which suggests to us a moment of reflection, or perhaps doubt. . . . The last horse is unquestionably the most successful of the group, perhaps because the artist is by now certain of his or her inspiration. This fourth horse was produced using a complex technique: the main lines were drawn with charcoal; the infill, colored sepia and brown, is a mixture of charcoal and clay spread with the finger. A series of fine engravings perfectly follow the profile. With energetic and precise movements, the significant details are indicated (nostril, open mouth). A final charcoal line, dark black, was placed just at the corner of the lips and gives this head an expression of astonishment or surprise.
While the team was at work, I often stayed on the cliff with Chauveau, reading Dale Guthrie’s book at a picnic table. Guthrie, a professor emeritus of zoology at the University of Alaska, specializes in the paleobiology of the Pleistocene era. Not only is he an expert on the large mammals that cavort on cave walls; he has spent forty years in the Arctic wilds hunting their descendants with a bow and arrow. In that respect, perhaps, he brings more empiricism to his research than other scholars, though he also brings less humility. “The Nature of Paleolithic Art,” as its title suggests, aspires to be definitive.
It is a handsome, five-hundred-page volume composed, like a mosaic, of boxed highlights, arresting graphics, and short sections of text that distill a wealth of multi-disciplinary research. The prose, like the layout, is designed to engage a layman without vulgarizing the science, or, at least, not too much. Guthrie, who sounds and looks, in his author’s photograph, like an earthy guy, has fun with occasional rib-nudging subtitles (“Lesbian Loving or Male Fantasy?,” “Graffiti and Testosterone”), but they promote a premise at least as audacious as that of Clottes and Lewis-Williams: that our biology, expressed in our carnal appetites and attractions, including an attraction to the supernatural, is a “baseline of truth” for the cave artists’ symbolic language.
Nearly all the illustrations are Guthrie’s own renderings or interpretations of Paleolithic imagery (there are no photographs). A number of prehistorians are and have been, as he is, gifted draftsmen and copyists. But unlike the devout Breuil, or the cautious Tosello, Guthrie is a desacralizer. He admires the creative “freedom” of cave art—an acuity of observation coupled with, in his view, a nonchalance of composition. He stresses its erotic playfulness, even straining to discern evidence of dildos and bondage, despite the rarity of sexual acts depicted on walls or artifacts. (“No Sex, Please—We’re Aurignacian” was the title of a scholarly paper on the period.) The reverence with which certain researchers—including, one infers, the Chauvet team—treat even the smallest nick in a cave strikes him as a bit too nice, and, where they perceive an elaborate, if obscure, metaphysics, he sees high-spirited improvisation. “Some Paleolithic images identified as part man and part beast may simply be artistic bloopers,” he writes. (But the artists sometimes did correct their work, Azéma told me, by scraping the rock’s surface.)
Paleobiology is, in part, a science of statistical modelling, and, analyzing the handprints in the caves, Guthrie argues that many, perhaps a majority, of the artists were not the “Michelangelos” of Lascaux or Chauvet but teen-age boys, who, being boys, loved rutting and rumbling and, in essence, went on tagging sprees. It is true that among the masterpieces there are many line drawings, including pubic triangles, that seem hasty, impish, or doodle-like. In Guthrie’s view, prehistorians have imported their mandarin pieties, and the bias of a society where children are a minority, to the study of what, demographically, was a freewheeling youth culture.
Guthrie is both provocative and respected—Clottes wrote one of the cover blurbs on his book—but some of his methods make you wonder how much of the light that he throws onto the nature of the art owes to false clarity. By culling examples of erotica from a huge catchment area without noting their size, date, or position, he distorts their prevalence. His cleaned-up drawings minimize the art’s bewildering ambiguity and the contouring or the cave architecture organic to many compositions. As for the bands of brothers spelunking on a dare, and leaving what Guthrie calls their “children’s art” to bemuse posterity, the life expectancy for the era was, as he notes, about eighteen, since infant mortality was exorbitant. But those who lived on could, thanks to the rarity of infectious diseases and the abundance of protein, expect to survive for thirty years more—considerably longer than the Greeks, the Romans, or the medieval peasants who built Chartres. Can puerility as we know it—horny, reckless, and transgressive—be attributed to a people for whom early parenthood and virtuosity in survival skills were, as Guthrie acknowledges, imperative? Rash spelunkers die every year, yet no human remains have been discovered in the caves (with the exception of a single skeleton, that of a young man, at Vilhonneur, near Angoulême, and those of five adults who were buried at Cussac, in the Dordogne). That is a staggering testament to the artists’ sureness of foot and purpose, if not to their solemnity.
A few days before Easter, I left the camp and drove southwest, over the mountains, stopping at the town of Albi, where the Toulouse-Lautrec Museum, in a thirteenth-century palace off the cathedral square, has a small gallery of Stone and Bronze Age artifacts. I wanted to see the museum’s tiny Solutrean carving, in red sandstone, of an obese woman with impressive buttocks. She seemed well housed among Toulouse-Lautrec’s louche Venuses. By the next evening, in a thunderstorm, I had reached Jean Clottes’s home town of Foix, and found an old-fashioned hotel that he had recommended. From a corner table in the dining room, I could watch the swollen Ariège River flowing toward a distant wall of snow-covered peaks—the Pyrenees—that were black against a livid sunset. The Neanderthals had come this way.
Pascal Alard, an archeologist, met me the next morning at Niaux, where he has conducted research for twenty years. It is one of three caves (with Chauvet and Lascaux) that Clottes, who had arranged the rendezvous, considers paradigmatic. I had driven south for about forty minutes, the last few miles on a road with hairpin turns that wound up into flinty, striated hills. The site was nothing like Chauvet. There was, for one thing, a parking lot at the entrance, deserted at that hour, a bookshop, and an imposing architectural sculpture, in Corten steel, cantilevered into the cliff. (It is supposed to represent an imaginary prehistoric animal.)
Niaux is Magdalenian—its walls were decorated about fourteen thousand years ago—and it was one of the first caves to be explored. Visitors from the seventeenth century left graffiti, as did pranksters for the next three hundred years. In 1866, an archeologist named Félix Garrigou, who was looking for prehistoric relics, confessed to his journal that he couldn’t figure out the “funny-looking” paintings. “Amateur artists drew animals here,” he noted, “but why?”
Niaux’s enormity—a network of passages that are nearly a mile deep from the entrance gallery, which was used as a shelter during the Bronze Age, to the Great Dome, at the far end, branching like a cactus into narrow alcoves and low-ceilinged funnels, but also into chambers the size of an amphitheatre—helps to give it a stable climate, and small groups can make guided visits at appointed times. But when Alard had unlocked the door, and it closed behind us, we were alone. He had two electric torches, and he gave me one. “Don’t lose it,” he joked. He told me that he and some colleagues, all of whom know the cave intimately, decided, one day, to see if they could find their way out without a light source. None of them could.
The floor near the mouth was fairly flat, but as we went deeper it listed and swelled unpredictably. Water was dripping, and sometimes it sounded like a sinister whispered conversation. The caves are full of eerie noises that gurgle up from the bowels of the earth, yet I had a feeling of traversing a space that wasn’t terrestrial. We were, in fact, walking on the bed of a primordial river. Where the passage narrowed, we squeezed between two rocks, like a turnstile, marked with four lines. They were swipes of a finger dipped in red pigment that resembled a bar code, or symbolic flames. Further along, there was a large panel of dots, lines, and arrows, some red, some black. I felt their power without understanding it until I recalled what Norbert Aujoulat had told me about the signs at Cussac. He was the second modern human to explore the cave, in 2000, the year it was unearthed, some twenty-two thousand years after the painters had departed. (The first was Cussac’s discoverer, Marc Delluc.) “As we trailed the artists deeper and deeper, noting where they’d broken off stalagmites to mark their path, we found signs that seemed to say, ‘We’re sanctifying a finite space in an infinite universe.’ ”
Beyond the turnstile, the passage widens for about six hundred feet, veering to the right, where it leads to one of the grandest bestiaries in Paleolithic art: the Black Salon, a rotunda a hundred and thirty feet in diameter. Scores of animals were painted in sheltered spots on the floor, or etched in charcoal on the soaring walls: bison, stags, ibex, aurochs, and, what is rarer, fish (salmon), and Niaux’s famous “bearded horses”—a shaggy, short-legged species that, Clottes writes in his new book, has been reintroduced from their native habitat, in Central Asia, to French wildlife parks. All these creatures are drawn in profile with a fine point, and some of their silhouettes have been filled in with a brush or a stumping cloth. I looked for a little ibex, twenty-one inches long, that Clottes had described to me as the work of a perfectionist, and one of the most beautiful animals in a cave. When I found him, he looked so perky that I couldn’t help laughing. Alard was patient, and, since time loses its contours underground, I didn’t know how long we had spent there. “I imagine that you want to see more,” he said after a while, so we moved along.
Every encounter with a cave animal takes it and you by surprise. Your light has to rouse it, and your eye has to recognize it, because you tend to see creatures that aren’t there, while missing ones that are. Halfway home to the mortal world, I asked Alard if we could pause and turn off our torches. The acoustics magnify every sound, and it takes the brain a few minutes to accept the totality of the darkness—your sight keeps grasping for a hold. Whatever the art means, you understand, at that moment, that its vessel is both a womb and a sepulchre. ♦
|
First Impressions
A frieze of horses and rhinos near the Chauvet cave’s Megaloceros Gallery, where artists may have gathered to make charcoal for drawing. Chauvet contains the earliest known paintings, from at least thirty-two thousand years ago. Photograph by Jean Clottes / Chauvet Cave Scientific Team
During the Old Stone Age, between thirty-seven thousand and eleven thousand years ago, some of the most remarkable art ever conceived was etched or painted on the walls of caves in southern France and northern Spain. After a visit to Lascaux, in the Dordogne, which was discovered in 1940, Picasso reportedly said to his guide, “They’ve invented everything.” What those first artists invented was a language of signs for which there will never be a Rosetta stone; perspective, a technique that was not rediscovered until the Athenian Golden Age; and a bestiary of such vitality and finesse that, by the flicker of torchlight, the animals seem to surge from the walls, and move across them like figures in a magic-lantern show (in that sense, the artists invented animation). They also thought up the grease lamp—a lump of fat, with a plant wick, placed in a hollow stone—to light their workplace; scaffolds to reach high places; the principles of stencilling and Pointillism; powdered colors, brushes, and stumping cloths; and, more to the point of Picasso’s insight, the very concept of an image. A true artist reimagines that concept with every blank canvas—but not from a void.
Some caves have rock porches that were used for shelter, but there is no evidence of domestic life in their depths. Sizable groups may have visited the chambers closest to the entrance—perhaps for communal rites—and we know from the ubiquitous handprints that were stamped or airbrushed (using the mouth to blow pigment) on the walls that people of both sexes and all ages, even babies, participated in whatever activities took place. Only a few individuals ventured or were permitted into the furthest reaches of a cave—in some cases, walking or crawling for miles.
|
yes
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
http://www.visual-arts-cork.com/prehistoric/chauvet-cave-paintings.htm
|
Chauvet Cave Paintings: Earliest Prehistoric Murals: Discovery, Layout
|
The Chauvet-Pont-d'Arc cave - among the
world's oldest sites of prehistoric cave painting,
along with the El Castillo Cave
Paintings (39,000 BCE), the Sulawesi
Cave Art (37,900 BCE) and the abstract
signs found at Altamira (c.34,000 BCE) - was discovered quite by chance
in the Ardeche gorge in 1994, by three speleologists - Jean-Marie Chauvet,
Eliette Brunel-Deschamps and Christian Hillaire - while they were surveying
another cave nearby. Inside the Chauvet grotto, they found a 400-metre
long network of galleries and rooms, covered in rock art and petroglyphs,
whose floor was littered with a variety of paleontological remains, including
the skulls of bears and two wolves. Some of these bones had been arranged
in special positions by the previous human inhabitants. Amazingly, Chauvet's
entire labyrinth of prehistoric art
had been left undisturbed since a landslide sealed off the entrance about
25,000 years ago.
Carbon Dating:
How Old are the Cave Paintings at Chauvet?
Chauvet is one of the few prehistoric painted
caves to be found preserved and intact, right down to the footprints of
animals and humans. As a result it ranks alongside Lascaux
(c.17,000 BCE), Altamira (c.15,000 BCE), Pech-Merle
(c.25,000 BCE) and Cosquer (c.25,000
BCE) as one of the most significant sites of Stone Age painting. Moreover,
its earliest rock art (charcoal drawings of
two rhinos and one bison) has been dated to between 30,340 and 32,410
BP (before present). This means that these images were created roughly
29-30,000 BCE, making them the third oldest figurative cave paintings
in the world, after the Sulawesi animal pictures in Indonesia and the
more primitive Fumane cave paintings
(c.35,000 BCE) in Italy. Although Chauvet does not boast the type of polychrome
painting visible at Lascaux or
Altamira, this is more than offset by the sheer originality, diversity
and preserved quality of its art. According to the French Ministry of
Culture in Paris, the antiquity of Chauvet's cave
art has radically altered previous theories concerning the artistic
development of Paleolithic Man, and demonstrates that Homo sapiens
learnt to draw at a very early stage. (To see how the age of cave murals
at Chauvet compares with that of Lascaux, see: Prehistoric
Art Timeline. See also the Nawarla
Gabarnmang charcoal drawing (26,000 BCE) in Arnhem Land, Northern
Territory, Australia.)
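A quick note on the dating convention used above: ages quoted in BP ("before present") are counted back from the reference year 1950 CE, so a BP age converts to calendar years BCE by subtracting roughly 1,950. As a worked example:

\[
\text{age}_{\text{BCE}} \approx \text{age}_{\text{BP}} - 1950,
\qquad 32{,}410\ \text{BP} \approx 30{,}460\ \text{BCE},
\qquad 30{,}340\ \text{BP} \approx 28{,}390\ \text{BCE},
\]

which is broadly consistent with the rounded figure of roughly 29-30,000 BCE given above.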
What is the
Significance of Chauvet Cave and its Art?
The discovery of the Chauvet cave, along
with its galleries of prehistoric drawings, paintings and petroglyphs,
was significant for two main reasons. First, both the content of the imagery
and the artistic techniques used to create them, came as a major surprise.
The types of animals represented were unusual, because previously most
of the species depicted in Stone
Age art were game animals that were hunted for food. However, at Chauvet,
it is the more dangerous animals - not generally hunted for food - that
account for a majority of the images. Furthermore, Chauvet's Stone Age
painters employed more sophisticated techniques of drawing,
shading, perspective
and composition in their murals than was previously expected, at least
for the period in question. As a result, Chauvet contains numerous dynamic
and powerful compositions consisting of multiple images skillfully executed
and arranged to fit in with the contours of the cave chambers. There is
also some evidence to suggest that a significant quantity of the charcoal
drawings were painted by a single master artist.
Until the discovery of Chauvet in 1994,
most paleoanthropologists believed that the major centres of parietal
art were confined to the Perigord-Quercy region, the Pyrenees, and
the Cantabrian coast. The discovery of the Chauvet-Pont-d'Arc cave in
the Ardeche region reminds us that original caves of great cultural importance
might still await discovery in areas other than the major centres.
Note: For a comparison with African painting
from the same era, please see the animal pictures on the Apollo
11 Cave Stones (c.25,500 BCE).
Chauvet also sheds an interesting light
on the artistic inventiveness of Aurignacian
art (c.40,000-26,000 BCE). Ever since the 1930s, researchers have
known that between 35,000 and 30,000 years ago the Aurignacians living
in the Swabian Jura of southwestern Germany carved beautiful ivory statuettes
- such as the peculiar Lion
Man of Hohlenstein Stadel (c.38,000 BCE) - with both naturalistic
and stylized characteristics. (For more details, see: Ivory
Carvings of the Swabian Jura.) The unusually sophisticated cave art
at Chauvet, a site contemporary with the Swabian ivories, demonstrates
that the Aurignacians were as talented at painting and engraving
as they were at prehistoric sculpture. At
the same time it raises an intriguing question: given the identical subject
matter (mammoths, felines, bears, bison, horses and rhinoceroses) between
Chauvet's painting and the Swabian ivory
carving, was there a direct relationship between southern Germany
and the French Ardeche region, (say) via the Rhine and Rhone valleys?
Or is this artistic congruence between the two areas mere coincidence?
Whatever the answer, let us hope that further examples of Aurignacian
artistry emerge before too long.
Archeology
and Human Habitation
The gorges in Ardeche contain numerous
caverns, many of which possess petroglyphs and other artifacts of archeological
and geological significance. But Chauvet Cave is unusually big and was
inhabited by prehistoric humans during two distinct periods: the Aurignacian
(c.40,000-26,000 BCE) and the Gravettian (26,000-20,000 BCE): that is,
firstly about 31,000-29,000 BCE, and later about 26,000 BCE, after which
the cave was sealed by a landslide. Most of the artwork dates to the earlier
period of inhabitation (c. 30,000 BCE).
Chauvet contains a total of over 300 paintings
and engravings. These were grouped in specific ways. In the most accessible
part of the cave, most images are red, with a few black or engraved ones.
In the deeper part, the animals are mostly black, with far fewer rock
engravings and red figures. Also, there are groupings of specific
animals: for example, the Horse Panel and the Panel of Lions and Rhinoceroses.
What makes Chauvet such an important example of Franco-Cantabrian
cave art, is the sophistication of its paintings. No other Aurignacian
cave contains compositions with the same degree of realism, naturalism
and complexity.
Animal Figures
The most noticeable animals in the cave
(accounting for some 60 percent of all such images) are lions, mammoths,
and rhinoceroses, all of which were rarely hunted; thus, unlike most other
caves, Chauvet is not a pictorial showcase of daily Stone Age life. Other
rare animals include a panther, a spotted leopard and an owl. In addition,
the cave features the usual horses, bison, aurochs, ibex, reindeer, red
deer and musk-oxen.
Abstract Art
As well as figurative pictures, Chauvet
contains an abundance of abstract art
in the form of geometric symbols (though less than sites in the Cantabrian
region of Spain), a number of indecipherable marks, as well as a quantity
of red-ochre prehistoric hand stencils
and handprints.
Painterly Skills
and Techniques
According to researchers, the workmanship
of Chauvet's prehistoric artists is excellent. Shading, perspective and
relief are skillfully used, the body proportions of the animals are natural,
and species are clearly defined with numerous details of anatomy shown:
for example, mammoths are drawn with an arched belly, bison are presented
in frontal perspective with a bushy mane, horses too have thick manes,
while the rhinoceroses have very distinctive ears. Chauvet's Stone Age
painters also used engraving techniques to emphasize the lines, volume
and relief of the animal figures, and mixed floor clay with charcoal to
create different hues. See: Prehistoric
Colour palette.
The Layout of the
Chauvet Cave
The Chauvet-Pont-d'Arc cave runs for about
400 metres in a north-south direction. Its entrance leads into the Brunel
Chamber.
The Brunel
Chamber
This has five areas or panels of parietal
art. Near the original prehistoric entrance is the "Vestibule
of Red Bears", in which there are three red drawings of cave
bears - recognizable by the steep incline of their foreheads - on a panel
in a small recess. The central bear has been drawn with a confident hand
using the natural contour of the cave wall. This bear is complete, but
the one to its left consists of just a bear head, while to the right the
third bear is part-complete. The painter used a technique known as stump-drawing
- the use of fingers or a piece of hide to shade the inside of the bodies
and add volume.
Also near the entrance in a small recessed
niche, is the Dotted Animal Panel marked by a group of red dots
applied with the palm of a hand. The dots may depict a mammoth. The Brunel
Chamber also contains the Panel of the Sacred Heart with its mysterious
sign of the cross, the only known example in Paleolithic culture.
Another feature is the so-called Wall
of Dominos, home to one black painting of a feline in profile. Close
by, there are a few red dots and the rear of an animal (possibly an ibex).
The Brunel Chamber also contains the Alcove of Yellow Horses, fronted
by a hanging rock decorated with dots of red ochre. The facing wall is
marked with numerous small figures.
The Chamber
of the Bear Hollows
This long chamber, which leads into the
heart of the cave complex, was kept deliberately bare and has no drawings
or paintings, except for a single rhinoceros head at the very end.
The Cactus
Gallery
At the end of the Chamber of the Bear Hollows,
to the east, stands the Cactus Gallery, whose walls are marked by layers
of solidified sedimentary rock. A striking red bear, similar to those
in the Vestibule of Red Bears in the Brunel Chamber, is on one of the
walls, and a number of altered paintings appear elsewhere in the gallery.
The Gallery
of Hands
The Gallery of Hands contains the Panel
of the Panther - a somewhat unusual name given that its principal
animal has a spotted coat more reminiscent of a Hyena, and a shape more
like that of a bear. There is also a large bear, an ibex, a small vertical
bear, and an acephalous ibex. The figures are all painted in red.
In addition, there is the Panel of Red Signs, containing numerous
mysterious markings, and the Frieze of Red Rhinoceroses. Underneath,
several other animals are also painted in red, including a mammoth, felines
and yet more rhinos. On the wall nearby, a number of hand prints can be
seen, plus one hand stencil. The latter, together with the partial outline
of a black mammoth, is located on the Panel of Hand Stencils.
A low passage - known as the Candle
Gallery - with no paintings but some charcoal and torch marks, connects
the Gallery of Hands with the Hillaire Chamber. From this point on, the Stone
Age art becomes more monumental.
The Hillaire
Chamber
This chamber contains a large number of
individual animal pictures and several major animal groups.
The Panel of the Engraved Horse
depicts a horse walking to the left. Created with the same finger-tracing
method of painting as the nearby owl image, the Horse is a partial representation,
with a full mane and a hairy chest, but with legs tapering into abstract
lines. On a separate surface there is a rather strange painting of an
owl, whose head is seen from the front while its body is seen from the
back. The bird was engraved on the soft outer layer of the cave wall after
the surface had been scraped clean.
On the north wall of the Hillaire Chamber,
which leads into the smaller Megaceros Gallery, there is a panoramic
display of painting, which consists of several independent panels. The
entire mural is about 7 metres in length and is considered by art
critics to be one of the most important galleries in the cave.
Panel of
the Fighting Rhinos and Horses
This panel extends for several metres. Before being painted the wall was
carefully scraped, erasing several initial drawings in the process. Some
20 images were then depicted on the panel, starting with a dramatic pair
of rhinoceroses confronting each other face to face. These two charcoal
drawings have been dated to about 30,000 BCE. After this, the heads of
four horses were added, along with other animals. Slightly different hues
were obtained by mixing the charcoal with floor clay. Other techniques of drawing,
shading and perspective were also employed.
Panel of the
Horses
This houses three horses, whose heads are
emphasized by shading. These equine images are connected by lines to a
large lion. To the right, a bison is seen in profile, facing right. The
double lines of the back, the hindquarters and the feet were probably
intended to create the illusion of movement, or the perspective of two
animals standing side by side.
Three other panels appear by the north
wall of the Hillaire Chamber: the Panel of the Cervids (prehistoric
deer), featuring a number of oxen, bison, horses, and deer; the Panel
of the Rhinoceros, which depicts a single complete rhinoceros underneath
the dorsal outline of another; and the Panel of the Megaceros (an
extinct type of giant deer), which features the profile of a rhinoceros.
The Chamber
of the Skull
Off to the west is the Chamber of the Skull,
which is noted for its hanging rock embellished with black drawings and
engravings of reindeer and other creatures. The roof of the cavern is
marked by numerous folds and recesses many of which contain charcoal drawings
and engravings.
The Gallery
of Crosshatches
This gallery is situated in the extreme
north-west corner of the cave complex. It is famous for its human footprint
(left foot), similar to that of a male person about 4.5 feet tall and
around 9 years old. The footprint is the first of a trail of prints extending
some 160 feet in length.
Connecting the Hillaire Chamber with the
north-east inner recesses of the cave, is the Megaceros Gallery. Nearly
all of this passageway was left undecorated.
The End Chamber
This is the deepest part of the Chauvet
cave complex, and occupies the extreme north-east corner. It boasts several
areas of artistic interest.
The first is the Panel of Feline profiles,
consisting of two life-size charcoal outlines of a pair of lions, side
by side. The male lion appears in the background; the female in the foreground.
Given the relatively large scale of the images, the artist must have had
enormous faith in his drawing ability.
The Panel of
Bison
On a pillar facing the entrance of the
End Chamber is a set of images depicting several black bison, whose outlines
are enhanced with both shadings and engravings. Dated to about 29,000
BCE, the panel also includes an engraving of a partial mammoth. This engraving
was made before the black drawings. For some reason, pictures of bison
were only painted in the deepest parts of the cave.
The large west wall of the chamber is adorned
with a series of important panels arranged around a niche. As elsewhere,
the wall surface was scraped clean before the artist began. The niche,
known as the Niche of the Horse, is painted with a single image
of a horse, whose tail is drawn into a recess in the rock. The effect
of this, is that the animal appears to be emerging from out of the rock,
as if by magic.
Panel of
Rhinoceroses
This composition - above and to the left
of the niche - is set against a backdrop of large feline figures. All
in all, it is a most unusual example of parietal art. Not only is it unusual
to see so many rhinos represented - they were a comparatively rare species
at the time - but the way they are grouped and laid out is also very unusual.
Panel of
Felines
This group of animals - set above and to
the right of the niche - is also shown in perspective, and the prehistoric
artist has adeptly used the natural contours of the cave wall to separate
the different elements of the picture. The painting depicts a hunt. To
the left, there are four bison heads, and two rhinos; in the centre and
right there are seven bison, pursued by a group of sixteen lions, most
of them represented by their heads alone.
The Hanging Rock
of the Sorcerer
At the extreme end of the End Chamber a
rock formation hangs down from the ceiling to a point about 3 feet off
the ground. This rock formation is adorned with a mass of charcoal drawings
and engravings: one horse, two mammoths, four lions, one musk ox, plus
a hybrid figure - half man and half bison - known as the Sorcerer. Next
to it is the front view of a woman's pelvis joined to long tapering legs.
Her pubic triangle and genitalia are clearly visible. The figure of the
Sorcerer wraps around and faces the pubic triangle. This powerful fertility
image, not unlike the venus figurines
- such as the Venus of Hohle Fels
(c.38-33,000 BCE), the Venus of
Dolni Vestonice (26,000 - 24,000 BCE), or the The Venus
of Willendorf (25,000 BCE) - is yet another link between the art of
the Ardeche and that of the Swabian Jura.
The Sacristy
More recently, researchers have uncovered a
new chamber, named the Sacristy. Accessible via a small corridor in the
rear left side of the End Chamber, it features a crayon drawing of a small
mammoth, whose tusks are emphasized by engraving. The drawing is as yet
undated.
The Purpose of
Chauvet
In general, although most archeologists
recognize the importance of cave painting to Paleolithic
art and culture, they are still unsure as to the specific purpose
of the caves themselves. One popular theory - based on the subject matter
of the paintings, and the fact that Chauvet, like most caves, was not
used as a place of regular habitation - is that it functioned as a centre
of ritual or magical ceremony. That is to say, the images were intended
essentially for the spirits - the primitive beings or deities worshipped
by Prehistoric Man - not for other men. Of course some or all of the images
might have been revealed to certain individuals, but they were really
meant for the spirits. It was their existence that mattered, not their
public display.
What we can say is that while Chauvet
doesn't contain the earliest art of
prehistory, it does house the earliest cave murals
and exemplifies the rising cultural level of Aurignacian man during the
Upper Paleolithic, the last period of the Old Stone Age.
|
The Chauvet-Pont-d'Arc cave - among the
world's oldest sites of prehistoric cave painting,
along with the El Castillo Cave
Paintings (39,000 BCE), the Sulawesi
Cave Art (37,900 BCE) and the abstract
signs found at Altamira (c.34,000 BCE) - was discovered quite by chance
in the Ardeche gorge in 1994, by three speleologists - Jean-Marie Chauvet,
Eliette Brunel-Deschamps and Christian Hillaire - while they were surveying
another cave nearby. Inside the Chauvet grotto, they found a 400-metre
long network of galleries and rooms, covered in rock art and petroglyphs,
whose floor was littered with a variety of paleontological remains, including
the skulls of bears and two wolves. Some of these bones had been arranged
in special positions by the previous human inhabitants. Amazingly, Chauvet's
entire labyrinth of prehistoric art
had been left undisturbed since a landslide sealed off the entrance about
25,000 years ago.
Carbon Dating:
How Old are the Cave Paintings at Chauvet?
Chauvet is one of the few prehistoric painted
caves to be found preserved and intact, right down to the footprints of
animals and humans. As a result it ranks alongside Lascaux
(c.17,000 BCE), Altamira (c.15,000 BCE), Pech-Merle
(c.25,000 BCE) and Cosquer (c.25,000
BCE) as one of the most significant sites of Stone Age painting. Moreover,
its earliest rock art (charcoal drawings of
two rhinos and one bison) has been dated to between 30,340 and 32,410
BP (before present).
|
no
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://www.nbcnews.com/id/wbna47418532
|
Female sex organs the focus of oldest known cave art
|
Female sex organs the focus of oldest known cave art
Multiple engraved and painted images of female sexual organs, animals and geometric figures discovered in southern France are believed to be the first known wall art.
This drawing is said to be of a female sexual organ associated with unidentifiable engravings. Raphaelle Bourrillon
May 14, 2012, 9:18 PM UTC / Source: Discovery Channel
By Jennifer Viegas
Multiple engraved and painted images of female sexual organs, animals and geometric figures discovered in southern France are believed to be the first known wall art.
Radiocarbon dating of the engravings, described in the latest Proceedings of the National Academy of Sciences, reveals that the art was created 37,000 years ago. This makes them slightly older than the world’s earliest known cave art, found in Chauvet Cave, southeastern France.
Since this site, Abri Castanet in southern France, is very close to Chauvet, it is likely that the artists in both cases came from what is known as the Aurignacian culture, which existed until about 28,000 years ago.
“Abri Castanet has long been recognized as one of the oldest sites in Eurasia with evidence for human symbolism in the form of hundreds of personal ornaments (such as) pierced animal teeth, pierced shells, ivory and soapstone beads, engravings and paintings on limestone slabs,” lead author Randall White told Discovery News.
White, a New York University anthropology professor, added that the artwork “is associated with members of some of the first modern human populations to leave Africa, dispersing into Eurasia, replacing the preceding Neanderthals.”
White and his international team analyzed the engravings, which were made with ochre on a 3,307-pound block of limestone found in a rock shelter occupied by a group of Aurignacian reindeer hunters. The researchers believe the limestone was once the shelter’s low ceiling, which later collapsed.
The engravings include depictions of “the back end of a horse,” according to the researchers, as well as multiple images of the female vulva. Other “zoomorphic” and “geometric” engravings are included, along with additional images of female sexual organs.
Unlike the Chauvet paintings and engravings, which are deep underground and away from living areas, “the engravings and paintings at Castanet are directly associated with everyday life, given their proximity to tools, fireplaces, bone and antler tool production, and ornament workshops,” White said.
The discovery in many respects leads to more questions than answers, given the subject matter of the artwork.
“While there are animal figures, the dominant motif is that considered to represent abstract female vulvas,” White said, mentioning that other interpretations could be possible.
Additional Aurignacian artwork, however, clearly represents female sexual organs. The Venus of Hohle Fels, for example, is an ivory figurine dating to at least 35,000 to 40,000 years ago, according to Nicholas Conard, a paleoanthropologist at the University of Tubingen who reported the find.
The figurine, found in a southwestern Germany cave, depicts a woman with what Conard told Discovery News were “large projecting breasts” and a pronounced vulva and labia majora visible between the woman’s open legs.
Additional so-called “Venus figurines” from the Gravettian period have been found, so there may have been a shared cultural tradition.
“All place an emphasis on sexual attributes and lack emphasis on the legs, arms, face and head, made all the more noticeable in this case (the Venus of Hohle Fels) because a carefully carved, polished ring — suggesting that the figurine was once suspended as a pendant — exists in place of a head,” Conard said.
The abstract female vulvas depicted at Abri Castanet appear to follow that style. It remains unclear if men or women created the depictions or if they were used for ritualistic purposes.
White concluded, “The discovery, in concert with the rich records of approximately the same time period from southern Germany, northern Italy and southeastern France, raises anew the question of the evolutionary and adaptive significance of graphic representation and its role in the successful dispersal of modern human populations out of Africa into Western Eurasia and beyond.”
|
Female sex organs the focus of oldest known cave art
Multiple engraved and painted images of female sexual organs, animals and geometric figures discovered in southern France are believed to be the first known wall art.
This drawing is said to be of a female sexual organ associated with unidentifiable engravings. Raphaelle Bourrillon
May 14, 2012, 9:18 PM UTC / Source: Discovery Channel
By Jennifer Viegas
Multiple engraved and painted images of female sexual organs, animals and geometric figures discovered in southern France are believed to be the first known wall art.
Radiocarbon dating of the engravings, described in the latest Proceedings of the National Academy of Sciences, reveals that the art was created 37,000 years ago. This makes them slightly older than the world’s earliest known cave art, found in Chauvet Cave, southeastern France.
Since this site, Abri Castanet in southern France, is very close to Chauvet, it is likely that the artists in both cases came from what is known as the Aurignacian culture, which existed until about 28,000 years ago.
“Abri Castanet has long been recognized as one of the oldest sites in Eurasia with evidence for human symbolism in the form of hundreds of personal ornaments (such as) pierced animal teeth, pierced shells, ivory and soapstone beads, engravings and paintings on limestone slabs,” lead author Randall White told Discovery News.
White, a New York University anthropology professor, added that the artwork “is associated with members of some of the first modern human populations to leave Africa, dispersing into Eurasia, replacing the preceding Neanderthals.”
White and his international team analyzed the engravings, which were made with ochre on a 3,307-pound block of limestone found in a rock shelter occupied by a group of Aurignacian reindeer hunters. The researchers believe the limestone was once the shelter’s low ceiling, which later collapsed.
|
no
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://www.heritagedaily.com/2020/03/10-prehistoric-cave-paintings/126971
|
10 prehistoric cave paintings
|
10 prehistoric cave paintings
Cave paintings are a type of parietal art (a category that also includes petroglyphs, or engravings) found on the walls or ceilings of caves.
1 – Magura Cave
Magura Cave is located in the northwest of Bulgaria and contains a collection of cave paintings, made with bat excrement, that date from 8,000 to 4,000 years ago.
More than 700 paintings have been discovered in the large cave, depicting people dancing and hunting as well as a wide range of animals.
Image Credit : Vislupus – CC BY-SA 4.0
2 – Cueva de las Manos
Cueva de las Manos is located in Patagonia in the southern part of Argentina and contains cave paintings that were created between 13,000 and 9,000 years ago.
The cave’s name literally means ‘Cave of Hands’; it was given that name because of the hundreds of stenciled hands painted on the cave walls. The age of the paintings was calculated from the remains of bone-made pipes used for spraying the paint.
Image Credit : Mariano – CC BY-SA 3.0
3 – Bhimbetka Rock Shelters
Bhimbetka is a collection of rock shelters located in central India that contains over 600 paintings spanning the prehistoric Paleolithic and Mesolithic periods, the oldest of which date back at least 12,000 years.
The paintings depict the lives of the people who resided in the caves, as well as an array of animals that include tigers, lions, and crocodiles.
Image Credit : Bernard Gagnon – CC BY-SA 3.0
4 – Serra da Capivara
Serra da Capivara is a national park in Brazil which has the largest and the oldest concentration of prehistoric paintings in the Americas.
Rock shelters within the park were found to contain ancient paintings depicting animals and hunting, while stone tools found at the site date to as early as 22,000 years ago.
Image Credit : Vitor 1234 – CC BY-SA 3.0
5 – Laas Gaal
Laas Geel are cave formations on the rural outskirts of Hargeisa, Somaliland, situated in the Woqooyi Galbeed region of the country. They contain some of the earliest known cave paintings in the Horn of Africa.
The paintings are very well preserved and show images of cows in ceremonial robes, humans, domesticated dogs and giraffes. Laas Geel’s rock art is estimated to date to somewhere between 5,000 and 7,000 years ago.
Image Credit : Theodor Hoffsten – CC BY-SA 3.0
6 – Tadrart Acacus
Tadrart Acacus is a mountain range, located in the Sahara Desert of Western Libya that contains rock art dating from 14,000 years ago.
There are paintings and carvings of animals such as giraffes, elephants, ostriches and camels, but also of men and horses.
Image Credit : Roberto D’Angelo (roberdan) – Public Domain
7 – Chauvet Cave
The Chauvet-Pont-d’Arc Cave in the Ardèche department of southern France is a cave that contains some of the best-preserved figurative cave paintings in the world.
The dates have been a matter of dispute but a study published in 2012 supports placing the art in the Aurignacian period, approximately 32,000–30,000 years BP. Hundreds of animal paintings have been cataloged, depicting at least 13 different species, including some rarely or never found in other ice age paintings. Rather than depicting only the familiar herbivores that predominate in Paleolithic cave art (horses, aurochs, mammoths, etc.), the walls of the Chauvet Cave feature many predatory animals, e.g., cave lions, leopards, bears, and cave hyenas.
Image Credit : Claude Valette – CC BY-ND 2.0
8 – Ubirr
Ubirr is a group of rock outcrops in the Kakadu National Park, a protected area in the Northern Territory of Australia. There are several large rock overhangs that would have provided excellent shelter to Aboriginal people over thousands of years.
Some of the paintings are up to 20,000 years old and depict barramundi, catfish, mullet, goanna, snake-necked turtle, pig-nosed turtle, rock-haunting ringtail possum, wallaby, and thylacine (Tasmanian tiger).
Kakadu National Park contains a vast number of Aboriginal rock paintings; over 5,000 art sites have been discovered there. The artists painted not only the exterior of their subjects but also the skeletons of some animals.
Image Credit : Thomas Schoch – CC BY-SA 3.0
9 – Altamira Cave
The Cave of Altamira is located near the historic town of Santillana del Mar in Cantabria, Spain. It is renowned for prehistoric parietal cave art featuring charcoal drawings and polychrome paintings of contemporary local fauna and human hands.
The earliest paintings were applied during the Upper Paleolithic, around 36,000 years ago. The site was only discovered in 1868 by Modesto Cubillas.
Image Credit : Rameessos – Public Domain
10 – Lascaux Paintings
Lascaux is the setting of a complex of caves near the village of Montignac, in the department of Dordogne in southwestern France. Over 600 parietal wall paintings cover the interior walls and ceilings of the cave.
The paintings represent primarily large animals, typical local and contemporary fauna that correspond with the fossil record of the Upper Paleolithic. The drawings are the combined effort of many generations and, though debate continues, the age of the paintings is estimated at around 17,000 years (early Magdalenian).
|
They contain some of the earliest known cave paintings in the Horn of Africa.
The paintings are very well preserved and show images of cows in ceremonial robes, humans, domesticated dogs and giraffes. Laas Geel’s rock art is estimated to date to somewhere between 5,000 and 7,000 years ago.
Image Credit : Theodor Hoffsten – CC BY-SA 3.0
6 – Tadrart Acacus
Tadrart Acacus is a mountain range, located in the Sahara Desert of Western Libya that contains rock art dating from 14,000 years ago.
There are paintings and carvings of animals such as giraffes, elephants, ostriches and camels, but also of men and horses.
Image Credit : Roberto D’Angelo (roberdan) – Public Domain
7 – Chauvet Cave
The Chauvet-Pont-d’Arc Cave in the Ardèche department of southern France is a cave that contains some of the best-preserved figurative cave paintings in the world.
The dates have been a matter of dispute but a study published in 2012 supports placing the art in the Aurignacian period, approximately 32,000–30,000 years BP. Hundreds of animal paintings have been cataloged, depicting at least 13 different species, including some rarely or never found in other ice age paintings. Rather than depicting only the familiar herbivores that predominate in Paleolithic cave art (horses, aurochs, mammoths, etc.), the walls of the Chauvet Cave feature many predatory animals, e.g., cave lions, leopards, bears, and cave hyenas.
|
no
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://www.nytimes.com/2014/10/09/science/ancient-indonesian-find-may-rival-oldest-known-cave-art.html
|
Cave Paintings in Indonesia May Be Among the Oldest Known - The ...
|
Cave Paintings in Indonesia May Be Among the Oldest Known
There is nothing like a blank stone surface to inspire a widely shared urge to make art.
A team of researchers reported in the journal Nature on Wednesday that paintings of hands and animals in seven limestone caves on the Indonesian island of Sulawesi may be as old as the earliest European cave art.
The oldest cave painting known until now is a 40,800-year-old red disk from El Castillo, in northern Spain.
Other archaeologists of human origins said the new findings were spectacular and, in at least one sense, unexpected. Sulawesi’s cave art, first described in the 1950s, had previously been dismissed as no more than 10,000 years old.
“Assuming that the dates are good,” Nicholas Conard, an archaeologist at the University of Tübingen in Germany, said in an email, “this is good news, and the only surprising thing is not that analogous finds would exist elsewhere, but rather that it has been so hard to find them” until now.
Eric Delson, a paleoanthropologist at Lehman College of the City University of New York, agreed that the discovery “certainly makes sense.” Recent genetic findings, he said, “support an early deployment of modern humans eastward to Southeast Asia and Australasia, and so having art of a similar age is reasonable as well.”
The authors of the new study, a team from Australia and Indonesia, used a uranium decay technique to date the substance that encrusts the wall paintings — a mineral called calcite, created by water flowing through the limestone in the cave. The art beneath is presumably somewhat older than the crust.
Maxime Aubert and Adam Brumm, research fellows at Griffith University in Queensland, Australia, and the leaders of the study, examined 12 images of human hands and two figurative animal depictions at the cave sites.
The researchers said the earliest images, with a minimum age of 39,900 years, are the oldest known stenciled outlines of human hands in the world. Blowing or spraying pigment around a hand pressed against rock surfaces would become a common practice among cave artists down through the ages — and even some of the youngest schoolchildren to this day.
A painting of an animal known as a pig deer, of the species babirusa, was determined to be at least 35,400 years old. The team concluded that it was “among the earliest dated figurative depiction worldwide, if not the earliest one.”
The closest in age from Western Europe is a painting of a rhinoceros from Chauvet Cave in France, dated at 35,000 years old, although some archaeologists have questioned that estimate. The most familiar rock art in the region of Sulawesi was created by the Aborigines of Australia, modern humans who arrived there 50,000 years ago. But none of the surviving rock art is older than 30,000 years.
The Sulawesi dates challenge the long-held view about the origins of cave art in an explosion of human creativity centered on Western Europe about 40,000 years ago, Dr. Aubert said, in an announcement issued by Griffith University.
Instead, he said, the creative brilliance required to produce the lifelike portrayals of horses and other animals much later at famous sites like Chauvet and Lascaux in France could have particularly deep roots within the human lineage.
But it is too soon to assess the discovery’s deeper implications, Wil Roebroeks, a specialist in human origins studies at Leiden University in the Netherlands, wrote in a commentary accompanying the report. “Whether rock art was an integral part of the cultural repertoire of colonizing modern humans, from Western Europe to southeast and beyond, or whether such practices developed independently in various regions, is unknown,” he wrote.
“But what is clear,” Dr. Roebroeks continued, “is that no figurative art is known from before the time of the initial expansion of Homo sapiens into Asia and across Europe — neither from earlier H. sapiens in Africa nor from their contemporaries in western Eurasia, the Neanderthals.”
Dr. Conard, of Tübingen University, said he had long argued for what he calls polycentric mosaic modernity, in which similar kinds of cultural innovations happened in different contexts as modern Homo sapiens spread across the world and displaced archaic hominins.
“I have never thought that complex symbolic behavior has a single point source and that cultural evolution is like switching a light on,” he said. “One would expect different regions to have distinctive signatures and to contribute to the story in their own way.”
Dr. Delson, of CUNY, said he tended “to prefer the idea that art came as part of the ‘baggage’ of Homo sapiens as they spread into Eurasia, mainly as we know that so many of the cultural features once thought to have developed in western Eurasia in fact occurred far earlier in Africa.”
He cited the examples of early use of pigments and engravings in Africa, as well as bodily adornment with shells and advanced stoneworking technology.
In their report, Dr. Aubert and Dr. Brumm took no sides in the debate. “It is possible that rock art emerged independently around the same time and at roughly both ends of the spatial distribution of early modern humans,” they concluded. “An alternate scenario, however, is that cave painting was widely practiced by the first H. sapiens to leave Africa tens of thousands of years earlier.”
If that is the case, the Australian-Indonesian research team predicted, “We can expect future discoveries of depictions of human hands, figurative art and other forms of image-making dating to the earliest period of the global dispersal of our species.”
|
A painting of an animal known as a pig deer, of the species babirusa, was determined to be at least 35,400 years old. The team concluded that it was “among the earliest dated figurative depiction worldwide, if not the earliest one.”
The closest in age from Western Europe is a painting of a rhinoceros from Chauvet Cave in France, dated at 35,000 years old, although some archaeologists have questioned that estimate. The most familiar rock art in the region of Sulawesi was created by the Aborigines of Australia, modern humans who arrived there 50,000 years ago. But none of the surviving rock art is older than 30,000 years.
The Sulawesi dates challenge the long-held view about the origins of cave art in an explosion of human creativity centered on Western Europe about 40,000 years ago, Dr. Aubert said, in an announcement issued by Griffith University.
Instead, he said, the creative brilliance required to produce the lifelike portrayals of horses and other animals much later at famous sites like Chauvet and Lascaux in France could have particularly deep roots within the human lineage.
But it is too soon to assess the discovery’s deeper implications, Wil Roebroeks, a specialist in human origins studies at Leiden University in the Netherlands, wrote in a commentary accompanying the report. “Whether rock art was an integral part of the cultural repertoire of colonizing modern humans, from Western Europe to southeast and beyond, or whether such practices developed independently in various regions, is unknown,” he wrote.
|
no
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://www.britannica.com/art/cave-art
|
Cave art | Definition, Characteristics, Images, & Facts | Britannica
|
The first painted cave acknowledged as being Paleolithic, meaning from the Stone Age, was Altamira in Spain. The art discovered there was deemed by experts to be the work of modern humans (Homo sapiens). Most examples of cave art have been found in France and in Spain, but a few are also known in Portugal, England, Italy, Romania, Germany, Russia, and Indonesia. The total number of known decorated sites is about 400.
Most cave art consists of paintings made with either red or black pigment. The reds were made with iron oxides (hematite), whereas manganese dioxide and charcoal were used for the blacks. Sculptures have been discovered as well, such as the clay statues of bison in the Tuc d’Audoubert cave in 1912 and a statue of a bear in the Montespan cave in 1923, both located in the French Pyrenees. Carved walls were discovered in the shelters of Roc-aux-Sorciers (1950) in Vienne and of Cap Blanc (1909) in Dordogne. Engravings were made with fingers on soft walls or with flint tools on hard surfaces in a number of other caves and shelters.
Representations in caves, painted or otherwise, include few humans, but sometimes human heads or genitalia appear in isolation. Hand stencils and handprints are characteristic of the earlier periods, as in the Gargas cave in the French Pyrenees. Animal figures always constitute the majority of images in caves from all periods. During the earliest millennia when cave art was first being made, the species most often represented, as in the Chauvet–Pont-d’Arc cave in France, were the most-formidable ones, now long extinct—cave lions, mammoths, woolly rhinoceroses, cave bears. Later on, horses, bison, aurochs, cervids, and ibex became prevalent, as in the Lascaux and Niaux caves. Birds and fish were rarely depicted. Geometric signs are always numerous, though the specific types vary based on the time period in which the cave was painted and the cave’s location.
Cave art is generally considered to have a symbolic or religious function, sometimes both. The exact meanings of the images remain unknown, but some experts think they may have been created within the framework of shamanic beliefs and practices. One such practice involved going into a deep cave for a ceremony during which a shaman would enter a trance state and send his or her soul into the otherworld to make contact with the spirits and try to obtain their benevolence.
Examples of paintings and engravings in deep caves—i.e., existing completely in the dark—are rare outside Europe, but they do exist in the Americas (e.g., the Maya caves in Mexico, the so-called mud-glyph caves in the southeastern United States), in Australia (Koonalda Cave, South Australia), and in Asia (the Kalimantan caves in Borneo, Indonesia, with many hand stencils). Art in the open, on shelters or on rocks, is extremely abundant all over the world and generally belongs to much later times.
|
The first painted cave acknowledged as being Paleolithic, meaning from the Stone Age, was Altamira in Spain. The art discovered there was deemed by experts to be the work of modern humans (Homo sapiens). Most examples of cave art have been found in France and in Spain, but a few are also known in Portugal, England, Italy, Romania, Germany, Russia, and Indonesia. The total number of known decorated sites is about 400.
Most cave art consists of paintings made with either red or black pigment. The reds were made with iron oxides (hematite), whereas manganese dioxide and charcoal were used for the blacks. Sculptures have been discovered as well, such as the clay statues of bison in the Tuc d’Audoubert cave in 1912 and a statue of a bear in the Montespan cave in 1923, both located in the French Pyrenees. Carved walls were discovered in the shelters of Roc-aux-Sorciers (1950) in Vienne and of Cap Blanc (1909) in Dordogne. Engravings were made with fingers on soft walls or with flint tools on hard surfaces in a number of other caves and shelters.
Representations in caves, painted or otherwise, include few humans, but sometimes human heads or genitalia appear in isolation. Hand stencils and handprints are characteristic of the earlier periods, as in the Gargas cave in the French Pyrenees. Animal figures always constitute the majority of images in caves from all periods. During the earliest millennia when cave art was first being made, the species most often represented, as in the Chauvet–Pont-d’Arc cave in France, were the most-formidable ones, now long extinct—cave lions, mammoths, woolly rhinoceroses, cave bears. Later on, horses, bison, aurochs, cervids, and ibex became prevalent, as in the Lascaux and Niaux caves. Birds and fish were rarely depicted.
|
no
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
yes_statement
|
"chauvet" "cave" was the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were found in "chauvet" "cave".
|
https://www.thecollector.com/most-important-cave-paintings-in-the-world/
|
The 7 Most Important Prehistoric Cave Paintings in the World
|
The 7 Most Important Prehistoric Cave Paintings in the World
From its earliest rediscoveries in 19th-century Europe to a game-changing find in 21st-century Indonesia, prehistoric rock art (paintings and carvings on permanent rock locations like caves, boulders, cliff faces, and rock shelters) is some of the world’s most fascinating artwork. It represents the earliest surviving evidence of the artistic instinct in early humanity and has been found on nearly every continent.
Despite differing from place to place — we should not assume that all prehistoric cultures were identical — rock art often features stylized animals and humans, handprints, and geometric symbols engraved into the rock or painted in natural pigments like ochre and charcoal. Without the assistance of historical records for these early, pre-literate societies, understanding rock art is a great challenge. However, hunting magic, shamanism, and spiritual/religious rituals are the most commonly proposed interpretations. Here are seven of the most fascinating cave paintings and rock art sites from around the world.
1. The Altamira Cave Paintings, Spain
One of the great bison paintings in Altamira, Spain, photo from the Museo de Altamira y D. Rodríguez, via Wikimedia Commons
The rock art at Altamira, Spain, was the first in the world to be recognized as prehistoric artwork, but it took years for that fact to become a consensus. Altamira’s first explorers were amateur archaeologists, including the Spanish nobleman Marcelino Sanz de Sautuola and his daughter Maria. In fact, it was 12-year-old Maria who looked up at the cave’s ceiling and discovered a series of large and lively bison paintings.
Many other lifelike animal paintings and engravings were subsequently found. Don Sautuola had vision enough to correctly connect these grand and sophisticated cave paintings with small-scale prehistoric objects (the only prehistoric art known at that time). However, the experts didn’t initially agree. Archaeology was a very new field of study at the time and had not yet gotten to the point where prehistoric humans were considered capable of making any kind of sophisticated art. It wasn’t until similar sites started being discovered later in the 19th century, primarily in France, that experts finally accepted Altamira as a genuine artifact of the Ice Age.
2. Lascaux, France
Lascaux Caves, France, via travelrealfrance.com
Discovered in 1940 by some kids and their dog, the Lascaux caves represented the motherlode of European rock art for many decades. French priest and amateur prehistorian Abbé Henri Breuil termed it “the Sistine Chapel of Prehistory”. Despite being surpassed by the 1994 discovery of Chauvet cave (also in France), with its stunning animal depictions dated to more than 30,000 years ago, the rock art at Lascaux is still probably the most famous in the world. It owes that status to its vivid representations of animals like horses, bison, mammoths, and deer.
Clear, graceful, and forcefully expressive, they often appear on a monumental scale, especially in Lascaux’s well-known Hall of Bulls. Each one almost seems capable of movement, a sense probably enhanced by their position on undulating cave walls. Clearly, these prehistoric painters were masters of their art form. Their impact comes across even through virtual tours of the reproduced caves. There is also a mysterious human-animal hybrid figure, sometimes called “bird man”. His connotations remain elusive but may relate to religious beliefs, rituals, or shamanism.
Unlike Altamira, the Lascaux caves got positive public attention from the very beginning, despite being discovered in the middle of World War Two. Unfortunately, several decades of heavy visitor traffic endangered the paintings, which survived for so many millennia by being protected from human and environmental factors inside the caves. That’s why, like many other popular rock art sites, the Lascaux caves are now closed to visitors for their own protection. However, high-quality replicas on the site admit tourists.
3. The Apollo 11 Cave Stones, Namibia
One of the Apollo 11 stones, photo by the State Museum of Namibia via Timetoast.com
Rock art abounds in Africa, with at least 100,000 sites discovered from prehistory through to the 19th century, but it has thus far been badly under-studied. Despite this, there have been some great finds which is unsurprising when you consider that Africa is thought to be the origin of all humanity. One such find is the Apollo 11 cave stones, found in Namibia. (Don’t get any funny ideas, the Apollo 11 stones did not come from outer space. They got that name because their initial discovery coincided with the Apollo 11 launch in 1969.) These paintings are on a set of granite slabs detached from any permanent rock surface. There are seven small slabs in total, and together they represent six animals drawn in charcoal, ochre, and white pigment. There is a zebra and rhino alongside an unidentified quadruped in two pieces and three more stones with faint and indeterminate imagery. They have been dated to about 25,000 years ago.
Other key African finds include the Blombos Cave and the Drakensburg rock art sites, both in South Africa. Blombos does not have any surviving rock art but it has preserved evidence of paint and pigment making — an early artist’s workshop — dating as far back as 100,000 years ago. Meanwhile, the Drakensburg site contains countless human and animal images made by the San peoples over thousands of years until they were forced to abandon their ancestral lands relatively recently. Projects like the Trust for African Rock Art and the African Rock Art Image Project at the British Museum are now working to record and preserve these ancient sites.
4. Kakadu National Park and Other Rock Art Sites, Australia
Some of the Gwion Gwion rock art paintings, in the Kimberley region of Australia, via the Smithsonian
Humans have lived in the area that is now Kakadu National Park, in the Arnhem Land region of Australia’s northern coast, for about 60,000 years. The surviving rock art there is 25,000 years old at most; the last painting before the area became a national park was made in 1972 by an aboriginal artist named Nayombolmi. There have been different styles and subjects in different periods, but the paintings often employ a mode of representation that has been termed the “X-Ray Style”, in which both external features (such as scales and face) and internal ones (like bones and organs) appear on the same figures.
With such an incredibly long history of art, Kakadu presents some fantastic evidence for a millennia of climate change in the area — animals now extinct in the area appear in the paintings. A similar phenomenon has been observed in places like the Sahara, where plants and animals in rock art are relics of a time when the area was lush and green, and not a desert at all.
Rock art is particularly plentiful in Australia; one estimate suggests 150,000-250,000 possible sites across the country, especially in the Kimberley and Arnhem Land regions. It remains a significant component of Indigenous religion today, especially as it relates to the essential Aboriginal concept known as “the Dreaming”. These ancient paintings continue to have great spiritual power and significance for modern Indigenous peoples.
5. The Lower Pecos Rock Art in Texas and Mexico
Paintings at the White Shaman Preserve in Texas, photo by runarut via Flickr
Despite being quite young by prehistoric standards (the oldest examples are four thousand years old), the cave paintings of the Lower Pecos Canyonlands on the Texas-Mexico border have all the elements of the best cave art anywhere in the world. Of particular interest are the many “anthropomorph” figures, a term researchers have given to the heavily stylized human-like forms that appear throughout the Pecos caves. Appearing with elaborate headdresses, atlatls, and other attributes, these anthropomorphs are believed to depict shamans, possibly recording events from shamanic trances.
Animals and geometric symbols appear as well, and their imagery has been tentatively linked to myths and customs from the native cultures of the surrounding areas, including rituals involving hallucinogenic Peyote and Mescal. However, there is no definitive evidence that the cave painters, termed the Peoples of the Pecos, subscribed to the same beliefs as later groups, as links between the rock art and current indigenous traditions are not as strong here as those sometimes found in Australia.
6. Cueva de las Manos, Argentina
Handprints or reverse handprints (bare rock hand silhouettes surrounded by a cloud of colored paint dispersed via blowpipes) are a common feature of cave art, found in a multitude of locations and time periods. They often appear alongside other animal or geometric imagery around the world. However, one site is especially famous for them: Cueva de las Manos (the Cave of Hands) in Patagonia, Argentina, which contains about 830 handprints and reverse handprints together with representations of people, llamas, hunting scenes, and more in a cave within a dramatic canyon setting.
The paintings have been dated as far back as 9,000 years ago. Images of the Cueva de las Manos, with colorful handprints covering every surface, are dynamic, fascinating, and rather moving. Calling to mind a horde of excited schoolchildren all raising their hands, these shadows of ancient human gestures seem to bring us even closer to our prehistoric ancestors than other examples of painted or engraved rock art elsewhere.
In 2014, it was discovered that rock art paintings in the Maros-Pangkep caves on the Indonesian island of Sulawesi date to between 40,000 and 45,000 years ago. Depicting animal forms and handprints, these paintings have become contenders for the title of oldest cave paintings anywhere.
In 2018, human and animal paintings of roughly the same age were found in Borneo, and in 2021, a painting of a native Indonesian warty pig in the Leang Tedongnge cave, again in Sulawesi, came to light. It is now considered by some to be the oldest known representational painting in the world. These 21st-century finds have been the first to make scholars take seriously the possibility that humanity’s first art was not necessarily born in the caves of western Europe.
By Alexandra Kiely, BA Art History (with honors). Alexandra is an art historian and writer from New Jersey. She holds a B.A. in Art History from Drew University, where she received the Stanley Prescott Hooper Memorial Prize in Art History. She wrote her honors thesis on the life and work of early-20th-century art theorist Roger Fry. Her primary interests are American art, particularly 19th-century painting, and medieval European art and architecture. She runs her own website, A Scholarly Skater, is a regular contributor to DailyArt Magazine, and has written two online courses. Alexandra enjoys reading, ballroom dancing, and figure skating.
|
1. The Altamira Cave Paintings, Spain
One of the great bison paintings in Altamira, Spain, photo from the Museo de Altamira y D. Rodríguez, via Wikimedia Commons
The rock art at Altamira, Spain was the first in the world to be recognized as prehistoric artwork, but it took years for that fact to become a consensus. Altamira’s first explorers were amateur archaeologists, including a Spanish nobleman, Marcelino Sanz de Sautuola, and his daughter Maria. In fact, it was 12-year-old Maria who looked up at the cave’s ceiling and discovered a series of large and lively bison paintings.
Many other lifelike animal paintings and engravings were subsequently found. Don Sautuola had vision enough to correctly connect these grand and sophisticated cave paintings with small-scale prehistoric objects (the only prehistoric art known at that time). However, the experts didn’t initially agree. Archaeology was a very new field of study at the time and had not yet gotten to the point where prehistoric humans were considered capable of making any kind of sophisticated art. It wasn’t until similar sites started being discovered later in the 19th century, primarily in France, that experts finally accepted Altamira as a genuine artifact of the Ice Age.
2. Lascaux, France
Lascaux Caves, France, via travelrealfrance.com
Discovered in 1940 by some kids and their dog, the Lascaux caves represented the motherlode of European rock art for many decades. French priest and amateur prehistorian Abbé Henri Breuil termed it “the Sistine Chapel of Prehistory”. Despite being surpassed by the 1994 discovery of Chauvet cave (also in France), with its stunning animal depictions dated to more than 30,000 years ago, the rock art at Lascaux is still probably the most famous in the world. It owes that status to its vivid representations of animals like horses,
|
yes
|
Spelaeology
|
Was Chauvet Cave the site of the earliest known cave paintings?
|
no_statement
|
"chauvet" "cave" was not the "site" of the "earliest" "known" "cave" "paintings".. the "earliest" "known" "cave" "paintings" were not found in "chauvet" "cave".
|
https://www.theguardian.com/world/2011/mar/17/werner-herzog-cave-of-forgotten-dreams
|
Herzog's Cave of Forgotten Dreams: the real art underground ...
|
Herzog's Cave of Forgotten Dreams: the real art underground
Once he put his actors through hell, but now the German master Werner Herzog has travelled back in time for what might be his most moving film
Simon McBurney
Thu 17 Mar 2011 19.00 EDT
On 13 December 1994, on a cliff face in the Ardèche gorge in the south of France, three speleologists first felt a slight draught of air coming from the rocks. They pulled them away and crawled into a space barely wide enough for the human body. Descending a steep shaft, they found themselves in a vast underground cavern of astonishing beauty.
But nothing prepared them for what they saw next. As they advanced into the 400m-long chamber, one of the three, Éliette Brunel, suddenly let out a cry. She said later: "Our light flashed on to a mammoth, then a bear, then a lion with a semi-circle of dots which seemed to emerge from its muzzle like drops of blood, a rhinoceros … We saw human hands, both positive and negative impressions. And a frieze of other animals 30ft long."
Haunted since the day its discovery was projected all over the world in 1994, I, like many others, have always wanted to see inside the Chauvet cave, site of the world's earliest known cave art. Quite rightly, we will never go. It is closed to the public.
Cut to 1976, standing on a cliff at the end of the day on the windswept southern coast of Jersey where the water stretched all the way to St Malo. My father flings out his arm. He tells me that, during the last ice age, this sea was a plain – a savannah-type landscape, albeit a cold one – that stretched from England to France. We were there because he was a prehistorian, and every year of my childhood and adolescence through the 1960s and 70s, we spent weeks on an excavation of a Neanderthal site dating from 120,000 years ago. Across this landscape, he said, roamed bison, mammoth, rhinoceros, lions, horses, deer, bears and countless other species, including the aurochs – giant oxen 2m tall. (The last recorded aurochs only died in 1627 in the Jaktorów Forest in Poland.) You would, my father told me, be looking over a landscape teeming with animals. It was man who was in the minority. It was an animal world.
Maybe it was on these excavations, surrounded by hair-growing, non-washing, chain-smoking 1970s students that I also first heard of Werner Herzog, the radical German film-maker. His evocation of Lope de Aguirre – the 16th-century conquistador terrifyingly incarnated by Klaus Kinski in Aguirre, Wrath of God – drew me, aged 16, into his world. I have followed his films ever since. Here, the borders of fiction and documentary are constantly blurred in an excavation of human excess, endurance and ingenuity. They also have an eccentric, wry humour. Like Herzog himself. I have watched him on YouTube eat his shoe and be shot at while being interviewed. His stubborn, unflinching vision leads him into places most of us will never go. "The poet must not avert his eyes," he once said. "You must look directly at what is around you, even the ugly and … the decadent."
And now, somehow, he has talked his way into filming inside the Chauvet cave. In 3D. With the result that we are able to penetrate where we can never go. With rough humour and touching observations, he guides us; not as an expert, but as we ourselves might look. The result is astonishingly moving.
Herzog leads us into the cave, and we go with him. As the cinema usher said to me on the way out: "This is the first time 3D has made sense to me. It always seemed a gimmick before." She is right. We feel the texture of the rock. Stalagmites and stalactites loom out of the darkness and pass us as we crawl along, their wet shape and colour reminding us of the human body. When we stand, we are in that immense chamber. We are really there. This is not an effect. It is an event. We are in a sacred place. And we feel it.
Jean Clottes, one of the first to authenticate the paintings in 1994, instructs us. "The original entrance was there, and here it was light. As you will see, there are no paintings. They were not painted in the light. They were painted in the dark."
We go further into the dark. Traces are engraved in soft clay. You can almost touch them. A bear has scratched with his claws. Then the next marks are human. Drawn with a stick. And over all this, a finger, tracing the outline of a horse. Every movement of the finger – the speed, the hesitation, the deliberation – is there to see in the single ridged line in the clay. It is astonishing. The horse is really alive. As John Berger wrote in the Guardian in 2002: "Art, it would seem, is born like a foal that can walk straight away … The talent to make art accompanies the need for that art; they arrive together."
Deeper still. And closer. Hands appear dipped in red ochre and planted on the rock. One of the artists had a deformed finger. From the angle that he applied his painted hand to the wall, we even know he was at least 6ft tall.
My father's hands were also deformed, and he too was 6ft tall. He died in 1979. I look in his stead.
The camera swings round. Herzog is sweating. A bead runs down his face. On the floor is a bear skull, next to it a boy's footprint. But this cave belonged to the bears before the humans came, and was returned to them after humans' temporary visit. And the drawings of the animals tell us that: this is the animals' place. But even when evoking their ferociousness, these drawings are not about danger, but familiarity. There is an intimacy with the animals. And an intimacy with the rock. The shoulder of a bison is made rounded by a bulge of limestone. A lion's pelvis follows a convex curve in the rock. Everything is preserved by the thinnest layer of calcite from the water oozing out of limestone over the millennia. Even charcoal pieces that dropped from the artists' hands on to the cave floor are here; still in the place where they fell, 32,000 years ago.
Jean Clottes turns to the film-makers. "Silence please. Please listen to the cave. You may even be able to hear your heartbeat." The visitors stand in silence and awe. (Herzog, perhaps not forgetting the audience in his adopted country, the US, adds music and a heartbeat, but even this does not entirely remove the wonder.) In the cinema, we too hold our breath.
We cut to Julien Monney – a young French archaeologist who startles us with the words: "I am a scientist but also a human being." He confesses he was a circus performer before he became a prehistorian. Working with lions, asks Herzog. No, no, a juggler and unicyclist, Monney replies, before describing, in most unscientific terms, not his line of research but his dreams after his first visit to the cave. I am struck by a tinge of regret in his voice. As if in recognition of something missing for him when he woke. The animals, perhaps?
The endless investigations of the scientists unfold in a delightfully haphazard fashion. An experimental archaeologist, dressed in caribou skins, demonstrates a bone flute, similar to one found in southern Germany of the same antiquity as Chauvet. Ludicrously he plays The Stars and Stripes. Another inexpertly demonstrates a spear throw. A retired "perfumier" suddenly appears. He sniffs the ground, telling us that he is searching for more possible caves with his nose.
Back in the cave, we stand before the great panel of horses. Their muzzles are soft. You can feel the velvet lips. They're not being chased. Unlike the drawings in Lascaux and Altamira 15,000 years later, there are no depictions of hunting anywhere. The legs are doubled and trebled, making them move. "Like proto-cinema," Herzog mutters. And he reinforces what we already know. This is no gallery. This is a place where the animals are alive.
Why have these artists not drawn themselves? There is only a single human figure in the cave: a woman's pudenda and legs. With a bison head overlooking and entwined with her. Are the legs hers or the bison's? There is no separation. The people are in the animals. And the animals are so alive. We can even, it is suggested, hear the sound of a horse's neigh from its open mouth. This horse is neither a symbol, nor a stylisation. It is depicted as we would see a horse now. Here. Today. The observation and imagination of these artists were the same as ours. We feel closer to the drawings on the walls of Chauvet than the painting of, say, an Egyptian mural. These artists are not remote ancestors; they are brothers. They saw like us, they drew like us; we wear essentially the same clothes against the cold.
But despite their proximity, there is something fundamental that cuts us off from them. The time they lived in connected everything. They lived in an enormous present, which also contained past and future. A present in which nature was not only contiguous with them, but continuous. They flowed in and out of a continuum of everything around them; just as the animals flow into and out of the rock. And if the rock was alive, so were the animals. Everything was alive. And perhaps this is what truly separates us: not the space of time, but the sense of time. In our minute splicing of our lives into milliseconds, we live separated from everything that surrounds us. Do you know who made your clothes, or even what they are made of? "We are locked in history; they were not," says Herzog.
And then he wilfully changes direction, suddenly filming crocodiles living in a glasshouse heated by the nuclear power station a few miles downriver from the Chauvet cave. The radioactivity of the water has caused a mutation in the offspring of the crocodiles. They are albino. We leave with this startling image of our deforming modernity, and blink our way into the light.
As we leave the cinema, I remember that on my father's study wall was a section marking the stratigraphy of earth in a trench he dug in a cave in Wales. "Look, time is vertical," he would say, pointing to the strata of clay and loess. We live in horizontal time. In the 1650s, just after the death of the last aurochs in the forest in Poland, Blaise Pascal observed in his Pensées: "We never keep to the present. We anticipate the future as if we found it too slow in coming and were trying to hurry it up, or we recall the past as if to stay its too rapid flight. We are so unwise that we wander about in times that are not ours and blindly flee the only one that is. The fact is that the present usually hurts."
|
Herzog's Cave of Forgotten Dreams: the real art underground
Once he put his actors through hell, but now the German master Werner Herzog has travelled back in time for what might be his most moving film
Simon McBurney
Thu 17 Mar 2011 19.00 EDT
On 13 December 1994, on a cliff face in the Ardèche gorge in the south of France, three speleologists first felt a slight draught of air coming from the rocks. They pulled them away and crawled into a space barely wide enough for the human body. Descending a steep shaft, they found themselves in a vast underground cavern of astonishing beauty.
But nothing prepared them for what they saw next. As they advanced into the 400m-long chamber, one of the three, Éliette Brunel, suddenly let out a cry. She said later: "Our light flashed on to a mammoth, then a bear, then a lion with a semi-circle of dots which seemed to emerge from its muzzle like drops of blood, a rhinoceros … We saw human hands, both positive and negative impressions. And a frieze of other animals 30ft long."
Haunted since the day its discovery was projected all over the world in 1994, I, like many others, have always wanted to see inside the Chauvet cave, site of the world's earliest known cave art. Quite rightly, we will never go. It is closed to the public.
Cut to 1976, standing on a cliff at the end of the day on the windswept southern coast of Jersey where the water stretched all the way to St Malo. My father flings out his arm. He tells me that, during the last ice age, this sea was a plain – a savannah-type landscape, albeit a cold one – that stretched from England to France.
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://www.almanac.com/content/when-mardi-gras
|
When Is Mardi Gras 2024? | Mardi Gras History & Traditions | The ...
|
Mardi Gras 2024 (Fat Tuesday): Why Do We Celebrate Mardi Gras?
Learn the History Behind this Traditional Feast Day
Mardi Gras—also known as Shrove Tuesday—is Tuesday, February 13, 2024! Do you know the meaning of Mardi Gras and why it’s celebrated? Learn about this fascinating holiday from its origins as a spring fertility rite to the masked balls of medieval Italy to today’s Carnival festivities.
When Is Mardi Gras?
Mardi Gras, also called Shrove Tuesday, takes place annually on the Tuesday before Ash Wednesday—the beginning of the Christian observance of Lent, which lasts about six weeks and ends just before Easter. This means that Mardi Gras is a moveable holiday that can take place in either February or March.
What Is Mardi Gras?
Mardi Gras is the day before Ash Wednesday, when the Christian season of Lent begins. This day is also called Shrove Tuesday, a name that comes from the practice of “shriving”—purifying oneself through confession—before Lent. For many Christians, Shrove Tuesday is a time to receive penance and absolution.
You’ll sometimes hear Mardi Gras referred to as “Carnival.” Technically, this term refers to the period of feasting that begins on January 6 (the Feast of the Epiphany) and ends on Mardi Gras. In cities such as New Orleans (U.S.), Rio de Janeiro (Brazil), and Venice (Italy), there are week-long festivals leading up to Mardi Gras.
What does Mardi Gras Mean?
In French, Mardi Gras means “Fat Tuesday.” (Mardi is the word for Tuesday, and gras is the word for fat.)
This name comes from the tradition of using up the eggs, milk, and fat in one’s pantry because they were forbidden during the 40-day Lenten fast, which begins the next day (Ash Wednesday) and ends on Holy Thursday (three days before Easter Sunday).
Therefore, a big part of Shrove Tuesday is eating an abundance of delicious fried food—especially donuts and Shrove Tuesday Pancakes!
The word “carnival” also comes from this feasting tradition: in Medieval Latin, carnelevarium means “to take away or remove meat”, from the Latin carnem for meat. Catholics traditionally gave up meat during the Lenten season and mainly ate fish.
Pancake Tuesday
In England, where the day is also known as Pancake Tuesday, festivities include flapjack-related activities. The pancake race held by women in Olney, Buckinghamshire, dates back to 1445. Legend says that the idea started when a woman cooking pancakes lost track of the time. When she heard the church bells ring, she rushed out the door to attend the shriving service while still wearing her apron and holding a skillet containing a pancake.
Other cultures also cook up rich treats and fried foods.
Among the Pennsylvania Dutch, the Tuesday is called Faasenacht (also spelled Fastnacht), meaning “fast night.” Everyone enjoys the traditional Fasnacht pastry, a rectangular doughnut with a slit in the middle.
In Polish communities, the Tuesday is called “Pączki Day,” after the puffy, jelly-filled doughnuts traditionally enjoyed.
In Louisiana, the favorite treat is the beignet, a pillowy fried dough concoction. (See below!)
Beignets covered in powdered sugar
Short History of Mardi Gras
According to Laurie Wilkie, an archaeologist at the University of California at Berkeley, Mardi Gras “Carnival” celebrations started before Christianity as a pagan fertility festival. Some scholars believe it may have been linked to the ancient Roman pagan feast, Saturnalia, which honored the god of agriculture, Saturn. Other research suggests that there is no connection and the customs may come from much older Indo-European spring lore—perhaps the folklore of the Germanic and Slavic races rather than from Greece or Rome.
In any event, once Christianity arrived, Roman pagan celebrations were absorbed into the religious calendar. The carnival practices in Rome continued within the framework of the Church. The masked balls of Venice were especially renowned in Renaissance Italy and spread to France and England. In France, they were called les bals des Rois for the kings who presided over the masked merrymaking. Whoever found a coin or a bean in a piece of special “king cake” (named for the Three Kings of the nativity) was named king for the night.
In 1699, French-Canadian explorer Pierre Le Moyne, Sieur d’Iberville (accompanied by his younger brother, Jean Baptiste Le Moyne, Sieur de Bienville), arrived in the New World about 60 miles directly south of New Orleans; he named this place ”Pointe du Mardi Gras” as it was the very eve of the holiday. The expedition also established “Fort Louis de la Louisiane” (the forerunner of present-day Mobile, Alabama) in 1702. While New Orleans may be most known for Mardi Gras in the U.S. today, the tiny settlement of Fort Louis de la Mobile celebrated America’s very first Mardi Gras in 1703.
Mardi Gras was celebrated in New Orleans soon after the city’s founding in 1718. The first recorded Mardi Gras street parade in New Orleans took place in 1837. Now a major metropolis, New Orleans is the city most known for its extravagant celebrations with parades, dazzling floats, masked balls, cakes, and drinks.
I think that I may say that an American has not seen the United States until he has seen Mardi Gras in New Orleans. –Mark Twain, American writer (1835–1910)
Mardi Gras Traditions
Masks The masks are one of the most popular Mardi Gras traditions. It’s thought that masks during Mardi Gras allowed wearers to escape society and class constraints to mingle however they wished.
Parades The parades are organized by prestigious New Orleans social clubs, or Krewes (pronounced “crews”). Each Krewe has its own royal court and hosts parties and masked balls during Carnival Season, leading up to the parade.
Beads or Throws Krewe members on floats throw beads and trinkets to the parade-goers; it’s a tradition that goes back to the early 1870s. The beads seem to be a nod to a king throwing gems to his loyal subjects as he passes by on his carriage.
Purple, green, and gold The colors of Mardi Gras were selected by the Krewe of Rex in 1872. Purple represents justice, green represents faith, and gold represents power.
King Cake Only eaten during Mardi Gras, King cakes are a cross between a French pastry and a coffee cake, topped with icing and sugar in the Mardi Gras colors. They can be served on Three King’s Day (January 6) through the end of Mardi Gras. A small baby (representing Jesus) is hidden in the cake. Tradition says whoever gets the king cake piece containing the baby is supposed to provide the king cake for the next gathering.
Related Content
In the spirit of New Orleans, try cooking up some great Cajun food for Mardi Gras, such as this soul-warming Jambalaya.
Discover more about the history and traditions of this holiday on the City of New Orleans’ Mardi Gras Website.
|
Mardi Gras 2024 (Fat Tuesday): Why Do We Celebrate Mardi Gras?
Learn the History Behind this Traditional Feast Day
Mardi Gras—also known as Shrove Tuesday—is Tuesday, February 13, 2024! Do you know the meaning of Mardi Gras and why it’s celebrated? Learn about this fascinating holiday from its origins as a spring fertility rite to the masked balls of medieval Italy to today’s Carnival festivities.
When Is Mardi Gras?
Mardi Gras, also called Shrove Tuesday, takes place annually on the Tuesday before Ash Wednesday—the beginning of the Christian observance of Lent, which lasts about six weeks and ends just before Easter. This means that Mardi Gras is a moveable holiday that can take place in either February or March.
What Is Mardi Gras?
Mardi Gras is the day before Ash Wednesday, when the Christian season of Lent begins. This day is also called Shrove Tuesday, a name that comes from the practice of “shriving”—purifying oneself through confession—before Lent. For many Christians, Shrove Tuesday is a time to receive penance and absolution.
You’ll sometimes hear Mardi Gras referred to as “Carnival.” Technically, this term refers to the period of feasting that begins on January 6 (the Feast of the Epiphany) and ends on Mardi Gras. In cities such as New Orleans (U.S.), Rio de Janeiro (Brazil), and Venice (Italy), there are week-long festivals leading up to Mardi Gras.
What does Mardi Gras Mean?
In French, Mardi Gras means “Fat Tuesday.” (Mardi is the word for Tuesday, and gras is the word for fat.)
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://mardigrastraditions.com/mardi_gras_history/
|
Mardi Gras History and Traditions | MardiGrasTraditions.com
|
In exploring Mardi Gras history, there’s no tidy way to connect the dots between ancient festive customs and the modern pre-Lenten revels that occur in a myriad of guises around the world. Certainly, religious rituals associated with the mythic god Dionysus helped to get the party rolling, and over time the old pagan habits were subsumed into Judeo-Christian tradition and transplanted from Europe during the colonial era. Now mostly a secularized holiday, Mardi Gras in New Orleans has evolved from a celebration for locals into an iconic, internationally recognized spectacle.
Dionysian roots
“Journey of Dionysius” float depicting the Greek god in the 2015 Krewe of Endymion parade
As the mythic personification of the ecstatic experience, of the propensity to seek delight in the here and now, he is, in effect, the unofficial patron god of Carnival.
There is no pinpointing the origins of the celebration known today as Carnival or Mardi Gras. Indeed, because its most elemental characteristics — drinking and feasting, dancing and music, masks and costumes — extend back into the mists of time, there’s no tidy way to connect the dots between prehistoric cave paintings of dancing stick-like figures wearing animal masks and the modern pre-Lenten revels that occur in a myriad of guises in places as far-flung as New Orleans, Rio de Janeiro, Venice and Port of Spain.
What can be discerned is a pastiche of customs and influences, for Carnival is nothing if not a melting pot — a constantly evolving accretion of ingredients that blend together within a framework of established conventions.
Certainly there are traces of ancient rites tied to the observance of the winter solstice. The Roman festival of Saturnalia, commemorating the death and rebirth of nature, was held in December (in honor of Saturn, the god of agriculture and civilization) and presided over by a mock king. The chance manner of his choosing — by throwing dice, drawing a lot, or discovering a fava bean or coin in a piece of cake — related to the mythology surrounding Saturn, whose reign was believed to be so just that there were no slaves or private property. Thus it was decreed during Saturnalia that all should be given equal rights, and indeed even a slave could rule. This ritual of inversion — whereby the usual hierarchies are temporarily suspended — is quintessentially carnivalesque, as is the concept of having a make-believe ruler preside over revels.
Even more fundamental to Carnival’s DNA was the ecstatic worship of Dionysus or, as he was known to the Romans, Bacchus. The Greek god of wine, Dionysus is also associated with madness, frenzy, theater and ritually induced ecstasy.
His gift of the viniculture to humankind makes Dionysus both beneficent and potentially dangerous, since wine, if consumed to excess, can inflict irrationality if not madness. Ancient Dionysian rites were religious rituals in which the god was said to possess his devotees, after they danced themselves into a trance. As The Horizon Cookbook and Illustrated History of Eating and Drinking Through the Ages says of the revels he and Bacchus inspired, “Intoxication was thought to wrest the human spirit from the mind’s control. Wine, then, became everywhere in the classical world a medium of religious experience.”
In her Great Courses lecture on classical mythology for The Teaching Company, Classics professor Elizabeth Vandiver observes that Dionysus, as “a god whose domains include possession, behavior inconsistent with one’s normal character — acting out of things that one would not normally do — is an appropriate god to be associated with a theatrical tradition in which the actors wore masks — in which an actor actually put on the face of another character before taking part in a drama.”
It also makes him the most appropriate god to be associated with Carnival. Because Carnival offers a way to step outside of oneself, to assume personae and indulge alter egos, to lose oneself in the moment and revel in collective rapture. (Ecstasy is derived from Greek words meaning “to stand outside of oneself.”) As the mythic personification of the ecstatic experience, of the propensity to seek delight in the here and now, Dionysus is, in a sense, the unofficial patron god of Carnival.
Knights of Hermes parade float entitled “The Birth of Dionysus”
Afflicted with divine madness, he roamed the world spreading viniculture and revelry, and inspired followers to wear masks in theatrical performances staged in his honor.
Also apropos of Carnival, he was a democratic, egalitarian and accessible god — his cult was universal; anyone could join in the festivities. His influence, moreover, was wide ranging. Not only did Hebrews worship him in Roman times, notes Barbara Ehrenreich in her book Dancing in the Streets: A History of Collective Joy, but, “[a]ccording to the archaeologist Sir Arthur Evans, the worship of gods resembling Dionysus ranged over five thousand miles, from Portugal through North Africa to India, with the god appearing under various names.” (He shares many characteristics, for instance, with the Hindu god Shiva.)
Dionysus had a special relationship with humans, for it was through him they could achieve communion with the divine and apprehend immortality directly. As Ehrenreich notes, “Dionysus cannot fully exist without his rites. Other gods demanded animal sacrifice, but the sacrifice was an act of obeisance or propitiation, not the hallmark of the god himself. Dionysus, by contrast, was not worshiped for ulterior reasons (to increase the crops or win the war) but for the sheer joy of the rite itself.”
Another aspect of Dionysus that makes him unique among the gods: He was the product of a sexual union between Zeus, the great patriarch of the Olympian gods, and a mortal named Semele. This did not sit well with Zeus’s divine consort, Hera. Disguised as Semele’s old nanny, Hera convinced Semele that she needed proof that her lover was indeed Zeus and that she should ask him to reveal himself to her in his full glory, as he would appear on Mount Olympus. When Zeus did so, at Semele’s request, she was incinerated in a puff of smoke. Whereupon, Zeus snatched Dionysus’s embryo from Semele’s womb and implanted it in his own thigh, from whence the god was born some time later. Supposedly it was because of this gestation within the body of Zeus that Dionysus turned out to be a god. “Later,” writes Ehrenreich, “Hera tracked down the grown Dionysus and afflicted him with the divine madness that caused him to roam the world, spreading viniculture and revelry.”
Dionysus’s elder half brother was the oracular god of the sun, Apollo, who is associated with light, order, reason and prophesy, as well as healing, music and the arts. Famously, in The Birth of Tragedy, German philosopher Friedrich Nietzsche identifies, as Vandiver puts it, “the Dionysian and the Apollonian as the two main strands of Greek thought that were constantly in tension with one another [and] productive, in many ways, of Greek culture.” This tension between the reason and order of Apollo and the irrationality and frenzy of Dionysus also provides an apt framework for understanding the dynamic of Carnival in society, wherein the “civilizing” forces of authority — both secular and ecclesiastical — are repeatedly driven to rein in the disorderly “pagan” revels.
Twelfth Night and the Christianization of Carnival
Boeuf Gras float in the Rex parade
With antecedents dating back to ancient religious festivals, the ritual slaughter of the boeuf gras (French for “fatted calf” or ox) came to symbolize the last meat and feasting enjoyed by Christians prior to the Lenten season of atonement and abstinence. In Paris, butchers would compete to see who could raise the biggest and most glorious boeuf gras. The winning beast would be paraded through the streets on Mardi Gras.
Early Christianity was itself an ecstatic religion, a suppressed cult in which enraptured dancing, carnivalesque behavior and charismatic forms of worship, i.e., speaking in tongues, were accepted. Subordinate members of the clergy claimed their own feast day, commonly known as the Feast of Fools, a sort of imitation of Saturnalia that included cross-dressing and blasphemous buffoonery.
The task of purging ecstatic and unruly behavior preoccupied Church leaders for much of the Middle Ages. Gradually, a sort of accommodation emerged: Christians could still celebrate with abandon on holy days, so long as the revels didn’t invade the sanctity of Church property. The diffuse elements of the old festive habits began to coalesce into a secularized holiday that would become known as Carnival.
The etymology of “carnival” suggests a dynamic in which pagan customs were subsumed into Judeo-Christian tradition. In its earliest usage in medieval Europe, the Latin word carnelevare, from which “carnival” is derived (literally meaning “to lift up” or relieve from “flesh” or “meat”), may have referred to the beginning of the Lenten season of atonement and abstinence rather than the festive customs that preceded Lent. In any case, the Church in effect rationalized Carnival as an expression of the occasional need for carefree folly. Because the day before Ash Wednesday, which marked the beginning of Lent, was a day of feasting — as symbolized by the ritual slaughter of a fatted bull or ox (boeuf gras) — it came to be known as Fat Tuesday or, as the French would say, Mardi Gras.
Mardi Gras became an “official” Christian holiday in 1582, when Pope Gregory XIII instituted the namesake Gregorian calendar still in use today. By recognizing Mardi Gras as an overture to Lent, the idea was for all the partying and foolery to be over with when it came time to observe the requisite austerities.
In medieval times, the feast of the Epiphany (January 6) — also known as Kings’ Day or Twelfth Night (it’s the twelfth day of Christmas, the day the gift-bearing Magi visited the Christ child) — evolved into a major celebration alongside Carnival. Monarchs would don their finest regalia, maybe even wager in a game of dice. Children received presents to commemorate the gifts given by the kings to the baby Jesus. In the great houses of Europe, the holiday became a glittering finale to a 12-day Christmas cycle, with elaborate entertainments featuring conjurers, acrobats, jugglers, harlequins and other humorous characters — notable among them the Lord of Misrule, whose task was to orchestrate the festivities. He is kin to Carnival’s King of the Fools (most famously represented by the character Quasimodo in Victor Hugo’s novel The Hunchback of Notre Dame).
Jesters on Fat Tuesday
Jesters in Carnival represent the license to poke fun with abandon, just as jesters in the medieval courts of Europe could speak truth to power with impunity. In New Orleans at Carnival time, omnipresent jester imagery serves as a constant reminder that true Carnival custom involves the spirit of merry mockery and reverence for the wisdom of fools.
While the Twelfth Night customs that spread throughout Europe were subject to numerous variations, one element transcended virtually every culture that observed the holiday: the choice of a mock king for the occasion. “The way he was chosen might vary,” explains Bridget Ann Henisch in her book Cakes and Characters: An English Christmas Tradition, “but it was always a matter of chance and good fortune: lots could be drawn or, in the most widespread convention, a cake would be divided. The person who found a bean, or a coin, in his piece was the lucky king for the night. Sometimes he picked his own queen, sometimes chance chose her for him, and a pea secreted in the cake conferred the honor on its finder. The temporary change in status was sustained with ceremony; the king was given a crown, the authority to call the toasts and lead the drinking and, sometimes, the more dubious privilege of paying the bill on the morning after.”
Adopting the old pagan “luck-of-the-draw” ritual dating back to Saturnalia, Twelfth Night thus became a holiday imbued with royal associations. Christians, in turn, transformed it a symbolic reenactment of Epiphany. In France, a bean-sized baby Jesus eventually replaced the bean (la feve); its discovery memorialized the discovery of Jesus’ divinity by the Magi.
Over time, Carnival became established as the season of merriment that begins on Twelfth Night and climaxes on Mardi Gras. Occurring on any Tuesday from February 3 through March 9, Mardi Gras is tied to Easter, which falls on the first Sunday after the full moon that follows the Spring Equinox. Mardi Gras is always scheduled 47 days preceding Easter (the 40 days of Lent plus seven Sundays).
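For readers who want to check those dates, the short Python sketch below applies the rule just described: it derives Easter Sunday with the standard Gregorian computus and then counts back 47 days to reach Mardi Gras. The helper names easter_sunday and mardi_gras are illustrative only, not drawn from any of the sources cited here.

```python
from datetime import date, timedelta

def easter_sunday(year: int) -> date:
    """Gregorian Easter via the anonymous (Meeus/Jones/Butcher) computus."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def mardi_gras(year: int) -> date:
    """Mardi Gras always falls 47 days before Easter Sunday."""
    return easter_sunday(year) - timedelta(days=47)

if __name__ == "__main__":
    for y in (2024, 2025, 2026):
        # 2024 prints 2024-02-13; results always land between February 3 and March 9.
        print(y, mardi_gras(y))
```

Running the sketch for 2024 gives February 13, and every year's result falls within the February 3 to March 9 window described above.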
If the festivities were to some degree sanctioned by the Church, according to Ehrenreich, “the uplifting religious experience, if any, was supposed to be found within the Church-controlled rites of mass and procession, not within the drinking and dancing. While ancient worshipers of Dionysus expected the god to manifest himself when the music reached an irresistible tempo and the wine was flowing freely, medieval Christians could only hope that God, or at least his earthly representatives, was looking the other way when the flutes and drums came out and the tankards were passed around.”
Modernity and the Suppression of European Festivities
The Fight Between Carnival and Lent by Flemish painter Pieter Bruegel the Elder
Rich in allegorical detail, the 1559 painting contrasts somber Lenten penance, charity and abstinence from meat with Carnival feasting, masking, games and foolery. In the foreground is a mock jousting contest between figures representing Carnival and Lent. Propelled by an entourage of musicians and costumed revelers, a jolly fat man, personifying Carnival, sits astride a large wine barrel holding a long cooking skewer threaded with a pig’s head, sausages and a chicken. Bearing two small fish on a baker’s paddle, Lent — dour, pale and gaunt — sits on a church chair and advances on a trolley drawn by a friar and a nun. Following behind, children eat flatbread and burghers give alms to beggars.
This “secularization of pleasure,” Ehrenreich speculates, may help account for the unbridled, chaotic nature of pre-modern Carnivals in Europe, in which traditional conventions were suspended and the common folk ran wild in the streets — indulging in mass inebriation, insubordination and mockery at the expense of the ruling elites. It was a ribald, topsy-turvy realm that might include dancers costumed as priests and nuns, saucy comedic characters in naughty parodies of religious ritual, fools impersonating nobles, and the public harassment of Jews.
At least into the 15th century, writes Ehrenreich, “nobles and members of the emerging bourgeoisie” participated in public festivities such as Carnival “as avidly as the peasants and urban workers, and the mixing of classes no doubt enhanced the drama and excitement of the occasion.” They also partook in parallel, private revels that “had often been as uninhibited as the celebrations of the poor.”
But beginning in the 16th century, the upper classes began to distance themselves from the traditional free-for-alls. Especially in France, Carnival began to take on a more menacing, political aspect — as an occasion for protest and the fermentation of rebellion. The upper classes, meanwhile, were becoming increasingly concerned with etiquette, the art of polite conversation and “cultured” entertainment such as opera, ballet and classical music. Regarding the rowdy public escapades of the hoi polloi as déclassé, if not “vulgar,” they retreated into more refined forms of Carnival merriment such as masquerade balls.
In Europe, the Protestant Reformation, Age of Enlightenment and Industrial Revolution, along with the disciplinary demands of military preparedness in an era of gun-based fighting, would take their toll on communal pleasures such as Carnival and Twelfth Night celebrations. The revels were seen as a distraction from work and a waste of resources, if not outright dangerous. “Protestantism — especially in its ascetic, Calvinist form — played a major role in convincing large numbers of people not only that unremitting, disciplined labor was good for their souls, but that festivities were positively sinful, along with idleness,” observes Ehrenreich. Obedience, self-denial and deferred gratification were the new order of the day, and surviving expressions of the old Dionysian spirit became targets for suppression.
“The Catholic south of Europe held on to its festivities more tightly than the north,” relates Ehrenreich, “though these were often reduced to mere processions of holy images and relics through the streets…. Everywhere the general drift led inexorably away from the medieval tradition of carnival.” She goes on to cite Peter Stallybrass and Allon White, authors of The Politics and Poetics of Transgression. “In the long-term history from the 17th to the 20th century…,” they write, “there were literally thousands of acts of legislation introduced which attempted to eliminate carnival and popular festivity from European life….”
Transplantation and regeneration in the New World
Society of Saint Anne maskers on Mardi Gras 2002
Traditions and customs associated with aristocratic court spectacles of Old Europe would have a lasting influence on New Orleans Mardi Gras.
Across the oceans, however, the colonies of the New World, especially Latin-Catholic outposts on the Gulf Coast, would provide fertile ground for regenerating the old rituals of collective joy. Carnival in New Orleans, in particular, would become a multifaceted extravaganza, incorporating a kaleidoscope of European, Afro-Caribbean, Native American and Mexican/Latin American cultural influences.
On the evening of March 2, 1699, French-Canadian explorer Pierre Le Moyne, Sieur d’Iberville, leading an expedition on behalf of the French crown, dropped anchor at the mouth of the great Mississippi, about 60 miles down river from the present location of New Orleans. The next day, coincidentally, happened to be Mardi Gras. “The first place names given Louisiana were, appropriately, Pointe de Mardi Gras and Mardi Gras Bayou,” notes Mel Leavitt in his book A Short History of New Orleans.
Iberville’s expedition went on to establish settlements at Biloxi Bay (Mississippi) and Fort Louis de la Louisiane (Alabama), located on the Mobile River a few miles upstream from the present site of the city of Mobile. (Mobile, which calls itself the Mother of Mystics, traces its Carnival tradition to 1704, when Nicholas Langlois founded Societe de Saint Louis, a forerunner of the secret societies, or krewes, that would later institutionalize Carnival in New Orleans.)
That Carnival would sink deep roots in New Orleans speaks to the essential character of the city. “America, which makes a fetish of reason, righteousness and modernism, is among the most Apollonian of nations,” the art critic D. Eric Bookhardt once observed in Gambit Weekly. “New Orleans, which is forever unimpressed by reason or righteousness, and is mostly anti-modern, is the most Dionysian of American cities.“
In 1718, Iberville’s brother, Jean Baptiste Le Moyne, Sieur de Bienville, established New Orleans as a permanent settlement. The French Crown commissioned a private enterprise, the Company of the Indies, to develop the colony. In 1729, Marc Antoine Caillot arrived from France to work as a clerk for the Company of the Indies.
Cancan dancer on Mardi Gras 2008
Cross-dressing for Mardi Gras dates back to the first recorded account of New Orleans festivities in 1730.
In 2004, the Historic New Orleans Collection acquired Caillot’s lengthy written chronicle of his activities in New Orleans and his travels to and from the colony. The so-called Caillot Manuscript contains the first recorded account of Carnival in New Orleans.
According to an article in the 2011 edition of Arthur Hardy’s Mardi Gras Guide, written by Lori Boyer and based on research by Erin M. Greenwald of the Historic New Orleans Collection, Caillot’s father was a footman in the household of the Dauphin, the son of King Louis XIV. As this was a time of court festivities on a grand scale — including masquerades mixing poetic verse, music and dancing, and performed by people wearing masks and costumes in accordance with a theme — Caillot was likely accustomed to aristocratic celebrations of Carnival.
He writes of being “quite far along in the Carnival season [of 1730], without having had the least bit of fun or entertainment, which made me miss France a great deal.” Arriving in his office on the day before Mardi Gras to find his colleagues “bored to death,” he proposed a Mardi Gras masking adventure to Bayou St. John. Cross-dressing as “a shepherdess, all in white,” with “a corset of white bazin, a muslin skirt, a large pannier” and “beauty marks on my face and even on my breasts, which I had plumped up,” Caillot fancied himself “the most coquettishly” turned out member of the group. His “husband” was got up as “the Marquis of Carnival” in “a suit trimmed with gold braid on all the seams.”
Folklore has it that after he became governor of Louisiana in 1743, the Marquis de Vaudreuil, assisted by a dancing master called Bebe, established society balls and banquets that would evolve into the upper-class Carnival soirees of later generations. The elements of court behavior and presentation that would become features of these balls had roots in Europe’s ancien régime. The practice of incorporating tableaux as a performance element can be traced to the mythological-allegorical spectacles of Italian festival tradition.
Carnival and the Creoles
Louisiana-born descendants of French and Spanish colonizers came to see themselves as a New World aristocracy, aloof from mainstream Anglo-American culture. Their folkways — they were devotees of music, dance, theatrical amusements and games of chance — did much to define New Orleans as a culturally exotic, socially permissive entrepôt.
Mandingo Warriors Mardi Gras Indians practicing at the original site of Congo Square
In colonial times, the focal point of Afro-Caribbean culture was the Place des Negres, later renamed Congo Square. Until it was suppressed around 1835, the public market and venue for communal drum-and-dance convocations provided continuity for African forms of festive merriment. The percussive rhythms and call-and-response chants that drove the revelry entered the vernacular of Mardi Gras and New Orleans music.
These Creoles, as they took to calling themselves, had a soft spot for Twelfth Night and the old tradition of having the finder of a bean or trinket concealed inside a cake rule over the revels. In the colonial era, New Orleans Creoles cut cake to divine royalty during a season of balls, called les bals des Rois (the balls of kings), that began on Twelfth Night and ended on Mardi Gras. As the Carnival season of merriment became more established, while upper-crust Creoles reveled at fancy-dress and masquerade balls, impresarios staged public balls catering to various strata of society.
“New Orleans simply couldn’t resist the lure of a masked ball at any time or for any reason,” writes Henry A. Kmen in Music in New Orleans: The Formative Years, 1791 – 1841. “It was always fun to dance, but to hide one’s identity behind a mask greatly heightened the thrill and broadened the range of permissible partners or possible adventures….”
In colonial times, a remarkable ethnic diversity made New Orleans the New World’s most exotic and intriguing society but also bred fears and hostilities. In 1781, a report to the Spanish colonial governing body, the Cabildo, raised concerns about people of color masking and mingling while passing through the streets in search of dance halls. Fearing that masks provided anonymity for disorderly conduct and subversive political activities that could lead to slave rebellion, the authorities forbade slaves and free people of color to wear masks, and the prohibition was extended to all masks a few years later.
Not long after the United States purchased Louisiana in 1803, masking once again had the authorities on edge. “On Jan. 21, 1806,” writes Kmen, “the city council, acknowledging that masks and disguises ‘were the means of grand disorders among us,’ proclaimed that henceforth anyone wearing a mask on the street was to be arrested, unmasked, and fined ten dollars. More than this, the council forbade all masked balls, public and private, under fine of 20 dollars against anyone giving or attending one.” The prohibition wasn’t always enforced, however.
In 1827 — thanks largely to petitioning by prominent Creoles, who saw themselves as conservators of the cultural heritage of Old Europe — the City Council lifted the ban on masking from January 1 through Mardi Gras. As street masking burgeoned, bands of musicians and ornamented carriages began joining in the processions. Elite, exquisitely attired Creole ladies were said to ride through the streets tossing bonbons to gentlemen admirers.
Also in the early decades of the 19th century, the Afrocentric performance culture of Congo Square, an area where slaves were permitted to assemble on Sundays, began to resonate in ways that would later influence the development of jazz as well as second line and Mardi Gras Indian traditions. Simultaneously strange and alluring, the goings-on became something of a tourist attraction. As a visiting missionary described the “Saturnalia” and “Congo dances” he witnessed in 1823: “Everything is license and revelry.”
A New World Lord of Misrule
Comus krewemen, with cowbells, on Fat Tuesday 2002
The prototypical New Orleans Mardi Gras krewe, Comus is forever indebted to Michael Krafft, founder of Mobile’s Cowbellion de Rakin Society and the archetypal reveler-ringleader of Gulf Coast Mardi Gras.
Carnival historians often point to the Cowbellion de Rakin Society as the key precursor of the New Orleans Mardi Gras krewe system. On a rainy Christmas Eve night in 1831, in Mobile, Alabama, a cotton broker named Michael Krafft — described in a contemporary account as “a fellow of infinite jest and…fond of fun of any kind” — apparently found himself in the doorway of a hardware store, quite likely intoxicated. He gathered up a string of cowbells and, attaching them to the teeth of a rake, went on his merry way, clattering. According to this account of the night’s events, as related by Samuel Kinser in his book Carnival, American Style, Krafft, having drawn a crowd, caught the attention of a passer-by who exclaimed, “ ‘Hello, Mike — what society is this?’ Michael, giving his rake an extra shake and looking up at his bells, responded, ‘This? This is the Cowbellion de Rakin Society.’ ”
Krafft’s waggish, playful exhibitionism was at the very core of the cultural enterprise that would become Gulf Coast Carnival. On subsequent rambles, he was joined by more “Cowbellions,” and the group went on to become Mobile’s premier Carnival organization, sponsoring New Year’s Eve masquerades and even venturing to New Orleans in the late 1830s to partake in Mardi Gras. In 1840, the krewe presented its first parade with floats depicting a specific theme: “Heathen Gods and Goddesses.” A masked ball followed.
Some of the founders of the prototypical New Orleans Carnival organization, the Mistick Krewe of Comus, had ties to Mobile and the Cowbellions. The Comus krewemen, moreover, are said to have borrowed costumes from the Cowbellions for their inaugural Mardi Gras pageant, in 1857.
South Philadelphia String Band, a longtime fixture of that city’s Mummers Parade, shown here in the 2002 Knights of Hermes parade
A New World Lord of Misrule, Krafft, whose name indicates German ancestry, was a native of the Philadelphia area. Kinser points out that in German communities in Pennsylvania and elsewhere, a custom called “belsnickling” took place on Christmas Eve. “Belsnickle” could refer to a noisy party or a sort of Wildman figure disguised in furry garb as a demonic version of Saint Nicolas. “Shaking cowbells at his waist or smaller bells attached to his garments and brandishing a club, whip or other menacing instrument,” relates Kinser, he’d go from house to house, “frightening children but also bestowing small gifts.”
Kinser also cites an account that has Krafft tying cowbells to his rake “for music,” and mentions a fife and drum being present for the inaugural Cowbellion romp. Such “rough” music with improvised instruments was a prominent feature of holiday masking by revelers who became known as the Mummers of Philadelphia. These celebrants, who were of northern European ancestry, took to the streets on New Year’s Day dating back to the earliest days of city — dressing in costume, using miscellaneous noisemakers and bells, banging pots and pans, and generally making a clamor as they visited neighbors after Christmas.
Generating a bit of pandemonium was, of course, characteristic of the old European Carnivals. All of which makes it reasonable to surmise that through the personage of Krafft, festive holiday customs transplanted to the New World by way of northern Europe became a key ingredient in the gumbo of Gulf Coast Carnival. It was a contrasting flavor to the contribution of the more genteel Creoles, whose revels imitated European aristocratic court spectacles.
The Age of Impudence and the birth of Comus
By the late 1830s, the notion of New Orleans Mardi Gras as a public celebration with broad appeal, as an amusing occasion for carrying on Old World tradition, was on the cusp of change. Revelers who would later become known as “promiscuous maskers,” for their irreverent and bawdy foolery, had begun to make their presence felt. On Ash Wednesday 1837, the New Orleans Picayune censoriously described a scene of unruly, racially mixed cavorting in the streets involving maskers got up as animals, circus performers and Indians: “A lot of masqueraders were parading through our streets yesterday, and excited considerable speculation as to who they were, what were their motives and what on earth would induce them to turn out in such grotesque and outlandish habiliments.”
In New Orleans during the 1850s, social changes caused by an influx of immigrants and transients, as well as racial and ethnic mixing in the streets and at public balls, were a source of increasing anxiety for the privileged class. Chance encounters among anonymous maskers put customary social and racial distinctions at risk. “Leading citizens” closely associated participation by ethnic (Irish), black and mixed-race celebrants with disorder, unruliness and sexual permissiveness — hence the term “promiscuous maskers” — and resented the fact that the upstarts were adapting the celebration to suit their own culture and purposes (thereby transforming it into a more spontaneous, uninhibited affair that was perceived as disrespectful of, if not threatening to, the prerogatives of the established social order).
Comus on Mardi Gras 2002
At the pinnacle of the old-line Carnival hierarchy, he bears a sterling silver chalice and his identity is never revealed.
Ehrenreich’s analysis of black participation in Carnival in other countries is instructive in this regard. “In both Trinidad and Brazil,” she observes, “whites responded to black participation just as elites had responded to the disorderly lower-class celebrations of carnival in Europe: by retreating indoors to their own masked balls and dinner parties, which were invariably described as ‘elegant’ by the local newspapers, in contrast to the ‘barbarous’ celebrations of blacks.” (As in New Orleans, Carnival in Trinidad originated as a white celebration, imported by French settlers.) The “disapproving accounts” of black Carnival in the Caribbean by white observers, Ehrenreich adds, consistently “downplay the artistic creativity that went into costume making and choreography, to focus instead on the perceived violence, disorder and lewdness of the events.”
There was no denying that Carnival seemed to encourage impudence among the “lower classes.” In New Orleans, their jests, pranks and brawls were offending “respectable” people, prompting some newspapers to champion the abolition of the festivities. “The Daily Orleanian decried the racial mixing in the streets during Carnival; the Daily Crescent spread rumors of Carnival’s reported imminent disintegration; and the Bee excitedly condemned Carnival back to the ‘barbarous age’ whence it came,” notes historian Jennifer Atkins in her 2008 PhD thesis, Setting the Stage: Dance and Gender in Old-Line New Orleans Carnival Balls, 1870 – 1920.
It was against this backdrop that the Mistick Krewe of Comus made its parading debut in 1857. In a torch-lit procession on the night of Mardi Gras — with two floats, brass bands and costumed maskers — the Comus krewemen presented “The Demon Actors in Milton’s Paradise Lost,” a theme carried through in the tableaux staged at their exclusive ball. By adopting a mythological namesake and presenting a thematic, meticulously organized street spectacle, followed by a tableau ball that was more a cultural performance (staged before formally attired guests) than a typical Carnival masquerade dance with everyone in costume, Comus established a paradigm that would be widely imitated. Indeed, the cultural practices that would come to define “modern” Carnival began with the Mistick Krewe of Comus.
Royal invention: a king of Carnival
Rex 2010
After the Civil War, businessmen and civic leaders invented a benevolent monarch to reign over a daytime parade on Mardi Gras. Rex and his queen — a debutante chosen by krewe leaders largely on the basis of her father’s prominence and her familial connections to past Rex royalty — came to be recognized as monarchs of the entire Carnival celebration.
A group of businessmen and civic leaders invented a king of Carnival, Rex, in 1872. The first Rex, Lewis Salomon, was a financier in the cotton trade who helped raise funding for the inaugural parade. In 1921, in an interview with a reporter from the New Orleans Times-Picayune, he described a newspaperman, Edward C. Hancock, as the behind-the-scenes “big chief” and recounted a gathering at the St. Charles Hotel in the weeks leading up to Mardi Gras.
Hancock’s role in Rex is illuminated by Errol Laborde in his books Marched the Day God: A History of the Rex Organization and Krewe, the latter of which focuses on the 60 years of Carnival following the birth of Comus. A crusading journalist with a literary flair, Hancock was a Philadelphia native who, like many young northerners in the pre-Civil War era, came to New Orleans — arguably the most cosmopolitan city in the country — to make a name for himself. After the war, he served as associate editor of The New Orleans Times, which stoked anticipation for the inaugural Rex parade by publishing tongue-in-cheek edicts and tidbits dispatched from His Majesty’s mythical realm on Mount Olympus. Hancock envisioned a unifying centerpiece for daytime festivities that would coordinate the miscellaneous groups that had been informally parading on Mardi Gras. According to Salomon, “Hancock insisted that all the promiscuous maskers and private clubs ought to be organized into a general parade.”
Rex provided a civic counterpoint to the highly secretive and exclusive Comus. Adopting the motto Pro Bono Publico (For the Public Good), the Rex organization offered a tonic for a South still riven and weary from the Civil War, thereby helping to lure visitors back to the city. The highly anticipated visit of Russian Grand Duke Alexis Romanov, who witnessed Rex’s debut amid great fanfare, added a touch of royal romance to the pageantry.
The Rex krewemen introduced the Carnival colors of purple, green and gold. Via Rex’s 1892 parade, entitled “Symbolism of Colors,” they came to signify justice, faith and power, respectively.
This echoes the four fundamental principles by which Socrates and Cicero both said we must live our lives: wisdom, justice, courage and moderation. Rex’s inclusion of “faith” in the symbology can be seen as a nod to the Church’s incorporation of Carnival into Judeo-Christian tradition, while the conspicuous absence of moderation — the key prescription inscribed at the Temple of Apollo at Delphi — suggests Dionysian propensities. “Power,” meanwhile, evokes the monarchical aspect of Carnivaldom as practiced by the elite in post-Reconstruction New Orleans. At the top of the hierarchy of rulers presiding over the krewe fantasy kingdoms was Comus (motto: Sic volo, sic jubeo, or “As I wish, thus I command”).
The Mistick Krewe of Comus initiated the practice of having a different krewe member each year assume the role of figurehead to preside over a parade and ball. In mythology, Comus is the son of the reveler Bacchus and the necromancer Circe. In the Carnival realm, his kin would grow to include, among others, the Lord of Misrule (potentate of the Twelfth Night Revelers), Momus, Rex and Proteus — all of whom would be anointed anew annually, to reign over rarefied domains in which adventure, conquest and enchantment provided much of the thematic ballast for artistically ambitious parades and balls. Except for Rex — who, in a sense, became the public face of the old-line Carnival aristocracy — the identities of these mock rulers were strictly secret.
Carnival debutantes and the Golden Age
The 2006 Queen of Carnival and His Majesty Rex en route to the Comus ball for the Meeting of the Courts
Chosen by the inner sanctum of the Rex Organization, she is usually a junior in college at the time of her reign — 20 or 21 years old. Amidst a whirl of Carnival-related events and obligations in the weeks and months leading up to her day in the limelight — tea parties, debut parties, social calls, dress fittings and lessons in the finer points of royal protocol and etiquette — she’s supposed to keep the honor a secret.
Beginning in the 1870s, the staging of multiple post-parade tableaux in ballrooms and theaters gave way to a new style of spectacle in which the presentation of queens and maids, along with other krewe “royalty,” provided the central image. Carnival balls became vehicles for the formal presentation by krewemen of debutante daughters and granddaughters to society. For these chosen few, mastering the finer points of feminine regality and court protocol served to validate social status and establish credentials as prime marriage material. Elite families came to measure social rank based on how many court appearances their daughters made at prestigious Carnival balls.
No question, the early krewemen were capable of erudite feats of grandeur and artistry. During the so-called Golden Age of Carnival, from the 1870s through the 1920s, they went to extraordinary lengths to present pageants in which every last detail — as reflected in the costumes, ball décor and the design of the parade floats and ball invitations — would coalesce in a sophisticated evocation of a theme.
While parades of this era occasionally ventured into social commentary and political lampooning, mythology, literature, history and religion comprised the dominant source material. The balls themselves were otherworldly realms, infused with an aura of mystique and fantasy conducive to courtship and flirtation.
Only krewe members, all of whom were men, wore costumes and masks, however. “Masking allowed men to titillate their partners with an allure of the unknown,” observes Atkins in her PhD thesis. “Protected by masks, krewemen were free to pursue flirtations in a manner forbidden in everyday life. In the ‘real world,’ etiquette scripted every move and compliment to a lady, but costumed as a knight (or even as an elf or 7-foot tall pelican), krewemen escaped the confines of everydayness and joined their female partners in a romantic fantasy.”
In the early krewe pageants, as in William Shakespeare’s theatrical troupe, cross-dressing was de rigueur. As Henri Schindler explains in his illustrated history Mardi Gras Treasures: Costume Designs of the Golden Age, “every female character was brought to life by an all-male cast; no matter how delicate or feminine some costumes may appear, they were worn with delight by generations of the most prominent businessmen of New Orleans.”
A tale of two history lessons: official vs. unofficial
Reproduction of Rex’s 1882 “butterfly” king, who embodied the theme of that year’s parade, The Pursuit of Pleasure
While the erudite grandeur and ambitious artistry of their pageants cannot be denied, the early krewemen also sought to project cultural power, reinforce their elite status and proclaim superiority over “lesser mortals” who’d assumed positions of authority during Reconstruction.
Prominent non-academic New Orleans Carnival commentators tend to show considerable deference to the “old-line” krewes. Theirs is an apolitical view that generally looks askance at attempts to interpret Carnival’s evolution through the analytical lens of race and class distinctions. In this conception, the original krewemen were beneficent gallants who bequeathed an indelibly rich cultural legacy that revels in the creative indulgence of whimsy, brings mirth to multitudes and greatly enhances the allure of New Orleans. Were it not for the innovations they brought to bear on the unruly goings-on of yore, New Orleans Carnival would never have achieved its iconic renown.
All of which generally conforms to what sociologist Kevin Fox Gotham, in his book Authentic New Orleans: Tourism, Culture and Race in the Big Easy, calls the “official story” of Mardi Gras. In this narrative, as “passed down through generations in magazines and local newspaper editorials…, antebellum Carnival was marked by rampant lawlessness, violence and disorder. In response, enlightened citizens formed the elite Carnival krewes to bring order to this chaotic world, tame the unsavory past, and establish a more civil and humane Carnival for the enjoyment of all.”
The krewe system, according to Gotham, “established a new form of social differentiation between the ‘public’ and ‘private’ spheres of Carnival, separating activities open to the public (organized parades) from those limited to the private krewes and their guests (invitation-only tableaux balls).” In effect, “the spontaneity and free license” of the traditional antebellum festivities, in which spectators and participants closely intermingled, yielded to “the creation of new forms of social order and control,” i.e., “preplanned and scripted parades that separated spectators from krewe paraders….
“Guidebooks and advertisements henceforth would celebrate Carnival for its ability to deliver fun and entertainment in a rationally controlled and predictable fashion, conditions that are the opposite of the festive release, insubordination and transgression of the pre-modern Carnival described by Russian literary theorist Mikhail Bakhtin.”
What might be called the “unofficial” story of Mardi Gras — as explicated by Gotham and other academics like Anthony J. Stanonis, J. Mark Souther and Jennifer Atkins, as well as journalist James Gill — goes something like this:
2006 Rex float honoring the Mistick Krewe of Comus on its 150th anniversary
The illustrations on the side of the float are from one of Comus’s most famous parades, 1873’s Missing Links to Darwin’s Origin of Species, which presented animal-like caricatures of carpetbagging public figures from Reconstruction who had, in effect, inverted the “natural order” by placing a “missing link” — the mixed-race P. B. S. Pinchback in the guise of a banjo-playing gorilla — in a position of power. (Pinchback briefly served as governor of Louisiana.)
The original old-line krewemen were mostly Anglo-Protestants, including transplanted Northerners, who had economic interests in the plantation system and fought for the Confederacy. Their wounds from the Civil War ran deep, and their sense of indignation and alienation only increased during the social, political and racial upheaval of Reconstruction. Carnival became a realm where they could assert social dominance and reclaim a sense of honor. Forming secretive social clubs for the purpose of organizing parades and balls, they effectively usurped the Latin-Catholic tradition of Mardi Gras masking and appended it to reenactments of the courtly rituals of Old Europe. These annual revivals of monarchic rule enabled chauvinistic krewemen to project cultural power, reinforce their elite status and proclaim superiority over “lesser mortals” who had assumed positions of authority in the aftermath of the Civil War.
In cultivating pomp and elevating themselves as knights and kings, krewemen assumed the mantle of heroic defenders of a Romantic world symbolic of the Old South. As idealized embodiments of the realm’s genteel femininity, court queens and maids justified the krewemen’s masquerade as chivalrous protectors.
While ostensibly operating in a world of make-believe, their cultural performances sometimes had strong ideological undercurrents, reflecting the struggle to protect racial and class interests in the Reconstruction era. Notably, Comus and Momus presented the rhetoric of white supremacy in the guise of satire.
In place of grassroots, pre-Civil War revelry that was, to a large extent, racially integrated came new lines of hierarchy and racial division. Black men tended the mules that pulled the krewe floats and carried flambeaux to illuminate the nighttime parades. Formerly the domain of the promiscuous maskers, the public space of Carnival was now largely occupied by carefully orchestrated processions in which masked krewemen on fanciful floats — self-appointed arbiters of culture — towered above passive spectators. Photos of early krewe parades show crowds not in costume but dressed in what might be described as their “Sunday best.”
The high-profile krewe parades would prove to be a boon for tourism, but there was consternation in some quarters over the loss of old folkways. Gotham cites an editorial in the March 6, 1881, edition of the New Orleans Times expressing nostalgia for a time before Rex when “our thoroughfares swarmed with merry maskers from one dawn till almost the next.” It went on to lament the decline of the “riotous quality” of Carnival and critiqued the Rex parade as a “stern procession” of “splendor but no humor.”
“The old, robust laugh and the old, wild license have gone out of the Carnival of New Orleans,” the editorial observed. “This laugh and license were, we believe, the vital conditions of its maintenance.”
“Masking Indian” on neighborhood backstreets
An esteemed practitioner of the Mardi Gras Indian arts who taps deeply into African influences. Departing from the open-face crowns typically worn by those who “mask Indian” — a style derived from Native Americans — he wears Malian-style masks that completely cover his head.
But if “official” Carnival, as represented by krewe pageants, seemed lacking in spontaneity and abandon, the holiday, as celebrated on neighborhood back streets, still offered escape from rigidly defined roles and boundaries.
Slaves and Native Americans intermingled from the earliest days of colonial Louisiana. They shared similar belief systems involving ceremonial communion with ancestral spirits. And both groups had in common the experience of being subjugated by the dominant culture. As racial repression intensified in the post-Reconstruction era, hardening the color line governing participation in Carnival, organized groups of Mardi Gras Indians, as they came to be known, took to the streets on Fat Tuesday. By masking Indian, they expressed ritual freedom, provided continuity to Afrocentric forms of festive performance, and paid homage to tribes that provided refuge to runaway slaves. Oral tradition places their beginnings in New Orleans as far back as the 1830s (the first documented account was in 1900).
At one time, rivalries among Mardi Gras Indian tribes or “gangs” (usually defined by neighborhoods) often turned violent. Theirs was a warrior culture largely obscure to the general public. But slowly, beginning in the 1940s, the competitive aspects of their revelry came to revolve around performance and costuming. The process of making a Mardi Gras Indian “suit,” which can take up to a year and cost thousands of dollars, brings families and communities together in a collaborative artistic endeavor. The results can be stunning: the vibrant colors of dyed ostrich, coque and marabou feathers, which recall the ceremonial attire of Plains Indians, are complemented by intricate, pictorial beadwork or sculptural (raised-relief) designs set off with dazzling arrays of crystals. A Mardi Gras Indian earns props not only through artistic skill and performance ability — singing, dancing and mastery of the protocols and dramaturgy of “playing Indian” — but also by serving as a mentor and community role model.
As the Mardi Gras Indians have evolved from a rough-and-tumble fringe element — harassed by police for parading without permits — into celebrated icons, their music and traditions have become emblazoned on the aesthetic and cultural consciousness of New Orleans and beyond. Suits by the likes of Victor Harris of the Mandingo Warriors Mardi Gras Indians are now featured in prestigious art exhibitions, while the HBO series Treme, in which Mardi Gras Indians figure prominently, has brought the culture to a mass audience.
The National Endowment for the Arts also has recognized the achievements of Mardi Gras Indians. The NEA awarded the late “chief of chiefs,” Allison “Tootie” Montana of the Yellow Pocahontas, a National Heritage Fellowship in 1987. Bo Dollis of the Wild Magnolias received the award, considered the nation’s top honor in folk and traditional arts, in 2011.
The uproarious revels of the Zulus
Another African-American Carnival institution, which has proven no less enduring than the Mardi Gras Indians, began in the early 1900s with a small, informal marching group called the Tramps — a raggedy lot who affected the manner of hobos on Carnival Day. The group took up an African theme after seeing a musical comedy performance that included a skit about the legendary king of the African Zulus, Shaka. The performers were African-Americans in blackface, a standard convention of minstrelsy and vaudeville. In 1916, the Tramps changed their name to the Zulu Social Aid and Pleasure Club.
Members of a Zulu marching contingent known as the Tramps, on Mardi Gras 2010
A predominately African American krewe that has evolved from humble beginnings into a popular and iconic mainstay of Carnival, Zulu is famous for its raucous parades, coconut throws and colorful assemblage of characters.
As a prominent, and at times controversial, characteristic of the Zulu masquerade, blackface came to be interpreted as being part of a lampoon of the white man’s racial stereotypes. But when the first Zulus blacked up in the early 1900s, it was more a practical matter than a subversive statement. The early members couldn’t afford to buy masks. A five-cent tube of face paint did the trick.
The braggadocio inherent in the title of the skit about Shaka, “There Never Was and Never Will Be Another King Like Me,” helped set the tone for the endeavor, and to this day zany flamboyance and irreverent theatricality are characteristic of the club’s style, at least as far as its Carnival activities are concerned. Long before the likes of Snoop Dogg and Jay Z donned bling-bling and perfected the art of ostentatious bravado, Zulu Kings extolled their own greatness and flashed whatever finery they could muster to admiring multitudes.
The earliest Zulu members may have identified with the warrior spirit exhibited by the African Zulus in resisting colonialism, just as others in the black community who took up masking as Mardi Gras Indians identified with the defiance of Native Americans. Or they may simply have fancied the potential entertainment value of the African jungle theme, which served as a vehicle for transforming the image of the African Zulus as “noble savages” into a comic performance for public display.
According to the Zulu club’s official history, William Story reigned as the first Zulu king, in 1909. He wore a lard can as a crown and waved a banana stalk scepter. Subsequent Zulu kings were known to bestow blessings on royal subjects by wielding a ham bone. Having fun with the conventions and trappings of official (white) Carnival became part of Zulu’s trickster burlesque.
In a way, the Zulu parade brought back the “riotous quality” and “laugh and license” of pre-Civil War Carnival. Instead of orderliness and scripted decorum, it offered the uproarious spontaneity of the second line, whereby spectators, musicians, parade marchers and even float riders would — to borrow a phrase from Louis Armstrong — “pitch a boogie-woogie.” (As a young man, Armstrong played his horn in the Zulu parade; he reigned as King Zulu in 1949.)
Baby Dolls and Bone Gangs
Members of the Treme Million Dollar Baby Dolls on Mardi Gras 2009
The original Baby Dolls constituted a permissive sisterhood that was determined to partake in the festivities on their own terms.
Among those in the thick of the Zulu second line were the Baby Dolls. Recalling the promiscuous maskers of the mid-19th century, who transformed the streets on Carnival Day into a bawdy free-for-all, this informal sisterhood sported frilly, titillating attire — typically, short skirts, bloomers, satin blouses and bonnets tied under their chins with ribbons. The first Baby Dolls appeared in the vicinity of the rough-and-tumble uptown red-light district, i.e., “Black Storyville,” around 1912. Baby Doll masking caught on, enabling women from various walks of life to publicly partake in transgressive flaunting or mocking of conventional expectations requiring them to suppress their sexuality.
The original Baby Dolls were anything but submissive. Male revelers looking to stuff money into the girls’ garters or stockings could receive a kick in the pants — or worse — if they got too out of line.
Also capitalizing on the freedom that Carnival represented for black and mixed-race celebrants were skull-and-bones gangs — a mysterious folk tradition in which maskers, in the guise of skeletons, bring the spirits of the dead to the streets. Donning oversized skull heads — primitive constructions sculpted from bale wire and cheesecloth, using papier-mâché techniques — they’d roam the Tremé neighborhood early on Mardi Gras morning, brandishing huge, bloody animal bones and raising a frightful ruckus. Serving a cautionary role, they’d warn children of scary comeuppances if they misbehaved. Amidst the frivolity of Carnival, these “Bone Gangs” or “Skeletons,” as they’re sometimes called, are macabre, “in-your-face” signifiers of transience and mortality. “You next” is one of the admonitions often seen emblazoned on their decorated aprons.
They’re also particularly apropos of New Orleans, a precarious place — haunted by a history of epidemics and vengeful hurricanes — where the living fervently memorialize and celebrate ancestral spirits through ritual and performance. It’s possible their origins derive from Haitian skeleton figures and the spiritual pantheon of voodoo. (Thousands of refugees from Saint-Domingue (present-day Haiti) arrived in New Orleans after the 1804 overthrow of the island nation’s French colonial government; most of them settled in the Tremé.)
A member of the Northside Skull and Bone Gang on Carnival Day 2011
While the history of these enigmatic maskers is sketchy, they have been rousing residents of the Tremé early on Carnival Day since at least the 1930s.
Mardi Gras Indians, Zulu, Baby Dolls and Bone Gangs comprised the key elements of “black Carnival,” which occupied its own realm apart from the official festivities. Its focal point was a verdant stretch of Claiborne Avenue running through the Tremé. There was much anguish when, in the 1960s, the majestic oak trees lining the street were felled so a freeway overpass could be constructed above the neutral ground, or median.
The history of New Orleans Carnival in the 20th century can be viewed as the story of broadening avenues of participation and a gradual blurring of boundaries once delineated by race, class and gender. The one constant is that individuals from every stratum of society aspired to participate on their own terms. This led to like-minded cohorts joining together to form a host of new organizations that would adapt and reinterpret the conventional repertoire of local Carnival customs, while also introducing novel twists and practices.
The advent of “truck parades” brought working- and middle-class New Orleanians into the celebration on a participatory level. Rain forced the cancellation of the Rex parade in 1933, but as Reid Mitchell relates in his book All on a Mardi Gras Day: Episodes in the History of New Orleans Carnival, Chris Valley and fellow brothers in the Elks Lodge hit the streets with a truck float and five-piece band. When police refused them entry onto Canal Street — a space reserved for the blue-blood parades — Valley got the idea for assembling a brigade of truck floats — “motorized equivalents of the old promiscuous maskers,” as Mitchell describes them — as a way to build clout and gain access to the main thoroughfare of Mardi Gras. Since 1938, his Krewe of Elks Orleanians has followed the Rex parade.
No longer were regular folk confined to the role of sideline spectators, witnessing the processions of the exclusive krewes. Transforming a flatbed truck into a thematic float, and coming up with costumes and headpieces to accentuate the chosen theme, enabled whole families and friends of both sexes to engage in creative collaboration. “These mobile cabarets, complete with jazz bands and dancing, offered an alternative to formal, male-dominated parades,” notes Karen Trahan Leathem in her 1994 PhD dissertation, “ ‘A Carnival According to their own Desires’: Gender and Mardi Gras in New Orleans, 1870-1941.”
Stepping out: women reinvent their role
A Krewe of Muses float rider proffering a pair of “Chickendales” boxers
Masked women throwing underwear from floats — no one could possibly have imagined such a thing back when the first all-female parading krewe, Venus, took to the streets in 1941.
Around the turn of the century, it was generally assumed that only a woman notoriously lewd and abandoned would mask and dance in the street. But winning the right to vote, notes Atkins, “created an enlivened sense of public participation by women,” who, beginning in the 1920s, “engaged in Carnival behavior that was once acceptable only for men and prostitutes.”
The advent of all-female krewes spoke to women’s growing impatience with their ornamental roles in the traditional krewe balls orchestrated by men. In 1941, the Krewe of Venus, presenting the theme “Goddesses,” became the first all-female krewe to parade through the streets of New Orleans, “provoking a considerable amount of trepidation and even resentment from people who associated masked women with moral turpitude,” relates Anthony J. Stanonis in Creating the Big Easy: New Orleans and the Emergence of Modern Tourism 1918 – 1945. “Rain poured down, and some spectators threw tomatoes and eggs.”
The gradual democratization of Carnival, having gathered steam in the early decades of the 20th century, was in keeping with, if not motivated by, the “Every man a King” spirit of Louisiana politician Huey Long (who famously thumbed his nose at Mardi Gras by calling it a “silk stocking” affair for social blue bloods).
In the interwar years, businessmen and city officials worked assiduously to promote the city’s romantic antiquity and charm, while also shaping the image of Mardi Gras as a leisure attraction — as a way for tourists to “cast aside their everyday mores for a brief period of sensory indulgence,” writes Stanonis.
For Mardi Gras, boosters dressed up the city in celebratory garb and promoted costume contests, dancing contests and the closure of Canal Street to facilitate masking among revelers and tourists in proximity to downtown retail stores. Promotional campaigns spearheaded by the New Orleans Association of Commerce included hiring a film company to capture moving images of the festivities for national distribution to theaters and schools.
As a result of the boosters’ efforts, according to Stanonis, “the public image of Carnival as a time for the city’s elite to display their wealth and social standing became secondary to the public’s general enjoyment and participation in all aspects of the festival. Mardi Gras emerged as a national holiday celebrated in the unique setting of New Orleans.”
As the celebration became increasingly important to the city’s tourism industry, new krewes composed of business and professional men, such as the Knights of Hermes and the Knights of Babylon, appeared. No longer was Carnival royalty born exclusively to the upper crust. In 1949, Louis “Satchmo” Armstrong, who played his horn in the Zulu parade as a young man, became Carnival’s first celebrity monarch and fulfilled a boyhood dream when he reigned as King Zulu. (“Man, this king stuff is fine,” he said. “Real fine.”)
Blaine Kern and post-war democratization
Blaine Kern with African Zulu warriors, and members of their entourage, at the 2006 Zulu parade
A pivotal force in opening up participation in Mardi Gras, shaping the look of parades and recruiting big-name celebrities to ride, he traveled to South Africa to woo African Zulus to come to Mardi Gras as a gift to the Zulu Social Aid and Pleasure Club, whose members were particularly hard hit by Hurricane Katrina in 2005.
But alas, when Carnival resumed after World War II (festivities were canceled during the war years), it showed only dim traces of the virtuosity and panache that had characterized the Golden Age. Floats had become predictable and somewhat drab, typically resembling large, gussied-up baby carriages. Then along came a young artist named Blaine Kern, whose father, Roy, had built floats for the krewe of Alla in the 1930s.
In 1947, at age 20, Kern, after a stint in the Army, founded Blaine Kern Artists. Within two years, he had the contract to build the Rex parade.
In the early 1950s, the then-captain of Rex, Darwin Fenner (whose father was the Fenner in Merrill Lynch, Pierce, Fenner & Smith), dispatched Kern to Europe to study Carnival traditions in Cologne, Nice, Frankfurt, Viareggio and Valencia. The floats that upstart Kern subsequently introduced to the streets of the Crescent City were fanciful, if not outlandish: decked out with oversized, vividly colored busts of storybook creatures and characters whose heads turned and whose eyes moved.
Kern would become the city’s dominant float builder and a key player in fostering the formation of new krewes. Many new clubs would hit the streets as parading became more affordable, after Kern began building floats and buying tractors to rent out to others. The floats had detachable features so they could be adapted easily to any number of themes. As Carnival became a less exclusive affair, its economic impact on New Orleans multiplied.
Indeed, thanks in no small part to Kern, a.k.a. Mr. Mardi Gras, what was once essentially a seasonal ritual for locals would witness a huge expansion in the annual number of parades, media coverage and tourist interest. Consequently, as sociologist Gotham has observed, the Mardi Gras “experience” became more accessible to outsiders and, in the process, increasingly associated with mass entertainment and consumer culture.
Sequins, feathers and gay civil rights
Original members of the Krewe of Yuga at their third ball, in 1961
The license provided by Mardi Gras for acting out fantasies and transgressing social boundaries nurtured a gay ball subculture and helped make New Orleans a gay mecca.
— Photo courtesy of First Run Features
The traditions of gay Mardi Gras officially began with the Krewe of Yuga’s first Mardi Gras drag ball, in February 1958. In 1962, the event was held at a rented school cafeteria in conservative Jefferson Parish — and raided by the police.
As Tim Wolff recounts in a synopsis of his feature documentary The Sons of Tennessee Williams, which tells the story of the New Orleans men who worked with the traditions of Mardi Gras to bring gay culture into public settings long before the start of the gay civil rights movement: “Krewe members attempted to escape by running into the swamplands adjacent to the school, chased by officers with dogs and flashlights. Many were betrayed by their glittering costumes while hiding in the dark night and tall grasses of Jefferson Parish.” Ninety-six men were taken to jail, booked for “disturbing the peace” and identified by name in the newspaper, which described the event as a “stag party.”
In 1964, Arthur Jacobs, an ex-police officer looking to drum up some business for his French Quarter restaurant, Clover Grill, started a Mardi Gras costume contest called The Bourbon Street Awards. Thanks in part to its proximity to Cafe Lafitte in Exile, a popular gay bar, the event — billed as “The Greatest Free Show at Mardi Gras” — would become a magnet for drag queens and over-the-top costumes made for gay balls.
By 1969, four gay krewes were legally chartered by the state of Louisiana as official Mardi Gras organizations, holding annual pageants at public venues across the city. Hundreds of people attended the balls, including straight female friends of krewe members. Wolff’s documentary makes the case that New Orleans was the first place in America where gay and straight people came together to publicly recognize gay culture, years before the start of gay pride marches commemorating the June 1969 police raid on the Stonewall Inn in New York’s Greenwich Village — an event widely credited as the catalyst that brought gay people out of hiding.
Gay balls thrived into the 1980s, as a slew of new krewes built on New Orleans’ already formidable reputation for extravagant artistry. And although the onslaught of AIDS and the devastation of Hurricane Katrina took a heavy toll on the local gay community, the remaining krewes — among them Amon-Ra, Lords of Leather, Petronius, Armeinieus and Satyricon — continue a tradition of elaborately produced balls staged before a seated audience. Costumed krewe members typically appear in scripted tableaux based on a theme, and the overall effect is usually humorous, if not outlandish.
While the official Carnival parade schedule would see plenty of changes as new krewes came and went, the old-line parades remained the featured attraction of the celebration through the 1960s. The prerogatives of the elite krewes, in fact, extended well beyond the festivities. The Carnival aristocracy, as J. Mark Souther notes in his book New Orleans on Parade: Tourism and the Transformation of the Crescent City, “not only planned lavish social events but also exercised overwhelming influence on the city’s economic direction and its politics.”
The “superkrewe” razzamatazz of Bacchus
Krewe of Bacchus Bacchatality float, so named because it carries members who work in the hospitality industry
A flashy — and hugely popular — extravaganza from the get-go, Bacchus signaled a cultural shift away from the longstanding dominance of the old-line krewes and toward the production of spectacles associated with mass entertainment.
In the late 1960s, entrepreneurs and others lacking blue-blood pedigrees came together in an effort to promote tourism and broaden the avenues of participation in Carnival. They formed the krewes of Endymion, in 1967, and Bacchus, in 1968. You needed no social credentials to join. And instead of ceremonious balls featuring krewe royalty, these so-called “superkrewes” threw raucous “extravaganzas” with big-name entertainment. Further departing from tradition, their parades boasted celebrity riders and huge, double-decker floats. And their unprecedented generosity with throws abetted the appetite of the cheering throngs and made the older krewes seem almost stingy by comparison.
The first Bacchus parade in 1969 — which featured Danny Kaye, a Jewish actor from Beverly Hills, as its monarch — signaled a cultural shift away from the longstanding dominance of the old-line krewes and toward the production of razzle-dazzle spectacles depicting fun, accessible themes. Bacchus and krewes that followed its example, most notably Endymion and Orpheus, energized participation in Carnival and significantly enhanced the celebration’s stature as a tourist attraction. Today, their Kern-built floats — with elaborate sound systems, fiber-optic lighting and other special effects — set the standard for over-the-top extravagance and high-tech innovation.
Reveling on the fringes: Carnival counterculture
Reflecting broader social currents of youth questioning authority and conventional expectations of the “Establishment,” many New Orleanians who came of age in the 1960s and ’70s found themselves drawn toward freewheeling modes of expression that, in the realm of Carnival, improvised on the old traditions. Rejecting the prefab culture and rational orderliness extolled by modern society, subterranean tribes such as the Society of St. Anne, Krewe of Kosmic Debris, Mystic Orphans and Misfits (MOMs) and the Krewe of Dreux exhibited a loose, impromptu style — indulging in primal fantasies and reveling in the possibilities of the moment.
Box of Wine rolling on St. Charles Avenue in 2008
A gonzo affair that strives to recreate the communal ecstasy of ancient Dionysian rites in which wine was a medium of religious experience, Box of Wine has become an idiosyncratic centerpiece of Carnival counterculture.
With the exception of MOMs, whose raison d’être is a raucous costume ball, these groups represented the evolution of a tradition — of footloose marching or walking clubs — dating back to the 1800s. Venerable practitioners such as the Jefferson City Buzzards, founded in 1890, and Pete Fountain’s Half-Fast Walking Club still take to the main parade route along St. Charles Avenue and Canal Street on Carnival Day. And, in a testament to Carnival’s capacity to accommodate upstarts, even decidedly offbeat ensembles like Box of Wine and the Mondo Kayo Social and Marching Club have managed to obtain permission from the city to raise a ruckus before the assembled throngs on the official parade route (the former on Bacchus Sunday, the latter on Fat Tuesday). Still, many others are content to revel around the fringes — rambling with abandon through the French Quarter and neighborhoods off the main thoroughfare.
The most influential alternative Carnival organization to emerge in the latter part of the 20th century was the Krewe of Clones. Its mastermind was Denise Daughtry, who, for her Tulane University M.F.A. thesis in 1971, staged a musical, science fiction-themed Mardi Gras ball as a performance art event. Among those in attendance was Don Marshall, who would go on to become the first director of the Contemporary Arts Center, a converted four-story, red-brick warehouse on Camp Street, in the midst of what was then skid row. Founded in 1976, the CAC quickly became a vibrant creative hub — combining exhibition and performance space, for artists working in a wide spectrum of disciplines, with a generous helping of social hoopla. Marshall asked Daughtry to create a fundraising event for the CAC along the lines of her M.F.A. production.
Organized under the auspices of the CAC, the Clones first paraded in what is now known as the Warehouse Arts District in 1979. Their theme: Songs and Stories of the National Enquirer.
A wild and wacky Mardi Gras arts collective, the Clones represented a break from the packaged modus operandi of “mainstream” parades. “Packaged” in the sense that the experience of a typical member of a mainstream krewe is fairly effortless: you pay your dues and basically just show up for the parade and parties. Float decoration services, masks, costumes and mass-produced throws are procured from outside suppliers.
By contrast, the Clones had a hands-on, do-it-yourself ethos. With everyone pitching in to design and build the parade from scratch, participants were highly invested in the endeavor, with creative ownership of costumes, signs, thematic interpretations, performances and constructions. Various groups, each comprising a “unit” or “subkrewe” in the parade, had considerable autonomy in deciding how to interpret the overall theme. Members, in turn, could develop and depict individual fantasies within the context of their subkrewe’s presentation.
The Clones, marching in the name of art, united the arts community and creatively inclined fun-seekers in a single, multidisciplinary street procession, synthesizing a wide array of performance and visual arts traditions. The themes, most famously 1984’s Celebrity Tragedy, seemed to encourage artist-revelers to flaunt decorum. And with the vast majority of Clones maskers parading on foot, there was an electrifying intimacy with spectators — an extemporaneous give-and-take not dependent on throws. The result was a rollicking circus of a parade with an off-the-wall, Saturday Night Live sensibility.
The Clones recalled the early days of Carnival, of informal walking parades and social lampooning. “Originally Mardi Gras was a trashy street parade with a bunch of lunatics like us,” Daughtry, in an interview with Times-Picayune columnist Angus Lind, once explained. “But somewhere it got lost along the way. Originally Carnival satirized society. Now Carnival has become society.”
2011 Krewe of CRUDE float lampooning the BP oil spill debacle in the Gulf of Mexico
After the implosion of the Clones in 1986, CRUDE was one of four subkrewes that regrouped to partake in the Krewe du Vieux, which continued the renegade ways of the Clones.
— Photo by Pat Jolly
Although the Clones were a big hit with the public, garnered national publicity and proved to be a potent fundraising vehicle for the CAC, tensions over the krewe’s image mounted as the arts center evolved from being an experimental oasis — which flew by the seat of its pants and readily indulged the rule-bucking whims of young local artists — into a polished jewel in the crown of the city’s cultural establishment. Some members of the CAC’s board of directors felt scandalized by the parade’s outré shenanigans, which included marchers got up as hemorrhoids and depictions such as “Hollywood Habits: The Drugs of the Stars,” in which members of the Krewe of CRUDE (Council for the Revival of Urban Decadence and Entertainment) costumed themselves as different drugs (one was a gigantic nose with rolled-up hundred dollar bills protruding from its nostrils, following a trail of white powder tossed onto the street).
In 1986, Super Bowl XX was scheduled to take place at the New Orleans Superdome the day after the Clones parade. Rebellion was in the air as some of the subkrewes balked at efforts by the mother krewe’s brain trust to raise dues and impose guidelines. As it turned out, then-mayor Ernest N. Morial personally intervened to have the parade permit revoked. “He called me ‘a public endangerment to the image of the city of New Orleans,’ ” Daughtry would later recall in an interview with MardiGasUnmasked.com.
In a defiant demonstration of Mardi Gras spirit, some Clones joined up for a ramble through the French Quarter on the night before the Super Bowl, while others staged a “Death of the Clones” funeral march in Mid-City. Daughtry subsequently fell out with the CAC and — along with artist, musician and bon vivant George Schmidt — wound up developing a new Carnival endeavor, the Avant Garde Club. A tony affair with relatively high membership dues, centralized creative control and tastefully rendered, small-scale floats, it was a conscious departure from the renegade ways of its predecessor. The krewe paraded for two years, and although considered an artistic and intellectual success, it lacked the vitality of spirit that had animated the creative anarchy of the Clones.
Krewe du Vieux rises from the ashes
With a knack for nurturing the talents of rule-bucking young artists and instigating multidisciplinary collaboration, the former director of the Contemporary Arts Center was a key player in the formation of both the Clones and the Krewe du Vieux.
— Photo by Pat Jolly
Enter, once again, instigator Marshall, who had left the CAC to run the historic Le Petit Theatre du Vieux Carré. At his urging, four of the Clones subkrewes, plus additional recruits, reconstituted for Mardi Gras 1987 under a new umbrella called the Krewe du Vieux Carré (translation: Krewe of the Old Square, i.e., French Quarter). They managed to finagle a permit to parade through the Quarter before the start of the “official” Carnival parade season.
The krewe barely made it through some lean years in the early 1990s, when it had to advertise to recruit members. Some of the subkrewe efforts were slapdash, and attempts at float construction more the exception than the rule. But the diehard members had a profound belief in the wisdom of fools and the invisible guiding hand of the Muses — the guardian angels of the creative spirit — as well as a missionary zeal for facilitating communal catharsis through the merciless ridicule of dubious public figures. (In terms of risqué and politically incorrect, completely uncensored content, the Krewe du Vieux, as it is commonly known, has far surpassed the Clones.)
Now a recognized tour de force, the Krewe du Vieux has nurtured a talent pool well versed in the Carnival arts. With 17 subkrewes, 900-plus participants (including the city’s top brass bands), a fleet of (mostly) mule-drawn floats and an intensely loyal following, it proudly boasts of kinship with the buffoons and rabble who, in Carnivals of yore, giddily took to the streets to mock the elites.
Social conflicts and political dramas have long been fodder for Mardi Gras. In 1877, the Knights of Momus — whose ancient Greek namesake was banished from the mythical realm of Olympus for his criticism and ridicule of the gods — infamously incurred the wrath of the administration of President Ulysses S. Grant. In a pageant entitled Hades: A Dream of Momus, the krewemen depicted the Republican Party, then nearing the end of its Reconstruction-era control of New Orleans politics, as a bunch of animals on a sinking “Ship of State.” Ruling over the underworld empire from a throne was Satan (with Grant’s face), surrounded by monsters and snakes.
As demonstrated by notable modern-day keepers of the satirical flame, i.e., the Krewe du Vieux, Le Krewe d’Etat and the Knights of Chaos, Mardi Gras simultaneously gives expression to New Orleans’ joie de vivre while revealing fault lines and pent-up frustrations in the body politic. In the early 1990s, submerged conflicts came to the fore in an emotionally wrought clash over Mardi Gras itself and the politics of human relations.
The anti-discrimination imbroglio
In 1991, the City Council moved to require all krewes parading on public streets to accept members regardless of race, gender, handicap or sexual orientation. Spearheaded by the late City Councilwoman Dorothy Mae Taylor, a veteran civil rights campaigner, the effort was widely perceived as a score-settling showdown between a black-majority City Council and the once-indomitable old-line establishment, whose loosened grip on Carnival festivities mirrored its diminished influence in city government.
There were bruised feelings as Taylor, in public hearings, grilled krewe representatives about membership policies and whether the exclusivity of their organizations meant business opportunities, shared in private among members, weren’t otherwise available to non-members on a level playing field. The krewemen, for their part, felt they were owed a debt of gratitude. “Carnival, according to its old-line patrons, had always existed as a civic gesture that the city’s upper class bestowed on the citizenry at great cost to themselves,” notes Souther. “In their eyes, Mardi Gras required no public money and was truly ‘the greatest free show on earth.’ ”
Orpheus’ Smokey Mary
Orpheus followed the “superkrewe” model by offering dazzling floats, national celebrities, fancy throws and a post-parade “extravaganza” with big-name entertainment. But unlike Bacchus and Endymion, it opened its membership to women.
Carnival had, to be sure, grown into a sprawling colossus. Upstart krewes had joined the parade schedule, placing ever-increasing demands on police and emergency personnel, not to mention sanitation crews. Therein lay Taylor’s justification for public regulation of private organizations parading on city streets.
A softened version of the anti-bias law, passed in May 1992, dropped the prohibition against discrimination by gender (not only did some men’s Carnival organizations not want to admit women, but women’s krewes also did not want men). Nevertheless, when the dust settled, Comus, Momus and Proteus had stopped parading (although they went on with their invitation-only Carnival balls).
After the ordinance controversy, native son musician Harry Connick Jr., along with his district attorney father and others, formed a consciously nonexclusive krewe named after the son of the Greek muse Calliope. Taking over Proteus’ old slot on the night of Lundi Gras (Fat Monday), Orpheus, with both male and female members, has enthralled parade-goers. (After signing a city-mandated affidavit saying there is no discrimination in its membership policies, Proteus returned to the streets for Mardi Gras 2000, rolling before Orpheus.)
During the 1990s, almost every year seemed to bring forth more parades and extravagant floats, bigger and fancier beads, more free entertainment options and more exuberant Mardi Gras-themed promotions from alcohol companies and other marketers. Special coverage of Mardi Gras on cable television’s MTV portrayed the city as an irrepressible party destination, luring the spring break crowd.
The “Show-Me” show
Playboy balcony on Bourbon Street, Mardi Gras 1999
To the dismay of some native cognoscenti concerned about a low-brow image overshadowing the pageantry and traditional folkways of Mardi Gras, the presence of Bunnies on Bourbon Street focused media attention on a phenomenon that had become increasingly prevalent.
For Mardi Gras 1999, a contingent from Playboy Enterprises International, including scantily clad Bunnies featured in its flagship magazine, took over the Temptations balcony on the 300 block of Bourbon Street, causing a sensation. Deploying a documentary team to gather material for Playboy.com and other ventures, the media company seemed intent on milking ka-ching from Mardi Gras monkey business, i.e., the bartering of beads for flashes of flesh.
The promotion and exploitation of sensationalized, risqué images of the revels — buxom Bunnies gone wild, college co-eds on Bourbon Street being solicited for flashes and subjected to leering hoots and jeers from louche frat boys dangling gaudy plastic charms from balconies — seemed to coincide with a change in the celebration’s demographics. The local tourism intelligentsia chafed as the bawdy reputation conjured by outside media interests increasingly attracted visitors more interested in inebriated escapades and flashes of nudity than Carnival’s cultural significance, storied pageantry and traditional family orientation.
While Mardi Gras has always served as a forum for expressing sexual fantasies, flashing for beads is a relatively recent phenomenon. It all started innocently enough. One theory holds that after float parades were banned from the French Quarter’s narrow streets in 1973, locals with access to Mardi Gras trinkets and balconies invented a new form of entertainment to fill the void: the flesh-for-beads show. Back then, flashing was a spontaneous and casual affair, with beads a convenient medium of exchange that facilitated fun and conviviality, enabling one to quickly establish a connection with a total stranger.
Beads and other trinkets, known as “throws,” have been tossed from floats since at least 1910 — transforming parades into a participatory experience, as spectators beg and scramble for treasure. As recently as the 1960s, most Mardi Gras beads were hand-strung and made of glass. They were too expensive to be thrown in liberal quantities by float riders. Catching a single strand was considered a blessed event.
March 2000 cover of Playboy
Flaunting the “naughty” side of Mardi Gras in a national publication with millions of readers ignited a hullabaloo pitting tourism executives concerned about the city’s “image” against businesses with vested interests in the French Quarter “bead economy.”
When inexpensive, mass-produced plastic Mardi Gras beads from the Orient arrived on the scene in the early-to-mid 1970s, they weren’t sold in French Quarter emporiums — only locals knew where to procure them. And it was locals, no doubt including striptease dancers employed on Bourbon Street, who had access to what were then private balconies. (Bars with public balconies on Bourbon Street only began to appear in the early 1980s.)
By the late 1990s, what had begun in a spirit of lighthearted indulgence had given way to a voyeuristic atmosphere swarming with professional and amateur paparazzi. Flasher images migrated to the Internet and vendors of salacious Mardi Gras videos peddled their wares online and via late-night television. Consumption of, and participation in, Mardi Gras immodesty had become a leisure activity, and “Show your tits!” had become as much a part of the Carnival lexicon as the traditional cry “Throw me something, Mister!”
Despite periodic pronouncements portending a crackdown — “We will enforce the public nudity laws,” Mayor Marc Morial declared at a Mardi Gras press conference in 1995 — there’d been no concerted effort to dispel the notion that flashing was an accepted form of behavior at Mardi Gras. Then on the eve of Mardi Gras 2000, Playboy published an eight-page spread highlighting what was described as a “nonstop bacchanalia,” where flashing breasts for beads was “outrageously contagious.”
This flaunting of the “naughty” side of Mardi Gras ignited a huge hullabaloo. The police held a press conference in the 300 block of Bourbon Street — the same block as the Playboy balcony — to announce a “zero tolerance” anti-nudity policy. They subsequently upped the ante by vowing stricter enforcement of an obscure law prohibiting the throwing of objects from balconies.
But in the end, the threatened crackdown fizzled. While some city officials and leading citizens thought the breast-baring antics were tarnishing the reputation of the celebration and the city, certain business interests regarded it as a bona fide tourist attraction. Merchants, bar owners and hoteliers in the French Quarter, where the sale of beads and balcony access had become big business, objected to the prospect of patrons being hauled off to jail. Alas, when what Souther calls “the lucrative image of saturnalia” collides with the letter of the law, the famously laissez-faire city of Mardi Gras merriment has a way of bending.
The fallout from the Playboy episode, which sparked a feeding frenzy in the local media, was the first of several noteworthy news stories to reverberate through Carnival in the new century. The 9/11 terrorist attacks, Hurricane Katrina and the Super Bowl-winning New Orleans Saints became overarching themes in 2002, 2006 and 2010, respectively — offering poignant reminders of how New Orleans’ most distinctive civic ritual invariably channels the zeitgeist of the surrounding culture.
Red, white and blue in 2002
“Higgins Hounds” (or terriers, as the case may be) in the 2002 Mystick Krewe of Barkus parade
In the wake of 9/11, patriotic themes came to the fore in a holiday that always reflects the preoccupations of the surrounding culture.
Inspired by the fearless rescue dogs at Ground Zero, the Mystic Krewe of Barkus, a canine Mardi Gras organization that debuted in 1993, paraded to the theme Freedom’s Best Friend: Saluting Canine Heroes. Ambling through the French Quarter on the second Sunday before Fat Tuesday, the procession of dogs and human escorts featured an impressive display of patriotic spirit and ingenious costumes, floats and props. The military hardware included a camouflaged “U.S. Dog Force Tank” and a “Higgins Hounds” replica of the amphibious landing craft designed and produced in New Orleans by the flamboyant entrepreneur Andrew Jackson Higgins. (Higgins, World War II General Dwight D. Eisenhower once declared, “won the war for us.”) There were “Terriers Against Terrorism – United We Bark,” the “Boogie Woogie Bugle Dog of Kennel ‘B,’ ” an “America’s Canine Heroes” float with representations of a Liberty Dog statue (“Liberty, Justice and T-bones for All”), and an Uncle Sam pooch (“I want YOU to rub my tummy!”). “Barksy Ross” made an appearance draped in the stars and stripes, in honor of the legendary flagmaker Betsy Ross.
In other Carnival action that year, the Krewe of Elvis, toting a US flag emblazoned with the face of young Elvis, marched to the theme “An American Trilogy,” a song about the Civil War recorded by The King. Firefighters and other real-life heroes from New York City rode in the Endymion parade. The Orpheus Smokey Mary choo-choo train float became the Smokey Mary Freedom Train, and riders in Zulu handed out patriotic coconuts. All over town, decorative stars and stripes mingled with the usual purple, green and gold regalia. And topically minded maskers and float designers targeted Osama bin Laden, whose most indelible cameo came in Endymion. One of the lead floats, decked out in red, white and blue, featured a large bald eagle; in its claws was the bloodied, turbaned head of the monster himself, his tongue hanging out.
After the Flood, “the most important Mardi Gras ever”
Each ribbon represented a Zulu brother who had perished either as a direct result of havoc unleashed in the wake of Hurricane Katrina or from other causes during the six months that elapsed between the storm and Mardi Gras.
Mardi Gras 2006 occurred just six months after levee and floodwall failures in the wake of Hurricane Katrina nearly wiped out the city. Steeped in meaning and fraught with emotion, it became a crucial test of the city’s ability to recover — and a therapeutic antidote of sorts for the woefully inept government response to the disaster.
There was never really any doubt Mardi Gras would take place in New Orleans; it was mainly a question of scale. When a police strike forced the cancellation of parades in 1979, revelers still swarmed the French Quarter on Fat Tuesday. The uncertainty surrounding Mardi Gras 2006 hinged on the extent to which a beleaguered city government would facilitate the most recognized manifestation of the festivities: “official” float parades on public streets.
Media outlets fixated on portraying Katrina as a racial morality tale that exposed, as Gotham puts it, “a profound disconnect between the branded and commodified image of New Orleans as a place of fun and entertainment” and the underlying reality of a dysfunctional city with a marginalized underclass and a painful legacy of Jim Crow segregation. Negotiations about Mardi Gras 2006, meanwhile, played out in the context of a broader debate about the future of the city itself. There was concern that if the Lower Ninth Ward and other predominantly black neighborhoods weren’t rebuilt, New Orleans could lose the wellspring of its Afrocentric cultural heritage, which had nurtured its Mardi Gras Indian, brass band and second-line traditions.
With a large percentage of the city’s black population displaced by the storm, the prospect of devoting scarce city resources to parading struck some as ill-advised, if not a slap in the face. Mayor C. Ray Nagin initially opposed the idea of having parades, but changed his tune after Zulu, closing ranks with other krewes, opted in.
Zulu had taken a big hit. Its Mid-City clubhouse flooded and most of its members lost their homes in the storm. Their brethren were more scattered than members of most other Mardi Gras clubs.
But Zulu’s determination to parade in 2006 would prove to be of “historic importance,” writes Laborde in Krewe. Mardi Gras had sparked an unprecedented media frenzy, putting New Orleans under a microscope as reporters from around the world came to glean insights into the post-Katrina psyche of the (albeit much reduced) populace and dissect the merits of “revelry amid the ruins.” Not having Zulu in the mix would have been a public relations disaster, eliciting the inevitable juxtaposition of whites parading while displaced blacks suffered. Zulu’s resolve showed that the Mardi Gras spirit, which is fundamentally associated with optimism and positive thinking, crossed color lines. “In its own innocent way,” concludes Laborde, “Zulu may have saved Carnival.”
“The most important Mardi Gras ever!” announced the cover of the February 2006 edition of New Orleans magazine. As it turned out, the city’s most important cultural institution would offer both a respite from the trauma of loss and displacement and a cathartic forum for channeling frustration and delivering satirical commentary through costuming and parade themes. It also provided an opportunity for people to take control of their own destiny and make an affirmative statement to the world: We’re here; the city is open for business and can handle a big event; and we will honor and preserve the traditions we hold dear.
Krewe of PAN float in the 2006 Krewe du Vieux parade
Never had there been such a psychic need to find humor in tragedy, at the expense of public figures who bungled the response to Hurricane Katrina.
In the long and storied history of Mardi Gras, never had there been such a bounty of fodder for poking fun, or such a need to find humor in tragedy. The first parade of the season, the Krewe du Vieux, with approximately 900 members, couldn’t come close to accommodating everyone who wanted to participate, because of public safety issues relating to the number of people who could be moved safely through the French Quarter. Participation in Mardi Gras had become a coping mechanism — and a civic duty.
With a play on the French phrase meaning “such is life,” the Krewe du Vieux’s theme, C’est Levee, referenced levee failures in the wake of the storm (the U.S. Army Corps of Engineers made for a ripe target). Delighted throngs of mostly local spectators beamed and hollered as each of the 17 sub-krewes presented its own uniquely twisted take on all things Katrina-related. The Krewe of PAN, with a float entitled “Buy Us Back, Chirac!” offered a plea for France (Jacques Chirac was president at the time) to reverse the Louisiana Purchase as a way to address U.S. government concerns about the tremendous cost of rebuilding New Orleans. The Krewe of Mama Roux presented “Home is Where the Tarp Is,” with members costumed in the ubiquitous blue material that protects damaged roofs throughout the city. Abandoned “Katrina refrigerators” containing rotted food had become billboards for graffiti folk art in the wake of the storm, so members of the Knights of MONDU donned individually decorated refrigerator headpieces — metaphors for the stench of failed leadership. The Krewe of K.A.O.S. (Kommittee for the Aggravation of Organized Society) lampooned former Federal Emergency Management Agency Director Michael Brown, a k a “Brownie,” with an unadorned float featuring an empty throne. Signs announced that Brown, anointed as grand marshal by the krewe, was “out to dinner” and that decorations and beads were “on the way.”
Overall, Mardi Gras 2006 saw fewer parades, and krewes that did roll generally made do with fewer members, floats, marching bands, flambeaux, dance troupes and mounted posses. But their efforts helped New Orleans believe in itself again, and indeed many krewes stepped up their community agendas, raising money to cover the cost of Mardi Gras police services and spur recovery efforts.
Celebrating the Saints in 2010
Members of the Divine Protectors of Endangered Pleasures, a k a the Divas, parading through the French Quarter on the Friday before Mardi Gras 2010
Channeling the spirit of the moment as New Orleans reveled in its first-ever Super Bowl victory.
Four years later, Mardi Gras not only provided the perfect excuse to extend the celebration of the Super Bowl-champion Saints into a New Orleans-style party marathon. It was also a golden opportunity to affirm the progress of the city’s recovery and what Mardi Gras and the true “spirit of New Orleans” were all about.
The team’s dramatic 31 to 28 overtime win against the Minnesota Vikings in the NFC Championship game unleashed a tsunami of euphoria that would crest two weeks later, on Super Bowl Sunday (Feb. 7), and keep right on rippin’ and rollin’ through Fat Tuesday (Feb. 16).
For some, the atmosphere leading up to the big game recalled the first Mardi Gras after Katrina. Back then, it was like a bittersweet homecoming; people came together in the streets for an emotional outpouring of communal solidarity, sharing their love for New Orleans. Now everyone was smiling through their tears over the heroic glory march of the Saints — perennial underdogs who’d struggled mightily over the years to reward their fans’ diehard devotion.
The success of the team had become inextricably intertwined with an urban renewal movement. Saints players and staff were heavily invested in the civic and philanthropic life of the community. Quarterback and team leader Drew Brees — having overcome a potentially career-ending shoulder injury that turned out to be a blessing in disguise because it brought him to New Orleans — found himself invariably cast as civic savior and principal protagonist in a poignant narrative evoking themes of resilience, rebuilding and resurrection.
In 2006, recurring motifs included looting, political and bureaucratic incompetence, FEMA trailers, abandoned refrigerators and blue tarps (the ubiquitous material used to cover storm-damaged buildings). Mardi Gras 2010 bore witness to flying pigs, Lombardi trophies, black and gold (the colors of the Saints) and the fleur-de-lis — the symbol of the Bourbon monarchy of France, the City of New Orleans and the World Champion Saints. (The Vince Lombardi Trophy, awarded to Super Bowl winners, is named after the legendary NFL coach who guided the Green Bay Packers to victory in Super Bowls I and II.)
And, of course, “Who Dat.” The term, with roots in African American musical variety theater dating back to the late 1800s, had long been a popular rallying cry of Saints fans, a.k.a. the Who Dat Nation. It implies an irreverent posing of a challenge, as in: Who dat? Who dat? Who dat say dey gonna beat dem Saints?
Although it had the trappings of a Mardi Gras parade — high school marching bands, dance troupes, glitzy floats and outstretched hands grabbing for throws — Lombardi Gras was of a different order of magnitude. Magic was in the air and multitudes were in an ecstatic frenzy.
Spontaneous Who Dat chants and thunderous, cheering ovations erupted. The players were like exuberant kids on the loose — bounding around, hanging off the floats, high-fiving joyous fans, even jumping off to dance and make merry in the streets. They led chants, offered toasts and tributes, and sang along to the ubiquitous anthem of the Saints’ season: “Halftime (Stand Up and Get Crunk)” by Atlanta rap duo the Ying Yang Twins. (The floats were rigged with microphones and powerful P.A. systems.) Drawing the biggest (and most enraptured) crowd ever to watch a New Orleans parade — estimates ran as high as 800,000 — the spectacle of Lombardi Gras set an almost impossibly high bar for any other city producing a championship salute.
Sean Payton rode on the last float, the massive Smokey Mary choo-choo train. At historic Gallier Hall, where politicians and dignitaries gathered, the victorious coach offered a toast.
“Here’s to the best Mardi Gras week in the history of this city,” he proclaimed, holding aloft the Lombardi trophy.
Having guided the Saints to a fairy-tale realm, while also serving as a key catalyst for helping New Orleans recover from a deluge of mythic proportions, Brees was arguably the most heroic — and beloved — Mardi Gras monarch of all time.
Photo by Lisa Dubois
Some observers suggested that after the intense exhilaration of the Super Bowl and Lombardi Gras, Mardi Gras would be anticlimactic. But with the prospect of every parade becoming an extended black-and-gold Who Dat party for the city and the Saints, revelers anticipated the seasonal rituals in a way they could only previously have dreamed about. Mardi Gras was the frosting on the victory cake.
That New Orleans would revel in the historic moment in the biggest way possible was never in doubt, and indeed much of the fun would come from seeing the myriad ways in which locals, so well schooled in the art of diversion, would creatively amplify and interpret all things Who Dat/Saints.
The theme would translate on many levels, above all else making Who Dat Party Gras the most jubilant party ever in America’s most celebrated party town. Marquee Saints players rode on floats in big parades. Spontaneous get-crunk dances erupted all along the route. Saints signs, banners and regalia were everywhere. An incredible array of costumes paid homage to the team and the surrounding hoopla. Revelers were blessed with bright sunshine and warming temperatures going into the final weekend and prevailing straight through Fat Tuesday. The King of New Orleans, Drew Brees, reigned as Bacchus XLII. The Orpheus parade unexpectedly stopped on Canal Street so that Coach Payton, who was riding on one of the lead floats, could hit the street and share the Lombardi Trophy with bedazzled spectators lining the barricades.
Even the most grizzled veterans of high times in the Crescent City had to pinch themselves: Could it possibly get any better than this?
In the French Quarter on Mardi Gras, with the sun shining brightly on the Who Dat Nation, trophy replicas seemed to be everywhere — the costume theme/accessory du jour. It had become the symbol not just of a football dream come true, but also of triumph over adversity endured in the wake of devastating levee failures. Like the first Mardi Gras after the flood, the 2010 celebration had become an empowering forum for commemoration and catharsis.
A prodigious creative vortex
2011 Redbeans parade
In the wake of Hurricane Katrina, thanks to a proliferation of upstarts looking to put their stamp on the festivities, do-it-yourself Mardi Gras artistry has achieved remarkable — perhaps even unprecedented — visibility.
Although New Orleans has struggled to bring its population back to pre-storm levels, Mardi Gras, as a vehicle for preserving and incubating cultural traditions, continues to thrive. A highly dynamic phenomenon — in which old customs are constantly being reinterpreted and inspired new ideas can still attract a devoted following — the celebration is at the vortex of a burgeoning post-Katrina arts scene that has played a crucial role in the restoration of a city where creative expression through performance, procession and masquerade is not only a consuming passion but a way of life.
Evidence of the vibrant creative ferment is found in the bounty of new dance troupes and strolling/rolling clubs that have sprouted since the storm. Female dance ensembles like the Camel Toe Lady Steppers, Muff-A-Lottas and Bearded Oysters have, like the Pussyfooters before them, refreshed the Mardi Gras landscape, adding idiosyncrasy to mainstream parades. The excitement surrounding the Saints helped give birth to the 610 Stompers — “ordinary men with extraordinary moves” and shiny gold shoes, so named because the group’s founder holds season tickets in Superdome section 610.
Other noteworthy post-Katrina phenomena include the rootsy Redbeans Parade — a ritual that can be thought of as the culinary nod to Mardi Gras Indians. Whereas the Indians use mostly beads, glass crystals and feathers to make their painstakingly elaborate suits, these revelers use red beans and rice — a staple dish traditionally served on Mondays in New Orleans — to individual creative effect.
2011 ‘tit Rex float
Fantasy and illusion, imagination and transformation, go hand in hand with creativity and Mardi Gras — which is not so much an “event” as a cultural phenomenon that is expressed through a range of art forms and a dizzying amalgamation of happenings and habits.
— Photo by Pat Jolly
Do-it-yourself Mardi Gras artistry is also on display in the ‘tit Rex “micro parade,” which first rolled through the Bywater neighborhood, on the second Saturday before Fat Tuesday, in 2009. Formed in part in reaction to “bigness” in the surrounding culture, the procession (“ ’tit” in its name is an abbreviation of petit) taps into the downsizing trend with handcrafted, shoebox-size floats that are pulled by their artist-creators, who pass out a variety of homemade miniature throws.
The ways in which symbols and idols become feedstock for costuming and commemoration are myriad. Hence the Krewe of Rolling Elvi — Elvis impersonators who ride motorscooters — and the Krewe of St. Joan of Arc, a Twelfth Night procession honoring the pious-peasant-girl-turned-heroine, a k a the Maid of Orleans, who was born on Twelfth Night, 1412. Also new to Mardi Gras since Katrina: Chewbacchus, a krewe comprised of puppet masters, science fiction aficionados and mad scientists. (The name is derived from Chewbacca, a k a Chewie, the furry sidekick to Han Solo in the Star Wars movies.)
As a cultural phenomenon expressed through a range of art forms and a dizzying amalgamation of happenings and habits that can be experienced on many different levels, Mardi Gras is more multifaceted than ever. Mainstream parades range from extravagant spectacles with huge floats and special effects to traditional processions built on old cotton wagons with wooden-spoked wheels dating from the 19th century. Revels run the gamut from glittering, invitation-only balls with debutante queens and maids to spontaneous eruptions of dance and joie de vivre in the street; from family-oriented parade-viewing picnics to the touristy bacchanal of Bourbon Street. And thriving on the fringes, in alternative realities of their own making, is a veritable cornucopia of subcultural processions and presentations that, collectively, represent an authentic and compelling expression of indigenous folkways.
|
In any case, the Church in effect rationalized Carnival as an expression of the occasional need for carefree folly. Because the day before Ash Wednesday, which marked the beginning of Lent, was a day of feasting — as symbolized by the ritual slaughter of a fatted bull or ox (boeuf gras) — it came to be known as Fat Tuesday or, as the French would say, Mardi Gras.
Mardi Gras became an “official” Christian holiday in 1582, when Pope Gregory XIII instituted the namesake Gregorian calendar still in use today. By recognizing Mardi Gras as an overture to Lent, the idea was for all the partying and foolery to be over with when it came time to observe the requisite austerities.
In medieval times, the feast of the Epiphany (January 6) — also known as Kings’ Day or Twelfth Night (it’s the twelfth day of Christmas, the day the gift-bearing Magi visited the Christ child) — evolved into a major celebration alongside Carnival. Monarchs would don their finest regalia, maybe even wager in a game of dice. Children received presents to commemorate the gifts given by the kings to the baby Jesus. In the great houses of Europe, the holiday became a glittering finale to a 12-day Christmas cycle, with elaborate entertainments featuring conjurers, acrobats, jugglers, harlequins and other humorous characters — notable among them the Lord of Misrule, whose task was to orchestrate the festivities. He is kin to Carnival’s King of the Fools (most famously represented by the character Quasimodo in Victor Hugo’s novel The Hunchback of Notre Dame).
Jesters on Fat Tuesday
Jesters in Carnival represent the license to poke fun with abandon, just as jesters in the medieval courts of Europe could speak truth to power with impunity.
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://www.exploregod.com/articles/what-is-mardi-gras
|
What Is Mardi Gras? | Explore God Article
|
What Is Mardi Gras?
When you think of Mardi Gras, you probably think wild parties. What is Mardi Gras? Where did it come from?
You’ve seen the pictures. You’ve heard the stories. You may have even experienced the party yourself.
Every February or March, Mardi Gras celebrations take cities like New Orleans by storm. Beads, alcohol, parades, and even more alcohol flow through the streets of the French Quarter. And that’s not even the biggest celebration.
In Brazil, the holiday is known as Carnaval. It is the most popular holiday of the year, and the partying, parades, and celebrations last for an entire week.1 The event draws millions of tourists from around the world.
So what exactly is this festival known as Mardi Gras? Where did it come from and why do so many people around the world celebrate it every year?
Religious Origins
Mardi Gras, also known as Fat Tuesday, is actually linked to another religious holiday. For centuries, Christians have celebrated Easter to commemorate Jesus’ crucifixion and his subsequent resurrection from the grave.
For people of faith, Easter is a time of celebration and feasting. Jesus’ victory over death gives them a joyful hope for a new life and a restored relationship with God.
In order to prepare for this celebration, early Christians developed another religious season in the church calendar called Lent.2 For forty days prior to Easter, Christians reflect, repent, and fast in order to prepare themselves to experience the full meaning of Good Friday and Easter Sunday.
Just as people carefully prepare for big events in their personal lives—a wedding, a graduation, or a big move—Lent invites people of faith to make their hearts ready to remember Jesus’ death, commemorate his sacrifice, and celebrate his resurrection.
Lenten Sacrifices
A little more history on Lent is necessary. The forty days of the Lenten season begin on Ash Wednesday (about six weeks before Easter) and continue until Easter, not counting Sundays, as Sundays are still considered days of celebration.
The number of days is based on the biblical significance of the number forty—specifically, the forty years the Israelites spent wandering in the desert and Jesus’ forty-day fast in the wilderness.3
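To make the counting concrete, here is a minimal, hypothetical Python sketch (not part of the article) that derives Ash Wednesday and Fat Tuesday (Mardi Gras) from the date of Western Easter using the python-dateutil library; it assumes the common reckoning in which Lent spans the 46 calendar days from Ash Wednesday through Holy Saturday, i.e., 40 fast days plus six Sundays.
# Hypothetical illustration only; not drawn from the article above.
# Assumes Western (Gregorian) Easter and that Ash Wednesday falls 46 calendar
# days before Easter Sunday (40 fast days plus six Sundays).
from datetime import timedelta
from dateutil.easter import easter  # third-party package: python-dateutil
def lent_dates(year: int):
    easter_sunday = easter(year)                        # Gregorian Easter Sunday
    ash_wednesday = easter_sunday - timedelta(days=46)  # start of Lent
    mardi_gras = ash_wednesday - timedelta(days=1)      # Fat Tuesday
    return mardi_gras, ash_wednesday, easter_sunday
# Example: lent_dates(2006) gives Feb. 28, March 1 and April 16, 2006.
print(lent_dates(2006))
Under these assumptions, Mardi Gras can land anywhere from early February to early March, which is why the celebrations described above occur "every February or March."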
Historically, Christians have given up something during Lent as a way to refocus on their relationship with God. Lent is considered an opportunity to forgo something one typically enjoys in order to identify with Jesus and remember the sacrifice that he made.
Most often, this includes fasting from certain food or drinks, like chocolate or coffee. Today, some Christians give up more modern luxuries, such as the Internet, social media, or e-mail; reading books, magazines, or newspapers; shopping; or watching television or listening to music.
None of these things are inherently evil. The idea is to abstain from these subtle but powerful influences in order to become less distracted and better equipped to focus one’s attention on God.
On Ash Wednesday, the beginning of Lent, some Christians attend services and place ashes on their foreheads as an outward symbol of the repentance and fast they are undertaking.
Fat Tuesday
This brings us to Mardi Gras. The day before Ash Wednesday came to represent one’s last chance to indulge in rich foods, intoxicating drink, or anything else one is giving up for Lent. Hence, the day became known as Fat Tuesday, or in French, Mardi Gras.
As mentioned, another name for this festival is Carnaval or Carnival. The word “carnaval” comes from Latin terms that mean “to remove meat,” a phrase that came to be associated with fasting during Lent.4 As historian Jill Foran notes, “People living in Paris, France, hundreds of years ago would parade a fattened bull through the city’s streets on Mardi Gras. This show reminded everyone not to eat meat during Lent.”5
Mardi Gras Today
While the origins of the holiday are religious in nature, most revelers today simply use the festival as an opportunity to celebrate, dress up in costumes, enjoy a parade, indulge in overeating or drinking, or engage in general lewd behavior.6 Indeed, Mardi Gras is known for its “anything goes” kind of atmosphere, where generally discouraged social behaviors are instead accepted with a shrug.
However, as with most holidays, Mardi Gras can have significant meaning for participants—whether in Brazil, their local bar, or their own homes—outside of its shadier reputation. Celebrating a meal with friends or family before entering a season of intentional abstinence can provide healthy nourishment for the soul.
The Purpose of the Holiday
Either way, Mardi Gras marks the approach of a significant holiday season—Lent and Easter. The importance of Mardi Gras is not found in a week-long party, drunken revelry, or a parade of multicolored floats.
Mardi Gras—Fat Tuesday—signals the coming of a time of repentance, realignment with God, and ultimately, celebration of Jesus’ sacrifice for all people.
The word “Lent” comes from a Saxon word that originally meant “length,” referring to the springtime season in the northern hemisphere when the days were lengthening and signs of new life were appearing. See Bobby Ross, Living the Christian Year: Time to Inhabit the Story of God (Downers Grove, IL: InterVarsity Press, 2009), 129.
|
What Is Mardi Gras?
When you think of Mardi Gras, you probably think wild parties. What is Mardi Gras? Where did it come from?
You’ve seen the pictures. You’ve heard the stories. You may have even experienced the party yourself.
Every February or March, Mardi Gras celebrations take cities like New Orleans by storm. Beads, alcohol, parades, and even more alcohol flow through the streets of the French Quarter. And that’s not even the biggest celebration.
In Brazil, the holiday is known as Carnaval. It is the most popular holiday of the year, and the partying, parades, and celebrations last for an entire week.1 The event draws millions of tourists from around the world.
So what exactly is this festival known as Mardi Gras? Where did it come from and why do so many people around the world celebrate it every year?
Religious Origins
Mardi Gras, also known as Fat Tuesday, is actually linked to another religious holiday. For centuries, Christians have celebrated Easter to commemorate Jesus’ crucifixion and his subsequent resurrection from the grave.
For people of faith, Easter is a time of celebration and feasting. Jesus’ victory over death gives them a joyful hope for a new life and a restored relationship with God.
In order to prepare for this celebration, early Christians developed another religious season in the church calendar called Lent.2 For forty days prior to Easter, Christians reflect, repent, and fast in order to prepare themselves to experience the full meaning of Good Friday and Easter Sunday.
Just as people carefully prepare for big events in their personal lives—a wedding, a graduation, or a big move—Lent invites people of faith to make their hearts ready to remember Jesus’ death, commemorate his sacrifice, and celebrate his resurrection.
Lenten Sacrifices
A little more history on Lent is necessary. The forty days of the Lenten season begin on Ash Wednesday (about six weeks before Easter) and continue until Easter, not counting Sundays, as Sundays are still considered days of celebration.
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://www.spanish.academy/blog/mardi-gras-in-latin-america-carnival-and-hispanic-culture/
|
Mardi Gras in Latin America: Carnival and Hispanic Culture
|
Mardi Gras in Latin America: Carnival and Hispanic Culture
Mardi Gras is upon us! The festivities have arrived. Every year, they bring plenty of color and celebrations around the world.
If you’re not familiar with Mardi Gras and its origins, it’s a Christian holiday that dates back thousands of years to pagan spring and fertility rites. Today, Mardi Gras is a cultural phenomenon.
In Hispanic culture, Mardi Gras is known as Carnaval. The celebrations are so emblematic and often full of debauchery that many of them have evolved to be week-long festivals that are a prelude to Lent. In Spanish-speaking countries, Mardi Gras is a celebration that must be experienced.
Let’s dive into the vivid Mardi Gras celebrations of Latin America.
Are Carnival and Mardi Gras Different?
The name Mardi Gras comes from the French words “mardi” (Tuesday) and “gras” (Fat). The concept of “Fat Tuesday” refers to the day before Ash Wednesday and the start of Lent.
The 40 days of Lent that follow are meant to be a period of penance and fasting that culminate on Easter Sunday. In the old days, the Mardi Gras tradition consisted of binging on rich, fatty foods, anticipating the arrival of several weeks of fasting and sacrifice.
The tradition originated with the arrival of Christianity to Rome, leading to its expansion to countries like England, Portugal, Spain, Italy, and ultimately the Americas.
Mardi Gras and Carnival are the same holiday, although they vary depending on the people celebrating it and their cultural traditions. The word carnaval comes from the Medieval Latin word Carnelevarium, which means “to take away meat”; thus, the two names of this festivity are closely related.
Latin American Mardi Gras Celebrations
If you celebrate Mardi Gras back home, you know this holiday is full of flashy costumes, savory foods, live music, and all the dancing you can imagine! Like in New Orleans and Venice, Latin American Mardi Gras celebrations also go above and beyond in being memorable to those who attend. The festivities are marked by a cultural blend of traditions that have evolved since they were brought to America by European colonizers.
Let’s take a look at how eight Latin American countries celebrate Carnival each year!
Colombia
El Carnaval de Barranquilla was first celebrated after the Spanish brought the festivities to the city of Cartagena. Originally, only African slaves took to the streets with musical instruments, traditional garments, song, and dance.
Today, Colombian Mardi Gras welcomes over 300,000 people from all over the country celebrating Colombia’s diversity, folklore, and modern art. Barranquilla’s Carnaval parade is the second largest in the world and has a different theme each year.
México
Mardi Gras celebrations in Mexico are as large and diverse as the country.
Towns like Mazatlán focus on highlighting Mexico’s culture, adding banda and grupera music to the mix. It’s the third largest Mardi Gras celebration in the world and welcomes thousands of visitors from all over Sinaloa and surrounding states. The cultural relevance of Mazatlán’s Mardi Gras celebrations includes hosting La Velada de las Artes (Evening of the Arts) and honoring the winner of the Premio Mazatlán de la Literatura (Mazatlan Award for Literature).
El Carnaval de Mazatlán is such a welcoming party that it’s out of the question to be upset during the festivities. On the second day of the festival, attendants celebrate the quema del mal humor (burning of the bad mood). This tradition is an essential part of the Mazatlán Carnival and allows those who celebrate it to get rid of negative vibes by burning them before jumping in on a celebration that goes on for a week. Sinaloenses party under the philosophy of hasta que el cuerpo aguante, meaning you party until your body drops.
Other Mexican towns including Oaxaca, Mérida, Veracruz, and Campeche embrace the folklore and traditions of the different ethnicities and indigenous cultures. Mexican parades include costumes resembling Spanish colonizers, Mayan and Aztec characters, and mythical creatures. The celebration encourages partygoers to wear costumes and masks. There’s live music, plenty of drinking, and eating as many traditional Mexican dishes as you can imagine.
Guatemala
Mardi Gras celebrations in Guatemala usually begin on Thursday and finish on Martes de Carnaval. Musical parades take place all over the country but the true celebration focuses on younger generations. It’s tradition for kids to wear costumes and prepare unique handmade cascarones, colorful egg shells full of confetti known as pica-pica.
In the days leading up to Mardi Gras, cascarones are for sale all over the streets of Guatemala, and they’re a creative activity many Guatemalan kids enjoy at school. The highlight of the celebration is when children crack the cascarones on each other’s heads, making it a holiday suitable for families and people of all ages.
Venezuela
Mardi Gras is celebrated in different parts of Venezuela, the largest celebration being held in El Callao. The festivities focus on highlighting the ethnic diversity of Venezuela and feature parades with Calypso music and emblematic characters known as los diablos (the devils) del Callao.
Los diablos represent the magical-religious part of the festival. They dress in white, red, black, and yellow, covering their faces with large masks with threatening faces. With their contagious dance, they make those around them jump in on the fun. The Carnival of El Callao is known worldwide. In 2016, it was declared Intangible Heritage of Humanity by UNESCO.
Uruguay
Uruguay is recognized for having the longest Mardi Gras celebrations in the world. Beginning on January 21st, Uruguay’s Carnival is known for the music, fantasy, and rich colors of the parades. It lasts over 45 days with the festivities finishing in mid-March.
Montevideo hosts comedy shows, theater, and costume contests. Mardi Gras is an immense holiday for Uruguayans; it even has a museum! If you get a chance to dabble in these events, you can even join dance rehearsals and practice your Samba skills.
Bolivia
Located at an altitude of 3,700 meters above sea level, the town of Oruro hosts Bolivia’s biggest Mardi Gras celebration. It takes place over six days and displays a range of popular arts in the form of masks, textiles, and embroidery. The main event of the carnival is the procesión (parade) where dancers strut for twenty hours in a four-kilometer stretch. El Carnaval de Oruro has more than 28,000 dancers and 10,000 musicians distributed into fifty groups. It was declared Intangible Heritage of Humanity by UNESCO in 2001.
Dominican Republic
The Dominican Republic’s Mardi Gras celebrations are a gathering of people who hit the streets looking to enjoy a good time. The elements that stand out are a mixture of African traditions brought by the slaves transported to the New World by European colonizers.
El Carnaval Dominicano hosts the Califé Show, where they make fun of controversial politicians and celebrities, making it a playful and amusing celebration. Another curiosity of this holiday is when partygoers hide candy for children to find and pretend to steal their neighbor’s farm animals.
Brazil
A must for true partygoers and fans of Mardi Gras. It was first celebrated in 1840 with polka and waltz dancing, and in 1917, Samba became the star of the event. The Carnival of Río de Janeiro has become more than a traditional celebration; it’s home to the best Samba dancers and world-class dance competitions.
Mardi Gras in Brazil welcomes millions of people from around the world. It’s a celebration unlike anything you’ve ever seen or experienced with the bright colors, elaborate sequin costumes, delicious food, and large parade floats.
How do You Celebrate Mardi Gras?
Carnival in Latin America is a huge deal, and it has continued to evolve and grow. I hope this blog post gives you more insight into how Latinos enjoy this fantastic holiday.
Are you ready to kick-off the festivities? How do you celebrate Carnival back home? Is it as big as in Latin America? I would love to hear your thoughts. Leave a comment and tell me your favorite part about Mardi Gras!
|
Mardi Gras in Latin America: Carnival and Hispanic Culture
Mardi Gras is upon us! The festivities have arrived. Every year, they bring plenty of color and celebrations around the world.
If you’re not familiar with Mardi Gras and its origins, it’s a Christian holiday that dates back thousands of years to pagan spring and fertility rites. Today, Mardi Gras is a cultural phenomenon.
In Hispanic culture, Mardi Gras is known as Carnaval. The celebrations are so emblematic and often full of debauchery that many of them have evolved to be week-long festivals that are a prelude to Lent. In Spanish-speaking countries, Mardi Gras is a celebration that must be experienced.
Let’s dive into the vivid Mardi Gras celebrations of Latin America.
Are Carnival and Mardi Gras Different?
The name Mardi Gras comes from the French words “mardi” (Tuesday) and “gras” (Fat). The concept of “Fat Tuesday” refers to the day before Ash Wednesday and the start of Lent.
The 40 days of Lent that follow are meant to be a period of penance and fasting that culminate on Easter Sunday. In the old days, the Mardi Gras tradition consisted of binging on rich, fatty foods, anticipating the arrival of several weeks of fasting and sacrifice.
The tradition originated with the arrival of Christianity to Rome, leading to its expansion to countries like England, Portugal, Spain, Italy, and ultimately the Americas.
Mardi Gras and Carnival are the same holiday, although they vary depending on the people celebrating it and their cultural traditions. The word carnaval comes from the Medieval Latin word Carnelevarium, which means “to take away meat”; thus, the two names of this festivity are closely related.
Latin American Mardi Gras Celebrations
If you celebrate Mardi Gras back home, you know this holiday is full of flashy costumes, savory foods, live music, and all the dancing you can imagine!
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://www.thedailymeal.com/10-things-you-didn-t-know-about-mardi-gras/22814
|
11 Things You Didn't Know About Mardi Gras
|
11 Things You Didn't Know About Mardi Gras
In the middle of a long, cold winter, you need to have a reason to let a little loose. Months after Christmas and New Year's, the next cause for celebration is the highly anticipated holiday of Mardi Gras. This colorful and boisterous holiday marks the beginning of a new season, a turning over a new leaf of sorts, and is an overall joyous and uplifting celebration that many New Orleans natives and travelers look forward to.
But before the beads and the masquerade masks, the origins of Mardi Gras pre-date Christianity. Though now widely accepted as a Christian holiday that signifies the last hurrah before Lent, the day before Ash Wednesday originated as a pagan celebration of springtime and fertility with the Roman festivals of Saturnalia and Lupercalia. When Christianity arrived in ancient Rome, leaders decided to incorporate the holiday as a celebration known as "Carnival." Derived from the word carnelevarium, which means to take away or remove meat, Carnival, on the other hand, was designed as a day of excess meat eating.
Since then, this celebration of excess has morphed into a landmark event in New Orleans that involves a ton of music, booze, and food. And while you probably know about traditions like King Cake, do you know exactly why they hide a plastic baby in it? Do you know why the official colors of Mardi Gras are purple, green and gold or why many wear masks? To help you party with a purpose, we rounded up some interesting facts about this rambunctious holiday that you might not have ever known.
This article was originally published by Lauren Gordon on Feb. 28, 2014.
|
11 Things You Didn't Know About Mardi Gras
In the middle of a long, cold winter, you need to have a reason to let a little loose. Months after Christmas and New Year's, the next cause for celebration is the highly anticipated holiday of Mardi Gras. This colorful and boisterous holiday marks the beginning of a new season, a turning over a new leaf of sorts, and is an overall joyous and uplifting celebration that many New Orleans natives and travelers look forward to.
But before the beads and the masquerade masks, the origins of Mardi Gras pre-date Christianity. Though now widely accepted as a Christian holiday that signifies the last hurrah before Lent, the day before Ash Wednesday originated as a pagan celebration of springtime and fertility with the Roman festivals of Saturnalia and Lupercalia. When Christianity arrived in ancient Rome, leaders decided to incorporate the holiday as a celebration known as "Carnival." Derived from the word carnelevarium, which means to take away or remove meat, Carnival, on the other hand, was designed as a day of excess meat eating.
Since then, this celebration of excess has morphed into a landmark event in New Orleans that involves a ton of music, booze, and food. And while you probably know about traditions like King Cake, do you know exactly why they hide a plastic baby in it? Do you know why the official colors of Mardi Gras are purple, green and gold or why many wear masks? To help you party with a purpose, we rounded up some interesting facts about this rambunctious holiday that you might not have ever known.
This article was originally published by Lauren Gordon on Feb. 28, 2014.
|
no
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://www.eater.com/22268353/king-cake-history-tradition-mardi-gras
|
The King Cake Tradition, Explained - Eater
|
Americans usher in the new year with diets and lifestyle resolutions galore, but many people across the globe — particularly those from predominantly Catholic countries — celebrate the calendar change with a sweet pastry known as king cake. It first appears in bakery cases at the beginning of each year and can be found at the center of celebrations through early spring. Some associate it with Mardi Gras, others with a celebration known as Epiphany.
King cake is eaten on January 6 in honor of Epiphany, or Twelfth Night, which historically marks the arrival of the three wise men/kings in Bethlehem who delivered gifts to the baby Jesus. (The plastic baby hidden inside king cakes today is a nod to this story.) King cake also appears on tables throughout the Carnival season, which runs from Epiphany to Fat Tuesday (the day before Ash Wednesday and the start of Lent), at which point practitioners typically abstain from such indulgences as cake.
The pastry goes by different names around the world, and comes in varying shapes and styles. Here now, an exploration of the history of this baked good, the traditions surrounding it, and a brief look at king cakes across the globe.
Peter Kramer/NBC/NBC NewsWire via Getty Images
What is king cake?
A sweet, circular pastry, cake, or bread that is the centerpiece of a historically Catholic celebration known as Epiphany, which falls on January 6. Today it takes on many different forms and is found at a variety of similar celebrations with religious origins. Most Americans are likely familiar with Louisiana-style king cakes that consist of a cake-y bread dough twisted into a ring and decorated with colored icing and sprinkles. Variants can be made from cake batter or bread dough or pastry, but almost all versions are shaped into a circle or oval to mimic the appearance of a king’s crown.
Every king cake contains a trinket — often a small figurine in the shape of a baby — which plays a crucial part in the celebration of the holiday that inspired this pastry. Whoever finds the trinket in their slice of cake gets to be the “king” for a day.
Where did it originate?
King cake is said to have originated in Old World France and Spain and came to be associated with Epiphany during the Middle Ages. When it was brought to the New World (along with Catholicism and Christianity), the tradition evolved further.
In New Orleans, king cake and Mardi Gras go hand in hand: The cakes can be found starting in early January and are available up until Ash Wednesday and the start of Lent. The symbolic bean or baby baked (or embedded) into the king cake is important to Mardi Gras celebrations because the person who gets the piece containing the baby must host the next year’s celebration.
How is king cake made?
To make it, sweet dough is twisted into a round and sometimes adorned with colored sugar doughs before being baked. Some versions are split and then filled with cream or fruit; others are topped with candied fruit, icing, and colored sugar. Louisiana-style king cake is almost always decorated in the colors associated with Mardi Gras: green, gold, and purple (representing faith, power, and justice).
Why is there a plastic baby inside my king cake?
Shutterstock
While there’s a long history of hiding trinkets inside king cakes, the modern tradition of a small plastic baby started in New Orleans. A commercial bakery called McKenzie's popularized the baby trinket that was baked into cakes back in the 1950s; they were originally made of porcelain but later swapped out for an easier-to-find plastic version. These days the plastic baby figurine is typically sold along with the already-baked cake and hidden by the purchaser, rather than coming baked inside (due to concerns about eating something that’s been baked around a piece of plastic).
The baby inside the king cake is such an important tradition that each year during Carnival, the New Orleans’ NBA team unveils a seasonal King Cake Baby mascot (which is absolutely terrifying, by the way).
What other countries serve king cakes?
In France, galette des rois translates literally as “cake of kings,” and is a flaky pastry cake made from puff pastry that is typically filled with a frangipane almond cream (or occasionally fruit or chocolate). A decorative pattern is scored into the top of it before baking, and sometimes the finished cake is topped with a paper crown. Traditionally, there is a “fève,” or bean, hidden inside.
The king cakes of New Orleans more closely resemble those of Spanish-speaking countries rather than the king cake that originated in France.
Rosca de reyes, served in Spain and Latin America, is a ring-shaped sweet bread that can also be topped with candied fruit, in addition to a light layer of icing.
Bolo rei, the Portuguese version of king cake, is also ring-shaped and is filled with candied fruit and sometimes nuts.
Bulgaria’s banitsa is generally served on New Year’s Eve, and also on other special occasions like weddings or festivals. It consists of sheets of phyllo dough wrapped around soft cheese and it contains charms as well as written fortunes.
The vasilopita in Greece and Cyprus is traditionally served on New Year’s Day, and closely resembles the French galette. It is round and flat with almonds on top that sometimes denote the year. Vasilopita also usually has a coin baked into it.
The common denominator between all of these cakes is that they all have a small trinket or figurine — such as a bean, a coin, a nut, or a tiny baby figurine — hidden inside. Whoever finds the trinket in their slice of cake gets to be “king” for a day and is also said to have good luck.
Where can I get my own king cake?
If you happen to be located in New Orleans, there are bakeries galore selling king cakes — whether you’re in the market for the traditional brioche ring version or something fancied up with peanut butter or bacon. Outside of Louisiana, every major city, particularly if there’s a sizable Catholic presence, will also be home to at least a couple of bakeries catering to king cake lovers this time of year.
And for those who want to go the DIY route, there is no shortage of king cake recipes online, including quick-and-lazy variations involving canned cinnamon rolls. Just don’t forget to include the baby.
|
Whoever finds the trinket in their slice of cake gets to be the “king” for a day.
Where did it originate?
King cake is said to have originated in Old World France and Spain and came to be associated with Epiphany during the Middle Ages. When it was brought to the New World (along with Catholicism and Christianity), the tradition evolved further.
In New Orleans, king cake and Mardi Gras go hand in hand: The cakes can be found starting in early January and are available up until Ash Wednesday and the start of Lent. The symbolic bean or baby baked (or embedded) into the king cake is important to Mardi Gras celebrations because the person who gets the piece containing the baby must host the next year’s celebration.
How is king cake made?
To make it, sweet dough is twisted into a round and sometimes adorned with colored sugar doughs before being baked. Some versions are split and then filled with cream or fruit; others are topped with candied fruit, icing, and colored sugar. Louisiana-style king cake is almost always decorated in the colors associated with Mardi Gras: green, gold, and purple (representing faith, power, and justice).
Why is there a plastic baby inside my king cake?
Shutterstock
While there’s a long history of hiding trinkets inside king cakes, the modern tradition of a small plastic baby started in New Orleans. A commercial bakery called McKenzie's popularized the baby trinket that was baked into cakes back in the 1950s; they were originally made of porcelain but later swapped out for an easier-to-find plastic version. These days the plastic baby figurine is typically sold along with the already-baked cake and hidden by the purchaser, rather than coming baked inside (due to concerns about eating something that’s been baked around a piece of plastic).
The baby inside the king cake is such an important tradition that each year during Carnival, the New Orleans’ NBA team unveils a seasonal King Cake Baby mascot (which is absolutely terrifying,
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://itravelforthestars.com/2020/02/20/mardi-gras-new-orleans/
|
Things to Know about Mardi Gras before ANY New Orleans Trip I ...
|
Things to Know about Mardi Gras before ANY New Orleans Trip
Originally published on Thursday, February 20, 2020
If you’ve taken a French class in the U.S.A., you’ve probably heard of Mardi Gras. It’s most known for being a celebration of “fat Tuesday,” the day before Lent starts, and it’s popularly celebrated in New Orleans. As I took French throughout school and participated in French club in college, I did the King Cake every year and would tell others that there’s a crazy festival down in New Orleans each year to celebrate the day. I went to New Orleans in January because I wanted to avoid the Mardi Gras festivities, as it’s just not really my thing.
Everything I thought I knew about Mardi Gras? Wrong, and definitely not enough. Even going in January, I was still able to see some Mardi Gras festivities. And it’s not something they do for tourism – it’s a huge part of the culture. It really hit home that Mardi Gras is very misunderstood and there’s so much more to this cultural event.
Mardi Gras in Europe
The Fight between Carnival and Lent by Pieter Brueghel the Elder.
Public domain.
A Background on Christianity: If you’re not familiar with the Christian custom of Lent, it’s the 46-day period before Easter Sunday, and thus always begins on a Wednesday. Lent is the period designated to commemorate Jesus’ 40-day-long journey in the desert. Christians give up things like meat, eggs, and sweets for this period of time, though a lot of people today choose one “vice” and give it up for Lent. The 6 other days are Sundays, which are like “cheat days.”
You’ve probably heard the song “The 12 Days of Christmas,” because Christmas traditionally lasts 12 days starting on December 25th. This period, known as Christmastide, ends on January 6, which is Epiphany or Twelfth Night.
The Origins of Mardi Gras: Mardi Gras, translated to “Fat Tuesday,” is the day before Lent starts. It’s also called Shrove Tuesday or Pancake Tuesday, but those have more of a religious connotation. It makes sense that hundreds of years ago, people would come up with the idea of indulging in all the food and fun activities that they would be giving up for 40 days. This was done with lavish parties and was celebrated in many European cultures. Like many other holidays, this one called for masked costumes.
(It’s worth noting, however, that Epiphany, also called King’s Day, is celebrated more in Europe now and Mardi Gras is relatively unheard of outside of Canada and the U.S.A. They also celebrate Shrove/Pancake Tuesday but it’s a religious tradition that isn’t as rowdy.)
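Since the date of Mardi Gras is pinned to Easter, it moves around every year. For the calendar-curious, here is a small, purely illustrative Python sketch of the arithmetic just described; it assumes the standard Gregorian computus (the Anonymous, or Meeus/Jones/Butcher, algorithm) for Easter and the Western convention that Ash Wednesday falls 46 days before Easter Sunday, which puts Mardi Gras 47 days before.

from datetime import date, timedelta

def easter_sunday(year: int) -> date:
    # Gregorian Easter via the Anonymous (Meeus/Jones/Butcher) algorithm.
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def mardi_gras(year: int) -> date:
    # Ash Wednesday is 46 days before Easter; Mardi Gras is the day before that.
    return easter_sunday(year) - timedelta(days=47)

print(mardi_gras(2020))  # 2020-02-25, the Fat Tuesday of the year this post was written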
Coming to the New World
Jean Baptiste Le Moyne Sieur de Bienville, a French-Canadian explorer, landed in what would be around present-day Empire, Louisiana, on March 2, 1699. It was actually the day before Mardi Gras so he and his men named the area “Pointe du Mardi Gras.” The next day, they had a small party to celebrate the holiday – what some would consider America’s first Mardi Gras.
This is where the story gets a bit more interesting. Today, many would think that modern Mardi Gras was planted and grew in Louisiana, but that’s actually not the case. Mr. Bienville actually got up and migrated to modern-day Mobile, Alabama. By 1703, there was a small settlement there, but they were able to celebrate a proper Mardi Gras. The tradition has not been broken to this day.
The next year, they established a secret society, like a “krewe”, called Masque de la Mobile. In 1711, the town had the first Mardi Gras parade and the secret societies and parades got more elaborate from then on.
With that in mind, let’s switch back to New Orleans, which was established in 1718 by Mr. Bienville. (He did a lot back then.) The 1740s brought Mardi Gras Balls to Louisiana, but they were still following the model of excessive parties. In 1763, the Spaniards took control of New Orleans, and Mardi Gras was banned. However, there are still some records of Mardi Gras festivities under Spanish rule. The tradition must have at least survived underground, because when Louisiana became a U.S. state in 1812, Mardi Gras was back up and running. Fifty years of suppression couldn’t stop this centuries-old tradition.
The New Orleans locals began with parades as well, but they were more processions of horses and performers rather than the float-filled parades we see there today. But by the mid-19th century, Mardi Gras in New Orleans had become a wreck. While it was still celebrated every year, the parties were known to get too rough and became synonymous with violence. The locals felt they had to choose between one of their favourite customs and feeling safe.
This is when the city of Mobile comes back into play. In 1857, a group of young men in a society called the Mistick Krewe of Comus came from Mobile and put on a proper parade – one with “tableaux cars,” or floats. They also brought flambeaux, or flaming torches with multiple wicks, that are still a staple of Mardi Gras.
The year 1872 introduced the “king of carnival,” or “Rex,” whose job was to preside over the parade. His job was also to pay for the party – or more specifically, the cake, because who can pay for an entire Mardi Gras parade? This is also when the three colours of Mardi Gras were introduced – purple, green and gold. They’re said to symbolise justice, faith and power respectively, but it’s also said that someone at some time just liked the colours.
And so the modern-day Mardi Gras was born.
Mardi Gras Today
Mardi Gras isn’t just a parade people put on each year; it’s a culture, and a process. There are different krewes, which are like societies or clubs, that each have themes and come together to coordinate what their parade (and possibly ball) is going to be like. The costumes are a really important part, and you can tell from the amount of work that goes into them – both in design and in making them lightweight.
The floats are also very important, and so are throws, which are like party favors that get thrown out to the crowd. If you’re in the crowd, you’re supposed to yell, “Throw me something, mister!” and you’ll get some throws. These are traditionally plastic beads and coins, though they can also be things like small toys and stuffed animals. There’s also a tradition that women show their boobs for throws, which is more in line with what we usually think of when we think of Mardi Gras. The krewes on floats are kept masked and anonymous.
The krewes also have Mardi Gras balls, which are of course very formal. However, these are all invitation-only.
Beads can be found everywhere in New Orleans.
Another important aspect of Mardi Gras is the King Cake. If you’ve been to Europe for Epiphany, you may have seen another type of King Cake, but despite the name they’re actually very different. Mardi Gras King Cake is shaped kind of like a crude bundt cake and it’s cinnamon flavoured. It’s often filled with other flavours such as raspberry or lemon. It’s topped with purple, green and gold icing and a plastic baby. Whoever finds the baby in their cake is the next King of Carnival!
It is said that the baby represents baby Jesus, but the actual origins of this tradition are less symbolic. Someone somewhere just liked porcelain babies and put them into the cake. You may also notice that the baby now often comes outside of the cake, which is obviously to prevent possible choking. However, this is also to ensure that the person who gets the baby doesn’t slip it into their pocket and lie about not getting it.
But there’s still more that isn’t widely known. For one, Mardi Gras is celebrated throughout the whole festival season, which runs from January 6th until Mardi Gras itself. So if you go to New Orleans during this time, you can see some parades and parade practices. There are also handy apps to help you! WWL Mardi Gras Parade Tracker and WDSU Parade Tracker, both free on your app store, let you see parades and parade practices.
Mardi Gras can also be very child-friendly. Nudity isn’t allowed in certain parts of the parade, and you can simply ask a local for advice on where to go if you have children.
Mardi Gras isn’t just limited to New Orleans, either. Several other cities and towns celebrate this festivity, such as the birthplace of modern Mardi Gras, Mobile. Rural areas of Louisiana also celebrate it in a different way, called the Courir de Mardi Gras, which involves a horse run and a chicken catch. Though New Orleans is most famous for this festivity, it’s far from the only place to celebrate it.
If you’d like to learn more about this very historic and large holiday, you can go to Mardi Gras World, which is the most popular museum highlighting floats of Mardi Gras. A less expensive alternative (or addition) is the Mardi Gras Museum of Costumes and Culture, which has a beautiful collection of costumes and does great docent tours. They also have a selection of costumes that you can dress up in.
Mardi Gras is so much more than getting drunk, throwing beads, and eating cake. My trip to New Orleans gave me so much more knowledge on the extensive history of this holiday as well as how important it is to local culture. It really can’t be ignored if you want to visit New Orleans, because the two go hand in hand.
|
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://www.newworldencyclopedia.org/entry/Mardi_Gras
|
Mardi Gras - New World Encyclopedia
|
Mardi Gras, or Fat Tuesday, refers to events of the Carnival celebration, beginning on or after the Christian feasts of the Epiphany (Three Kings Day) and culminating on the day before Ash Wednesday, which is the beginning of Lent. Mardi Gras is French for "Fat Tuesday," reflecting the practice of the last night of eating rich, fatty foods before the ritual Lenten sacrifices and fasting of the Lenten season. This tradition is traced back to medieval Christian times in Europe.
Today, the Mardi Gras celebrations are not limited to feasts but focus more on parades, costumes, masks, and revelry. Cities with major festivities, such as New Orleans, draw large numbers of tourists to participate and enjoy the activities, often without awareness of the original religious significance.
Description
Mardi Gras, or Fat Tuesday, refers to events of the Carnival celebration, beginning on or after the Christian feasts of the Epiphany (Three Kings Day) and culminating on the day before Ash Wednesday. Mardi Gras is French for "Fat Tuesday," reflecting the practice of the last night of eating rich, fatty foods before the ritual Lenten sacrifices and fasting of the Lenten season. It is thus the last day of "fat eating" or "gorging" before the fasting period of Lent.[1] Carnival translates as "farewell to meat": carne as in carnivorous, and vale as in valediction, valedictorian, etc.[2] As this is the last day of the Christian liturgical season historically known as Shrovetide, before the penitential season of Lent, related popular practices, such as indulging in food that one might give up as their Lenten sacrifice for the upcoming forty days, are associated with the celebrations.
In countries such as the United Kingdom, Mardi Gras is known as Shrove Tuesday, which is derived from the word shrive, meaning "to administer the sacrament of confession to; to absolve."[1] Shrove Tuesday is observed by many Christians, who "make a special point of self-examination, of considering what wrongs they need to repent, and what amendments of life or areas of spiritual growth they especially need to ask God's help in dealing with."[2] As the day before the beginning of Lent, Shrove Tuesday is observed in many Christian countries through participating in confession and absolution, the ritual burning of the previous year's Holy Week palms, finalizing one's Lenten sacrifice, as well as eating pancakes and other sweets.[3] Many Christian congregations thus observe the day through the holding of pancake breakfasts, as well as the ringing of church bells to remind people to repent of their sins before the start of Lent.[3]
On Shrove Tuesday, churches also burn the palms distributed during the previous year's Palm Sunday liturgies to make the ashes used during the services held on the very next day, Ash Wednesday.[4]
History
The tradition of marking the start of Lent has been documented for centuries. Ælfric of Eynsham's "Ecclesiastical Institutes" from around 1000 C.E. states: "In the week immediately before Lent everyone shall go to his confessor and confess his deeds and the confessor shall so shrive him as he then may hear by his deeds what he is to do [in the way of penance]."[5] By the time of the late Middle Ages, the celebration of Shrovetide lasted until the start of Lent.[6] It was traditional in many societies to eat pancakes or other foods, such as fasnachts and pączki, made with the butter, eggs, and fat that would be given up during the Lenten season. The specific custom of British Christians eating pancakes on Shrove Tuesday dates to the sixteenth century.[7]
The origins of Mardi Gras can be traced to medieval Europe:
Many Old World parades celebrated a distinctive figure, the Boeuf Gras, or fatted ox, the ancient symbol of the last meat to be eaten before the beginning of the Lenten fast. Dating to medieval times, it is perhaps the modern celebration’s clearest and strongest link to the historic and traditional origins of our Carnival celebration.[8]
This traditional revelry followed France to her colonies, arriving in North America with the Le Moyne brothers, Pierre Le Moyne d'Iberville and Jean-Baptiste Le Moyne de Bienville, in the late seventeenth century, when King Louis XIV sent the pair to defend France's claim on the territory of Louisiane, which included what are now the U.S. states of Alabama, Mississippi, Louisiana and part of eastern Texas.
The expedition entered the mouth of the Mississippi River on the evening of March 2, 1699. The party proceeded upstream to a place on the east bank about 60 miles (100 km) downriver from where New Orleans is today, and made camp. Realizing it to be the eve of Mardi Gras back in France, Pierre Le Moyne d’Iberville named the spot Point du Mardi Gras (French: "Mardi Gras Point").[9]
In 1703, French settlers in Mobile established the first organized Mardi Gras celebration tradition in what was to become the United States.[10] The first informal mystic society, or krewe, "Masque de la Mobile" was formed in Mobile in 1704; it lasted until 1709. From 1711 through 1861, the Boeuf Gras Society held parades featuring a large white bull's head; later an actual bull was part of the parade.[10]
The first Mardi Gras celebrations held in New Orleans are recorded to have taken place in the 1730s. In the 1740s, Louisiana's governor, the Marquis de Vaudreuil, established elegant society balls, which became the model for the Mardi Gras balls held today.[10] The tradition in New Orleans expanded to the point that it became synonymous with the city in popular perception, and embraced by residents of New Orleans beyond those of French or Catholic heritage.
After the French and Indian War, most of Louisiana was ceded to Spain in the Treaty of Paris of 1763. With this change in leadership over the region, Mardi Gras celebrations and the high-society balls came to a halt. People of color were prohibited from wearing masks and feathers and from attending nighttime balls.[11] They continued to form organizations in which they celebrated, however.
The Louisiana Purchase of 1803 transferred around 827,000 square miles of land to the United States for around 15 million dollars. Under early American rule, the restrictions implemented by the Spanish continued to be upheld. After a major slave revolt in 1811, and the rise of a popular belief that spies for Aaron Burr were using masks as disguises, stricter regulations were enforced.[11] As time went on, Creoles and other members of the New Orleans community were able to convince the American government to reinstate the balls in 1823 and, in 1827, to make masking on the street legal. In 1841, the first formal parade in celebration of Mardi Gras was held. During the Civil War, float building was prohibited, but people continued to march on foot in celebration. Finally, in 1875, the Louisiana State Legislature declared Mardi Gras a legal holiday in the state of Louisiana.[12]
Traditions
The festival season varies from city to city, with some traditions treating only the final three-day period before Ash Wednesday as the Mardi Gras.[13] Others, such as the one in New Orleans, consider Mardi Gras to stretch the entire period from Twelfth Night (the last night of Christmas which begins Epiphany) to Ash Wednesday.[14]
Europe
Mardi Gras as part of Carnival is also an important celebration in various Anglican and Catholic European nations.[1]
In the Czech Republic it is a folk tradition to celebrate Mardi Gras, which is called Masopust (meat-fast). There are celebrations in many places, including Prague,[15] but the tradition also prevails in villages such as Staré Hamry.[16]
In Germany, the celebrations in the days before Ash Wednesday are known by many different names, such as Schmutziger Donnerstag or Fetter Donnerstag (Fat Thursday), Unsinniger Donnerstag, Weiberfastnacht, Greesentag, and others. These are often only one part of the carnival events held during one or even two weeks before Ash Wednesday, known as Karneval, Fasching, or Fastnacht depending on the region.
In Italy Mardi Gras is called Martedì Grasso (Fat Tuesday). It is the main day of Carnival (Carnevale) along with the Thursday before, called Giovedí Grasso (Fat Thursday), which begins the celebrations.[17] The most famous Carnivals in Italy are in Venice, Viareggio, and Ivrea. Ivrea has the characteristic "Battle of Oranges" that finds its roots in medieval times.
In Sweden the celebration is called Fettisdagen, when people eat a fettisdagsbulle or fastlagsbulle (literally "fat Tuesday roll"), also called a semla, a traditional sweet roll. Originally, this was the only day on which one was supposed to eat this food.[18]
United States
While not observed nationally throughout the United States, a number of traditionally ethnic French cities and regions in the country have notable celebrations. While Mobile, Alabama and New Orleans, Louisiana, have the oldest and most famous celebration, other cities along the Gulf Coast with early French colonial heritage, from Pensacola, Florida; Galveston, Texas; to Lake Charles and Lafayette, Louisiana; and north to Natchez, Mississippi and Alexandria, Louisiana, have active Mardi Gras celebrations.
In the rural Acadiana area, many Cajuns celebrate with the Courir de Mardi Gras, a tradition that dates to medieval celebrations in France.[19]
St. Louis, Missouri, founded in 1764 by French fur traders, claims to host the second largest Mardi Gras celebration in the United States. The celebration is held in the historic French neighborhood, Soulard, and attracts thousands of people from around the country.[20] The city's celebration begins with "12th night," held on Epiphany, and ends on Fat Tuesday. The season is peppered with various parades celebrating the city's rich French Catholic heritage.
Mobile, Alabama
Knights of Revelry parade down Royal Street in Mobile during the 2010 Mardi Gras season.
Mobile Carnival poster from 1900.
Mardi Gras is the annual Carnival celebration in Mobile, Alabama. It is the oldest annual Carnival celebration in the United States, started by Frenchman Nicholas Langlois in 1703 when Mobile was the capital of Louisiana, fifteen years before New Orleans was founded.[21] Beginning as a French Catholic tradition, Mardi Gras in Mobile has now evolved into a mainstream multi-week celebration across the spectrum of cultures in Mobile, regardless of religious affiliation.
Although Mobile has traditions of exclusive societies, with formal masked balls and elegant costumes, the celebration has evolved to become typified by public parades where members of societies, often masked, on floats or horseback, toss gifts (known as throws) to the general public. The masked balls or dances, where non-masked men wear white tie and tails (full dress or costume de rigueur) and the women wear full length evening gowns, are oriented to adults, with some mystic societies treating the balls as an extension of the debutante season of their exclusive social circles. Various nightclubs and local bars offer their own particular events.
Beyond the public parades, Mardi Gras in Mobile involves many various mystic societies, some having begun in 1704, or ending with the Civil War, while new societies were formed every century. Some mystic societies are never seen in public parades, but rather hold invitation-only events for their secret members, with private balls beginning in November, each year.
New Orleans, Louisiana
Mardi Gras parade - New Orleans Louisiana, 2020
The holiday of Mardi Gras is celebrated in all of Louisiana, especially the city of New Orleans. Celebrations are concentrated for about two weeks before and through Shrove Tuesday. Usually there is one major parade each day (weather permitting); many days have several large parades. The largest and most elaborate parades take place the last five days of the Mardi Gras season. In the final week, many events occur throughout New Orleans and surrounding communities, including parades and balls (some of them masquerade balls).
The parades in New Orleans are organized by social clubs known as krewes; most follow the same parade schedule and route each year. The earliest-established krewes were the Mistick Krewe of Comus (the oldest), Rex, the Knights of Momus, and the Krewe of Proteus. Several modern "super krewes" are well known for holding large parades and events, such as the Krewe of Endymion (which is best known for naming celebrities as grand marshals for their parades), the Krewe of Bacchus (similarly known for naming celebrities as their Kings), as well as the Zulu Social Aid & Pleasure Club—a predominantly African American krewe.
While many tourists center their Carnival season activities on Bourbon and Dauphine Streets in New Orleans, major parades originate in the Uptown and Mid-City districts and follow a route along St. Charles Avenue and Canal Street, on the upriver side of the French Quarter. On Mardi Gras Day, the Tuesday before Ash Wednesday, the last parades of the season wrap up and the celebrations come to a close with the "Meeting of the Courts" (known locally as the Rex Ball) between Rex and Comus.[22]
Revelers on St. Charles Avenue, 2007
Costumes
Mardi Gras, as a celebration of life before the more-somber occasion of Ash Wednesday, is a time of fun and frivolity and nearly always involves the use of masks and costumes by its participants. In New Orleans, for example, they often take the shape of fairies, animals, people from myths, or various Medieval costumes.[23] However, many costumes today are simply elaborate creations of colored feathers and capes. The Venice tradition has brought golden masks into the usual round of costumes.[24]
Throws
Tree covered with Mardi Gras beads
Mardi Gras throws are strings of beads, doubloons, cups, or other trinkets passed out or thrown from the floats in the New Orleans Mardi Gras, the Mobile Mardi Gras and parades all throughout the Gulf Coast of the United States, to spectators lining the streets. The "throws" consist of necklaces of plastic beads, coins called doubloons, which are stamped with krewes' logos, parade themes and the year, plus an array of plastic cups and toys such as Frisbees, figurines, and trinkets. The plastic cups that are used as throws are sometimes referred to as "New Orleans dinnerware."[25]
Beads used on Mardi Gras (known as Shrove Tuesday in some regions) are purple, green, and gold, with these three colors carrying the Christian symbolism of justice, faith, and power, respectively.[26]
As Fat Tuesday concludes the period of Carnival (Shrovetide), Mardi Gras beads are taken off on the following day, Ash Wednesday, which begins the penitential season of Lent. One of the "solemn practices of Ash Wednesday is to pack all the beads acquired during the parade season into bags and boxes and take them to the attic".[27]
|
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://usa.usembassy.de/etexts/factover/holidays.htm
|
USIA - Portrait of the USA, Holidays
|
Americans share three national holidays with many countries:
Easter Sunday, Christmas Day, and New Year's Day.
Easter, which falls on a spring Sunday that varies from year to year, celebrates the Christian belief in the resurrection of
Jesus Christ. For Christians, Easter is a day of religious
services and the gathering of family. Many Americans follow old
traditions of coloring hard-boiled eggs and giving children
baskets of candy. On the next day, Easter Monday, the president
of the United States holds an annual Easter egg hunt on the White
House lawn for young children.
Christmas Day, December 25, is another Christian holiday; it marks the birth of the Christ Child. Decorating houses and yards
with lights, putting up Christmas trees, giving gifts, and
sending greeting cards have become traditions even for many
non-Christian Americans.
New Year's Day, of course, is January 1. The celebration of this holiday begins the night before, when Americans gather to
wish each other a happy and prosperous coming year.
UNIQUELY AMERICAN HOLIDAYS
Eight other holidays are uniquely American (although some of
them have counterparts in other nations). For most Americans, two
of these stand out above the others as occasions to cherish
national origins: Thanksgiving and the Fourth of July.
Thanksgiving Day is the fourth Thursday in November, but
many Americans take a day of vacation on the following Friday to
make a four-day weekend, during which they may travel long
distances to visit family and friends. The holiday dates back to
1621, the year after the Puritans arrived in Massachusetts,
determined to practice their dissenting religion without
interference.
After a rough winter, in which about half of them died, they
turned for help to neighboring Indians, who taught them how to
plant corn and other crops. The next fall's bountiful harvest
inspired the Pilgrims to give thanks by holding a feast. The
Thanksgiving feast became a national tradition -- not only
because so many other Americans have found prosperity but also
because the Pilgrims' sacrifices for their freedom still
captivate the imagination. To this day, Thanksgiving dinner
almost always includes some of the foods served at the first
feast: roast turkey, cranberry sauce, potatoes, pumpkin pie.
Before the meal begins, families or friends usually pause to give
thanks for their blessings, including the joy of being united for
the occasion.
The Fourth of July, or Independence Day, honors the
nation's
birthday -- the signing of the Declaration of Independence on
July 4, 1776. It is a day of picnics and patriotic parades, a
night of concerts and fireworks. The flying of the American flag
(which also occurs on Memorial Day and other holidays) is
widespread. On July 4, 1976, the 200th anniversary of the
Declaration of Independence was marked by grand festivals across
the nation.
Besides Thanksgiving and the Fourth of July, there are six
other uniquely American holidays.
Martin Luther King Day: The Rev. Martin Luther King, Jr., an African-American clergyman, is considered a great American
because of his tireless efforts to win civil rights for all
people through nonviolent means. Since his assassination in 1968,
memorial services have marked his birthday on January 15. In
1986, that day was replaced by the third Monday of January, which
was declared a national holiday.
Presidents' Day: Until the mid-1970s, the February 22
birthday of George Washington, hero of the Revolutionary War and
first president of the United States, was a national holiday. In
addition, the February 12 birthday of Abraham Lincoln, the
president during the Civil War, was a holiday in most states. The
two days have been joined, and the holiday has been expanded to
embrace all past presidents. It is celebrated on the third Monday
in February.
Memorial Day: Celebrated on the last Monday of May, this
holiday honors the dead. Although it originated in the aftermath
of the Civil War, it has become a day on which the dead of all
wars, and the dead generally, are remembered in special programs
held in cemeteries, churches, and other public meeting places.
Labor Day: The first Monday of September, this holiday
honors the nation's working people, typically with parades. For
most Americans it marks the end of the summer vacation season,
and for many students the opening of the school year.
Columbus Day: On October 12, 1492, Italian navigator
Christopher Columbus landed in the New World. Although most other
nations of the Americas observe this holiday on October 12, in
the United States it takes place on the second Monday in October.
Veterans Day: Originally called Armistice Day, this holiday was established to honor Americans who had served in World War I.
It falls on November 11, the day when that war ended in 1918, but
it now honors veterans of all wars in which the United States has
fought. Veterans' organizations hold parades, and the president
customarily places a wreath on the Tomb of the Unknowns at
Arlington National Cemetery, across the Potomac River from
Washington, D.C.
OTHER CELEBRATIONS
While not holidays, two other days of the year inspire
colorful celebrations in the United States. On February 14,
Valentine's Day, (named after an early Christian martyr),
Americans give presents, usually candy or flowers, to the ones
they love. On October 31, Halloween (the evening before All Saints or All Hallows Day), American children dress up in funny
or scary costumes and go "trick or treating": knocking on doors
in their neighborhood. The neighbors are expected to respond by
giving them small gifts of candy or money. Adults may also dress
in costume for Halloween parties.
Various ethnic groups in America celebrate days with special
meaning to them even though these are not national holidays.
Jews, for example, observe their high holy days in September, and
most employers show consideration by allowing them to take these
days off. Irish Americans celebrate the old country's patron
saint, St. Patrick, on March 17; this is a high-spirited day on
which many Americans wear green clothing in honor of the "Emerald
Isle." The celebration of Mardi Gras -- the day before the
Christian season of Lent begins in late winter -- is a big
occasion in New Orleans, Louisiana, where huge parades and wild
revels take place. As its French name implies (Mardi Gras means
"Fat Tuesday," the last day of hearty eating before the
penitential season of Lent), the tradition goes back to the
city's settlement by French immigrants. There are many other such
ethnic celebrations, and New York City is particularly rich in
them.
It should be noted that, with the many levels of American
government, confusion can arise as to what public and private
facilities are open on a given holiday. The daily newspaper is a
good source of general information, but visitors who are in doubt
should call for information ahead of time.
|
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
yes_statement
|
"mardi" gras was "originally" a christian "holiday".. the "origins" of "mardi" gras can be traced back to christianity.
|
https://yesterdaysamerica.com/the-rich-heritage-of-mardi-gras-in-new-orleans/
|
The Rich Heritage of Mardi Gras in New Orleans - Yesterday's America
|
The Rich Heritage of Mardi Gras in New Orleans
Millions of people all across America consider Mardi Gras to be cause for celebration, but no place does Mardi Gras quite like New Orleans. The Big Easy is home to some of the most astonishing, famous public festivities every single year. It’s also considered the place to come celebrate, drawing tourists and adventurous spirits from all over the world.
How much do you really know about Mardi Gras in New Orleans? What traditions and happenings are considered “must-sees”? What are we really celebrating when we celebrate Mardi Gras and where did the holiday originate? Let’s address the answers to all of these questions and more.
What Is Mardi Gras?
Mardi Gras is more than just a colorful cultural phenomenon. It’s also a Christian holiday that comes attached to a rich history, as well as one that has complicated connections to early pagan fertility rites. Mardi Gras as an occasion is celebrated all over the world, especially in areas with a large Roman Catholic population.
Also called Fat Tuesday, Shrove Tuesday is the day before Lent officially begins (Ash Wednesday). Traditionally, it was a day to eat, drink, and be merry one last time before the sacrifice, penitence, and heavy fasting associated with Lent began in earnest. In particular, people prepared elaborate feasts to use up food items that were not allowed during Lent. (Examples include butter, eggs, meat, or dairy.)
Although the Catholic Church has relaxed many of the dietary restrictions associated with Lent since those days, rich feasts and merrymaking remain popular ways to celebrate Shrove Tuesday and get ready to observe Lent.
In Southern Louisiana and New Orleans, in particular, the celebrations associated with Mardi Gras start roughly two weeks prior to Shrove Tuesday. Festivities include parades organized by New Orleans social clubs called krewes, social events like balls, and multicultural festivals of all types.
A Look at the Origins of Mardi Gras
Of course, Shrove Tuesday and the approach of Lent are only part of why people celebrate Mardi Gras as we know it today. According to historians, it also has connections to pagan fertility rituals that date back thousands of years. (The spring rites of Lupercalia and Saturnalia are just two examples.)
Once Christianity made its way to Rome, the religious leaders of the time had their work cut out for them when it came to converting the masses. More often than not, they found it easier and more beneficial to simply “Christianize” existing traditions instead of attempting to abolish them completely.
That said, the debauchery and wanton spirit of the original rites were eventually channeled into Mardi Gras, a celebration that was just as much a prelude to the forty-day Lenten period as it was a way of welcoming spring.
Naturally, we call the occasion Mardi Gras (or “Fat Tuesday”) because of the all-out binging on any stores of eggs, cheese, meat, or milk that remained in the house before fasting and eating only fish during the lengthy period of Lent. It’s also thought that pre-Lenten festivals are known as “carnival” because of this tradition. (The Medieval Latin term “carnelevarium” means “to take away meat.”)
Mardi Gras Comes to America
Although no one knows for certain when the very first Mardi Gras celebration took place here in America, most historians believe it happened on March 3, 1699. French explorers Bienville and Iberville arrived in the area which is now Louisiana, bringing with them the French tradition of Mardi Gras. They decided to have a proper celebration and wound up dubbing the spot “Point du Mardi Gras” as a result.
In the decades to come, French settlements all over Louisiana – New Orleans included – would continue to celebrate Mardi Gras each and every year. They did so by organizing parades, holding lavish masked balls, and serving elaborate communal feasts. The fun would come to a temporary end once the Spanish gained control of New Orleans and the surrounding areas. However, the bans would only remain active until 1812 when Louisiana officially became a U.S. state.
Mardi Gras in New Orleans would see a boisterous revival in 1827. That’s when a group of dedicated students decided to dress up in colorful costumes and dance their way up and down the streets of the city. (They were mimicking festivities they’d seen in person while visiting Paris.) Ten years after that, New Orleans’s first proper Mardi Gras parade in recorded history would take place, setting the stage for a much-loved tradition that continues today.
The First Mardi Gras Krewes
The year 1857 would see yet another Mardi Gras tradition take place – the first krewe-organized event. A secret society made up of New Orleans businessmen would organize and sponsor a grand torch-lit Mardi Gras procession. The procession would include many of the celebratory staples parade-goers know and love today, including floats and marching bands.
This particular group of businessmen called themselves the Mistick Krewe of Comus, the very first Mardi Gras krewe. Ever since, krewes, in general, have been an important part of Mardi Gras in New Orleans and elsewhere in Louisiana. They are the ones responsible for sponsoring and organizing the parades, balls, and other events without which Mardi Gras couldn’t be considered complete.
Today, there are over 60 different active krewes with more being organized all the time. Krewes aren’t just active during Mardi Gras, either. Many organize additional events throughout the year, as well as visit nursing homes, establish social activities for young people, and otherwise make their communities better places to be.
Mardi Gras Around the World
Currently, Louisiana is the only U.S. state where Mardi Gras is considered a legal holiday. However, American people from coast to coast also love to celebrate in their own unique ways. In some states – like Mississippi and Alabama – Mardi Gras is considered to be almost as big a deal as it is in Louisiana.
Mardi Gras also continues to be celebrated in multiple nations around the world. This is particularly the case anywhere the population boasts a significant percentage of Roman Catholics. The following are just a few examples:
In Brazil, Mardi Gras traditions look much like the ones we have here in America, benefiting from a unique blend of African, Native American, and European influences.
Our neighbors up north in Canada love Mardi Gras as well. Quebec City, in particular, throws a giant yearly bash known as the Quebec Winter Carnival.
In Germany, costume balls and parades are the order of the day, just as they are here in America. They also promote women’s empowerment with a tongue-in-cheek tradition that calls for the cutting off of men’s ties.
Italians make their way to Venice, a city that’s been known throughout history for its breathtaking masked balls. The Venetians don’t disappoint, either, doing justice to a time-honored tradition that dates back to the 13th century.
Denmark finds children dressing up in costume and going door to door to gather candy for Fastelavn, similar to what American children do on Halloween. On Easter Sunday, Danish children also ritually (but non-violently) flog their parents.
This year, Mardi Gras takes place on February 28th. How will you be celebrating?
The Origins of Popular Mardi Gras Traditions
As touched on above, there’s no doubt in anyone’s mind that Mardi Gras is a huge deal, not only in New Orleans but elsewhere as well. However, many people have no real idea where some of the most time-honored Mardi Gras traditions got their start. Did you know the reasons behind the following?
Wearing Masks
Masks are nearly as synonymous with Mardi Gras as Santa Claus is with Christmas, but they’re more than just beautiful ways revelers express themselves and get ready to have a good time. The tradition of donning masks on Mardi Gras dates back hundreds of years to a time when various social classes weren’t normally allowed to mingle to the extent they do today.
Masks were a way to hide one’s identity during the festivities. Anyone, from the rich elite to the very poor, could mask themselves, be anyone, and go where they pleased without being judged. The concept continues to this day, as it is legal in New Orleans for all Mardi Gras attendees to wear masks, although some business owners may post signage asking that masks be removed before customers come inside. Float riders, meanwhile, are required by law to wear them.
Flambeaux
Flambeaux are flame torches, as well as time-honored symbols of Mardi Gras in many cultures. They were traditionally carried through the streets so the fun and partying could continue until well after dark, usually by slaves or free people of color who were looking to earn a little extra money. (Festival goers customarily tossed coins to torch bearers in thanks for lighting the way for the floats and festivities.) Most flambeaux consisted of shredded rope soaked thoroughly in pitch.
Naturally, we no longer need old-school torches to light up the streets of New Orleans on Mardi Gras. However, the tradition still remains. It’s just become more of a performance than anything else, with torchbearers dancing, performing acrobatics, or spinning their lights to entertain the crowd.
Throwing Beads
Everyone’s familiar with the concept of Mardi Gras beads. However, the beads as a tradition started with their colors – gold, purple, and green. These particular shades were chosen by the very first Carnival king back in 1872 and represent power (gold), faith (green), and justice (purple). Originally, the beads of various colors were to be tossed to people that represented one or more of those three qualities.
The very first Mardi Gras beads were made of glass, which naturally wasn’t the best material for something made for tossing. That said, the tradition of throwing the beads didn’t really become a deeply ingrained part of the festivities until plastic beads became the norm.
The Carnival King
If you’ve ever spent Mardi Gras in New Orleans, you may already know that each and every year a new festival king is crowned. He is known as Rex, King of the Carnival. The very first Rex was crowned in 1872, and – although no one knows this for sure – he was said to have been the Grand Duke Alexis of Russia.
Alexis had been visiting the United States and befriended George Armstrong Custer over the course of a hunting expedition in the Midwest. His visit to New Orleans was said to be organized by a group of local businessmen looking for a way to draw even more business as well as tourism to the city after the Civil War.
To this day, a new Rex is chosen each year by the Rex Organization. It is always someone who is prominent in New Orleans society. Rex is also ceremonially given the key to the city by the mayor each year.
Zulu Coconuts
Each of the many Mardi Gras krewes active today has its own set of traditions that they bring to the table each Mardi Gras. One of the oldest traditionally black krewes is the Zulu Social Aid and Pleasure Club. Their claim to fame is the handing out of Zulu coconuts (or “golden nuggets”) to parade goers and revelers.
The first recorded references to these coconuts date back to 1910. At that point in time, the nuts were left in their natural state. However, in later years, it became a tradition to adorn them with elaborate paint and decorations instead. Although there are many traditions Mardi Gras revelers look forward to, the possible receiving of a Zulu coconut is one of the most highly prized.
As you can see, Mardi Gras in New Orleans is so much more than just a reason to have a little fun and celebrate life every spring. It’s also a time-honored tradition steeped in history and rich in local culture. It’s not hard to see why so many people from all over the world consider the Crescent City to be the place to celebrate.
|
|
yes
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
no_statement
|
"mardi" gras was not "originally" a christian "holiday".. the roots of "mardi" gras do not lie in christianity.
|
https://theessentialbs.com/2019/09/05/20-modern-traditions-with-pagan-origins/
|
20 Modern Traditions with Pagan Origins | TheEssentialBS.com
|
20 Modern Traditions with Pagan Origins
We have so many traditions in the modern day that it would be near impossible to consider the origins of them all. But that being said, some of these traditions came from downright surprising places–including paganism.
Many of the things we do every day don’t seem to have any connection to religion at all, but they still got their start in the world of polytheism. Even more surprising, there are traditions that we associate with specific religions (like Christianity) that most definitely got their start in other religions.
Today, we’re taking a look at 20 modern things with pagan origins. Which one surprised you the most?
Our Obsession with Cats
When we ooh and ahh over our temperamental and adorable furry friends, we’re taking part in a tradition that stretches back to ancient Egypt.
Although cats have known it since the beginning of time, it was the Egyptians who elevated felines to the status of gods in their religion. In fact, the goddess Bastet was depicted as having the head of a cat. But don’t worry, dog lovers: there’s plenty of pagan fun for you too in the jackal-headed Egyptian god, Anubis.
Knocking on Wood
These days, knocking on wood is a way to ward off bad luck. But for the ancient Celts, there was a deeper meaning.
In traditional Celtic religion, there was the belief that trees were home to spirits, fairies, or other supernatural beings. Knocking on wood was a way to curry favor with good spirits or distract bad spirits from foiling your plans.
Christmas
Considering that Christmas is a time when Christians celebrate the birth of Jesus, it may seem a little odd that it has connections to paganism, but those links are there.
Because the date of the birth of Jesus is not given in the Christian Bible, December 25th seems like a random pick. But some have theorized that this was done so that the holiday corresponded with the pagan winter solstice–as a way to wean new Christians off of the festivities.
Halloween
Halloween–a pagan holiday? Who would have thought? (We’re kidding!)
The pagan holiday of Samhain also occurs on the 31st, and it is a time for honoring the dead. It’s also considered to be a time when the boundary between our world and the next is at its weakest. Slowly over time, this festival sparked the interest of non-pagans as well, and now we have Halloween as we know it today.
Days of the Week
So it turns out that we’re all pagans seven days a week–at least if we’re going by day names.
For example, Friday (everyone’s favorite day of the week) comes from the name Freya–Norse goddess of love. Even those dreaded Mondays are pagan. That word comes from “monandaeg”–day of the moon goddess. Turns out, the name for every day of the week comes from some tradition of European paganism.
Months of the Year
Just like the names for the days of the week have pagan origins, so do our names for the 12 months on the calendar.
For example, June is named after Juno–Roman queen of the gods and wife to Jupiter. These naming conventions apparently troubled the early Christian church enough to attempt to replace them with more “wholesome” names, but we all know how hard it is to get people to try something new. They ultimately failed in this attempt, and we’ve kept the traditional names ever since.
Covering Your Mouth to Yawn
Covering your mouth to yawn is just a common courtesy, right? No one wants your warm breath on them! Turns out, that’s not entirely true–even covering your mouth to yawn has pagan origins.
In pagan Rome, doctors had a clever (but completely wrong) theory about yawning and infant mortality. They noticed that lots of children died young, and they also noticed that babies were unable to cover their mouth when they yawned. Their diagnosis? Yawning allowed a person’s vital life essence to escape their body. And apparently covering your piehole with your hand was the only way to stop an untimely death.
Wedding Rings
While wedding rings themselves are not explicitly pagan, the fact that we place them on our ring fingers most certainly is.
In traditional Greek and Roman beliefs, your fourth or “ring” finger was thought to have a vein that ran directly to your heart. By placing your wedding ring here, you were making a strong and eternal commitment to love.
Easter
Easter is an important Christian holiday when believers celebrate the resurrection of Jesus. That’s just good Christian fun, right? Nope, sorry. It’s pagan.
First up is the name. The term Easter is derived from Eostre–a Germanic pagan goddess. And even the Easter bunny has a little pagan streak to him too! As the goddess Eostre is sometimes associated with fertility, followers would present her with colored eggs as a way to encourage pregnancy.
Fingers Crossed
When we cross our fingers, we’re wishing for luck (or we’re telling a lie). But this practice is a far cry from the pagan tradition that it originated from.
In ancient times, it wasn’t one person who would cross their fingers. Rather, two people would use their index fingers to make a cross. This was done as an attempt to harness the power of any good spirits that might be hiding nearby.
Flower Crowns
You’re most likely to see flower crowns at Coachella these days, but in ancient Greece, they helped bring worshippers closer to specific deities.
Different plants were associated with different gods, so wearing a specific kind of flower wreath would help bring you favor with a specific deity. For example, Zeus was associated with oak, while Aphrodite was associated with myrtle.
Bridesmaids
We’ve all heard nightmare stories about the difficulties of being a bridesmaid. There’s usually a pushy bride and a hideous dress involved. But in pagan times, it’s a surprise that anyone agreed to be a bridesmaid!
In ancient times, bridesmaids wore dresses and veils identical to those of the bride, so at least you didn’t have to worry about a tacky bridesmaid’s dress. But all this matchiness had a purpose: to trick evil spirits into attacking a bridesmaid instead of the bride herself.
Gift Giving
Gift giving is a practice that has been around forever and is nearly universal. But that doesn’t mean pagans didn’t put their own unique spin on it!
There are all sorts of superstitions surrounding gifts–including things like not giving knives, shoes, or opal as presents.
Groundhog Day
Groundhog Day is a modern tradition that didn’t evolve from paganism–it’s just straight-up pagan itself if you think about it.
According to the laws of Groundhog Day, if the groundhog sees his shadow on February 2nd, we’re in for another six weeks of winter. Obviously this is all done in jest, but what is this process if not a form of divination?
Nike
These days Nike may be a giant sportswear company, but none of their success would have been possible without paganism in ancient Greece.
The goddess Nike was worshipped in ancient Greece as the goddess of victory, often after a successful military win. So it makes sense that the company would use her name to promote its brand.
Lady Justice
In courthouses across the world you can see depictions of Lady Justice with her blindfold and scales. These days, she’s meant to represent the impartiality of the law. But these statues actually have a pagan origin.
Whether we realize it or not, they are depictions of the ancient Roman goddess of justice, Justitia. Although the name may have changed over the years, her personification and meaning today are nearly identical to what they were in Roman times.
Mardi Gras
Mardi Gras is yet another Christian holiday that just happens to “conveniently” fall near a pagan holiday.
While these days Mardi Gras may mark the day before Lent begins, in ancient times, it was associated with festivals for Saturn–the Roman god of agriculture. Both celebrations involve wild parties, so it’s not hard to see how the two became associated.
Mother Earth
In modern times, we use “Mother Earth” as a way to personify the environment–usually in the context of protecting it. But pagan cultures have had a Mother Earth for millennia.
When you look at polytheistic religions, you’re likely to find some kind of earth goddess personified as a maternal figure. The Greeks had Gaia, Hindus have Prithvi, and some Native American traditions have the Spider Grandmother.
US Medal of Honor
In the United States military, the Medal of Honor is the highest honor a soldier can achieve. They can be given for various acts of valor, and if you look closely, you might just notice something a little pagan about them.
While the design varies depending on the military branch, the Roman goddess Minerva almost always makes an appearance. Considering that she is the goddess of war, this connection makes sense.
The Tooth Fairy
Although the tooth fairy is generally a fun fictional character for kids, children’s teeth were a way bigger deal in ancient times than they are now.
For example, in medieval Europe, baby teeth were buried or burned to keep them out of the hands of evil witches. This is one tradition we’re glad got sanitized and commercialized.
|
|
no
|
Festivals
|
Was Mardi Gras originally a Christian holiday?
|
no_statement
|
"mardi" gras was not "originally" a christian "holiday".. the roots of "mardi" gras do not lie in christianity.
|
https://everydaywanderer.com/know-before-you-go-mardi-gras-tips
|
Mardi Gras Tips - What to Know BEFORE You Attend Mardi Gras
|
While Mardi Gras is celebrated in many destinations around the world, New Orleans is THE place to celebrate Mardi Gras season in the United States. Beginning on January 6th (also known as Twelfth Night or Epiphany Eve) and leading up to Ash Wednesday, Mardi Gras is a time for festive floats, glittery costumes, deliciously fattening foods, and other revelry leading up to the Lenten season. These Mardi Gras tips will tell you everything you need to know before you head to the Big Easy.
When is Mardi Gras 2023?
The next Mardi Gras season starts on January 6, 2023 and ends on Fat Tuesday (also known as Shrove Tuesday), February 21, 2023.
Mardi Gras is celebrated from January 6th until Fat Tuesday, the day before Ash Wednesday.
Although Mardi Gras in America typically conjures up visions of drunken crowds packed onto Bourbon Street in New Orleans where women flash their bare breasts in an attempt to have cheap plastic beads tossed their way, Fat Tuesday is celebrated around the world. With roots dating back to pagan festivals celebrated in Rome before the arrival of Christianity, pre-Lenten festivities are also popular in places with a high concentration of Catholics, like:
Brazil, from Recife to Rio de Janeiro,
Venice, Italy,
Limburg (in the southern part of the Netherlands), and
many other destinations around the world that have been influenced by the Portuguese, Italians, and Dutch (most notably the Caribbean and Central and South America).
If you’re not from New Orleans, the expressions and traditions that accompany Mardi Gras may be foreign. So before you chow down on king cake or beg a parade float rider to “Throw me something, mister!” here’s everything you need to know before you attend Mardi Gras.
Although New Orleans is the King of Mardi Gras celebrations in the US, seasonal festivities take place elsewhere. Mardi Gras is most commonly celebrated in communities with a large Catholic population like Rio de Janeiro and Venice.
Have You Celebrated Mardi Gras in New Orleans?
Share your favorite photo with me by tagging @sagescott.kc on Instagram and using the hashtag #everydaywanderer
Important Mardi Gras Words
If you’re not from New Orleans, and if you weren’t raised Catholic, some of these Mardi Gras words may be a bit foreign. (And I don’t just mean the French and Latin ones…)
What is Twelfth Night?
Also known as Epiphany Eve, or the day before Epiphany, Twelfth Night takes place on January 6th. It’s also the official start of the Mardi Gras season, and is aptly named because it takes place twelve days after Christmas.
Fun Fact: A note for all of you procrastinators out there: In some countries, it’s considered bad luck to have Christmas decorations displayed after Twelfth Night.
During Ash Wednesday services, a priest smudges palm ashes onto the forehead of each parishioner in the sign of the cross.
What is Ash Wednesday?
Ash Wednesday kicks off the Lenten season leading up to Easter. Because I was raised Catholic, I know it as a holy day of prayer and fasting. We ate very light basic meals — like vegetable soup or grilled cheese — and were not supposed to eat anything with meat. Obviously.
During Ash Wednesday Mass, ashes from the previous Palm Sunday’s fronds are ceremoniously spread across each parishioner’s forehead. As the priest marks the sign of the cross in ash smudge, he pronounces, “From dust you came and from dust you will return.”
What is Lent?
Lent is a solemn, reflective period of 46 days (40 fasting days plus six Sundays) leading up to Easter. It begins on Ash Wednesday and ends on Easter Sunday. During Lent, Christians focus on faith and recognize the sacrifice Christ made when he died on the cross.
The Catholic Church’s rules state that everyone 14 or older should abstain from eating meat on Ash Wednesday, Good Friday, and all Fridays during Lent. Additionally, Catholics who are between 18 and 60 are asked to fast on Ash Wednesday and Good Friday.
The Lenten season begins with Ash Wednesday and ends with Easter Sunday.
But just as faith and religion are deeply personal convictions, many Christians forge their own paths during Lent. Some people give up meat, fish, eggs, and fats entirely during Lent. Others abstain from consuming something they particularly enjoy during the Lenten season, like alcohol, candy, coffee, or Facebook.
In recent years, some people have chosen to augment their sacrifices with acts of kindness and generosity. In the US this is commonly known as Christian Random Acts of Kindness (#crak). Across the Atlantic, 40acts is an approach embraced by some in the UK. Although I am no longer a practicing Catholic, I absolutely support the idea of people being kinder to one another. Lord knows our world could use more of that!
But let’s get back to the Mardi Gras season that leads up to Ash Wednesday and Lent!
In Venice, the Mardi Gras season is called Carnival.
What is Carnival?
Carnival is a name for the pre-Lenten festivities that begin on January 6th and lead up to Ash Wednesday. If you were raised Catholic, you might associate Lent with 40 days and 40 nights of a church-imposed vegetarian diet. (And if you were raised Catholic in my house, you’ll further associate it with canned salmon made into a sub-par version of meatloaf and topped with creamed vegetables.)
Sage Advice: In some places around the world, this celebration is spelled Carnaval.
For anyone not fluent in Latin (which is quite likely, since it stopped being anyone’s everyday language well over a thousand years ago), the name Carnival is based on the word carnelevarium, which means to remove meat. (See what I mean? A church-imposed vegetarian diet indeed!)
So What’s the Difference Between Carnival (or Carnaval) and Mardi Gras?
Technically, Carnival is the entire pre-Lenten season, from January 6th to Shrove Tuesday. And whether you call it Shrove Tuesday, Mardi Gras, or Fat Tuesday, it is the final day of Carnival.
However, the term Mardi Gras is often used to mean both the Mardi Gras season (also known as Carnival in some parts of the world) and the final Tuesday before Lent starts. In other words, Mardi Gras can mean the entire period between the Twelfth Night and Mardi Gras or it can be that specific Tuesday before the Lenten season begins.
Not confusing at all, right?
Now that we’re all on the same page, here are several important Mardi Gras tips to help you laissez les bons temps rouler (that’s how they say “let the good times roll”) in the Big Easy.
When is Mardi Gras?
Mardi Gras starts 12 days after Christmas, on January 6th. It runs through Fat Tuesday (literally Mardi Gras in French), the day before Ash Wednesday and 47 days before Easter. Because Easter moves around the Gregorian calendar based on the phases of the moon, the Mardi Gras season can last anywhere from four to nine weeks.
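Because the start of the season is fixed while the end floats with Easter, the date arithmetic is easy to check. Below is a minimal, purely illustrative sketch in Python: it uses the widely published anonymous Gregorian computus (Meeus/Jones/Butcher) for Easter and the offsets described in this guide (Fat Tuesday 47 days before Easter, Ash Wednesday 46). The function names are just placeholders, not part of any official calendar library.

```python
from datetime import date, timedelta

def easter_sunday(year: int) -> date:
    """Gregorian Easter via the anonymous (Meeus/Jones/Butcher) computus."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def mardi_gras_dates(year: int) -> dict:
    """Key dates of the season, using the offsets described in this guide."""
    easter = easter_sunday(year)
    fat_tuesday = easter - timedelta(days=47)    # day before Ash Wednesday
    ash_wednesday = easter - timedelta(days=46)  # 40 fasting days + 6 Sundays
    season_weeks = (fat_tuesday - date(year, 1, 6)).days / 7  # from Twelfth Night
    return {"easter": easter, "ash_wednesday": ash_wednesday,
            "fat_tuesday": fat_tuesday, "season_weeks": round(season_weeks, 1)}

print(mardi_gras_dates(2023))
# {'easter': datetime.date(2023, 4, 9), 'ash_wednesday': datetime.date(2023, 2, 22),
#  'fat_tuesday': datetime.date(2023, 2, 21), 'season_weeks': 6.6}
```

Running it for the earliest and latest possible Easter dates (March 22 and April 25) gives seasons of roughly 4 and 8.9 weeks, which is where the four-to-nine-week range above comes from.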
What Does Mardi Gras Celebrate?
During the Middle Ages, Christians throughout Europe would eat up the rich fatty foods on hand — like meat, eggs, milk, and cheese — in the days leading up to the Lenten season. Modern Christians, especially Catholics, celebrate Mardi Gras as a last hoorah of fattening food, sweet treats, and alcohol before giving up meat, forgoing dessert, and fasting over the weeks that lead up to Easter.
How Did Mardi Gras Start?
It is believed that Mardi Gras is based on the pagan festivals of Saturnalia and Lupercalia that were celebrated throughout the Roman Empire. Occurring at the end of December, Saturnalia was an upbeat, party-like fete during which Roman social norms were set aside and role reversal was common. The tradition of wearing masks and dressing up in elaborate costumes can be linked to Saturnalia.
Whether it’s called Carnival or Mardi Gras, the pre-Lenten celebration features elaborate costumes that often defy social norms, encourage cross-dressing, or are a bit risque.
Taking place in February, Lupercalia was observed to keep evil spirits at bay, focus on good health, and encourage fertility as spring dawned. Lupercalia involved feasting, drinking, and all sorts of delightful debauchery. And you can probably see the link between Lupercalia and Mardi Gras, especially if you’ve ever celebrated Mardi Gras on Bourbon Street.
As Christianity spread throughout Europe in the Middle Ages, the centuries-old pagan celebrations evolved into church-sanctioned traditions. However, the opportunity to set aside social norms, cross-dress, eat, drink, and be merry remained intact.
Where Was the First Mardi Gras in the United States?
It depends who you ask.
In the US, the first unofficial Mardi Gras took place on March 3, 1699. After sailing from Brest, France, in October 1698, French explorers Pierre Le Moyne d’Iberville and Sieur de Bienville landed about 60 miles south of modern-day New Orleans that spring. With fond memories of the Mardi Gras celebrations they were missing back home, the explorers named the spot Point du Mardi Gras and held a small celebration. Louisianans cite this as the first Mardi Gras celebration in the US.
But just over the state line in Mobile, Alabamans stake a similar claim. Founded by the French in 1702, Mobile hosted its first Mardi Gras celebration in 1703. And while French explorers technically had a small pre-Lenten fete in Louisiana a few years earlier, New Orleans wasn’t founded until 1718 and the first Mardi Gras on record wasn’t held until 1857.
Where is Mardi Gras Celebrated?
Although specific traditions may vary from destination to destination, Mardi Gras is celebrated across Europe and throughout the Americas. The Monday and Tuesday before Ash Wednesday (also known as Lundi Gras and Mardi Gras) are public holidays throughout much of Europe. I’m not going to lie, missing two days of school for Carnaval only added to the awesomeness of this holiday as a kid living in Maastricht, the Netherlands.
Known as Carnival, Mardi Gras is celebrated throughout Brazil, most notably in Rio de Janeiro.
In the United States, the last day of the pre-Lenten season is not commonly recognized as a holiday, but there are a few exceptions. Mardi Gras is a state holiday in Louisiana. Just over the state line in Alabama, Mardi Gras is also a recognized holiday in Mobile and Baldwin Counties.
What does Mardi Gras Mean in French?
Mardi is the French word for Tuesday, and gras is the French word for fat. That’s why Mardi Gras is also known as Fat Tuesday. The same day also goes by the name Shrove Tuesday.
Got it! So Lundi Gras Means…
Lundi is the French word for Monday and gras is the French word for fat. So Lundi Gras is a lot like Mardi Gras, only 24 hours earlier in Mardi Gras season.
So What Day of the Week is Associated with Boeuf Gras?
Not so fast. Just to keep you on your toes, Boeuf Gras has nothing to do with a day of the week and everything to do with not eating meat during Lent. Boeuf is the French word for cow and gras is still the French word for fat. So unlike Lundi Gras and Mardi Gras which are connected with days of the week, Boeuf Gras literally means “fattened cow” in French. More important than glittery masks or bead necklaces, it’s one of the oldest symbols of Mardi Gras.
A boeuf gras (fattened cow) is an important symbol during Mardi Gras.
Why? Because in the Middle Ages, the fattened cow would have been paraded through the streets before it was slaughtered and eaten as a final meal before Lent. Kinda makes you look forward to a church-imposed vegetarian diet, doesn’t it?
When are the Mardi Gras Parades in New Orleans?
Mardi Gras parades take place in New Orleans on January 6th, every weekend leading up to Fat Tuesday, and on Lundi Gras and Mardi Gras. While the Mardi Gras festivities that take place in the French Quarter are the most famous (or perhaps infamous), they tend to feature intoxicated revelers, more skin than you might want to see, and costumes you’d rather not explain to your kids. Plus, the narrow streets of the French Quarter aren’t wide enough for floats to glide through. So one of the most helpful Mardi Gras tips is where to celebrate Fat Tuesday outside of the French Quarter.
Sage Advice: Whether you visit the Big Easy for Mardi Gras or another time during the year, here are the best things to do in New Orleans.
A balcony in New Orleans’s French Quarter decorated for Mardi Gras.
For a more family-friendly Mardi Gras experience (and to experience Mardi Gras like a local), take in the parades in Uptown New Orleans instead. Parades are also held in New Orleans suburbs like Covington and Metairie.
If you want to know what Mardi Gras parades are taking place in and around New Orleans today (or in the near future), download the Mardi Gras parade tracker. Not only will it deliver up-to-the-minute information on more than 80 parades in the greater New Orleans area, it will help you Mardi Gras like a monarch.
Asking for Mardi Gras throws during a parade.
What is a Mardi Gras Throw?
From beaded necklaces to special coins and even coconuts, Mardi Gras throws are the goodies tossed from floats or passed out to bystanders during Mardi Gras parades.
It’s a New Orleans tradition to ask for a throw by saying, “Throw me something, mister!” (And despite what you may have heard, you are NOT required to flash your bare breasts for a shot at catching a Mardi Gras throw.)
Whether made with plastic or metallic beads, whether plain or embellished with other trinkets, you’ll find Mardi Gras beads everywhere after a Mardi Gras parade.
Can You Recycle Mardi Gras Beads?
That’s a good question! And, after crews pulled 93,000 pounds of Mardi Gras beads out of storm drains in New Orleans, it does make me wonder what sort of impact the made-in-China plastic trinkets have on the Pelican State. The answer is that Mardi Gras beads cannot be recycled. And by “recycled” I mean melted down and made into something else. However, they can be reused or upcycled.
One of the most eco-friendly Mardi Gras tips is to check out the Krewe of Arc’s ArcGNO Recycle Center in Metairie. They collect Mardi Gras throws — including Mardi Gras beads — and resell them. Shop online for all kinds of Mardi Gras goodies. You’ll find everything from basic Mardi Gras throw beads to specialty beads that extend to other holidays like St. Patrick’s Day and the Fourth of July. You can also purchase krewe throws, Mardi Gras trinkets, costume accessories, and more!
What are the Mardi Gras Colors?
Mardi Gras colors are purple, gold, and green. The three hues are said to be tied to these virtues:
Purple symbolizes justice,
Gold represents power, and
Green stands for faith
As legend has it, when Russian Grand Duke Alexis Romanov attended Mardi Gras in 1872, he handed out strands of glass beads to the people he met, cementing the colors’ significance.
What is a Krewe?
Krewes are associations that organize parades, balls, and other events during Carnival. In and around New Orleans there are between 50 and 80 krewes. These are some of the most notable organizations.
Established in 1856, the Mistick Krewe of Comus organized the first Fat Tuesday parade in New Orleans. And, by being pioneers in the Mardi Gras mayhem, they established a custom that most other krewes still follow today. Their parades feature a themed, torch-lit procession of elaborate floats and masked costumes. And it’s all followed by a masked ball.
Fun Fact: The Mistick Krewe of Comus gets its name from komos, a ritualistic drunken procession performed by revelers in ancient Greece.
Flambeaux (the plural of flambeau, a flaming torch) comes from the French word flambe, meaning “flame.” The first official Mardi Gras flambeaux debuted with the Mistick Krewe of Comus on Fat Tuesday in 1857. In the beginning, the flambeaux were needed for revelers to see the Carnival parades at night.
Founded a generation later, the Krewe of Rex organizes one of Mardi Gras’s most popular parades. Leveraging the Latin word for king (which is, you guessed it, rex), this krewe crowned its first King of Carnival back in 1872 when Russian Grand Duke Alexis Romanov was in attendance. There is a tight connection between Mistick Krewe of Comus and the Rex Organization. In fact, the two krewes hold their annual balls together on Mardi Gras night.
Sage Advice: Although every parade is amazing in its own way, locals say that the krewes of Orpheus, Bacchus, and Endymion (also known as the three super krewes) are the most detailed, humongous, and extravagant of all Mardi Gras parades.
Rex is the King of Carnival. But here’s an important Mardi Gras tip: Never EVER refer to him as King Rex. Because rex is the Latin word for king, that’s like saying King King, and they don’t waste time repeating words during Mardi Gras in the Big Easy.
Who is the King of Mardi Gras?
Known as the King of Carnival and Monarch of Merriment, Rex symbolically presides over Mardi Gras. A member of the Rex Organization, the crowned king is typically a prominent member of the community actively involved in philanthropic and civic-minded projects.
Rather than the king’s real-life wife, life partner, or other “plus one” the Queen of Carnival is selected from the debutantes being presented that Mardi Gras season.
Both the King and Queen of Carnival are chosen in the spring, and they must keep their identities secret for nearly a year until it is revealed on Lundi Gras.
What are Traditional Mardi Gras Foods?
In a town known for its delicious, rich, and filling food, traditional Mardi Gras food doesn’t disappoint! Savory dishes to try include:
What is a King Cake?
Like the fleur de lis, beignets, and Mardi Gras, king cake made its way to New Orleans via France. Typically decorated with purple, gold, and green sprinkles, a king cake looks like a braided bundt cake and tastes a bit like a cinnamon roll. Hidden inside the cake is a small baby to represent Jesus, the King of Glory.
If you’re the lucky person who bites into a piece of king cake and discovers baby Jesus, one of several things may occur. If you don’t break a tooth, then it is generally expected that you’re already lucky and should experience increased prosperity. In some families, the person who finds the baby king is expected to make or purchase next year’s king cake. And at some gatherings, the person who discovers the pint-sized person in their slice of cake is crowned the King (or Queen) of that event.
Sage Advice: King cakes were originally made well before the first plastics were formed out of phenol and formaldehyde, so French pastry chefs baked a small porcelain baby into the cake. In an era of everything plastic (see “Can you recycle Mardi Gras beads?”) the last thing you want to do is bake a figurine that will melt into your cake. Instead, once your cake has cooled, carefully insert the plastic baby Jesus into your king cake before frosting.
Want to celebrate Mardi Gras with a king cake at home? Here are a few recipes to try:
If you know your way around the kitchen (and have better luck working with yeast than I do), this is one of the most authentic king cake recipes.
If scalding milk and working with yeast is outside of your culinary wheelhouse, or you just want to get out of the kitchen faster, try this easy king cake recipe that uses packaged cinnamon rolls as a super smart shortcut.
Have You Attended Mardi Gras in New Orleans?
What did you like most about the experience? Any additional tips and tricks to pass along to others planning to celebrate Mardi Gras in New Orleans? Share your experiences in the comments section below.
Looking for more information to plan your Louisiana vacation? Check out my additional recommendations to help you plan your trip to Louisiana, including what to see and do in Louisiana, the best places to stay in Louisiana, where to eat in Louisiana, and more!
I learned so much from your post! Even as a Catholic, the connection between Mardi Gras and Catholic (and other) traditions is very insightful. I also love that you’ve included a bit about what to do with all those colorful Mardi Gras beads. I didn’t even consider the waste and recyclability before reading this, but now I will look to the Recycle Center in Metairie to repurpose mine. Great tip!
Mardi Gras is an old tradition and has many iterations around the world. My dream is to be in Venice and wear one of those beautiful masks. The crowds are probably horrendous, but I love the idea of being completely anonymous and admired for a few hours.
In this digital age where everyone is anonymous for maybe 15 minutes of a lifetime, I agree that it would be totally amazing to be anonymous for a few hours. And, after all, that is the whole purpose behind the mask. Funny how what’s old is new again at some point in human history!
There are so many things in this post that I didn’t even realize I didn’t know about Mardi Gras! I had no idea there was a connection between Carnival and Mardi Gras. It makes so much sense though! Thanks for making me smarter!
I had no idea what Mardi Gras was all about, and like you I’m a non-practicing Catholic so I should have known. I thought it was a big weekend party. I didn’t know it lasted so long. I would love to see it sometime!
This is a very thorough explanation of the origin of Mardi Gras and its associated terms. I even learned about things like Twelfth Night! I’m not Catholic but I knew and have practiced some of the other things like Ash Wednesday and Lent. Being from Alabama, I’m very familiar with Mobile claiming the first Mardi Gras but you know, I wasn’t around back then! LOL!
Fantastic guide here and to be honest, I don’t really know much about Mardi Gras. In the past it has never appealed to me, but I’ve got friends in Louisiana who keep wanting me to pop over the pond and check out this festival. So much information here to take in, but it gives me a better idea of what to expect if I ever do make it out there.
How interesting! I was in NO for Mardi Gras about 30 years ago and had no idea about the origin or meaning of the festival. Funnily enough I now write about Greece and the Epiphany and Lent is a big deal in the Greek Orthodox religion and celebrations at the end of it are huge. Next time I’m in New Orleans I’m going to think about things quite differently!
We loved visiting New Orleans for Mardi Gras. Doing Mardi Gras at some other place around the world is on our travel wish list. Venice for Carnivale in full costume would be wonderful. Before we went, I really did not realize how many different parades there were. Great to read some of the history. And for you to provide some of the related Christian terms linked to Mardi Gras. We were happy to leave the vast collection of beads behind to be re-used. Although I might have kept a representative sample if I had known what the colours mean. I was happy when I found a baby in my King Cake! It still sits on my desk. What a great comprehensive guide.
This is such an informative article. I love that you included the past about Mardi Gras and not just that it’s the party holiday that it is today. Looks like a great time to celebrate it in New Orleans though!
What a great read. It’s one of those events that you know about although don’t really KNOW about. Thanks for the insights. We enjoy the European versions of the Carnaval, especially in Aguillas in Spain. kx
I hear ya’! I am most familiar with the Carnaval (Dutch spelling) celebrations in the southern part of the Netherlands because I lived there for nearly four years as a teenager. Have fun this Carnival season!
We lived in coastal Mississippi for years and never made it to N.O. for Mardi Gras but always enjoyed the parades in our area. Your post is a perfect explanation of what it is and the meanings/colors behind it. Great job, thanks for sharing!
Great post! I love that you give so much background to the Mardi Gras celebrations. I actually had no idea there were multiple parades in New Orleans. I’ve always heard Shrove Tuesday called Pancake Tuesday here. I never considered that so many of the Mardi Gras beads would end up in the storm drains, 93,000 lb is an insane amount of beads!
Fantastic post! Mardi Gras to me brings to mind a hard-partying college trip I made with some girlfriends decades ago. Clearly, we need to go back and experience it in a more grown-up way! Since we live in Colombia, we’re enjoying experiencing Carnival in various Latin American cultures and it’s remarkable how similar they all are.
There is so much good information in this post. Even though I have never been to New Orleans during Mardi Gras season, it is almost impossible to avoid learning about or experiencing it in some way or another at any time of year. I think we visited the warehouse where some of the floats were made and I also distinctly remember trying a king cake. With that said, I learned a ton from reading your post, and I’m so glad that you tied in Mardi Gras with Carnival, especially since I will be in Brazil for Carnival this year (albeit in Sao Paulo, not Rio). Top-notch write up with some really great tips and ideas!
Oooooooh, I bet Brazil during Carnival will be a blast! This time of year always makes me long for the many Carnavals (spelled the Dutch way) we got to celebrate when we lived in Maastricht. Such a fun holiday!
This is a great post! I have lived in Louisiana all of my life and, I will be honest, I have never been to Mardi Gras in New Orleans, but we have a HUGE celebration in Lake Charles. We are no longer in a Mardi Gras krewe but we still are invited to balls. Thanks for including my Boudin Stuffed King Cake.
Your kind words made my day! To have the endorsement of a native Louisianan makes me feel like I’ve done this treasured tradition justice. Your Boudin Stuffed King Cake recipe sounds amazing, and my daughter and I hope to make it this Mardi Gras season. Thanks for sharing it!
|
|
no
|
Bibliography
|
Was Shakespeare the real author of all his plays and poems?
|
yes_statement
|
"shakespeare" is the true "author" of all his "plays" and "poems".. all of "shakespeare"'s "plays" and "poems" were written by him.
|
https://en.wikipedia.org/wiki/Shakespeare_authorship_question
|
Shakespeare authorship question - Wikipedia
|
Oxford, Bacon, Derby, and Marlowe (clockwise from top left, Shakespeare centre) have each been proposed as the true author.
The Shakespeare authorship question is the argument that someone other than William Shakespeare of Stratford-upon-Avon wrote the works attributed to him. Anti-Stratfordians—a collective term for adherents of the various alternative-authorship theories—believe that Shakespeare of Stratford was a front to shield the identity of the real author or authors, who for some reason—usually social rank, state security, or gender—did not want or could not accept public credit.[1] Although the idea has attracted much public interest,[2][a] all but a few Shakespeare scholars and literary historians consider it a fringe theory, and for the most part acknowledge it only to rebut or disparage the claims.[3]
Supporters of alternative candidates argue that theirs is the more plausible author, and that William Shakespeare lacked the education, aristocratic sensibility, or familiarity with the royal court that they say is apparent in the works.[12] Those Shakespeare scholars who have responded to such claims hold that biographical interpretations of literature are unreliable in attributing authorship,[13] and that the convergence of documentary evidence used to support Shakespeare's authorship—title pages, testimony by other contemporary poets and historians, and official records—is the same used for all other authorial attributions of his era.[14] No such direct evidence exists for any other candidate,[15] and Shakespeare's authorship was not questioned during his lifetime or for centuries after his death.[16]
Despite the scholarly consensus,[17] a relatively small[18] but highly visible and diverse assortment of supporters, including prominent public figures,[19] have questioned the conventional attribution.[20] They work for acknowledgment of the authorship question as a legitimate field of scholarly inquiry and for acceptance of one or another of the various authorship candidates.[21]
Overview
The arguments presented by anti-Stratfordians share several characteristics.[22] They attempt to disqualify William Shakespeare as the author and usually offer supporting arguments for a substitute candidate. They often postulate some type of conspiracy that protected the author's true identity,[23] which they say explains why no documentary evidence exists for their candidate and why the historical record supports Shakespeare's authorship.[24]
Most anti-Stratfordians suggest that the Shakespeare canon exhibits broad learning, knowledge of foreign languages and geography, and familiarity with Elizabethan and Jacobeancourt and politics; therefore, no one but a highly educated individual or court insider could have written it.[25] Apart from literary references, critical commentary and acting notices, the available data regarding Shakespeare's life consist of mundane personal details such as vital records of his baptism, marriage and death, tax records, lawsuits to recover debts, and real estate transactions. In addition, no document attests that he received an education or owned any books.[26] No personal letters or literary manuscripts certainly written by Shakespeare of Stratford survive. To sceptics, these gaps in the record suggest the profile of a person who differs markedly from the playwright and poet.[27] Some prominent public figures, including Walt Whitman, Mark Twain, Helen Keller, Henry James, Sigmund Freud, John Paul Stevens, Prince Philip, Duke of Edinburgh and Charlie Chaplin, have found the arguments against Shakespeare's authorship persuasive, and their endorsements are an important element in many anti-Stratfordian arguments.[19][28][29]
At the core of the argument is the nature of acceptable evidence used to attribute works to their authors.[30] Anti-Stratfordians rely on what has been called a "rhetoric of accumulation",[31] or what they designate as circumstantial evidence: similarities between the characters and events portrayed in the works and the biography of their preferred candidate; literary parallels with the known works of their candidate; and literary and hidden allusions and cryptographic codes in works by contemporaries and in Shakespeare's own works.[32]
In contrast, academic Shakespeareans and literary historians rely mainly on direct documentary evidence—in the form of title page attributions and government records such as the Stationers' Register and the Accounts of the Revels Office—and contemporary testimony from poets, historians, and those players and playwrights who worked with him, as well as modern stylometric studies. Gaps in the record are explained by the low survival rate for documents of this period.[33] Scholars say all these converge to confirm William Shakespeare's authorship.[34] These criteria are the same as those used to credit works to other authors and are accepted as the standard methodology for authorship attribution.[35]
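To give a concrete sense of what a stylometric comparison involves, here is a minimal, purely illustrative sketch in Python. It compares the relative frequencies of a handful of common function words between two texts, which is only the general shape of such methods; actual attribution studies (for example, those in the tradition of Burrows's Delta) use hundreds of features, large reference corpora, and careful normalisation, so this is not a reconstruction of any particular study cited here.

```python
import re
from collections import Counter
from math import sqrt

# A tiny feature set of high-frequency function words; real stylometric
# studies use hundreds of features plus corpus-wide normalisation.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "a", "with", "but", "not"]

def profile(text: str) -> list:
    """Relative frequency of each function word in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(u, v) -> float:
    """Cosine similarity of two frequency vectors (1.0 = identical profile)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy usage with two short, well-known lines; real comparisons need far more text.
sample_a = "To be, or not to be, that is the question."
sample_b = "Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune."
print(round(cosine_similarity(profile(sample_a), profile(sample_b)), 3))
```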
Case against Shakespeare's authorship
Little is known of Shakespeare's personal life, and some anti-Stratfordians take this as circumstantial evidence against his authorship.[36] Further, the lack of biographical information has sometimes been taken as an indication of an organised attempt by government officials to expunge all traces of Shakespeare, including perhaps his school records, to conceal the true author's identity.[37][38]
Shakespeare's background
Shakespeare was born, brought up, and buried in Stratford-upon-Avon, where he maintained a household throughout the duration of his career in London. A market town of around 1,500 residents about 100 miles (160 km) north-west of London, Stratford was a centre for the slaughter, marketing, and distribution of sheep, as well as for hide tanning and wool trading. Anti-Stratfordians often portray the town as a cultural backwater lacking the environment necessary to nurture a genius, and depict Shakespeare as ignorant and illiterate.[39]
Shakespeare's father, John Shakespeare, was a glover (glove-maker) and town official. He married Mary Arden, one of the Ardens of Warwickshire, a family of the local gentry. Both signed their names with a mark, and no other examples of their writing are extant.[40] This is often used as an indication that Shakespeare was brought up in an illiterate household. There is also no evidence that Shakespeare's two daughters were literate, save for two signatures by Susanna that appear to be "drawn" instead of written with a practised hand. His other daughter, Judith, signed a legal document with a mark.[41] Anti-Stratfordians consider these marks and the rudimentary signature style evidence of illiteracy, and consider Shakespeare's plays, which "depict women across the social spectrum composing, reading, or delivering letters," evidence that the author came from a more educated background.[42]
Anti-Stratfordians consider Shakespeare's background incompatible with that attributable to the author of the Shakespeare canon, which exhibits an intimacy with court politics and culture, foreign countries, and aristocratic sports such as hunting, falconry, tennis, and lawn-bowling.[43] Some find that the works show little sympathy for upwardly mobile types such as John Shakespeare and his son, and that the author portrays individual commoners comically, as objects of ridicule. Commoners in groups are said to be depicted typically as dangerous mobs.[44]
Shakespeare's six surviving signatures have often been cited as evidence of his illiteracy.
The absence of documentary proof of Shakespeare's education is often a part of anti-Stratfordian arguments. The free King's New School in Stratford, established 1553, was about half a mile (0.8 kilometres) from Shakespeare's boyhood home.[45] Grammar schools varied in quality during the Elizabethan era, and there are no documents detailing what was taught at the Stratford school.[46] However, grammar school curricula were largely similar, and the basic Latin text was standardised by royal decree. The school would have provided an intensive education in Latin grammar, the classics, and rhetoric at no cost.[47]
Anti-Stratfordians also question how Shakespeare, with no record of the education and cultured background displayed in the works bearing his name, could have acquired the extensive vocabulary found in the plays and poems. The author's vocabulary is calculated to be between 17,500 and 29,000 words.[50][b] No letters or signed manuscripts written by Shakespeare survive. The appearance of Shakespeare's six surviving authenticated[51] signatures, which they characterise as "an illiterate scrawl", is interpreted as indicating that he was illiterate or barely literate.[52] All are written in secretary hand, a style of handwriting common to the era,[53] particularly in play writing,[54] and three of them utilize breviographs to abbreviate the surname.[53]
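As an aside on how a vocabulary figure of this kind can be produced, here is a minimal, purely illustrative sketch in Python that counts distinct word forms in a text. The wide range quoted above (17,500 to 29,000 words) partly reflects methodological choices, such as whether inflected forms are counted separately and how variant spellings are normalised; the snippet below simply counts surface forms and is not the method used in any particular published estimate.

```python
import re

def vocabulary_size(text: str) -> int:
    """Count distinct word forms (types) in a text.

    Inflected forms such as 'love' and 'loved' count as separate types,
    which is one reason published vocabulary estimates differ.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens))

# Usage sketch: in practice one would pass the full text of the canon.
sample = "Shall I compare thee to a summer's day? Thou art more lovely and more temperate."
print(vocabulary_size(sample))  # small sample, so only a handful of types
```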
Name as a pseudonym
Shakespeare's name was hyphenated on the cover of the 1609 quarto edition of the Sonnets.
In his surviving signatures William Shakespeare did not spell his name as it appears on most Shakespeare title pages. His surname was spelled inconsistently in both literary and non-literary documents, with the most variation observed in those that were written by hand.[55] This is taken as evidence that he was not the same person who wrote the works, and that the name was used as a pseudonym for the true author.[56]
Shakespeare's surname was hyphenated as "Shake-speare" or "Shak-spear" on the title pages of 15 of the 32 individual quarto (or Q) editions of Shakespeare's plays and in two of the five editions of poetry published before the First Folio. Of those 15 title pages with Shakespeare's name hyphenated, 13 are on the title pages of just three plays, Richard II, Richard III, and Henry IV, Part 1.[c][57] The hyphen is also present in one cast list and in six literary allusions published between 1594 and 1623. This hyphen use is construed to indicate a pseudonym by most anti-Stratfordians,[58] who argue that fictional descriptive names (such as "Master Shoe-tie" and "Sir Luckless Woo-all") were often hyphenated in plays, and pseudonyms such as "Tom Tell-truth" were also sometimes hyphenated.[59]
Reasons proposed for the use of "Shakespeare" as a pseudonym vary, usually depending upon the social status of the candidate. Aristocrats such as Derby and Oxford supposedly used pseudonyms because of a prevailing "stigma of print", a social convention that putatively restricted their literary works to private and courtly audiences—as opposed to commercial endeavours—at the risk of social disgrace if violated.[60] In the case of commoners, the reason was to avoid prosecution by the authorities: Bacon to avoid the consequences of advocating a more republican form of government,[61] and Marlowe to avoid imprisonment or worse after faking his death and fleeing the country.[62]
Lack of documentary evidence
Ben Jonson's "On Poet-Ape" from his 1616 collected works is taken by some anti-Stratfordians to refer to Shakespeare.
Anti-Stratfordians say that nothing in the documentary record explicitly identifies Shakespeare as a writer;[63] that the evidence instead supports a career as a businessman and real-estate investor; that any prominence he might have had in the London theatrical world (aside from his role as a front for the true author) was because of his money-lending, trading in theatrical properties, acting, and being a shareholder. They also believe that any evidence of a literary career was falsified as part of the effort to shield the true author's identity.[64]
Alternative authorship theories generally reject the surface meaning of Elizabethan and Jacobean references to Shakespeare as a playwright. They interpret contemporary satirical characters as broad hints indicating that the London theatrical world knew Shakespeare was a front for an anonymous author. For instance, they identify Shakespeare with the literary thief Poet-Ape in Ben Jonson's poem of the same name, the socially ambitious fool Sogliardo in Jonson's Every Man Out of His Humour, and the foolish poetry-lover Gullio in the university play The Return from Parnassus (performed c. 1601).[65] Similarly, praises of "Shakespeare" the writer, such as those found in the First Folio, are explained as references to the real author's pen-name, not the man from Stratford.[66]
Circumstances of Shakespeare's death
Shakespeare died on 23 April 1616 in Stratford, leaving a signed will to direct the disposal of his large estate. The language of the will makes no mention of personal papers, books, poems, or the 18 plays that remained unpublished at the time of his death. In an interlineation, the will mentions monetary gifts to fellow actors for them to buy mourning rings.[67]
The effigy of Shakespeare's Stratford monument as it was portrayed by Dugdale in 1656, as it appears today, and as it was portrayed in 1748 before the restoration
Any public mourning of Shakespeare's death went unrecorded, and no eulogies or poems memorialising his death were published until seven years later as part of the front matter in the First Folio of his plays.[68]
Oxfordians think that the phrase "our ever-living Poet" (an epithet that commonly eulogised a deceased poet as having attained immortal literary fame), included in the dedication to Shakespeare's sonnets that were published in 1609, was a signal that the true poet had died by then. Oxford had died in 1604, five years earlier.[69]
Shakespeare's funerary monument in Stratford consists of a demi-figure effigy of him with pen in hand and an attached plaque praising his abilities as a writer. The earliest printed image of the figure, in Sir William Dugdale's Antiquities of Warwickshire (1656), differs greatly from its present appearance. Some authorship theorists argue that the figure originally portrayed a man clutching a sack of grain or wool that was later altered to help conceal the identity of the true author.[70] In an attempt to put to rest such speculation, in 1924 M. H. Spielmann published a painting of the monument that had been executed before the 1748 restoration, which showed it very similar to its present-day appearance.[71] The publication of the image failed to achieve its intended effect, and in 2005 Oxfordian Richard Kennedy proposed that the monument was originally built to honour John Shakespeare, William's father, who by tradition was a "considerable dealer in wool".[72]
Shakespeare scholars see no reason to suspect that the name was a pseudonym or that the actor was a front for the author: contemporary records identify Shakespeare as the writer, other playwrights such as Ben Jonson and Christopher Marlowe came from similar backgrounds, and no contemporary is known to have expressed doubts about Shakespeare's authorship. While information about some aspects of Shakespeare's life is sketchy, this is true of many other playwrights of the time. Of some, next to nothing is known. Others, such as Jonson, Marlowe, and John Marston, are more fully documented because of their education, close connections with the court, or brushes with the law.[75]
Literary scholars employ the same methodology to attribute works to the poet and playwright William Shakespeare as they use for other writers of the period: the historical record and stylistic studies,[76] and they say the argument that there is no evidence of Shakespeare's authorship is a form of fallacious logic known as argumentum ex silentio, or argument from silence, since it takes the absence of evidence to be evidence of absence.[77] They criticise the methods used to identify alternative candidates as unreliable and unscholarly, arguing that their subjectivity explains why at least as many as 80 candidates[10] have been proposed as the "true" author.[78] They consider the idea that Shakespeare revealed himself autobiographically in his work as a cultural anachronism: it has been a common authorial practice since the 19th century, but was not during the Elizabethan and Jacobean eras.[79] Even in the 19th century, beginning at least with Hazlitt and Keats, critics frequently noted that the essence of Shakespeare's genius consisted in his ability to have his characters speak and act according to their given dramatic natures, rendering the determination of Shakespeare's authorial identity from his works that much more problematic.[80]
Historical evidence
Shakespeare's honorific "Master" was represented as "Mr." on the title page of The Rape of Lucrece (O5, 1616).
The historical record is unequivocal in ascribing the authorship of the Shakespeare canon to a William Shakespeare.[81] In addition to the name appearing on the title pages of poems and plays, this name was given as that of a well-known writer at least 23 times during the lifetime of William Shakespeare of Stratford.[82] Several contemporaries corroborate the identity of the playwright as an actor,[83] and explicit contemporary documentary evidence attests that the Stratford citizen was also an actor under his own name.[84]
In the rigid social structure of Elizabethan England, William Shakespeare was entitled to use the honorific "gentleman" after his father's death in 1601, since his father was granted a coat of arms in 1596.[87] This honorific was conventionally designated by the title "Master" or its abbreviations "Mr." or "M." prefixed to the name[74] (though it was often used by principal citizens and to imply respect to men of stature in the community without designating exact social status).[88] The title was included in many contemporary references to Shakespeare, including official and literary records, and identifies William Shakespeare of Stratford as the same William Shakespeare designated as the author.[89] Examples from Shakespeare's lifetime include two official stationers' entries. One is dated 23 August 1600 and entered by Andrew Wise and William Aspley:
This latter appeared on the title page of King Lear Q1 (1608) as "M. William Shak-speare: HIS True Chronicle Historie of the life and death of King LEAR and his three Daughters."[92]
Shakespeare's social status is also specifically referred to by his contemporaries in Epigram 159 by John Davies of Hereford in his The Scourge of Folly (1611): "To our English Terence Mr. Will: Shake-speare";[93] Epigram 92 by Thomas Freeman in his Runne and A Great Caste (1614): "To Master W: Shakespeare";[94] and in historian John Stow's list of "Our moderne, and present excellent Poets" in his Annales, printed posthumously in an edition by Edmund Howes (1615), which reads: "M. Willi. Shake-speare gentleman".[95]
Contemporary legal recognition
Both explicit testimony by his contemporaries and strong circumstantial evidence of personal relationships with those who interacted with him as an actor and playwright support Shakespeare's authorship.[98]
William Camden defended Shakespeare's right to bear heraldic arms about the same time he listed him as one of the great poets of his time.
The historian and antiquary Sir George Buc served as Deputy Master of the Revels from 1603 and as Master of the Revels from 1610 to 1622. His duties were to supervise and censor plays for the public theatres, arrange court performances of plays and, after 1606, to license plays for publication. Buc noted on the title page of George a Greene, the Pinner of Wakefield (1599), an anonymous play, that he had consulted Shakespeare on its authorship. Buc was meticulous in his efforts to attribute books and plays to the correct author,[99] and in 1607 he personally licensed King Lear for publication as written by "Master William Shakespeare".[100]
In 1602, Ralph Brooke, the York Herald, accused Sir William Dethick, the Garter King of Arms, of elevating 23 unworthy persons to the gentry.[101] One of these was Shakespeare's father, who had applied for arms 34 years earlier but had to wait for the success of his son before they were granted in 1596.[102] Brooke included a sketch of the Shakespeare arms, captioned "Shakespear ye Player by Garter".[103] The grants, including John Shakespeare's, were defended by Dethick and Clarenceux King of Arms William Camden, the foremost antiquary of the time.[104] In his Remaines Concerning Britaine—published in 1605, but finished two years previously and before the Earl of Oxford died in 1604—Camden names Shakespeare as one of the "most pregnant witts of these ages our times, whom succeeding ages may justly admire".[105]
Recognition by fellow actors, playwrights and writers
Actors John Heminges and Henry Condell knew and worked with Shakespeare for more than 20 years. In the 1623 First Folio, they wrote that they had published the Folio "onely to keepe the memory of so worthy a Friend, & Fellow aliue, as was our Shakespeare, by humble offer of his playes". The playwright and poet Ben Jonson knew Shakespeare from at least 1598, when the Lord Chamberlain's Men performed Jonson's play Every Man in His Humour at the Curtain Theatre with Shakespeare as a cast member. The Scottish poet William Drummond recorded Jonson's often contentious comments about his contemporaries: Jonson criticised Shakespeare as lacking "arte" and for mistakenly giving Bohemia a coast in The Winter's Tale.[106] In 1641, four years after Jonson's death, private notes written during his later life were published. In a comment intended for posterity (Timber or Discoveries), he criticises Shakespeare's casual approach to playwriting, but praises Shakespeare as a person: "I loved the man, and do honour his memory (on this side Idolatry) as much as any. He was (indeed) honest, and of an open, and free nature; had an excellent fancy; brave notions, and gentle expressions ..."[107]
In addition to Ben Jonson, other playwrights wrote about Shakespeare, including some who sold plays to Shakespeare's company. Two of the three Parnassus plays produced at St John's College, Cambridge, near the beginning of the 17th century mention Shakespeare as an actor, poet, and playwright who lacked a university education. In The First Part of the Return from Parnassus, two separate characters refer to Shakespeare as "Sweet Mr. Shakespeare", and in The Second Part of the Return from Parnassus (1606), the anonymous playwright has the actor Kempe say to the actor Burbage, "Few of the university men pen plays well ... Why here's our fellow Shakespeare puts them all down."[108]
An edition of The Passionate Pilgrim, expanded with an additional nine poems written by the prominent English actor, playwright, and author Thomas Heywood, was published by William Jaggard in 1612 with Shakespeare's name on the title page. Heywood protested this piracy in his Apology for Actors (1612), adding that the author was "much offended with M. Jaggard (that altogether unknown to him) presumed to make so bold with his name." That Heywood stated with certainty that the author was unaware of the deception, and that Jaggard removed Shakespeare's name from unsold copies even though Heywood did not explicitly name him, indicates that Shakespeare was the offended author.[109] Elsewhere, in his poem "Hierarchie of the Blessed Angels" (1634), Heywood affectionately notes the nicknames his fellow playwrights had been known by, observing that "mellifluous" Shakespeare was known simply as "Will".
Playwright John Webster, in his dedication to The White Devil (1612), wrote, "And lastly (without wrong last to be named), the right happy and copious industry of M. Shake-Speare, M. Decker, & M. Heywood, wishing what I write might be read in their light", here using the abbreviation "M." to denote "Master", a form of address properly used of William Shakespeare of Stratford, who was titled a gentleman.[111]
In a verse letter to Ben Jonson dated to about 1608, Francis Beaumont alludes to several playwrights, including Shakespeare, whose best lines he offered as an example of how far a writer might go "by the dim light of Nature", without the benefit of learning.
Historical perspective of Shakespeare's death
The monument to Shakespeare, erected in Stratford before 1623, bears a plaque with an inscription identifying Shakespeare as a writer. The first two Latin lines translate to "In judgment a Pylian, in genius a Socrates, in art a Maro, the earth covers him, the people mourn him, Olympus possesses him", referring to Nestor, Socrates, Virgil, and Mount Olympus. The monument was referred to in the First Folio, and other early 17th-century records also identify it as a memorial to Shakespeare and transcribe the inscription.[113] Sir William Dugdale also included the inscription in his Antiquities of Warwickshire (1656), but the engraving was done from a sketch made in 1634 and, like other portrayals of monuments in his work, is not accurate.[114]
Shakespeare's will, executed on 25 March 1616, bequeaths "to my fellows John Hemynge Richard Burbage and Henry Cundell 26 shilling 8 pence apiece to buy them [mourning] rings". Numerous public records, including the royal patent of 19 May 1603 that chartered the King's Men, establish that Phillips, Heminges, Burbage, and Condell were fellow actors in the King's Men with William Shakespeare; two of them later edited his collected plays. Anti-Stratfordians have cast suspicion on these bequests, which were interlined, and claim that they were added later as part of a conspiracy. However, the will was proved in the Prerogative Court of the Archbishop of Canterbury (George Abbot) in London on 22 June 1616, and the original was copied into the court register with the bequests intact.[115]
John Taylor was the first poet to mention in print the deaths of Shakespeare and Francis Beaumont in his 1620 book of poems The Praise of Hemp-seed.[116] Both had died four years earlier, less than two months apart. Ben Jonson wrote a short poem "To the Reader" commending the First Folio engraving of Shakespeare by Droeshout as a good likeness. Included in the prefatory commendatory verses was Jonson's lengthy eulogy "To the memory of my beloved, the Author Mr. William Shakespeare: and what he hath left us" in which he identifies Shakespeare as a playwright, a poet, and an actor, and writes:
Sweet Swan of Avon! what a sight it were
To see thee in our waters yet appear,
And make those flights upon the banks of Thames,
That so did take Eliza, and our James!
Here Jonson links the author to Stratford's river, the Avon, and confirms his appearances at the courts of Elizabeth I and James I.[117]
Leonard Digges wrote the elegy "To the Memorie of the Deceased Authour Maister W. Shakespeare" in the 1623 First Folio, referring to "thy Stratford Moniment". Living four miles from Stratford-upon-Avon from 1600 until attending Oxford in 1603, Digges was the stepson of Thomas Russell, whom Shakespeare in his will designated as overseer to the executors.[118][119] William Basse wrote an elegy entitled "On Mr. Wm. Shakespeare" sometime between 1616 and 1623, in which he suggests that Shakespeare should have been buried in Westminster Abbey next to Chaucer, Beaumont, and Spenser. This poem circulated very widely in manuscript and survives today in more than two dozen contemporary copies; several of these have a fuller, variant title "On Mr. William Shakespeare, he died in April 1616", which unambiguously specifies that the reference is to Shakespeare of Stratford.[120]
Evidence for Shakespeare's authorship from his works
Shakespeare's are the most studied secular works in history.[121] Contemporary comments and some textual studies support the authorship of someone with an education, background, and life span consistent with that of William Shakespeare.[122]
Ben Jonson and Francis Beaumont referenced Shakespeare's lack of classical learning, and no extant contemporary record suggests he was a learned writer or scholar.[123] This is consistent with classical blunders in Shakespeare, such as mistaking the scansion of many classical names, or the anachronistic citing of Plato and Aristotle in Troilus and Cressida.[124] It has been suggested that most of Shakespeare's classical allusions were drawn from Thomas Cooper's Thesaurus Linguae Romanae et Britannicae (1565), since a number of errors in that work are replicated in several of Shakespeare's plays,[125] and a copy of this book had been bequeathed to Stratford Grammar School by John Bretchgirdle for "the common use of scholars".[126]
Later critics such as Samuel Johnson remarked that Shakespeare's genius lay not in his erudition, but in his "vigilance of observation and accuracy of distinction which books and precepts cannot confer; from this almost all original and native excellence proceeds".[127] Much of the learning with which he has been credited, and the omnivorous reading imputed to him by later critics, is exaggerated, and he may well have absorbed much of his learning from conversation.[128][129] Contrary to previous claims—both scholarly and popular—about his vocabulary and word coinage, the evidence of vocabulary size and word-use frequency places Shakespeare with his contemporaries, rather than apart from them. Computerized comparisons with other playwrights demonstrate that his vocabulary is indeed large, but only because the canon of his surviving plays is larger than those of his contemporaries and because of the broad range of his characters, settings, and themes.[130]
Title page of the 1634 quarto of The Two Noble Kinsmen by John Fletcher and Shakespeare
Beginning in 1987, Ward Elliott, who was sympathetic to the Oxfordian theory, and Robert J. Valenza supervised a continuing stylometric study that used computer programs to compare Shakespeare's stylistic habits to the works of 37 authors who had been proposed as the true author. The study, known as the Claremont Shakespeare Clinic, was last held in the spring of 2010.[132] The tests determined that Shakespeare's work shows consistent, countable, profile-fitting patterns, suggesting that he was a single individual, not a committee, and that he used fewer relative clauses and more hyphens, feminine endings, and run-on lines than most of the writers with whom he was compared. The results showed that none of the other tested claimants' work could have been written by Shakespeare, nor could Shakespeare's works have been written by them, eliminating all of the claimants whose known works have survived—including Oxford, Bacon, and Marlowe—as the true authors of the Shakespeare canon.[133]
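To give a concrete sense of the kind of surface features such stylometric tests count, the sketch below tallies crude per-1,000-word rates for relative pronouns and hyphens in two short samples. It is an illustrative toy under stated assumptions, not the Claremont clinic's actual procedure; the feature definitions and sample passages are chosen purely for demonstration.

```python
# Illustrative toy only: crude surface-feature rates of the sort stylometric
# comparisons tabulate; not the Claremont Shakespeare Clinic's actual tests.
import re
from collections import Counter

def surface_features(text):
    """Return rough per-1,000-word rates for a few stylistic markers."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1  # avoid division by zero on empty input
    counts = Counter(words)
    relative_pronouns = counts["which"] + counts["who"] + counts["whom"]
    return {
        "relative_pronouns_per_1000": 1000 * relative_pronouns / n,
        "hyphens_per_1000": 1000 * text.count("-") / n,
    }

# Hypothetical usage: compare two short passages by different authors.
sample_a = "The quality of mercy is not strain'd; it droppeth as the gentle rain from heaven."
sample_b = "Was this the face that launch'd a thousand ships, and burnt the topless towers of Ilium?"
print(surface_features(sample_a))
print(surface_features(sample_b))
```

In practice such studies use many more features and large, carefully edited samples; the point of the sketch is only that the profiles being compared are simple, countable rates.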
Shakespeare's style evolved over time in keeping with changes in literary trends. His late plays, such as The Winter's Tale, The Tempest, and Henry VIII, are written in a style similar to that of other Jacobean playwrights and radically different from that of his Elizabethan-era plays.[134] In addition, after the King's Men began using the Blackfriars Theatre for performances in 1609, Shakespeare's plays were written to accommodate a smaller stage with more music, dancing, and more evenly divided acts to allow for trimming the candles used for stage lighting.[135]
In a 2004 study, Dean Keith Simonton examined the correlation between the thematic content of Shakespeare's plays and the political context in which they would have been written. He concludes that the consensus play chronology is roughly the correct order, and that Shakespeare's works exhibit gradual stylistic development consistent with that of other artistic geniuses.[136] When the mainstream chronologies are backdated by two years, they yield substantial correlations between thematic content and political context, whereas the alternative chronologies proposed by Oxfordians display no relationship regardless of the time lag.[137][138]
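The backdating test described above amounts to correlating two yearly series at different time lags. The sketch below shows the general shape of such a lagged-correlation check; the numbers and the lag range are purely hypothetical and are not Simonton's data or measures.

```python
# Purely hypothetical data: illustrates a lagged-correlation check of the kind
# described above, not Simonton's actual measures or results.
from statistics import correlation  # available in Python 3.10+

thematic  = [0.2, 0.4, 0.5, 0.7, 0.6, 0.8, 0.9]   # thematic-content score per year (made up)
political = [0.1, 0.3, 0.4, 0.6, 0.7, 0.7, 0.95]  # political-context score per year (made up)

def lagged_correlation(x, y, lag):
    """Correlate x against y shifted back by `lag` years (lag >= 0)."""
    if lag:
        x, y = x[lag:], y[:-lag]
    return correlation(x, y)

for lag in range(3):
    print(f"lag {lag}: r = {lagged_correlation(thematic, political, lag):.3f}")
```

A chronology that fits the historical record should produce a clear peak in correlation at some small lag; a chronology unrelated to the events should show no such peak at any lag.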
Textual evidence from the late plays indicates that Shakespeare collaborated with other playwrights who were not always aware of what he had done in a previous scene. This suggests that they were following a rough outline rather than working from an unfinished script left by an already dead playwright, as some Oxfordians propose. For example, in The Two Noble Kinsmen (1612–1613), written with John Fletcher, Shakespeare has two characters meet and leaves them on stage at the end of one scene, yet Fletcher has them act as if they were meeting for the first time in the following scene.[139]
History of the authorship question
Bardolatry and early doubt
Despite adulatory tributes attached to his works, Shakespeare was not considered the world's greatest writer in the century and a half following his death.[140] His reputation was that of a good playwright and poet among many others of his era.[141] Beaumont and Fletcher's plays dominated popular taste after the theatres reopened in the Restoration Era in 1660, with Ben Jonson's and Shakespeare's plays vying for second place. After the actor David Garrick mounted the Shakespeare Stratford Jubilee in 1769, Shakespeare led the field.[142] Excluding a handful of minor 18th-century satirical and allegorical references,[143] there was no suggestion in this period that anyone else might have written the works.[4] The authorship question emerged only after Shakespeare had come to be regarded as the English national poet and a unique genius.[144]
By the beginning of the 19th century, adulation was in full swing, with Shakespeare singled out as a transcendent genius, a phenomenon for which George Bernard Shaw coined the term "bardolatry" in 1901.[145] By the middle of the century his genius was noted as much for its intellectual as for its imaginative strength.[146] The framework with which early 19th-century thinkers imagined the English Renaissance focused on kings, courtiers, and university-educated poets; in this context, the idea that someone of Shakespeare's comparatively humble background could produce such works became increasingly unacceptable.[147][6] Although still convinced that Shakespeare was the author of the works, Ralph Waldo Emerson expressed this disjunction in a lecture in 1846 by allowing that he could not reconcile Shakespeare's verse with the image of a jovial actor and theatre manager.[148] The rise of historical criticism, which challenged the authorial unity of Homer's epics and the historicity of the Bible, also fuelled emerging puzzlement over Shakespeare's authorship, which in one critic's view was "an accident waiting to happen".[149] David Strauss's investigation of the biography of Jesus, which shocked the public with its scepticism of the historical accuracy of the Gospels, influenced the secular debate about Shakespeare.[150] In 1848, Samuel Mosheim Schmucker endeavoured to rebut Strauss's doubts about the historicity of Christ by applying the same techniques satirically to the records of Shakespeare's life in his Historic Doubts Respecting Shakespeare, Illustrating Infidel Objections Against the Bible. Schmucker, who never doubted that Shakespeare was Shakespeare, unwittingly anticipated and rehearsed many of the arguments later offered for alternative authorship candidates.[151]
Open dissent and the first alternative candidate
Delia Bacon was the first writer to formulate a comprehensive theory that Shakespeare was not the writer of the works attributed to him.
Shakespeare's authorship was first openly questioned in the pages of Joseph C. Hart's The Romance of Yachting (1848). Hart argued that the plays contained evidence that many different authors had worked on them. Four years later Dr. Robert W. Jameson anonymously published "Who Wrote Shakespeare?" in the Chambers's Edinburgh Journal, expressing similar views. In 1856 Delia Bacon's unsigned article "William Shakspeare and His Plays; An Enquiry Concerning Them" appeared in Putnam's Magazine.[152]
As early as 1845, Ohio-born Delia Bacon had theorised that the plays attributed to Shakespeare were actually written by a group under the leadership of Sir Francis Bacon, with Walter Raleigh as the main writer.[153] Their purpose was to inculcate an advanced political and philosophical system for which they themselves could not publicly assume responsibility.[154] She argued that Shakespeare's commercial success precluded his writing plays so concerned with philosophical and political issues, and that if he had, he would have overseen the publication of his plays in his retirement.[155]
Francis Bacon was the first single alternative author proposed in print, by William Henry Smith, in a pamphlet published in September 1856 (Was Lord Bacon the Author of Shakspeare's Plays? A Letter to Lord Ellesmere).[156] The following year Delia Bacon published a book outlining her theory: The Philosophy of the Plays of Shakspere Unfolded.[157] Ten years later, Judge Nathaniel Holmes of Kentucky published the 600-page The Authorship of Shakespeare supporting Smith's theory,[158] and the idea began to spread widely. By 1884 the question had produced more than 250 books, and Smith asserted that the war against the Shakespeare hegemony had almost been won by the Baconians after a 30-year battle.[159] Two years later the Francis Bacon Society was founded in England to promote the theory. The society still survives and publishes a journal, Baconiana, to further its mission.[160]
These arguments against Shakespeare's authorship were answered by academics. In 1857 the English critic George Henry Townsend published William Shakespeare Not an Impostor, criticising what he called the slovenly scholarship, false premises, specious parallel passages, and erroneous conclusions of the earliest proponents of alternative authorship candidates.[161]
Search for proof
In 1853, with the help of Ralph Waldo Emerson, Delia Bacon travelled to England to search for evidence to support her theories.[162] Instead of performing archival research, she sought to unearth buried manuscripts, and unsuccessfully tried to persuade a caretaker to open Bacon's tomb.[163] She believed she had deciphered instructions in Bacon's letters to look beneath Shakespeare's Stratford gravestone for papers that would prove the works were Bacon's, but after spending several nights in the chancel trying to summon the requisite courage, she left without prising up the stone slab.[164]
Ciphers became important to the Baconian theory, as they would later to the advocacy of other authorship candidates, with books such as Ignatius L. Donnelly's The Great Cryptogram (1888) promoting the approach. Dr. Orville Ward Owen constructed a "cipher wheel", a 1,000-foot strip of canvas onto which he had pasted the works of Shakespeare and other writers, mounted on two parallel wheels so that he could quickly collate pages containing key words as he turned them for decryption.[165] In his multi-volume Sir Francis Bacon's Cipher Story (1893), he claimed to have discovered Bacon's autobiography embedded in Shakespeare's plays, including the revelation that Bacon was the secret son of Queen Elizabeth, thus providing more motivation to conceal his authorship from the public.[165]
A feature in the Chicago Tribune on the 1916 trial of Shakespeare's authorship. From left: George Fabyan; Judge Tuthill; Shakespeare and Bacon; William Selig.
Perhaps because of Francis Bacon's legal background, both mock and real jury trials figured in attempts to prove claims for Bacon, and later for Oxford. The first mock trial was conducted over 15 months in 1892–93, and the results of the debate were published in the Boston monthly The Arena. Ignatius Donnelly was one of the plaintiffs, while F. J. Furnivall formed part of the defence. The 25-member jury, which included Henry George, Edmund Gosse, and Henry Irving, came down heavily in favour of William Shakespeare.[166] In 1916, Judge Richard Tuthill presided over a real trial in Chicago, in which a film producer brought an action against a Baconian advocate, George Fabyan. The producer argued that Fabyan's advocacy of Bacon threatened the profits expected from a forthcoming film about Shakespeare. The judge determined that ciphers identified by Fabyan's analysts proved that Francis Bacon was the author of the Shakespeare canon, awarding Fabyan $5,000 in damages. In the ensuing uproar, Tuthill rescinded his decision, and another judge, Frederick A. Smith, dismissed the case.[167]
In 1907, Owen claimed he had decoded instructions revealing that a box containing proof of Bacon's authorship had been buried in the River Wye near Chepstow Castle on the Duke of Beaufort's property. His dredging machinery failed to retrieve any concealed manuscripts.[168] That same year his former assistant, Elizabeth Wells Gallup, financed by George Fabyan, likewise travelled to England. She believed she had decoded a message, by means of a biliteral cipher, revealing that Bacon's secret manuscripts were hidden behind panels in Canonbury Tower in Islington.[169] None were found. Two years later, the American humorist Mark Twain publicly revealed his long-held anti-Stratfordian belief in Is Shakespeare Dead? (1909), favouring Bacon as the true author.[170]
In the 1920s Walter Conrad Arensberg became convinced that Bacon had willed the key to his cipher to the Rosicrucians. He thought this society was still active, and that its members communicated with each other under the aegis of the Church of England. On the basis of cryptograms he detected in the sixpenny tickets of admission to Holy Trinity Church in Stratford-upon-Avon, he deduced that both Bacon and his mother were secretly buried, together with the original manuscripts of Shakespeare's plays, in the Lichfield Chapter house in Staffordshire. He unsuccessfully petitioned the Dean of Lichfield to allow him both to photograph and excavate the obscure grave.[171][172] Maria Bauer was convinced that Bacon's manuscripts had been imported into Jamestown, Virginia, in 1653, and could be found in the Bruton Vault at Williamsburg. She gained permission in the late 1930s to excavate, but authorities quickly withdrew her permit.[173] In 1938 Roderick Eagle was allowed to open the tomb of Edmund Spenser to search for proof that Bacon was Shakespeare, but found only some old bones.[174]
Other candidates emerge
By the end of the 19th century other candidates had begun to receive attention. In 1895 Wilbur G. Zeigler, an attorney, published the novel It Was Marlowe: A Story of the Secret of Three Centuries, whose premise was that Christopher Marlowe did not die in 1593, but rather survived to write Shakespeare's plays.[175] He was followed by Thomas Corwin Mendenhall who, in the February 1902 issue of Current Literature, wrote an article based upon his stylometric work titled "Did Marlowe write Shakespeare?"[176] Karl Bleibtreu, a German literary critic, advanced the nomination of Roger Manners, 5th Earl of Rutland, in 1907.[177] Rutland's candidacy enjoyed a brief flowering, supported by a number of other authors over the next few years.[178] Anti-Stratfordians unaffiliated to any specific authorship candidate also began to appear. George Greenwood, a British barrister, sought to disqualify William Shakespeare from the authorship in The Shakespeare Problem Restated (1908), but did not support any alternative authors, thereby encouraging the search for candidates other than Bacon.[179] John M. Robertson published The Baconian Heresy: A Confutation in 1913, refuting the contention that Shakespeare had expert legal knowledge by showing that legalisms pervaded Elizabethan and Jacobean literature.[180] In 1916, on the three-hundredth anniversary of Shakespeare's death, Henry Watterson, the long-time editor of The Courier-Journal, wrote a widely syndicated front-page feature story supporting the Marlovian theory and, like Zeigler, created a fictional account of how it might have happened.[181] After the First World War, Professor Abel Lefranc, an authority on French and English literature, argued the case for William Stanley, 6th Earl of Derby, as the author based on biographical evidence he had gleaned from the plays and poems.[182]
With the appearance of J. Thomas Looney's Shakespeare Identified (1920),[183] Edward de Vere, 17th Earl of Oxford, quickly ascended as the most popular alternative author.[184] Two years later Looney and Greenwood founded the Shakespeare Fellowship, an international organisation to promote discussion and debate on the authorship question, which later changed its mission to propagate the Oxfordian theory.[185] In 1923 Archie Webster published "Was Marlowe the Man?" in The National Review, like Zeigler, Mendenhall and Watterson proposing that Marlowe wrote the works of Shakespeare, and arguing in particular that the Sonnets were an autobiographical account of his survival.[186] In 1932 Allardyce Nicoll announced the discovery of a manuscript that appeared to establish James Wilmot as the earliest proponent of Bacon's authorship,[187] but recent investigations have identified the manuscript as a forgery probably designed to revive Baconian theory in the face of Oxford's ascendancy.[188]
Another authorship candidate emerged in 1943 when writer Alden Brooks, in his Will Shakspere and the Dyer's hand, argued for Sir Edward Dyer.[189] Six years earlier Brooks had dismissed Shakespeare as the playwright by proposing that his role in the deception was to act as an Elizabethan "play broker", brokering the plays and poems on behalf of his various principals, the real authors. This view, of Shakespeare as a commercial go-between, was later adapted by Oxfordians.[190] After the Second World War, Oxfordism and anti-Stratfordism declined in popularity and visibility.[191] Copious archival research had failed to confirm Oxford or anyone else as the true author, and publishers lost interest in books advancing the same theories based on alleged circumstantial evidence. To bridge the evidentiary gap, both Oxfordians and Baconians began to argue that hidden clues and allusions in the Shakespeare canon had been placed there by their candidate for the benefit of future researchers.[192]
To revive interest in Oxford, in 1952 Dorothy and Charlton Ogburn Sr. published the 1,300-page This Star of England,[193] now regarded as a classic Oxfordian text.[194] They proposed that the "fair youth" of the sonnets was Henry Wriothesley, 3rd Earl of Southampton, the offspring of a love affair between Oxford and the Queen, and that the "Shakespeare" plays were written by Oxford to memorialise the passion of that affair. This became known as the "Prince Tudor theory", which postulates that the Queen's illicit offspring and his father's authorship of the Shakespeare canon were covered up as an Elizabethan state secret. The Ogburns found many parallels between Oxford's life and the works, particularly in Hamlet, which they characterised as "straight biography".[195] A brief upsurge of enthusiasm ensued, resulting in the establishment of the Shakespeare Oxford Society in the US in 1957.[196]
In 1955 Broadway press agent Calvin Hoffman revived the Marlovian theory with the publication of The Murder of the Man Who Was "Shakespeare".[197] The next year he went to England to search for documentary evidence about Marlowe that he thought might be buried in his literary patron Sir Thomas Walsingham's tomb.[198] Nothing was found.
A series of critical academic books and articles held in check any appreciable growth of anti-Stratfordism, as academics attacked its results and its methodology as unscholarly.[199] American cryptologists William and Elizebeth Friedman won the Folger Shakespeare Library Literary Prize in 1955 for a study of the arguments that the works of Shakespeare contain hidden ciphers. The study disproved all claims that the works contain ciphers, and was condensed and published as The Shakespearean Ciphers Examined (1957). Soon after, four major works were issued surveying the history of the anti-Stratfordian phenomenon from a mainstream perspective: The Poacher from Stratford (1958), by Frank Wadsworth, Shakespeare and His Betters (1958), by Reginald Churchill, The Shakespeare Claimants (1962), by H. N. Gibson, and Shakespeare and His Rivals: A Casebook on the Authorship Controversy (1962), by George L. McMichael and Edgar M. Glenn. In 1959 the American Bar Association Journal published a series of articles and letters on the authorship controversy, later anthologised as Shakespeare Cross-Examination (1961). In 1968 the newsletter of The Shakespeare Oxford Society reported that "the missionary or evangelical spirit of most of our members seems to be at a low ebb, dormant, or non-existent".[200] In 1974, membership in the society stood at 80.[201]
Authorship in the mainstream media
The freelance writer Charlton Ogburn Jr., elected president of The Shakespeare Oxford Society in 1976, promptly began a campaign to bypass the academic establishment; he believed it to be an "entrenched authority" that aimed to "outlaw and silence dissent in a supposedly free society". He proposed fighting for public recognition by portraying Oxford as a candidate on equal footing with Shakespeare.[202] In 1984 Ogburn published his 900-page The Mysterious William Shakespeare: the Myth and the Reality, and by framing the issue as one of fairness in the atmosphere of conspiracy that permeated America after Watergate, he used the media to circumvent academia and appeal directly to the public.[203] Ogburn's efforts secured Oxford as the most popular alternative candidate. He also kick-started the modern revival of the Oxfordian movement by adopting a policy of seeking publicity through moot court trials, media debates, television, and other outlets. These methods were later extended to the Internet, including Wikipedia.[204]
A device from Henry Peacham's Minerva Britanna (1612) has been used by Baconians and Oxfordians alike as coded evidence for concealed authorship of the Shakespeare canon.[205]
Ogburn believed that academics were best challenged by recourse to law, and on 25 September 1987 three justices of the Supreme Court of the United States convened a one-day moot court at the Metropolitan Memorial United Methodist Church, to hear the Oxfordian case. The trial was structured so that literary experts would not be represented, but the burden of proof was on the Oxfordians. The justices determined that the case was based on a conspiracy theory, and that the reasons given for this conspiracy were both incoherent and unpersuasive.[206] Although Ogburn took the verdict as a "clear defeat", Oxfordian columnist Joseph Sobran thought the trial had effectively dismissed any other Shakespeare authorship contender from the public mind and provided legitimacy for Oxford.[207] A retrial was organised the next year in the United Kingdom to potentially reverse the decision. Presided over by three Law Lords, the court was held in the Inner Temple in London on 26 November 1988. On this occasion Shakespearean scholars argued their case, and the outcome confirmed the American verdict.[208]
Due in part to the rising visibility of the authorship question, media coverage of the controversy increased, with many outlets focusing on the Oxfordian theory. In 1989 the Public Broadcasting Service television show Frontline broadcast "The Shakespeare Mystery", exposing the interpretation of Oxford-as-Shakespeare to more than 3.5 million viewers in the US alone.[209] This was followed in 1992 by a three-hour Frontline teleconference, "Uncovering Shakespeare: an Update", moderated by William F. Buckley, Jr.[210] In 1991 The Atlantic Monthly published a debate between Tom Bethell, presenting the case for Oxford,[211] and Irvin Leigh Matus, presenting the case for Shakespeare.[212] A similar print debate took place in 1999 in Harper's Magazine under the title "The Ghost of Shakespeare". Beginning in the 1990s Oxfordians and other anti-Stratfordians increasingly turned to the Internet to promulgate their theories, including creating several articles on Wikipedia about the candidates and the arguments, to such an extent that a survey of the field in 2010 judged that its presence on Wikipedia "puts to shame anything that ever appeared in standard resources".[213]
On 14 April 2007 the Shakespeare Authorship Coalition issued an Internet petition, the "Declaration of Reasonable Doubt About the Identity of William Shakespeare", coinciding with Brunel University's announcement of a one-year Master of Arts programme in Shakespeare authorship studies (since suspended). The coalition intended to enlist broad public support so that by 2016, the 400th anniversary of Shakespeare's death, the academic Shakespeare establishment would be forced to acknowledge that legitimate grounds for doubting Shakespeare's authorship exist, a goal that was not achieved.[214] More than 1,200 signatures were collected by the end of 2007, and as of 23 April 2016, the 400th anniversary of Shakespeare's death and the self-imposed deadline, the document had been signed by 3,348 people, including 573 self-described current and former academics. On 22 April 2007, The New York Times published a survey of 265 American Shakespeare professors on the Shakespeare authorship question. To the question of whether there is good reason to question Shakespeare's authorship, 6 per cent answered "yes", and 11 per cent "possibly". When asked their opinion of the topic, 61 per cent chose "A theory without convincing evidence" and 32 per cent chose "A waste of time and classroom distraction".[215]
In 2010 James S. Shapiro surveyed the authorship question in Contested Will: Who Wrote Shakespeare? Approaching the subject sociologically, Shapiro found its origins to be grounded in a vein of traditional scholarship going back to Edmond Malone, and criticised academia for ignoring the topic, which was, he argued, tantamount to surrendering the field to anti-Stratfordians.[216] Shapiro links the revival of the Oxfordian movement to the cultural changes that followed the Watergate conspiracy scandal that increased the willingness of the public to believe in governmental conspiracies and cover-ups,[217] and Robert Sawyer suggests that the increased presence of anti-Stratfordian ideas in popular culture can be attributed to the proliferation of conspiracy theories since the 9/11 attacks.[218]
In September 2011, Anonymous, a feature film based on the "Prince Tudor" variant of the Oxfordian theory, written by John Orloff and directed by Roland Emmerich, premiered at the Toronto International Film Festival. De Vere is portrayed as a literary prodigy who becomes the lover of Queen Elizabeth, with whom he sires Henry Wriothesley, 3rd Earl of Southampton, only to discover that he himself may be the Queen's son by an earlier lover. He eventually sees his suppressed plays performed through the front man, William Shakespeare, who is portrayed as an opportunistic actor and the movie's comic foil. Oxford agrees to Elizabeth's demand that he remain anonymous as part of a bargain for saving their son from execution as a traitor for supporting the Essex Rebellion against her.[219]
Two months before the release of the film, the Shakespeare Birthplace Trust launched a campaign attacking anti-Stratfordian arguments by means of a web site, 60 Minutes With Shakespeare: Who Was William Shakespeare?, containing short audio contributions recorded by actors, scholars and other celebrities,[220] which was quickly followed by a rebuttal from the Shakespeare Authorship Coalition.[221] Since then, Paul Edmondson and Stanley Wells have written a short e-book, Shakespeare Bites Back (2011),[222] and edited a longer book of essays by prominent academic Shakespeareans, Shakespeare Beyond Doubt (2013), in which Edmondson says that they had "decided to lead the Shakespeare Authorship Campaign because we thought more questions would be asked by our visitors and students because of Anonymous, because we saw, and continue to see, something very wrong with the way doubts about Shakespeare's authorship are being given academic credibility by the Universities of Concordia and Brunel, and because we felt that merely ignoring the anti-Shakespearians was inappropriate at a time when their popular voice was likely to be gaining more ground".[223]
Alternative candidates
While more than 80 historical figures have been nominated at one time or another as the true author of the Shakespearean canon,[10] only a few of these claimants have attracted significant attention.[224] In addition to sole candidates, various "group" theories have also achieved a notable level of interest.[225]
Group theories
Various group theories of Shakespearean authorship were proposed as early as the mid-19th century. Delia Bacon's The Philosophy of the Plays of Shakspere Unfolded (1857), the first book focused entirely on the authorship debate, also proposed the first "group theory". It attributed the works of Shakespeare to "a little clique of disappointed and defeated politicians" led by Sir Walter Raleigh which included Sir Francis Bacon and perhaps Edmund Spenser, Lord Buckhurst, and Edward de Vere, 17th Earl of Oxford.[226]
Gilbert Slater's The Seven Shakespeares (1931) proposed that the works were written by seven different authors: Francis Bacon, Edward de Vere, 17th Earl of Oxford, Sir Walter Raleigh, William Stanley, 6th Earl of Derby, Christopher Marlowe, Mary Sidney, Countess of Pembroke, and Roger Manners, 5th Earl of Rutland.[227] In the early 1960s, Edward de Vere, Francis Bacon, Roger Manners, William Herbert and Mary Sidney were suggested as members of a group referred to as "The Oxford Syndicate".[228] Christopher Marlowe, Robert Greene and Thomas Nashe have also been proposed as participants. Some variants of the group theory also include William Shakespeare of Stratford as the group's manager, broker and/or front man.[229]
Sir Francis Bacon
The leading candidate of the 19th century was one of the great intellectual figures of Jacobean England, Sir Francis Bacon, a lawyer, philosopher, essayist and scientist. Bacon's candidacy relies upon historical and literary conjectures, as well as alleged cryptographic evidence.[230]
Bacon was proposed as sole author by William Henry Smith in 1856 and as a co-author by Delia Bacon in 1857.[231] Smith compared passages such as Bacon's "Poetry is nothing else but feigned history" with Shakespeare's "The truest poetry is the most feigning" (As You Like It, 3.3.19–20), and Bacon's "He wished him not to shut the gate of your Majesty's mercy" with Shakespeare's "The gates of mercy shall be all shut up" (Henry V, 3.3.10).[232] Delia Bacon argued that there were hidden political meanings in the plays and parallels between those ideas and Bacon's known works. She proposed him as the leader of a group of disaffected philosopher-politicians who tried to promote republican ideas to counter the despotism of the Tudor-Stuart monarchies through the medium of the public stage.[233] Later Bacon supporters found similarities between a great number of specific phrases and aphorisms from the plays and those written by Bacon in his waste book, the Promus. In 1883, Mrs. Henry Pott compiled 4,400 parallels of thought or expression between Shakespeare and Bacon.[234]
In a letter addressed to John Davies, Bacon closes "so desireing you to bee good to concealed poets", which according to his supporters is self-referential.[235] Baconians argue that while Bacon outlined both a scientific and moral philosophy in The Advancement of Learning (1605), only the first part was published under his name during his lifetime. They say that his moral philosophy, including a revolutionary politico-philosophic system of government, was concealed in the Shakespeare plays because of its threat to the monarchy.[236]
Baconians suggest that the great number of legal allusions in the Shakespeare canon demonstrates the author's expertise in the law. Bacon became Queen's Counsel in 1596 and was appointed Attorney General in 1613. Bacon also paid for and helped write speeches for a number of entertainments, including masques and dumbshows, although he is not known to have authored a play. His only attributed verse consists of seven metrical psalm translations, in the tradition of Sternhold and Hopkins.[237]
Since Bacon was knowledgeable about ciphers,[238] early Baconians suspected that he left his signature encrypted in the Shakespeare canon. In the late 19th and early 20th centuries many Baconians claimed to have discovered ciphers throughout the works supporting Bacon as the true author. In 1881, C. F. Ashmead Windle, an American, claimed she had found carefully worked-out jingles in each play that identified Bacon as the author.[239] This sparked a cipher craze, and probative cryptograms were identified in the works by Ignatius Donnelly,[240] Orville Ward Owen, Elizabeth Wells Gallup,[241] and Dr. Isaac Hull Platt. Platt argued that the Latin word honorificabilitudinitatibus, found in Love's Labour's Lost, can be read as an anagram, yielding Hi ludi F. Baconis nati tuiti orbi ("These plays, the offspring of F. Bacon, are preserved for the world.").[242]
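Platt's reading is at least arithmetically checkable: an anagram requires the two strings to contain exactly the same letters. Below is a minimal sketch of that check, using the word and the Latin sentence exactly as quoted above; it verifies only the letter counts, not the plausibility of the reading.

```python
# Minimal check that the proposed Latin sentence uses exactly the same letters
# as the word from Love's Labour's Lost (i.e., that it is a true anagram).
from collections import Counter

def letter_counts(s):
    """Count alphabetic characters only, ignoring case, spaces and punctuation."""
    return Counter(c for c in s.lower() if c.isalpha())

word = "honorificabilitudinitatibus"
latin = "Hi ludi F. Baconis nati tuiti orbi"

print(letter_counts(word) == letter_counts(latin))  # True: both use the same 27 letters
```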
Edward de Vere, 17th Earl of Oxford
Since the early 1920s, the leading alternative authorship candidate has been Edward de Vere, 17th Earl of Oxford and Lord Great Chamberlain of England. Oxford followed his grandfather and father in sponsoring companies of actors, and he had also patronised a company of musicians and one of tumblers.[243] Oxford was an important courtier poet,[244] praised as such and as a playwright by George Puttenham and Francis Meres, who included him in a list of the "best for comedy amongst us". Examples of his poetry survive, but none of his theatrical works do.[245] Oxford was noted for his literary and theatrical patronage. Between 1564 and 1599, 33 works were dedicated to him, including works by Arthur Golding, John Lyly, Robert Greene and Anthony Munday.[246] In 1583 he bought the sublease of the first Blackfriars Theatre and gave it to the poet-playwright Lyly, who operated it for a season under Oxford's patronage.[247]
Oxfordians believe certain literary allusions indicate that Oxford was one of the most prominent "suppressed" anonymous and/or pseudonymous writers of the day.[248] They also note Oxford's connections to the London theatre and the contemporary playwrights of Shakespeare's day, his family connections including the patrons of Shakespeare's First Folio, his relationships with Queen Elizabeth I and Shakespeare's patron, the Earl of Southampton, his knowledge of Court life, his private tutors and education, and his wide-ranging travels through the locations of Shakespeare's plays in France and Italy.[249] The case for Oxford's authorship is also based on perceived similarities between Oxford's biography and events in Shakespeare's plays, sonnets and longer poems; perceived parallels of language, idiom, and thought between Oxford's letters and the Shakespearean canon; and the discovery of numerous marked passages in Oxford's Bible that appear in some form in Shakespeare's plays.[250]
The first to lay out a comprehensive case for Oxford's authorship was J. Thomas Looney, an English schoolteacher who identified personality characteristics in Shakespeare's works—especially Hamlet—that painted the author as an eccentric aristocratic poet, a drama and sporting enthusiast with a classical education who had travelled extensively to Italy.[251] He discerned close affinities between the poetry of Oxford and that of Shakespeare in the use of motifs and subjects, phrasing, and rhetorical devices, which led him to identify Oxford as the author.[184] After his Shakespeare Identified was published in 1920, Oxford replaced Bacon as the most popular alternative candidate.[252]
Oxford's purported use of the "Shakespeare" pen name is attributed to the stigma of print, a convention that aristocratic authors could not take credit for writing plays for the public stage.[253] Another motivation given is the politically explosive "Prince Tudor theory" that the youthful Oxford was Queen Elizabeth's lover; according to this theory, Oxford dedicated Venus and Adonis, The Rape of Lucrece, and the Sonnets to their son, England's rightful Tudor Prince, Henry Wriothesley, who was brought up as the 3rd Earl of Southampton.[194]
Oxfordians say that the dedication to the sonnets published in 1609 implies that the author was dead prior to their publication and that 1604 (the year of Oxford's death) was the year regular publication of "newly corrected" and "augmented" Shakespeare plays stopped.[254] Consequently, they date most of the plays earlier than the standard chronology and say that the plays which show evidence of revision and collaboration were left unfinished by Oxford and completed by other playwrights after his death.[255]
Christopher Marlowe
The poet and dramatist Christopher Marlowe was born into the same social class as Shakespeare—his father was a cobbler, Shakespeare's a glove-maker. Marlowe was the older by two months, and spent six and a half years at Cambridge University. He pioneered the use of blank verse in Elizabethan drama, and his works are widely accepted as having greatly influenced those of Shakespeare.[256] Of his seven plays, all but one or two were first performed before 1593.
The Marlovian theory argues that Marlowe's documented death on 30 May 1593 was faked. Thomas Walsingham and others are supposed to have arranged the faked death, the main purpose of which was to allow Marlowe to escape trial and almost certain execution on charges of subversive atheism.[257] The theory then argues that Shakespeare was chosen as the front behind whom Marlowe would continue writing his highly successful plays.[258] These claims are founded on inferences derived from the circumstances of his apparent death, stylistic similarities between the works of Marlowe and Shakespeare, and hidden meanings found in the works and associated texts.
Marlovians note that, despite Marlowe and Shakespeare being almost exactly the same age, the first work linked to the name William Shakespeare—Venus and Adonis—was on sale, with Shakespeare's name signed to the dedication, 13 days after Marlowe's reported death,[259] having been registered with the Stationers' Company on 18 April 1593 with no named author.[260] Lists of verbal correspondences between Marlowe's and Shakespeare's work have also been compiled.[261]
Marlowe's candidacy was initially suggested in 1892 by T. W. White, who argued that Marlowe was one of a group of writers responsible for the plays, the others being Shakespeare, Greene, Peele, Daniel, Nashe and Lodge.[262] He was first proposed as the sole author of Shakespeare's "stronger plays" in 1895 by Wilbur G. Zeigler.[263] His candidacy was revived by Calvin Hoffman in 1955 and, according to Shapiro, a recent surge in interest in the Marlowe case "may be a sign that the dominance of the Oxfordian camp may not extend much longer than the Baconian one".[264]
William Stanley, 6th Earl of Derby
William Stanley, 6th Earl of Derby, was first proposed as a candidate in 1891 by James Greenstreet, a British archivist, and later supported by Abel Lefranc and others.[265] Greenstreet discovered that a Jesuit spy, George Fenner, reported in 1599 that Derby "is busye in penning commodyes for the common players".[266] That same year Derby was recorded as financing one of London's two children's drama companies, Paul's Boys; he also had his own company, Derby's Men, which played multiple times at court in 1600 and 1601.[267] Derby was born three years before Shakespeare and died in 1642, so his lifespan fits the consensus dating of the works. His initials were W. S., and he was known to sign himself "Will", which qualified him to write the punning "Will" sonnets.[268]
Derby travelled in continental Europe in 1582, visiting France and possibly Navarre. Love's Labour's Lost is set in Navarre and the play may be based on events that happened there between 1578 and 1584.[269] Derby married Elizabeth de Vere, whose maternal grandfather was William Cecil,[270] thought by some critics to be the basis of the character of Polonius in Hamlet. Derby was associated with William Herbert, 3rd Earl of Pembroke, and his brother Philip Herbert, Earl of Montgomery and later 4th Earl of Pembroke, the "Incomparable Pair" to whom William Shakespeare's First Folio is dedicated.[271] When Derby released his estates to his son James around 1628–29, he named Pembroke and Montgomery as trustees. Derby's older brother, Ferdinando Stanley, 5th Earl of Derby, formed a group of players, the Lord Strange's Men, some of whose members eventually joined the King's Men, one of the companies most associated with Shakespeare.[272]
Notes
Footnotes
^The UK and US editions of Shapiro 2010 differ significantly in pagination. The citations to the book used in this article list the UK page numbers first, followed by the page numbers of the US edition in parentheses.
^The low figure is that of Manfred Scheler. The upper figure, from Marvin Spevack, is true only if all word forms (cat and cats counted as two different words, for example), compound words, emendations, variants, proper names, foreign words, onomatopoeic words, and deliberate malapropisms are included.
Citations
^Prescott 2010, p. 273: "'Anti-Stratfordian' is the collective name for the belief that someone other than the man from Stratford wrote the plays commonly attributed to him."; McMichael & Glenn 1962, p. 56.
^Kathman 2003, p. 621: "...antiStratfordism has remained a fringe belief system"; Schoenbaum 1991, p. 450; Paster 1999, p. 38: "To ask me about the authorship question ... is like asking a palaeontologist to debate a creationist's account of the fossil record."; Nelson 2004, pp. 149–51: "I do not know of a single professor of the 1,300-member Shakespeare Association of America who questions the identity of Shakespeare ... antagonism to the authorship debate from within the profession is so great that it would be as difficult for a professed Oxfordian to be hired in the first place, much less gain tenure..."; Carroll 2004, pp. 278–9: "I have never met anyone in an academic position like mine, in the Establishment, who entertained the slightest doubt as to Shakespeare's authorship of the general body of plays attributed to him."; Pendleton 1994, p. 21: "Shakespeareans sometimes take the position that to even engage the Oxfordian hypothesis is to give it a countenance it does not warrant."; Sutherland & Watts 2000, p. 7: "There is, it should be noted, no academic Shakespearian of any standing who goes along with the Oxfordian theory."; Gibson 2005, p. 30: "...most of the great Shakespearean scholars are to be found in the Stratfordian camp..."
^Taylor 1989, p. 167: By 1840, admiration for Shakespeare throughout Europe had become such that Thomas Carlyle "could say without hyperbole" that "'Shakspeare is the chief of all Poets hitherto; the greatest intellect who, in our recorded world, has left record of himself in the way of literature.'"
^Dobson 2001, p. 31: "These two notions—that the Shakespeare canon represented the highest achievement of human culture, while William Shakespeare was a completely uneducated rustic—combined to persuade Delia Bacon and her successors that the Folio's title page and preliminaries could only be part of a fabulously elaborate charade orchestrated by some more elevated personage, and they accordingly misread the distinctive literary traces of Shakespeare's solid Elizabethan grammar-school education visible throughout the volume as evidence that the 'real' author had attended Oxford or Cambridge."
^Bate 1998, p. 90: "Their [Oxfordians'] favorite code is the hidden personal allusion ... But this method is in essence no different from the cryptogram, since Shakespeare's range of characters and plots, both familial and political, is so vast that it would be possible to find in the plays 'self-portraits' of, once more, anybody one cares to think of."; Love 2002, pp. 87, 200: "It has more than once been claimed that the combination of 'biographical-fit' and cryptographical arguments could be used to establish a case for almost any individual ... The very fact that their application has produced so many rival claimants demonstrates their unreliability." Shapiro 2010, pp. 304–13 (268–77); Schoone-Jongen 2008, p. 5: "in voicing dissatisfaction over the apparent lack of continuity between the certain facts of Shakespeare's life and the spirit of his literary output, anti-Stratfordians adopt the very Modernist assumption that an author's work must reflect his or her life. Neither Shakespeare nor his fellow Elizabethan writers operated under this assumption."; Smith 2008, p. 629: "...deriving an idea of an author from his or her works is always problematic, particularly in a multi-vocal genre like drama, since it crucially underestimates the heterogeneous influences and imaginative reaches of creative writing."
^Wadsworth 1958, pp. 163–4: "The reasons we have for believing that William Shakespeare of Stratford-on-Avon wrote the plays and poems are the same as the reasons we have for believing any other historical event ... the historical evidence says that William Shakespeare wrote the plays and poems."; McCrea 2005, pp. xii–xiii, 10; Nelson 2004, p. 162: "Apart from the First Folio, the documentary evidence for William Shakespeare is the same as we get for other writers of the period..."
^Love 2002, pp. 198–202, 303–7: "The problem that confronts all such attempts is that they have to dispose of the many testimonies from Will the player's own time that he was regarded as the author of the plays and the absence of any clear contravening public claims of the same nature for any of the other favoured candidates."; Bate 1998, pp. 68–73.
^Bate 1998, p. 73: "No one in Shakespeare's lifetime or the first two hundred years after his death expressed the slightest doubt about his authorship."; Hastings 1959, pp. 486–8: "...no suspicions regarding Shakespeare's authorship (except for a few mainly humorous comments) were expressed until the middle of the nineteenth century".
^Dobson 2001, p. 31; Greenblatt 2005: "The idea that William Shakespeare's authorship of his plays and poems is a matter of conjecture and the idea that the 'authorship controversy' be taught in the classroom are the exact equivalent of current arguments that 'intelligent design' be taught alongside evolution. In both cases an overwhelming scholarly consensus, based on a serious assessment of hard evidence, is challenged by passionately held fantasies whose adherents demand equal time."
^Price 2001, p. 9: "Nevertheless, the skeptics who question Shakespeare's authorship are relatively few in number, and they do not speak for the majority of academic and literary professionals."
^Wells 2003, p. 388; Dobson 2001, p. 31: "Most observers, however, have been more impressed by the anti-Stratfordians' dogged immunity to documentary evidence"; Shipley 1943, p. 38: "the challenger would still need to produce evidence in favour of another author. There is no such evidence."; Love 2002, p. 198: "...those who believe that other authors were responsible for the canon as a whole ... have been forced to invoke elaborate conspiracy theories."; Wadsworth 1958, p. 6: "Paradoxically, the skeptics invariably substitute for the easily explained lack of evidence concerning William Shakespeare, the more troublesome picture of a vast conspiracy of silence about the 'real author', with a total lack of historical evidence for the existence of this 'real author' explained on the grounds of a secret pact"; Shapiro 2010, p. 255 (225): "Some suppose that only Shakespeare and the real author were in the know. At the other extreme are those who believe that it was an open secret".
^Kells, Stuart (2019). Shakespeare's Library: Unlocking the Greatest Mystery in Literature. Counterpoint. Introduction. ISBN 978-1640091832: "Not a trace of his library was found. No books, no manuscripts, no letters, no diaries. The desire to get close to Shakespeare was unrequited, the vacuum palpable."
^Shipley 1943, pp. 37–8; Bethell 1991, pp. 48, 50; Schoone-Jongen 2008, p. 5; Smith 2008, p. 622: "Fuelled by scepticism that the plays could have been written by a working man from a provincial town with no record of university education, foreign travel, legal studies or court preferment, the controversialists proposed instead a sequence of mainly aristocratic alternative authors whose philosophically or politically occult meanings, along with their own true identity, had to be hidden in codes, cryptograms and runic obscurity."
^Callaghan 2013, p. 11: "It is a 'fact' that the survival rate for early modern documents is low and that Shakespeare lived in a world prior to the systematic, all-inclusive collection of data that provides the foundation of modern bureaucracy."
^Matus 1994, p. 47: "...on the mysterious disappearance of the accounts of the highest immediate authority over theatre in Shakespeare's age, the Lord Chamberlains of the Household. Ogburn imagines that these records, like those of the Stratford grammar school, might have been deliberately eradicated 'because they would have showed how little consequential a figure Shakspere cut in the company.'"
^Matus 1994, p. 32: "Ogburn gives voice to his suspicion that the school records disappeared because they would have revealed William's name did not appear among those who attended it."
^Price 2001, pp. 213–7, 262; Crinkley 1985, p. 517: "It is characteristic of anti-Stratfordian books that they make a list of what Shakespeare must have been—a courtier, a lawyer, a traveler in Italy, a classicist, a falconer, whatever. Then a candidate is selected who fits the list. Not surprisingly, different lists find different candidates."
^Barrell 1940, p. 6: "The main contention of these anti-Stratfordians is that 'William Shakespeare' was a pen-name, like 'Molière,' 'George Eliot,' and 'Mark Twain,' which in this case cloaked the creative activities of a master scholar in high circles".
^Matus 1994, pp. 166, 266–7, cites James Lardner, "Onward and Upward with the Arts: the Authorship Question", The New Yorker, 11 April 1988, p. 103: "No obituaries marked his death in 1616, no public mourning. No note whatsoever was taken of the passing of the man who, if the attribution is correct, would have been the greatest playwright and poet in the history of the English language."; Shapiro 2010, p. 243.
^Wadsworth 1958, pp. 163–4; Murphy 1964, p. 4: "For the evidence that William Shakespeare of Stratford-on-Avon (1564–1616) wrote the works attributed to him is not only abundant but conclusive. It is of the kind, as Sir Edmund Chambers puts it, 'which is ordinarily accepted as determining the authorship of early literature.'"; Nelson 2004, p. 149: "Even the most partisan anti-Stratfordian or Oxfordian agrees that documentary evidence taken on its face value supports the case for William Shakespeare of Stratford-upon-Avon ... as author of the poems and plays"; McCrea 2005, pp. xii–xiii, 10.
^Dawson 1953, p. 165: "...in my opinion it is the basic unsoundness of method in this and other works of similar subject matter that explains how sincere and intelligent men arrive at such wild conclusions"; Love 2002, p. 200; McCrea 2005, p. 14; Gibson 2005, p. 10.
^Pendleton 1994, p. 29: "...since he had, as Clarenceux King, responded less than three years earlier to Brooke's attack on the grant of arms to the father of 'Shakespeare ye Player' ... Camden thus was aware that the last name on his list was that of William Shakespeare of Stratford. The Camden reference, therefore, is exactly what the Oxfordians insist does not exist: an identification by a knowledgeable and universally respected contemporary that 'the Stratford man' was a writer of sufficient distinction to be ranked with (if after) Sidney, Spenser, Daniel, Holland, Jonson, Campion, Drayton, Chapman, and Marston. And the identification even fulfils the eccentric Oxfordian ground-rule that it be earlier than 1616."
^Price 1997, pp. 168, 173: "While Hollar conveyed the general impressions suggested by Dugdale's sketch, few of the details were transmitted with accuracy. Indeed, Dugdale's sketch gave Hollar few details to work with ... As with other sketches in his collection, Dugdale made no attempt to draw a facial likeness, but appears to have sketched one of his standard faces to depict a man with facial hair. Consequently, Hollar invented the facial features for Shakespeare. The conclusion is obvious: in the absence of an accurate and detailed model, Hollar freely improvised his image of Shakespeare's monument. That improvisation is what disqualifies the engraving's value as authoritative evidence."
^Love 2002, p. 81: "As has often been pointed out, if Shakespeare had read all the books claimed to have influenced him, he would never have had time to write a word of his own. He probably picked up many of his ideas from conversation. If he needed legal knowledge it was easier to extract this from Inns-of-Court drinkers in the Devil Tavern than to search volumes of precedents."
^Nosworthy 2007, p. xv: "we should beware of assuming Shakespeare's wholesale dependence on books. The stories, to any educated Elizabethan, were old and familiar ones".
^Simonton 2004, p. 210: "If the Earl of Oxford wrote these plays, then he not only displayed minimal stylistic development over the course of his career (Elliot & Valenza, 2000), but he also wrote in monastic isolation from the key events of his day."
^Simonton 2004, p. 210, note 4: "For the record, I find the traditional attribution to William Shakespeare of Stratford highly improbable ... I really would like Edward de Vere to be the author of the plays and poems ... Thus, I had hoped that the current study might strengthen the case on behalf of the Oxfordian attribution. I think that expectation was proven wrong."
Churchill, Reginald Charles (1958). Shakespeare and His Betters: A History and a Criticism of the Attempts Which Have Been Made to Prove That Shakespeare's Works Were Written by Others. London: Max Reinhardt.
Niederkorn, William S. (2004). "Jumping O'er Times: The Importance of Lawyers and Judges in the Controversy over the Identity of Shakespeare, as Reflected in the Pages of the New York Times". Tennessee Law Review. Tennessee Law Review Association. 72 (1): 67–92. ISSN 0040-3288.
|
Neither Shakespeare nor his fellow Elizabethan writers operated under this assumption."; Smith 2008, p. 629: "...deriving an idea of an author from his or her works is always problematic, particularly in a multi-vocal genre like drama, since it crucially underestimates the heterogeneous influences and imaginative reaches of creative writing."
^Wadsworth 1958, pp. 163–4: "The reasons we have for believing that William Shakespeare of Stratford-on-Avon wrote the plays and poems are the same as the reasons we have for believing any other historical event ... the historical evidence says that William Shakespeare wrote the plays and poems."; McCrea 2005, pp. xii–xiii, 10; Nelson 2004, p. 162: "Apart from the First Folio, the documentary evidence for William Shakespeare is the same as we get for other writers of the period..."
^Love 2002, pp. 198–202, 303–7: "The problem that confronts all such attempts is that they have to dispose of the many testimonies from Will the player's own time that he was regarded as the author of the plays and the absence of any clear contravening public claims of the same nature for any of the other favoured candidates."; Bate 1998, pp. 68–73.
^Bate 1998, p. 73: "No one in Shakespeare's lifetime or the first two hundred years after his death expressed the slightest doubt about his authorship."; Hastings 1959, pp.
|
yes
|
Bibliography
|
Was Shakespeare the real author of all his plays and poems?
|
yes_statement
|
"shakespeare" is the true "author" of all his "plays" and "poems".. all of "shakespeare"'s "plays" and "poems" were written by him.
|
https://www.psu.edu/news/research/story/probing-question-did-shakespeare-really-write-all-those-plays/
|
Probing Question: Did Shakespeare really write all those plays ...
|
Probing Question: Did Shakespeare really write all those plays?
"Done to death by slanderous tongues." So wrote William Shakespeare in his play, Much Ado About Nothing. Or did he? Even people who have never actually read Shakespeare have heard the theories: Shakespeare's plays were written by Francis Bacon! Shakespeare's plays were written by the Earl of Oxford! Shakespeare's plays were written by anyone, anyone, but William Shakespeare!
"Lunacy," says Patrick Cheney, Distinguished Professor of English and Comparative Literature, gesturing to the early twentieth-century inventor of the Oxford theory, J. Thomas Looney. "The Shakespeare authorship controversy is all conspiracy. Not a single reputable scholar I know has the least doubt that William Shakespeare of Stratford-upon-Avon wrote the plays and poems ascribed to him."
One of the chief arguments of those who doubt his authorship is that Shakespeare lacked the education and experience to have produced such a wide-ranging body of work. Not so, argues Cheney, noting that William Shakespeare had a superior education, some of it acquired from grammar school in Stratford, but much expanded upon as an adult. Adds Cheney, research shows that even in a pre-library age, Shakespeare had a good deal of access to books. "Shakespeare was not simply a genius; he was by all accounts a voracious reader: the plots from nearly all his plays and poems come from books."
As for lacking experience, anti-Stratfordians (as the authorship doubters are sometimes called) usually point to scenes featuring royals or to plays set in foreign countries, and argue that a provincial commoner such as Shakespeare could not have been familiar enough with these topics to have written his worldly plays. Cheney is not impressed by such arguments. "Neither royalty nor international travel has ever been a prerequisite for good fiction," he notes. "As a member of a royal acting company, Shakespeare had plenty of opportunity to experience the courts of sovereigns first-hand. And as an avid reader of history, he could certainly re-create a foreign country in his fictions."
The most popular of the anti-Stratfordian theories is that the plays attributed to Shakespeare were written by the Earl of Oxford. However, explains Cheney, Oxford died in 1604, and significant evidence indicates that some of Shakespeare's work was produced years later. (For instance, The Tempest was influenced by a voyage to the Americas that did not occur until 1610). "The case for Oxford depends on the erasure of history," says Cheney.
The entire authorship controversy itself "is a product of modernity," he adds, noting, "For over two hundred years after Shakespeare's death, it did not occur to anyone to challenge his authorship."
Explains Cheney, the rising middle class of the nineteenth century could not believe that a mere country stripling could have written what scholar Stephen Greenblatt calls "the most important body of imaginative literature of the last thousand years." But those who can't believe that a man with a grammar-school education wrote these plays and poems overlook a sobering fact of literary history: the inventors of modern English literature were overwhelmingly from the working class. "Not only was Shakespeare the son of a glover, but Ben Jonson was the son of a bricklayer, and Edmund Spenser the son of a tailor, while Christopher Marlowe was the son of a butcher," says Cheney. "The case for the Earl of Oxford is about the belief of class-conscious gentlemen that only an aristocrat could produce great works of literature. Perhaps we should let Spenser, Marlowe, and Jonson know."
Cheney believes there is an important question now being asked about Shakespeare's authorship, and it has nothing to do with the Earl of Oxford. Instead, it asks what kind of author William Shakespeare really was. "Was he a consummate businessman concerned only with the commercial success of his acting company, or was he also a literary poet-playwright who cared about preserving his artistic legacy?" In two recent books, Cheney has tried to reclassify Shakespeare as at once a man of the theater and a writer with a literary career: "Our fullest understanding of Shakespeare needs to come to terms with both."
Says Cheney: "It is true, when students come into my Shakespeare courses, they typically want to ask only a single question: 'Did Shakespeare really write all his plays?' When they leave, I hope they're more inclined to ask, 'How did it come to be that the world's greatest man of the theater also penned some of the most extraordinary poems in English?' Shakespeare wrote those plays—and poems. Read them; see them: listen to them. They are our great cultural inheritance, the real legacy of William Shakespeare."
Patrick Cheney, Ph. D., is Distinguished Professor of English and Comparative Literature in the College of the Liberal Arts. His book Shakespeare's Literary Authorship is just out from Cambridge University Press. You can reach him at [email protected].
|
Probing Question: Did Shakespeare really write all those plays?
"Done to death by slanderous tongues." So wrote William Shakespeare in his play, Much Ado About Nothing. Or did he? Even people who have never actually read Shakespeare have heard the theories: Shakespeare's plays were written by Francis Bacon! Shakespeare's plays were written by the Earl of Oxford! Shakespeare's plays were written by anyone, anyone, but William Shakespeare!
"Lunacy," says Patrick Cheney, Distinguished Professor of English and Comparative Literature, gesturing to the early twentieth-century inventor of the Oxford theory, J. Thomas Looney. "The Shakespeare authorship controversy is all conspiracy. Not a single reputable scholar I know has the least doubt that William Shakespeare of Stratford-upon-Avon wrote the plays and poems ascribed to him. "
One of the chief arguments of those who doubt his authorship is that Shakespeare lacked the education and experience to have produced such a wide-ranging body of work. Not so, argues Cheney, noting that William Shakespeare had a superior education, some of it acquired from grammar school in Stratford, but much expanded upon as an adult. Adds Cheney, research shows that even in a pre-library age, Shakespeare had a good deal of access to books. "Shakespeare was not simply a genius; he was by all accounts a voracious reader: the plots from nearly all his plays and poems come from books. "
As for lacking experience, anti-Stratfordians (as the authorship doubters are sometimes called) usually point to scenes featuring royals or to plays set in foreign countries, and argue that a provincial commoner such as Shakespeare could not have been familiar enough with these topics to have written his worldly plays. Cheney is not impressed by such arguments. "Neither royalty nor international travel has ever been a prerequisite for good fiction," he notes. "As a member of a royal acting company, Shakespeare had plenty of opportunity to experience the courts of sovereigns first-hand. And as an avid reader of history, he could certainly re-create a foreign country in his fictions. "
The most popular of the anti-Stratfordian theories is that the plays attributed to Shakespeare were written by the Earl of Oxford. However, explains Cheney,
|
yes
|
Bibliography
|
Was Shakespeare the real author of all his plays and poems?
|
yes_statement
|
"shakespeare" is the true "author" of all his "plays" and "poems".. all of "shakespeare"'s "plays" and "poems" were written by him.
|
https://shakespeareauthorship.com/howdowe.html
|
How We Know That Shakespeare Wrote Shakespeare: The ...
|
3d. Shakespeare bought the Blackfriars Gatehouse in London in 1613. On the deed dated 10 March 1613, John Hemmyng, gentleman (also spelled Hemming on the same page) acted as trustee for the buyer, "William Shakespeare of Stratford-upon-Avon." This property is disposed of in Shakespeare's will.
So William Shakespeare of Stratford-upon-Avon, gentleman, was the actor who performed in the plays in the company for which William Shakespeare wrote plays. Shakespeare was also a sharer in the syndicate that owned the Globe theater. There were three parties to the agreement: Nicholas Brend, who owned the grounds upon which the Globe was built; Cuthbert and Richard Burbage, who were responsible for half the lease; and five members of the Chamberlain's Men -- William Shakespeare, John Heminges, Augustine Philips, Thomas Pope, and William Kempe -- who were responsible for the other half of the lease. Each of these men had a 1/10 share in the profits. The share dropped to 1/12 when Henry Condell and William Sly joined in 1605-08, and dropped to 1/14 in 1611 when Ostler came in.
It may seem like overkill to ask if William Shakespeare the Globe-sharer was the same William Shakespeare of Stratford-upon-Avon, gentleman, since all the sharers were obviously members of the acting company. That he was the same man is easily proven by legal documents.
4b. In a mortgage deed of trust dated 7 October 1601 by Nicholas Brend to John Bodley, John Collet, and Matthew Browne, in which Bodley was given control of the Globe playhouse, the Globe is described as being tenanted by "Richard Burbadge and Willm Shackspeare gent."
4c. In a deed of trust dated 10 October 1601 by Nicholas Brend to John Bodley, legally tightening up the control of Bodley of the Globe, again the theater is described as being tenanted by "Richard Burbage and William Shakspeare gentlemen."
4d. In a deed of sale of John Collet's interest to John Bodley in 1608, the Globe is once more described as being tenanted by "Richard Burbadge and Willm Shakespeare, gent."
(Notice the variation in spelling of Shakespeare's surname between the three documents, all originating in London. For some reason variants of the name seem to be a major point in the minds of some Oxfordians, but such differences are no more significant than similar variants of Richard Burbage's name in the same documents. See The Spelling and Pronunciation of Shakespeare's Name.)
So now we've established that William Shakespeare of Stratford-upon-Avon was an actor in the company that performed the plays of William Shakespeare, and was also a sharer in the theater in which the plays were presented. To anyone with a logical mind, it follows that this William Shakespeare of Stratford-upon-Avon was also the writer of the plays and poems that bear his name. He is the man with the right name, at the right time, and at the right place.
Now, it is true that there exists no play or poem attributed to "William Shakespeare of Stratford-upon-Avon." The name on the works is "William Shakespeare." There also exists no comparable attribution for virtually any of Shakespeare's contemporaries, the only exceptions being some cases where some ambiguity might exist, such as "John Davies of Hereford" and "William Drummond of Hawthornden." But his contemporaries knew who he was, and there was never any doubt in the minds of those who knew him. Following is the most important evidence of this.
5c. In 1615 Edmund Howes published a list of "Our moderne, and present excellent Poets" in John Stow's Annales. He lists the poets "according to their priorities (social rank) as neere I could," with Knights listed first, followed by gentlemen. In the middle of the 27 listed, number 13 is "M. Willi. Shakespeare gentleman."
5f. In the First Folio, John Heminges and Henry Condell said they published the Folio "onely to keepe the memory of so worthy a Friend, & Fellow alive, as was our Shakespeare, by humble offer of his playes." Heminges and Condell had been fellow actors with William Shakespeare in the King's Men for many years, and had been remembered in his will.
5g. In the same volume, Ben Jonson wrote a poem "To the memory of my beloved, The Author Mr. William Shakespeare," in which he says,
Sweet Swan of Avon! what a sight it were
To see thee in our waters yet appeare,
And make those flights upon the bankes of Thames,
That so did take Eliza, and our James!
Here not only does Jonson tie the author to William Shakespeare of Stratford-upon-Avon, but he puts him in James I's court. (See 2c and 2d above.) Oxfordians sometimes attempt to claim that this evidence could apply to Oxford by asserting that Oxford owned an estate on the Avon river. While it's true that one of the many estates Oxford inherited from his father was at Bilton on the Avon river, he sold this estate in 1580 (43 years before Jonson's poem), and there is no evidence that he was ever physically present there.
5h. Also in the Folio, Leonard Digges wrote an elegy "To the Memorie of the deceased Authour Maister W. Shakespeare," in which he refers to "thy Stratford Moniment." Digges presumably knew what he was talking about; he was the stepson of William Shakespeare's friend Thomas Russell, and had close ties to Stratford for most of his life. The only surviving letter by him, written a few years before his death, contains gossip of the "mad relations of Stratford," including Thomas Combe, to whom William Shakespeare had left his ceremonial sword in his will.
5n. Sir Richard Baker, a contemporary of Shakespeare and a friend of John Donne, published Chronicle of the Kings of England in 1643. Sir Richard was an avid fan of the theater, also writing Theatrum Redivivum, or the Theatre Vindicated. In the Chronicle, for Elizabeth's reign he notes statesmen, seamen, and soldiers, and literary figures who are mostly theologians with the exception of Sidney. In conclusion he says,
After such men, it might be thought ridiculous to speak of Stage-players; but seeing excellency in the meanest things deserves remembering . . . For writers of Playes, and such as had been Players themselves, William Shakespear and Benjamin Johnson, have specially left their Names recommended to Posterity.
|
"
(Notice the variation in spelling of Shakespeare's surname between the three documents, all originating in London. For some reason variants of the name seem to be a major point in the minds of some Oxfordians, but such differences are no more significant than similar variants of Richard Burbage's name in the same documents. See The Spelling and Pronunciation of Shakespeare's Name.)
So now we've established that William Shakespeare of Stratford-upon-Avon was an actor in the company that performed the plays of William Shakespeare, and was also a sharer in the theater in which the plays were presented. To anyone with a logical mind, it follows that this William Shakespeare of Stratford-upon-Avon was also the writer of the plays and poems that bear his name. He is the man with the right name, at the right time, and at the right place.
Now, it is true that there exists no play or poem attributed to "William Shakespeare of Stratford-upon-Avon." The name on the works is "William Shakespeare." There also exists no comparable attribution for virtually any of Shakespeare's contemporaries, the only exceptions being some cases where some ambiguity might exist, such as "John Davies of Hereford" and "William Drummond of Hawthornden." But his contemporaries knew who he was, and there was never any doubt in the minds of those who knew him. Following is the most important evidence of this.
5c. In 1615 Edmund Howes published a list of "Our moderne, and present excellent Poets" in John Stow's Annales. He lists the poets "according to their priorities (social rank) as neere I could," with Knights listed first, followed by gentlemen. In the middle of the 27 listed, number 13 is "M. Willi. Shakespeare gentleman."
5f.
|
yes
|
Bibliography
|
Was Shakespeare the real author of all his plays and poems?
|
yes_statement
|
"shakespeare" is the true "author" of all his "plays" and "poems".. all of "shakespeare"'s "plays" and "poems" were written by him.
|
https://www.shakespeare.org.uk/explore-shakespeare/shakespedia/william-shakespeare/shakespeare-authorship-question/
|
The Shakespeare Authorship Question
|
Did Shakespeare Really Write All of His Plays?
Created in collaboration with Warwick Business School, University of Warwick as part of our Massive Open Online Course, 'Shakespeare and His World'.
Transcript
Reid: Hi, I'm Jennifer Reid, and I am the course mentor on Shakespeare and His World. I'm here today with Jonathan [Bate] to talk about the authorship question. We're in week one of Shakespeare and His World, and it's bound to have come up by now. So we'd like to just address the question in person and put it to bed once and for all.
Bate: OK, thanks very much, Jen. Yeah, this is the one that if you're a Shakespeare scholar, and you get in a taxi anywhere in the world, the first question is, "So, was Shakespeare really Shakespeare? Was it the man from Stratford?" Well, the answer to that is yes.
The thing about any kind of scholarship is that you begin with the evidence. And there is ample evidence that William Shakespeare, a man from Stratford-upon-Avon, and born in this place, became an actor, became a playwright, then eventually returned to Stratford and died.
Behind me on the wall is a facsimile of his bust in Holy Trinity Church here in Stratford, in which he's got his hand on a piece of paper. In the other hand, there would have been a quill, although, over time, the quill tended to be stolen and had to be replaced. And underneath that bust, there's also an inscription. This was the bust put there - his monument above his grave - very soon after his death. And on that inscription, it describes him as having the greatest intellect since Socrates in ancient Greece, and being the greatest poet since Virgil in ancient Rome.
There's pretty strong evidence that Stratford, his family, his neighbours, remembered him as a great writer. And there's so much more evidence than that, that the writer was the actor, the actor was the man from Stratford.
I've got in front of me here a facsimile of the First Folio. We'll be talking more about Shakespeare in print and the First Folio throughout the course. But early on in the book is a wonderful poem in praise of Shakespeare by Ben Jonson. Ben Jonson; friend, rival, fellow actor, fellow playwright. And he describes Shakespeare there, in that poem, as "the sweet swan of Avon."
He makes it clear that his fellow writer, the author of these 36 plays is a writer from by the river Avon. Shakespeare, known as a man from Stratford. And what's more, Jonson was very involved in the production of the First Folio. He worked closely with Shakespeare's fellow actors, John Heminges and Henry Condell, the leading surviving actors from his time. They're the ones who put together the First Folio, and they talk about Shakespeare as a writer. Indeed, Jonson talks in his conversations with other writers in his notebooks about Shakespeare's techniques of writing.
Of course, Heminges and Condell, the fellow actors, are remembered in the will of Shakespeare, the man from Stratford. So there's a tight nexus of relationships between these people. There are all sorts of other local details as well. For example, the fact that Shakespeare got into print with "Venus and Adonis", his narrative poem, the most popular poem of the age, the poem that made his name.
That was printed by Richard Field, a fellow schoolboy from the grammar school here in Stratford.
Reid: The First Folio was published after Shakespeare's death. Are there any references to him during his lifetime by other authors?
Bate: Yes, indeed. Throughout his life, there are a range of people who refer to Shakespeare as a writer, and indeed as a great writer. I've got a fascinating book here. It's called Wits Commonwealth. It was published in 1598, so quite early in Shakespeare's career, by a man called Francis Meres, who was very keen on literature. He wanted to give a sense of the greatness, the dignity, of all the new English literature being written in the 1590s, in his time. He wants to say that British writers are as good as those of classical antiquity.
So we find him here, for example, saying that just as the Latin tongue, the Latin language was glorified by great writers like Virgil, and Ovid, and Horace, so the English language has been glorified by the wonderful poetry of Sir Philip Sidney, Edmund Spenser, Samuel Daniel, Michael Drayton, William Warner, Shakespeare, Marlowe, and Chapman.
So Shakespeare there, in the company of other writers. And indeed, Meres goes on a few pages later to say that "The greatness of Shakespeare as a writer was the range of his work." Not only his poems, which Meres suggests are like those of the Roman poet Ovid, but also his comedies and tragedies.
Now, you might say, if you're a conspiracy theorist, well, that's only saying that these works were performed and published with the name William Shakespeare on the page. Maybe he was just a stooge, just a front man, and someone else actually wrote them. And, of course, over the years there have been a number of theories of this sort. People like Christopher Marlowe and, indeed, a variety of aristocrats, Lord Bacon, the Earl of Oxford, have been proposed as the true author of Shakespeare. But the intriguing thing about Meres is that he does mention Christopher Marlowe, Francis Bacon, the Earl of Oxford, elsewhere as writers. Yes, these other men did write. But Meres, who seemed to know everybody in the London literary world, is quite clear that they're different people from Shakespeare.
So the evidence of Meres, and, as I say, a number of other people publishing books in Shakespeare's lifetime praising his poetry, make the connection with the actor, with the man from Stratford.
Reid: So what is the strongest piece of evidence we have that Shakespeare the actor from Stratford-upon-Avon was the Shakespeare that wrote the plays?
Bate: OK, well, we've seen the evidence from Stratford itself -- the bust. We've seen the evidence of his fellow actors. But in terms of external verification -- again, scholarship always looks for external verification. That's a way of obviating the idea that, oh, it was all a conspiracy, and Ben Jonson and Shakespeare's family were all in on it. But I think the most fascinating piece of external verification is the combination of these two things, a document and a book.
As we'll discover later in the course, Shakespeare was very concerned with the fact that his father's reputation had decayed as a result of financial problems. And Shakespeare was very keen to restore the good name of his family. So acting on behalf of his family, he managed to get a coat of arms for the family so he could call himself a gentleman. And there's a long process, getting a coat of arms. You had to go to an office called the heralds' office. But he duly got it, and the coat of arms is reproduced here. But one of the officials in the heralds' office who gave out these coats of arms said that various people from vulgar backgrounds, sort of insufficiently high-class people, were getting coats of arms. And among them, he said, was Shakespeare the player.
Now, there were two other men in the heralds' office who disagreed, and they defended Shakespeare's right to have a coat of arms on the grounds that his father and mother had a good pedigree in Stratford-upon-Avon. So the complaint about the coat of arms for Shakespeare the player is intimately linked to the references back to Stratford. So nobody doubts that Shakespeare the player, came from Stratford, was the son of John Shakespeare and Mary Arden.
But the really interesting thing is that one of the two men in the heralds' office who defended Shakespeare the player's right to a coat of arms also spoke about Shakespeare the writer, Shakespeare the poet and dramatist. And what's more, that man was William Camden, one of the most learned men in England. And he'd been Ben Jonson's schoolmaster at Westminster School. He knew the literary scene inside out. And in one of his books, which is a kind of overflow from his history of England – it was called "The Remains of a Greater History" – he talks about the great writers, the pregnant wits, as he calls them, of his own time. And there's a list of the writers there, and William Shakespeare is bang in the middle of it.
Reid: Well, you certainly convinced me. If there is such strong evidence, why is there this controversy, and when did it start?
Bate: Well, that's a great question. I think the way to begin an answer to that is to think about other conspiracy theories. Was there a second gunman assassinating John F. Kennedy? Was Marilyn Monroe secretly murdered? I think the answer is wherever there is great fame and a kind of cult, then inevitably, heresies, alternative views, conspiracy theories tend to emerge. Elvis is alive and well, and that kind of idea.
So if we ask ‘When did this begin’, the idea that Shakespeare, the actor from Stratford, was not the author of the plays, the answer is round about the Victorian period. That's to say, for over 200 years after Shakespeare's death, nobody questioned that Shakespeare, the man from Stratford, Shakespeare the player, was Shakespeare the writer. For 200 years the question didn't occur to anybody. Nobody had any doubts. What then happened in the Victorian period is there was a rather eccentric American lady called Delia Bacon who became convinced that Shakespeare couldn't have been Shakespeare, and that maybe someone called Francis Bacon, a famous writer, a famous politician, was Shakespeare. And she started finding all sorts of hidden codes that led her to believe that Bacon was Shakespeare.
She ended up in a private lunatic asylum not far from Stratford, actually. She had hoped to dig up Shakespeare's bones and find some secret document. But that was sort of where it began. And then it was really in the early twentieth century that other theories emerged. There was a schoolmaster called Thomas Looney who accepted it wasn't Bacon, but again thought; how could this grammar schoolboy from the provinces have known so much about courts and aristocracy?
So he suggested that it was Edward de Vere, the Earl of Oxford. And then all sorts of other people came forward. Maybe it was the sixth Earl of Derby, or the fifth Earl of Stanley, and so the list goes on. Or even people suggesting maybe Queen Elizabeth or King James wrote the works of Shakespeare.
I'm rather disappointed in both Delia Bacon and Thomas Looney. Delia Bacon was an American. She came from a country where it was supposed to be possible to go from a log cabin to the White House. And equally, Looney was a schoolmaster. And he should have known that the great grammar school education that was available to Shakespeare in Stratford, as it was to other middle-class boys in Shakespeare's time, meant that you could become a great and sophisticated writer without going to university. How did Shakespeare know about the life of the court? Because the acting companies were invited to perform at the court. That was the very rationale of having acting companies.
So it does seem that a lot of the arguments come down to a kind of snobbery-- the idea that such a great mind could not have come from such a humble background. But I do also think that the other factor is to do with the Romantic movement of the nineteenth century. That's to say, it was with the Romantic poets, with people like Samuel Taylor Coleridge, Lord Byron, John Keats, that you got the idea that a great poet must have a rather glamorous, romantic kind of life.
The evidence about Shakespeare's actual life is really rather boring. There are all these documents concerning property transactions, a sort of Shakespeare the businessman. For the Romantics, that wasn't really glamorous enough. You know in the Romantic period, the most famous poet in Europe was Lord Byron. And so I think it sort of became inevitable that people would think well, we need Shakespeare to have a bit of glamour, to be a lord.
So in a way, I think it's a kind of off-shoot of the Romantic movement. Because, of course, it was with the Romantics that the great cult of Shakespeare took off. It was the Romantics who were the first to say, Shakespeare is the greatest genius there has ever been. So in a way, I think the authorship controversy emerged out of a kind of disappointment that the hard evidence of the documents didn't quite have the colour and the glamour to go with the idea of Shakespeare as the quintessential genius.
I think by the later twentieth century, the phenomenon, the controversy was dying away. But then, of course, with the advent of the internet, it came back in a big way. Because, marvellous thing that the internet is, the problem is that there isn't a system of independent verification where you can discover which websites are actually based on evidence, and which are based on conspiracy theory.
So I'm afraid it's not going to go away. But from our point of view, from the point of view of the course, we feel, on the basis of the evidence we've laid out, other evidence that's available in a number of books that we'll be listing on the course site, the matter is settled. And it's not a matter that we want to discuss further, either within the films or in the forum.
|
So Shakespeare there, in the company of other writers. And indeed, Meres goes on a few pages later to say that "The greatness of Shakespeare as a writer was the range of his work." Not only his poems, which Meres suggests are like those of the Roman poet Ovid, but also his comedies and tragedies.
Now, you might say, if you're a conspiracy theorist, well, that's only saying that these works were performed and published with the name William Shakespeare on the page. Maybe he was just a stooge, just a front man, and someone else actually wrote them. And, of course, over the years there have been a number of theories of this sort. People like Christopher Marlowe and, indeed, a variety of aristocrats, Lord Bacon, the Earl of Oxford, have been proposed as the true author of Shakespeare. But the intriguing thing about Meres is that he does mention Christopher Marlowe, Francis Bacon, the Earl of Oxford, elsewhere as writers. Yes, these other men did write. But Meres, who seemed to know everybody in the London literary world, is quite clear that they're different people from Shakespeare.
So the evidence of Meres, and, as I say, a number of other people publishing books in Shakespeare's lifetime praising his poetry, make the connection with the actor, with the man from Stratford.
Reid: So what is the strongest piece of evidence we have that Shakespeare the actor from Stratford-upon-Avon was the Shakespeare that wrote the plays?
Bate: OK, well, we've seen the evidence from Stratford itself -- the bust. We've seen the evidence of his fellow actors. But in terms of external verification -- again, scholarship always looks for external verification. That's a way of obviating the idea that, oh, it was all a conspiracy, and Ben Jonson and Shakespeare's family were all in on it.
|
yes
|
Bibliography
|
Was Shakespeare the real author of all his plays and poems?
|
yes_statement
|
"shakespeare" is the true "author" of all his "plays" and "poems".. all of "shakespeare"'s "plays" and "poems" were written by him.
|
https://kids.britannica.com/students/article/William-Shakespeare/277015
|
William Shakespeare - Students | Britannica Kids | Homework Help
|
Introduction
(1564–1616). More than 400 years after they were written, the plays and poems of William Shakespeare are still widely performed, read, and studied—not only in his native England, but also all around the world. His works have been translated into almost every language and have inspired countless adaptations. On the stage, in the movies, and on television, Shakespeare’s plays are watched by vast audiences. People read his plays again and again for pleasure. Shakespeare is often called the English national poet. He is considered by many to be the greatest dramatist of all time.
Shakespeare’s continued popularity is due to many things. His plays are filled with action, his characters are believable, and his language can be thrilling to hear or read. He is astonishingly clever with words and images. Underlying all this is Shakespeare’s deep insight into humanity—how people of all kinds think, feel, and act. Shakespeare was a writer of great perceptiveness and poetic power. He used these talents to present characters showing the full range of human emotions and conflicts.
While watching a Shakespearean tragedy, the audience may be moved and shaken. Shakespeare sets husband against wife, father against child, and the individual against society. He uncrowns kings, levels the nobleman with the beggar, and questions the gods. Great men fall victim to an unstoppable train of events set in motion by their misjudgments. These plays are complex investigations of character and motive.
A Shakespearean comedy is full of fun. The characters are lively; the dialogue is witty. In the end, young lovers are wed; old babblers are silenced; wise men are content. The comedies are largely joyous and romantic.
Shakespeare’s history plays dramatize the sweep of English history in the late Middle Ages. They tell the story of the period’s kings and the rise of the house of Tudor. Shakespeare intercuts scenes among the rulers with scenes among those who are ruled. This creates a rich picture of English life at a particular historical moment—a time when England was struggling with its own sense of national identity and experiencing a new sense of power. (For more information on Shakespeare, his works, and his world, seeWilliam Shakespeare at a glance. For a collection of videos for teachers, seeteaching Shakespeare.)
Shakespeare’s Life
Boyhood in Stratford
William Shakespeare was born in Stratford-upon-Avon, England, in 1564. This was the sixth year of the reign of Queen Elizabeth I. Shakespeare was christened on April 26 of that year. The day of his birth is unknown. It has long been celebrated on April 23, the feast day of St. George.
Shakespeare was the third child and oldest son of John and Mary Arden Shakespeare. Two sisters, Joan and Margaret, died before he was born. The other children were Gilbert, a second Joan, Anne, Richard, and Edmund. Only the second Joan outlived William.
Shakespeare’s father was a tanner and glovemaker. He was an alderman of Stratford for years. He also served a term as high bailiff, or mayor. Toward the end of his life John Shakespeare lost most of his money. When he died in 1601, he left William only a little real estate. Not much is known about Mary Shakespeare, except that she came from a wealthier family than her husband.
Stratford-upon-Avon is in Warwickshire, in the Midlands region of central England. In Shakespeare’s day the area was well farmed and heavily wooded. The town itself was prosperous and progressive. It was proud of its grammar school. Young Shakespeare almost certainly went to that school, though when or for how long is not known. He may have been a pupil there until about the age of 15. His studies must have been mainly in Latin. The schooling was of good quality. All four schoolmasters at the school during Shakespeare’s boyhood were graduates of Oxford University.
Nothing definite is known about Shakespeare’s boyhood. Because of the content of his plays, it is thought that he must have learned early about the woods and fields, about birds, insects, and small animals, about trades and outdoor sports, and about the country people he later portrayed with such good humor. Then and later Shakespeare must have picked up an amazing stock of facts about hunting, hawking, fishing, dances, music, and other arts and sports. Among other subjects, he also must have learned about alchemy, astrology, folklore, medicine, and law. As good writers do, Shakespeare must have collected information both from books and from daily observation of the world around him.
Marriage and Life in London
In 1582, when Shakespeare was 18, he married Anne Hathaway. She was from Shottery, a village a mile (1.6 kilometers) from Stratford. Anne was eight years older than Shakespeare. From this difference in their ages, a story arose that they were unhappy together. Their first daughter, Susanna, was born in 1583. In 1585 a twin boy and girl, Hamnet and Judith, were born.
What Shakespeare did between 1583 and 1592 is not known. Long after Shakespeare’s death, people began to tell various stories about what Shakespeare had done during this period. They say that he may have taught school, worked in a lawyer’s office, served on a rich man’s estate, or traveled with a company of actors. One famous story says that about 1584 he and some friends were caught poaching on the estate of Sir Thomas Lucy of Carlecote, near Warwick, and were forced to leave town. A less likely story is that he was in London in 1588. There he was supposed to have held horses for theater patrons and later to have worked in the theaters as a page.
By 1592, however, Shakespeare was definitely in London and was already recognized as an actor and playwright. He was then 28 years old. In that year Robert Greene, a playwright, accused Shakespeare of borrowing from the plays of others.
Between 1592 and 1594, plague kept the London theaters closed most of the time. During these years Shakespeare wrote his earliest sonnets and two long narrative poems, Venus and Adonis and The Rape of Lucrece. Both were printed by Richard Field, a schoolmate from Stratford. These long poems were well received and helped establish Shakespeare as a poet.
Shakespeare Prospers
From about 1594 onward, Shakespeare was an important member of a theatrical company called the Lord Chamberlain’s Men. It became the most successful company of actors in England. Until 1598 Shakespeare’s theater work was confined to a district northeast of London. This was outside the city walls, in the parish of Shoreditch. Located there were two playhouses, The Theatre and the Curtain. Both were managed by James Burbage, whose son Richard Burbage was Shakespeare’s friend and the greatest tragic actor of his day. Along with Shakespeare, Richard Burbage was a member of the Lord Chamberlain’s Men.
Up to 1596 Shakespeare lived near The Theatre and the Curtain in Bishopsgate, where the North Road entered the city. Sometime between 1596 and 1599, he moved across the Thames River to a district called Bankside. There, the Rose Theatre had been built by Philip Henslowe, who was James Burbage’s chief competitor in London as a theater manager. Another theater, the Swan, was built nearby in Bankside. The Burbages also moved to this district in 1598 and built the famous Globe Theatre there for the Lord Chamberlain’s Men. The theater’s sign showed Atlas supporting the world. Shakespeare was associated with the Globe Theatre for the rest of his active life. He owned shares in it, which brought him much money.
Meanwhile, in 1597, Shakespeare had bought New Place, one of the largest houses in Stratford. During the next three years he bought other property in Stratford and in London. In 1596 Shakespeare’s father, probably at his son’s suggestion, applied for and was granted a coat of arms. It bore the motto Non sanz droict—Not without right. From this time on, Shakespeare’s father could write “Gentleman” after his name. This probably meant much to Shakespeare, for in his day actors were classed legally with criminals and vagrants.
Shakespeare’s name first appeared on the title pages of his printed plays in 1598. In the same year the English writer Francis Meres, in Palladis Tamia; Wit’s Treasury, praised him as England’s greatest playwright in comedy and tragedy. Meres’s comments on 12 of Shakespeare’s plays showed that Shakespeare’s genius was recognized in his own time. Other writers of his time also praised Shakespeare. Writer and poet John Weever lauded “honey-tongued Shakespeare.” Ben Jonson, a major playwright, poet, and literary critic, granted that Shakespeare had no rival in the writing of comedy, even in the ancient Classical world. He wrote that Shakespeare equaled the ancients in tragedy as well. Jonson sometimes criticized Shakespeare harshly, including for not following the Classical rules of drama—for not limiting his plays to one location and about one day of action. Jonson also faulted Shakespeare for mixing high and low elements—lofty poetry with vulgarity and kings with clowns—in his plays.
Honored As Actor and Playwright
Queen Elizabeth I died in 1603. King James I followed her to the throne. Shakespeare’s flourishing theatrical company, the Lord Chamberlain’s Men, was taken under the king’s patronage and was renamed the King’s Men. Shakespeare and the other actors were made officers of the royal household. In 1608–09 the company began to perform regularly at the Blackfriars Theatre. This was a smaller and more aristocratic theater than the Globe. While the Globe was a large open-air public playhouse, Blackfriars was a “private” indoor theater with high admission charges. Thereafter the company alternated between the two playhouses, with Blackfriars becoming its theater for the winter season. Plays by Shakespeare were also performed at the royal court and in the castles of the nobles.
Shakespeare is not known to have acted after 1603. During his acting career Shakespeare seems to have played only secondary roles, such as old Adam in As You Like It and the ghost in Hamlet.
In 1607 Shakespeare’s older daughter Susanna married John Hall, a doctor. That same year Shakespeare’s brother Edmund, also a London actor, died at the age of 27. The next year Shakespeare’s first grandchild, Elizabeth, was born. (Hamnet, Shakespeare’s only son, had died at the age of 11, in 1596.)
Death and Burial at Stratford
Shakespeare retired from his theater work and returned to Stratford about 1612. In 1613 the Globe Theatre burned. Shakespeare lost much money because of the catastrophe, but he was still wealthy. He had a financial share in the building of the new Globe. A few months before the fire Shakespeare had bought as an investment a house in the fashionable Blackfriars district of London.
On April 23, 1616, Shakespeare died in Stratford at the age of about 52. This date is according to the Old Style, or Julian, calendar of his time. The New Style, or Gregorian, calendar date is May 3, 1616. Shakespeare was buried in the chancel of the Church of the Holy Trinity in Stratford.
A stone slab—a reproduction of the original one, which it replaced in 1830—marks his grave. It bears an inscription, perhaps written by himself:
Good friend, for Jesus’ sake forbear To dig the dust enclosed here. Blest be the man that spares these stones, And curst be he that moves my bones.
On the north wall of the chancel is a monument to Shakespeare, which seems to have been built by 1623. It consists of a portrait bust enclosed in a stone frame. Below it is an inscription in Latin and English celebrating Shakespeare’s genius. This bust and an engraving by Martin Droeshout, prefixed to the First Folio edition of Shakespeare’s plays (1623), are the only pictures of Shakespeare that have been accepted as showing his true likeness. Another probably authentic likeness of Shakespeare is the “Chandos” portrait, an oil painting attributed to J. Taylor from about 1610.
Shakespeare’s will, still in existence, bequeathed most of his property to Susanna and her daughter. He left small mementoes to friends. Shakespeare mentioned his wife only once, leaving her his “second best bed” with its furnishings. Much has been written about this odd bequest. Some people have interpreted it as being a slight toward Shakespeare’s wife. Others have contended that it may have been a special mark of affection. The “second best bed” was probably the one they used. The best bed may have been the one reserved for guests. At any rate, Shakespeare’s wife was entitled by law to one-third of her husband’s goods and real estate and to the use of their home for life. She died in 1623.
The will contains three signatures of Shakespeare. These, with three others, are the only known specimens of his handwriting in existence. Several experts also regard some lines in the manuscript of Sir Thomas More as Shakespeare’s own handwriting. Shakespeare spelled his name in various ways. His father’s papers show about 16 spellings. Shakspere, Shaxpere, and Shakespeare are the most common.
Ben Jonson wrote a eulogy of Shakespeare that is remarkable for its feeling and acuteness. In it he said:
Leave thee alone, for the comparison
Of all that insolent Greece or haughty Rome
Sent forth, or since did from their ashes come.
Triumph, my Britain, thou hast one to show
To whom all scenes of Europe homage owe.
He was not of an age, but for all time!
. . . . . . . . . . . . . . .
Sweet Swan of Avon! what a sight it were
To see thee in our waters yet appear,
And make those flights upon the banks of Thames,
That so did take Eliza, and our James!
Did Shakespeare Really Write the Plays?
The outward events of Shakespeare’s life are ordinary. He appears to have been a hard-working member of the middle class. Shakespeare steadily gathered wealth and apparently took good care of his family. In modern times, many people have found it impossible to believe that such a seemingly ordinary man could have written the plays. They feel that he could not have known such heights and depths of passion. They believe that the people around Shakespeare expressed little realization of his greatness. Some say that a man with his level of schooling could not have learned about the professions, the aristocratic sports of hawking and hunting, the speech and manners of the upper classes.
Readers, playgoers, actors, and writers in Shakespeare’s own lifetime—and for more than a century and a half after—never questioned that Shakespeare was the author of the plays. Since the 1800s many people have tried to prove that Shakespeare did not write the plays or that others did. For a long time the leading candidate was Sir Francis Bacon. Books on the Shakespeare-Bacon argument would fill a library. After Bacon became less popular as a candidate, Christopher Marlowe, William Stanley, 6th earl of Derby, and then other people were suggested as the authors. Nearly every famous Elizabethan was named. Some people have even claimed that “Shakespeare” is an assumed name for a whole group of poets and playwrights.
Since the late 20th century, the strongest candidate proposed (other than Shakespeare himself) as the author of the plays is Edward de Vere, 17th earl of Oxford. It is true that Oxford did write poetry, as was common among gentleman of the time. He may also have written some plays. A major problem with the theory that Oxford wrote the Shakespeare plays is that he died in 1604. Many of Shakespeare’s plays—including such great works as King Lear, Macbeth, and The Tempest—were written between 1604 and about 1614.
In addition, people who lived at the same time as Shakespeare never suggested that anyone other than him had written the plays. Shakespeare was a well-known actor who performed in London’s top acting company. He was widely known by the leading writers of his time as well. Both Ben Jonson and John Webster praised him as a dramatist. Many other tributes to Shakespeare as a great writer appeared during his lifetime. Shakespeare’s fellow actors John Heminge and Henry Condell collected the plays into a book called the First Folio and wrote a foreword describing their methods as editors. Any theory proposing that Shakespeare did not write the plays must suppose that the people of the time were all fooled by some kind of secret arrangement. Those people who were in the know would have had to have maintained the secret of a gigantic literary hoax without a single leak or hint of gossip.
Moreover, to argue that an obscure Stratford boy could not have become the Shakespeare of literature is to ignore the mystery of genius, which cannot be learned in school. Some great writers have had less schooling than Shakespeare. Shakespeare had a good education for the time, though it is true that he did not attend a university. However, university training in Shakespeare’s day centered on theology and on Latin, Greek, and Hebrew texts. Studying these kinds of texts would not have greatly improved Shakespeare’s knowledge of contemporary English life. Shakespeare’s social background was essentially similar to that of other major writers of his time. Most of the great writers of his era were not aristocrats, who had no need to earn a living by their pens.
Secrets of the Sonnets
Many people want to know more about Shakespeare’s private life. They have searched his plays for hints, with little result. However, he left 154 sonnets, published—probably without his involvement—in 1609. Many readers believe that these reveal an important part of his life. However, whether the sonnets are autobiographical—about Shakespeare’s personal life and feelings—has been much debated. Shakespeare was such a skilled dramatist that he could certainly have created an intriguing storyline for the sonnets that had nothing to do with his own life. In any event, as poetry, the sonnets are superb.
Shakespeare’s sonnets tell of the poet-narrator’s close relationship with a young nobleman. This nobleman wrongs him by stealing the affections of a dark-haired sweetheart and by transferring his friendship to another poet. In the end the beloved young nobleman is forgiven.
Whether this really happened or was only invented makes up the “problem of the sonnets.” People have tried to find out who the “friend,” the “dark lady,” and the “rival poet” actually were. One theory is that the friend was William Herbert, earl of Pembroke. Another is that he was Henry Wriothesley, earl of Southampton. Many people assert that Shakespeare’s sonnets are so full of detailed passion they probably refer to some actual happening. However, this cannot be proved.
Shakespeare’s other nondramatic poems include Venus and Adonis and The Rape of Lucrece. Both are full of gorgeous imagery and pagan spirit and are very obviously the work of a young man. There are also about 60 songs scattered throughout the plays. The songs show the finest Elizabethan qualities in their originality, melodies, and rhythms.
Shakespeare and the Elizabethan Age
Elizabethan Times
The English Renaissance reached its peak in the reign of Queen Elizabeth I (1558–1603). In this period England was emerging from the Middle Ages. An absorbing interest in heaven and an afterlife was transformed into an ardent wonder about this world and humankind’s earthly existence. The Elizabethan period was an age marked by curiosity and bold exploration.
Shakespeare lived at a time when the English language was growing fast. It was suited to magnificent poetry. Shakespeare’s vocabulary was enormous, but its size is less remarkable than its expressiveness. English speech reached its peak of strength between 1600 and 1610. Then the King James Version of the Bible was being made, Bacon was writing his famous Essays, and Shakespeare was composing his great tragedies.
The people of the English middle class were thought to be typically stern, moral, and independent. London’s citizens held fast to their rights. They did not hesitate to defy the royal court if it became too arrogant. Nobles, citizens, and common people all loved the stage, its pageantry and poetry. Wealthy people encouraged and supported the actors. They paid for the processions, masques, and tournaments that the public loved to watch. Men of the royal court competed with one another in dress, entertainment, and flattery of the queen.
The queen herself was the symbol of the glory of England. To her people Elizabeth I stood for beauty and greatness. During her reign the country grew in wealth and power, despite plagues and other calamities.
Drama in the Elizabethan Age
England’s defeat of the great Spanish naval fleet called the Armada in 1588 raised English spirits high. The English gloried in what they saw as the greatness of their nation. During the years 1590–1600 England became intensely interested in its past. Playwrights catered to this patriotism by writing chronicles, or history plays. These were great sprawling dramas telling the stories of England’s kings. Shakespeare wrote 10 of them. The same interest spread to the history of other countries of Europe.
When Shakespeare came to London, he found the theater alive and strong. Men and women of various social classes enjoyed going to the theater, and plays were shrewdly written for the public’s taste. The theater was as popular then as movies and television are now. London’s first public playhouse, named The Theatre, had been opened in 1576. A group of talented men, the University Wits, had already developed new types of plays out of old forms and had learned what the public wanted.
Playwrights of the time seem to have been practical men, bent on making a living. They may have been well educated, but they were more eager to fill the theaters than to please the critics. The result was that almost from the start the drama was a popular art. It was not, as in France, a learned and classical art.
Shakespeare was quick to detect changes in popular taste. He wrote his plays to be acted, not read. Shakespeare took whatever forms were attracting attention and made them better. To save time he borrowed basic plots from other works. Sometimes Shakespeare expanded and adapted old stories, while sometimes he worked with more recent tales.
A dramatist in those days was also likely to be an actor and producer. He joined a company and became its playwright. He sold his manuscripts to the company and kept no personal rights in them. Revising old plays and working with another writer on new ones were common. Such methods saved time. The demand for plays was great and could never be fully met.
No manuscripts of Shakespeare—with the possible exception of a scene from Sir Thomas More—and very few manuscripts of other dramatists of the period have survived. The dramas were written to be played, not printed, and were hardly considered literature at all.
In the Elizabethan Age, actors were called “players.” A company of players was a cooperative group that shared the profits. Its members had no individual legal or political rights. Instead the company looked for a patron among the rich nobles. Members became the noble patron’s “servants,” or “men,” and received his protection—thus Shakespeare’s company was called the Lord Chamberlain’s Men (later the King’s Men) and its chief rival was called the Admiral’s Men (later Prince Henry’s Men). A company was usually made up of 8 or 10 men who took the main parts. Other actors were hired as needed. Boys or young men took the female parts, for women did not appear on the stage.
The theaters
Public theaters were usually round, wooden buildings with three galleries of seats. The pit, or main floor, was an open yard and had no roof. There were no seats in the pit, and its occupants were called “groundlings” because they stood on the ground. Admission to the pit was usually a penny. It cost more to watch the play from the galleries, boxes, and stage. Plays were put on in the afternoon. Private theaters were of the same general design, except that they were square and entirely roofed.
Shakespeare wrote most of his plays for the Globe Theatre. Historical research indicates that its main stage was about 43 or 44 feet (about 13 meters) wide and that it projected 27 feet (some 8 meters) into the pit. The stage had a roof of its own. Behind the main stage was a recessed inner stage, which could be hidden by curtains. Above the inner stage was a second inner stage, with curtains and a balcony. Above this was a music room. Its front could be used for dramatic action. On top of the stage roof were hoists for raising and lowering actors and props. On performance days a flag was flown from a turret above the hoists.
The Elizabethans may have used no scenery, but their stage was not entirely bare. They used good-sized props, heavy hangings, and elaborate furniture. Their costumes, usually copied from the fashionable clothes of the day, were rich. The outer stage was generally used for outdoor scenes and mass effects. The inner stage was used for indoor scenes and for cozy effects, such as scenes between lovers. The upper stage was used for scenes at windows or walls.
The stage influences Shakespeare’s methods
The Elizabethan stage had much to do with the form of Shakespeare’s plays. Because the stage was open and free, it permitted quick changes and rapid action. As a result the play Antony and Cleopatra has more than 40 changes of scene. The outer stage, projecting into the audience, encouraged speechmaking. This may be one reason for the long and impassioned speeches of the plays.
With no women actors, it somehow seemed natural for boys and young men made up as women to play the female parts. With no stage lighting and with the daytime sky above, the author had to write speeches about the time, season, and weather of the play. There are more than 40 such speeches in Macbeth. The actors were close to the audience; the groundlings were close to the aristocrats. Shakespeare had to appeal to them all. He mixes horseplay with philosophy and coarseness with lovely poetry.
Shakespeare’s Plays
Shakespeare wrote at least 38 plays. The chief sources of his plots were Sir Thomas North’s translation of Plutarch’s Parallel Lives, Raphael Holinshed’s Chronicles of England, Scotland, and Ireland, and a book on English history by Edward Hall. Shakespeare also drew on many other works, including some Italian novelle, or short tales. He borrowed a few plays from older dramas and from English stories. What Shakespeare did with the sources is more important than the sources themselves. If his original gave him what he needed, he used it closely. If not, he changed it. These changes show Shakespeare’s genius as a dramatist.
Some difficulties stand in the way of a modern reader or audience’s enjoyment of Shakespeare’s plays. Shakespeare wrote more than 400 years ago. The language he used is naturally somewhat different from the language of today. Some words have different meanings now than they did in Shakespeare’s time. For example, rage then meant “folly,” while silly could mean “innocence” and “purity.” In Shakespeare’s day, words sounded different too, so that ably could rhyme with eye or tomb with dumb. The way words were put together into phrases was also often different. What sounds formal and stiff to a modern listener might have sounded fresh to an Elizabethan. Modern printed editions of the plays often include notes that can help readers understand the language differences.
The worst handicap to enjoyment of the plays is the notion that Shakespeare is a “classic,” a writer to be approached with awe. The way to escape this difficulty is to remember that Shakespeare wrote his plays for everyday people and that many in the audience were uneducated. They probably regarded him as a funny and exciting entertainer, not as a great poet.
When studying the plays, it can be helpful to read them twice. The first reading can be a quick one, to get the story. The second, more leisurely, reading can bring out details. The language itself should be studied. It has great expressiveness and concentrated meaning. An edition of the plays with good explanatory notes is helpful.
Most of all, it is important to remember that Shakespeare’s plays were intended to be seen acted in a theater, not read. Modern audiences who want to see the plays can choose from numerous film versions as well as many and varied stage productions. Some productions of Shakespeare’s plays try to present them in a way that is as true as possible to how they were probably originally presented. Others may adapt the dramas, slightly or freely. Many productions set the plays in modern or other times.
Shakespeare’s Four Periods
Shakespeare’s playwriting can be divided into four periods. The first period was his apprenticeship. Between the ages of 26 and 30 Shakespeare was learning his craft. He imitated Roman comedy and tragedy and followed the styles of the playwrights who came just before him. Shakespeare may have written works with other playwrights; such collaborations were a common practice of the time period. The Senecan tragedy, a type of play that told a story of bloody revenge, was in style at this time. Shakespeare’s first tragedy, Titus Andronicus, was this type of revenge drama. It was his only early tragedy. During this early period Shakespeare wrote a number of romantic comedies as well as some chronicle, or history, plays about English kings of the 15th century.
With Hamlet, written about 1599–1601, Shakespeare’s third period begins. For eight years he probed the problem of evil in the world. Shakespeare wrote his great tragedies—Hamlet, Othello, King Lear, Macbeth, and Antony and Cleopatra—during this period. At times he reached an almost desperate pessimism. Even the comedies of this period are bitter.
In his fourth and last period Shakespeare used a new form—the romance or tragicomedy. His romances tell stories of wandering and separation leading eventually to tearful and joyous reunion. They have a bittersweet mood. The Tempest is the most notable of these late romances.
List of Plays
The following is a list of all of Shakespeare’s plays in the order in which they are thought to have been written. Despite much scholarly argument, it is often impossible to date a play precisely. However, there is some general agreement, especially for plays written in 1588–1601, in 1605–07, and from 1609 onward.
Shakespeare’s Plots and Characters
Shakespeare’s insight into the human condition and his poetic skill combined to make him the greatest of playwrights. His plots alone show that Shakespeare was a master playwright. He built his plays with care. He seldom wrote a speech that did not forward the action, develop a character, or help the imagination of the spectator.
Many of Shakespeare’s plots are nevertheless frankly farfetched. He belonged to an age that favored the romantic and the poetic. Theatergoers often wanted to be carried away to other times and places or to a land of fancy. There were really no such places as Shakespeare’s Bohemia or Illyria or the Forest of Arden, though the names were real. Shakespeare has never been equaled in the invention of supernatural creatures—ghosts, witches, and fairies.
Yet Shakespeare’s art is realistic in the sense that it is true to the way people think and act. Shakespeare’s people seem alive and three-dimensional. His best portrayals are those of his great heroes. Yet even Shakespeare’s minor characters are almost as good. For example, Shakespeare created in his plays more than 20 young women, all about the same age, of the same station in life, and with the same social background. They are as different, however, as any 20 young women in real life. The same can be said of Shakespeare’s old women, men of action, churchmen, kings, villains, dreamers, fools, and country people. Shakespeare’s characters are complex. Like real people, they can be great and yet foolish, bad and yet likable, good and yet faulty.
The Poetry of the Plays
No other writer in the world is so quotable or so often quoted as Shakespeare. He expressed deep thoughts and feeling in words of great beauty or power. In the technical skills of the poet—rhythm, sound, image, and metaphor—Shakespeare remains the greatest of craftsmen. His range is immense. It extends from funny puns to lofty eloquence, from the speech of common men to the language of philosophers.
Shakespeare’s plays are often written in a form of poetry called blank verse. Blank verse is unrhymed. Its meter is iambic pentameter, meaning that each line has 10 syllables alternating between unstressed and stressed syllables. This form was first used in Italy in the 16th century and was soon taken up by English poets. The University Wits, especially Christopher Marlowe, developed it as a dramatic verse form. Shakespeare perfected it. He and later John Milton made blank verse the greatest form for dramatic poetry in English. Blank verse is an excellent form for poetic drama. It is just far enough removed from prose. Blank verse is not monotonous and forced, as rhymed verse sometimes can be. Blank verse is more ordered, swift, and noble than prose. At the same time it is so flexible that it can seem almost as natural as prose if it is well written.
To gain an impression of the power and variety of Shakespeare’s poetry, read such passages as Prospero’s speech in The Tempest, Act IV, Scene i:
Our revels now are ended. These our actors,
As I foretold you, were all spirits and
Are melted into air, into thin air;
And, like the baseless fabric of this vision,
The cloud-capp’d towers, the gorgeous palaces,
The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve
And, like this insubstantial pageant faded,
Leave not a rack behind. We are such stuff
As dreams are made on, and our little life
Is rounded with a sleep.
Or read Lorenzo’s speech in The Merchant of Venice, Act V, Scene i:

How sweet the moonlight sleeps upon this bank!
Here will we sit and let the sounds of music
Creep in our ears. Soft stillness and the night
Become the touches of sweet harmony.
Sit, Jessica. Look how the floor of heaven
Is thick inlaid with patines of bright gold.
There’s not the smallest orb which thou behold’st
But in his motion like an angel sings,
Still quiring to the young-ey’d cherubims;
Such harmony is in immortal souls;
But whilst this muddy vesture of decay
Doth grossly close it in, we cannot hear it.
Then compare other great passages, such as Shylock’s (in The Merchant of Venice) “Signior Antonio, many a time and oft”; Mercutio’s (in Romeo and Juliet) “O, then, I see Queen Mab hath been with you”; Richard II’s “No matter where; of comfort no man speak”; Hamlet’s “How all occasions do inform against me”; Claudio’s (in Measure for Measure) “Ay, but to die, and go we know not where”; Othello’s “Soft you, a word or two before you go”; Jaques’s (in As You Like It) “A fool, a fool! I met a fool i’ the forest”; and Cleopatra’s (in Antony and Cleopatra) “Give me my robe, put on my crown.” Each speech could come naturally from the speaker and from no one else. Each is very moving. Each has great rhythmic flow and force.
How the Plays Came Down to Us
Since the 1700s scholars have edited and reworked the text of Shakespeare’s plays. They have had to do so because the plays were badly printed, and no original manuscripts of them survive.
In Shakespeare’s day plays were not usually printed under the author’s supervision. When a playwright sold a play to his company, he lost all rights to it. He could not sell it again to a publisher without the company’s consent. When the play was no longer in demand on the stage, the company itself might sell the manuscript. Plays were eagerly read by the Elizabethan public. This was even more true during the plague years, when the theaters were closed. It was also true during times of business depression. Sometimes plays were taken down in shorthand and sold. At other times, a dismissed actor would write down the play from memory and sell it.
About half of Shakespeare’s plays were printed during his lifetime in small, cheap pamphlets called quartos. Most of these were made from fairly accurate manuscripts. A few were in garbled form.
In 1623, seven years after Shakespeare’s death, his collected plays were published in a large, expensive volume called the First Folio. It contains all his plays except two of which he wrote only part—Pericles and The Two Noble Kinsmen. The collection also omits Cardenio, a play that Shakespeare is thought to have written with John Fletcher; this play is now lost (see Double Falsehood). The title page of the First Folio features an engraved portrait of Shakespeare that is thought to be an authentic likeness.
The First Folio was authorized by Shakespeare’s acting group, the King’s Men. Two of Shakespeare’s fellow actors, John Heminge and Henry Condell, collected and prepared the plays for publication. Some of the plays in the First Folio were printed from the more accurate quartos and some from manuscripts in the theater. A number of these manuscripts may have been in Shakespeare’s own handwriting. Others were copies. Still others, such as the Macbeth manuscript, had been revised by another dramatist.
Shakespearean scholars have studied the First Folio intensively to help determine what Shakespeare actually wrote. They have done so by studying the language, stagecraft, handwriting, and printing of the period and by carefully examining and comparing the different editions. They have modernized spelling and punctuation, supplied stage directions, explained difficult passages, and made the plays easier for the modern reader to understand.
Another hard task has been to find out when the plays were written. The plays themselves have been searched for clues. Other books have been examined. Scholars have tried to match events in Shakespeare’s life with the subject matter of his plays.
These scholars have used detective methods. They have worked with clues, deduction, shrewd reasoning, and external and internal evidence. External evidence consists of actual references in other books. Internal evidence is made up of verse tests and a study of the poet’s imagery and figures of speech, which changed from year to year.
The verse tests follow the idea that a poet becomes more skillful with practice. Scholars long ago noticed that in his early plays Shakespeare used little prose, much rhyme, and certain types of rhythmical and metrical regularity. As he grew older he used more prose, less rhyme, and greater freedom and variety in rhythm and meter.
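As a rough illustration of what such a verse test measures, the sketch below (Python, written for this summary rather than drawn from any scholarly edition) counts how much of a play’s text is prose and how often neighbouring verse lines end in the same sound. The file names, the 75-character prose threshold, and the three-letter rhyme check are all simplifying assumptions of mine; real studies rely on far more careful metrical and textual analysis.

import re

def verse_profile(play_text):
    """Crude verse test: share of prose lines and of rhyming verse-line pairs."""
    lines = [ln.strip() for ln in play_text.splitlines() if ln.strip()]
    # Assumption: very long lines are treated as prose, shorter ones as verse.
    verse = [ln for ln in lines if len(ln) <= 75]
    prose_count = len(lines) - len(verse)

    def ending(line):
        words = re.findall(r"[a-z']+", line.lower())
        return words[-1][-3:] if words else ""

    # Count consecutive verse lines whose endings match (a rough rhyme check).
    rhymes = sum(1 for a, b in zip(verse, verse[1:])
                 if ending(a) and ending(a) == ending(b))
    return {
        "prose_share": prose_count / len(lines) if lines else 0.0,
        "rhyme_share": rhymes / max(len(verse) - 1, 1),
    }

# Hypothetical usage with plain-text files of an early and a late play:
# for name in ("loves_labours_lost.txt", "the_winters_tale.txt"):
#     with open(name, encoding="utf-8") as fh:
#         print(name, verse_profile(fh.read()))

On this view, an early comedy should show a higher rhyme share and a late romance more prose, which is the drift the paragraph above describes.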
The Folger collection is the greatest of all Shakespeare collections. It was assembled by Henry Clay Folger, onetime president of Standard Oil. He bequeathed it to the trustees of Amherst College to be administered for the use of the American people forever. Folger also provided the library building and endowed the library to provide for its expansion and upkeep. The Folger Shakespeare Library opened in 1932. The collection now consists of about 280,000 books and manuscripts, plus playbills, prints, paintings, and other materials, as well as a model Elizabethan theater. The library possesses more than 80 copies of Shakespeare’s First Folio. Though called a Shakespeare library, the Folger collection also includes other rare works from the Renaissance. Indeed, the library contains the world’s second largest collection of books printed in England before 1641.
|
Readers, playgoers, actors, and writers in Shakespeare’s own lifetime—and for more than a century and a half after—never questioned that Shakespeare was the author of the plays. Since the 1800s many people have tried to prove that Shakespeare did not write the plays or that others did. For a long time the leading candidate was Sir Francis Bacon. Books on the Shakespeare-Bacon argument would fill a library. After Bacon became less popular as a candidate, Christopher Marlowe, William Stanley, 6th earl of Derby, and then other people were suggested as the authors. Nearly every famous Elizabethan was named. Some people have even claimed that “Shakespeare” is an assumed name for a whole group of poets and playwrights.
Since the late 20th century, the strongest candidate proposed (other than Shakespeare himself) as the author of the plays has been Edward de Vere, 17th earl of Oxford. It is true that Oxford did write poetry, as was common among gentlemen of the time. He may also have written some plays. A major problem with the theory that Oxford wrote the Shakespeare plays is that he died in 1604. Many of Shakespeare’s plays—including such great works as King Lear, Macbeth, and The Tempest—were written between 1604 and about 1614.
In addition, people who lived at the same time as Shakespeare never suggested that anyone other than him had written the plays. Shakespeare was a well-known actor who performed in London’s top acting company. He was widely known by the leading writers of his time as well. Both Ben Jonson and John Webster praised him as a dramatist. Many other tributes to Shakespeare as a great writer appeared during his lifetime. Shakespeare’s fellow actors John Heminge and Henry Condell collected the plays into a book called the First Folio and wrote a foreword describing their methods as editors. Any theory proposing that Shakespeare did not write the plays must suppose that the people of the time were all fooled by some kind of secret arrangement.
|
yes
|
Bibliography
|
Was Shakespeare the real author of all his plays and poems?
|
no_statement
|
"shakespeare" was not the sole "author" of all his "plays" and "poems".. some of "shakespeare"'s "plays" and "poems" were not written by him.
|
https://www.pbs.org/wgbh/pages/frontline/shakespeare/reactions/murphyarticle.html
|
William Murphy Article | The Shakespeare Mystery | FRONTLINE | PBS
|
In libraries that catalogue books by the Dewey Decimal System, works dealing with the supposedly non-Shakespearean authorship of Shakespeare's plays and poems are classified under 822.33A (English Literature, Shakespearean Authorship). Modern librarians are hardly responsible for a practice that began (quite innocently) a long time ago, but surely the time has come to recognize the melancholy truth that the books properly belong under 132 (Abnormal or Pathological Psychology). Abnormal Psychology attempts "to explain human conduct and thought that cannot be understood in terms of ordinary common sense," and is therefore precisely the discipline by which anti-Stratfordianism must be examined and evaluated.
Most of us know that irrational behavior carried to extremes can result in a form of mental disturbance that may require hospitalization. In many cases, however, the victims seem superficially in control of themselves and act perfectly normal except when concerned with the subject through which their derangement is manifested; here they inhabit a universe of their own creation, like those unfortunate beings in mental institutions who think they are Napoleon or Jesus Christ. In its simplest form the affliction is called Paranoia. Any student who has devoted considerable thought to the question of the authorship of the Shakespearean plays cannot avoid the conclusion that the tortured attempts to prove Shakespeare didn't write his own works are the product of paranoid thinking. The subject is clearly not one for Professors of English or of history but for psychologists.
For the evidence that William Shakespeare of Stratford-on-Avon (1564-1616) wrote the works attributed to him is not only abundant but conclusive. It is of the kind, as Sir Edmund Chambers puts it, "which is ordinarily accepted as determining the authorship of early literature. It is better than anything we have for many of Shakespeare's dramatic contemporaries." If, to satisfy those who insist that anything is possible in a complex world like ours, we admit a theoretical possibility that someone else wrote the works, that possibility would have to be expressed as 1 : ∞. In the real world of our experience, however, there is not the remotest possibility that anyone else was the author.
It might be profitable to review very briefly the evidence bearing on authorship. It is the same kind of evidence we use to determine what Geoffrey Chaucer wrote, or Dante, or George Washington. In evaluating it scholars use simple common sense, the kind that tells you that if your tire is flat it probably has no air in it. Briefly, it can be reduced to five positive arguments and one negative.
(1) Of the plays in the First Folio of 1623, all of which are universally conceded to be by the same man (although some may be inaccurate in places and may even occasionally show the work of another hand), fifteen were published as separate works in one or more editions during Shakespeare's lifetime; fourteen of these bear Shakespeare's name on the title page. The First Folio is entitled "Mr. William Shakespeares Comedies, Histories, & Tragedies." No one else's name is associated with the quartos or folios, although Shakespeare's name was used by some unscrupulous publishers on the title pages of other plays which he did not write. In short, at the time of the publication of the First Folio, the plays were commonly believed to have been written by someone named William Shakespeare, whoever he might be.
(2) The company that produced Shakespeare's plays numbered among its members John Heminge (or Heminges), Henry Condell, Richard Burbage, and William Shakespeare. It was quite common in those times for men to bequeath sums of money to their friends for the purchase of "memorial rings." The William Shakespeare who died at Stratford-on-Avon in 1616 and was buried there in the Church of the Holy Trinity left in his will money for the purchase of memorial rings to Heminge, Condell, and Burbage. Common sense tells us that the Stratford Shakespeare was the partner of the other three in the theatre.
(3) During his lifetime Shakespeare was referred to specifically by name as a well-known writer at least twenty-three times, not counting the appearance of his name on title pages. The references range in time from 1595 (W. Covell's "All praise worthy Lucretia Sweet Shakespeare") to 1614 (when Sir Thomas Freeman praises the poet in a sonnet entitled "To Master William Shakespeare"). Among those who acknowledged Shakespeare as a poet or playwright during his lifetime were Richard Barnfield, Gabriel Harvey, William Drummond of Hawthornden, Sir John Davies, Edmund Howes (John Stow's successor as editor of the Annals) and, perhaps most significant of all, William Camden, the great teacher and antiquarian. After Shakespeare's death his greatest rival, Ben Jonson, not only commented on his poetry (including a specific reference to Julius Caesar) but also acknowledged that Shakespeare was a friend whom he admired "this side idolatry."
(4) In the most remarkable listing of Elizabethan works recorded by a contemporary, Francis Meres, a young clergyman who came up to London in the mid-1590's, in his Palladis Tamia (1598) mentions Shakespeare by name no less than nine times and as the author of twelve plays, two poems, and some sonnets.
(5) In 1623 appeared the First Folio, the title page of which has already been given. In addition to that, two facts are of interest to us: (i) that in a commendatory poem Ben Jonson referred to the author as "Sweet Swan of Avon"; (ii) that the volume was edited and published by John Heminge and Henry Condell, who tell us in a preface that they undertook the labor "only to keep the memory of so worthy a friend and fellow alive as was our Shakespeare." Common sense would suggest that the Shakespeare of whom they wrote was the one who left them money to buy memorial rings. Again, all the known evidence points to the Stratford Shakespeare as the writer of Hamlet, Macbeth, Henry V, and the other plays and poems that have kept the world at the author's knees for almost four hundred years.
(6) Equally important, in view of the foregoing five arguments, is the fact that none of the plays or poems is attributed to anyone but Shakespeare not only during his lifetime but for a century and a half after his death. No document of the period has been found which connects any other person directly with the plays or poems. All such claims have been thoroughly exploded, but in a brief paper of this kind it is not possible to consider them in detail. This will infuriate those anti-Stratfordians who feel that their own arguments have not been heeded. I must fall back on the same explanation given by H. N. Gibson in his valuable book, The Shakespeare Claimants: "It is hardly necessary to state that I cannot include in my survey every individual argument put forward by every individual theorist. Their very number makes any such idea absolutely impossible. Hundreds of books and pamphlets have been produced in the course of the controversy, and the literature of the Baconians alone would stock a fair-sized library. It is true that there is much repetition and overlapping in these works, but even so it would require several bulky volumes to review them all adequately."
It should be apparent to anyone possessing normal common sense, then, that Shakespeare's authorship of the works is not merely "probable" or "likely," as some softheads have put it, but absolutely compelling. Yet it is common knowledge that after Delia Bacon published her vague notions about authorship in 1856 defenders of her unorthodox views and creators of others multiplied like rabbits, and any reader of the modern newspaper knows that the tribe increases every year. How can it be, one asks, that questions arise about the authorship of Shakespeare's works but not about Jonson's or Greene's or Marlowe's? How is it also that the doubts have given rise to a major preoccupation of thousands of people with a vast body of writing to their credit?
The answer to the first of these questions is not far to seek. It lies in Shakespeare's unique position. He is by common agreement the greatest writer the world has ever known. Despite certain defects - his carelessness, his willingness to write about trivial subjects for the sake of the commercial theatre he worked for - he has spoken directly to the individual human being in the western world in the centuries since his death. His manner of expression was such that people of the most diverse views have found in him the perfect expression of exactly what they themselves felt. It is a commonplace of literary evaluation that Hamlet is Everyman -and Everywoman. We all find something of ourselves in one or another of the characters in the plays. As Emile Legouis has expressed it, in a famous passage: "No other literature, whatever its beauty, does not seem monotonous after Shakespeare. Free of every theory, accepting all of life, rejecting nothing, uniting the real and the poetic, appealing to the most various men, to a rude workman as to a wit, Shakespeare's drama is a great river of life and beauty. All who thirst for art or truth, the comic or the tender, ecstasy or satire, light or shade, can stop to drink from its waters, and at almost every instant of their changing moods find the one drop to slake their thirst." To some extent we all share Legouis' views. But Shakespeare is not only a writer who expresses himself beautifully: he is an oracle, a prophet, almost a divinity. No other mortal writer shares his pinnacle. And so it becomes necessary to deify the poet, to make him more than he is. Just as thousands of years ago man created God in his image, so the anti-Stratfordians have created a Shakespeare in their own image, or in the image of what they would like themselves to be or imagine themselves to be.
To the second question the answer is equally simple: that eccentric ideas arise at random about almost every subject. Whether they catch on is another matter. When the two answers are combined the aberration falls into place. It was not in itself surprising that in 1781, a hundred and sixty-five years after Shakespeare's death, an obscure English divine, James Wilmot, should profess to find similarities between the works of Sir Francis Bacon and Shakespeare and suggest a connection. (His views were not made known until 1805 and then only to a private society.) Nor was it surprising that a similar idea should have occurred to a frustrated spinster in New Haven in 1846. What might appear more surprising to those who do not recognize the force of Legouis' interpretation of Shakespeare's universal appeal is that Delia Bacon's strange spark should have lighted so many fires.
Delia, a crusty, highly intelligent lady, one of the famous female lecturers of the nineteenth century, was a sister of the formidable Congregational minister Leonard Bacon, pastor of the Centre Church on the Green in New Haven, within slingshot range -- unfortunately for Delia -- of the Yale Campus. Miss Bacon had developed her theory in its essentials by 1846, when she took time out to be seduced by a handsome and wealthy young blackguard eleven years her junior. Badly hurt emotionally by the experience, she traveled to England with Emerson's encouragement, bearing his letter of introduction to Thomas Carlyle. She was to remain there until her magnum opus was completed and published (at Nathaniel Hawthorne's expense) in 1857, after which she completed the descent into the insanity that was to darken the remaining two years of her life. The prose in her book, The Philosophy of Shakespeare's Plays Unfolded, was so impenetrable that, according to Hawthorne, only one person was known to have read it through. Miss Bacon promised a second volume dealing with the "historical" proof of non-Shakespearean authorship; the first (and, as it proved, the only) confined itself to an analysis of the plays designed to show that there was a "hidden undercurrent of philosophy" in the works of both Bacon and Shakespeare. The works of the latter were the property of Bacon, Ralegh and others; Shakespeare himself was a blind. The philosophy and the secret of authorship were concealed in an allegory and cipher which she intended to explain later. Unfortunately she was soon committed to a mental institution in England, brought back in confinement to Connecticut, and died in Hartford in 1859.
A year before her book was issued Delia Bacon had found a publisher for her views in an American magazine. But Putnam's printed the first part of her two-part article in 1856 only at the insistence of Ralph Waldo Emerson, who feared for her sanity if the essay should be rejected. With Emerson's recommendation it was not surprising that Putnam's should print her first essay; having printed the first it was not surprising that it should flatly refuse to print the second.
Upon the appearance of the Putnam's article an anguished outcry arose in England from one William Henry Smith, who claimed he had the idea first. Whoever deserved the priority, Smith at least had the advantage of Miss Bacon in the clarity of his style. He asserted simply that Sir Francis Bacon (with whom Delia claimed no kinship, incidentally) was the Real Author and so supplanted Delia's vague and ill-defined notion of multiple but never clearly defined authorship. Soon other men looking for a cause flocked to his banner, or to Miss Bacon's. In America a judge from St. Louis, Nathaniel Holmes, soon to become a professor at the Harvard Law School, exercised the logic of his profession to prove, in the teeth of the evidence, that Sir Francis Bacon was the Real Author. He was not the last lawyer to enter the fray; indeed the attraction the anti-Stratfordian madness has always exercised upon lawyers is enough to persuade any sane person to compose his differences out of court, especially if his cause be just. Needless to say, neither Smith nor Holmes, nor Miss Bacon, adduced any documentary evidence of any kind to link Bacon with the works.
But Delia Bacon had unconsciously struck a chord that vibrated in harmony with the newly-educated middle classes of England and America. By 1877 the hundredth publication on the subject had been printed. Today a student could spend a lifetime reading nothing but anti-Stratfordian argumentation and never come to the end of it. In 1947 a mere listing of books and articles on the subject filled more than 1,500 typewritten pages; it was so vast that no publisher could afford to print it. By 1962 the number of rival claimants had increased to 57.
In order to give some idea of the broad approach of the anti-Stratfordians as a class, however, the following outlines may be delineated.
First and most important, of course, is the necessity of destroying the formidable claims of William Shakespeare. This is done in one of two ways. The first is to deny his authorship negatively: we are told that a man capable of writing such great plays would have left behind a treasure-house of information about himself, or that his contemporaries would have done so. But we know little of Shakespeare. True, we know where and when he was born, died, and was buried; we know the names of his parents, brothers, sisters, wife, children, and grandchildren; we know the names of his colleagues in the theatre and the name and location of the house and the land he bought in Stratford when he became wealthy; we know about his dealings with the College of Heralds, and with his townsmen in the controversy over enclosure of the pasture-land. But we don't know about him. He tells us little about himself - the Sonnets cannot be autobiographical, of course - and we don't know the color of his eyes, or his height, or whether he had table manners and was really in love with his wife.
The second method of denial is as ingenious as the first and requires a simple reversal of it. It holds that the author of great plays must be a great man, as the term "great" is defined by the anti-Stratfordian conducting the argument. But we have abundant testimony, it is claimed, to the shabbiness of Shakespeare's moral character. It is here that the anti-Stratfordians reveal the first clear symptoms of their disturbance. From Appleton Morgan, who in 1880 denounced Shakespeare as "a letterless rustic, with a reputation in his native village for scapegrace escapades, gallantries, and poaching expeditions, rather than for meditation, study, or literary composition," through Gelett Burgess, who in 1948 described him as a "sordid provincial nonentity" who indulged in "petty lawsuits and peddling malt," to Mr. Robert Montgomery of Boston, who in 1955 called the Stratfordian a young provincial" who attended "a hornbook grammar school" in "the filthy little town of Stratford," the denunciations have been violent and picturesque. Shakespeare was a mean and lowly bumpkin who got his wife pregnant before he married her and then left her only his "second-best" bed when he died. He hoarded grain in time of peril, probably to make home-brew. He was a nasty little money-grubber interested only in buying up real estate in his home town and waiting for it to rise in value. He reveals in the Sonnets -- which are autobiographical, of course -- that he had latent homosexual tendencies and that he carried on a protracted and degrading adulterous affair with a repulsive dark-skinned lady who probably gave him a loathsome disease. In short, Shakespeare didn't write the plays because we don't know enough about him -- or because we know too much. The layman takes his choice.
Having disposed of the Stratfordian's pretensions with such thumping finality the doubters then proceed to look for an author who might have written the plays if Shakespeare hadn't gotten around to writing them first. Or, it would be more accurate to say, they reveal the identity of the fellow they've been hiding in the closet all along until the embarrassingly present Shakespeare can be shuffled out of the house. It is an amusing game to watch. The Real Author, when finally revealed, proves to have been chosen, not necessarily for his ability to write the plays (although this is always presumed) but for a variety of other reasons, usually his social position. A partial list of the Real Authors suggested up to now includes Sir Francis Bacon, Viscount Verulam; Edward De Vere, the Seventeenth Earl of Oxford; William Stanley, the Sixth Earl of Derby; Roger Manners, the Fifth Earl of Rutland; Sir Walter Ralegh; Sir Edward Dyer; Christopher Marlowe; Sir Philip Sidney; John Donne; Mary Pembroke (Sidney's sister); Sir Anthony Sherley; Anne Whately (who probably never existed); Anne Hathaway; and Queen Elizabeth. All the candidates who have commanded extensive support are either noble or have connections with the nobility, a not surprising circumstance which will be discussed later. It should be added here that there are those, like Delia Bacon, who are afflicted with what has been called the "Corporation Syndrome," holding that such distinguished literature must be the work of a committee. Its members would include, in addition to Bacon and Oxford, Robert Greene, George Peele, Samuel Daniel, Thomas Nashe, Thomas Lodge, Michael Drayton, and Thomas Dekker.
Agreement among such a disparate group can be expected on only one matter, and there it is unanimous: a loathing of William Shakespeare of Stratford-on-Avon. Beyond that, unanimity ceases. Baconians hate Oxfordians even more than they hate Shakespeare, and the Oxfordians, if with an air of contemptuous superiority, return the irrationality in kind. Generally, at any given moment, the two anti-Stratfordian schools which lead the pack in popular superstition hate each other most. During the last thirty years or so the Baconians, after having led the field for almost a century, have given way to the Oxfordians, after Marlowe's stock rallied briefly, then plummeted. Since the pretensions of the latter are if possible even more ridiculous than those of the former, a new candidate is bound to emerge soon, perhaps Desdemona or Mistress Quickly.
Having decided who the Real Author is, the claimants proceed to "prove" their case. The methods have generally been two in number. The first, popularized in an enormous book by Ignatius Donnelly, The Great Cryptogram (1888), is to find a secret cipher in the works that reveals the fact of Shakespeare's non-authorship and the identity of the Real Author. Donnelly's book was persuasive, even though he coyly refused to reveal the cipher itself but only its message: that Francis Bacon wrote the works.
The most remarkable and surely the most pathetic of the cryptographers was a Mrs. C. F. Ashmead Windle, who published two pamphlets at her own expense, in 1881 and 1882. In Shakespeare's slightest word she sees more devils than vast Hell can hold. The title of every play suggests a jingle which is in itself suggestive. For Othello the jingle is:
The play is supposed to be Bacon's judgment of himself, "since Martius means 'March you us,' and refers to his service; Publius means Publish us,' and refers to his fame."
There is more, much more, of the same. I have not been able to learn whether Mrs. Ashmead Windle died in a private institution or a state hospital.
A contemporary successor, one Mr. J. R. Weagant of Eagle Rock, California, who still lives and should be available for interrogation, circulates cards and slips of paper "proving" that Bacon wrote the works. In one offering he quotes a passage from Antony and Cleopatra and surrounds it with extracted code letters, as follows:
S   She shall be buried by her Anthony         F
N   No grave upon the earth shall clip in it   C
A   payre so famous: High events as these      R
R   Strike upon those that made them:

    characteristic pattern of Bacon sig.
After carefully reflecting upon this revelation, which Mr. Weagant had graciously mailed to me, I wrote him and confessed that he had me completely baffled. Unhappily, in a development that was to become characteristic of my correspondence with anti-Stratfordians, Mr. Weagant declined to clarify his position and I have not heard from him since.
A numerous breed, most of the cryptographers have been Baconians. In 1957, William F. Friedman and his wife Elizebeth published an exhaustive survey and analysis of all the secret codes or ciphers that had been "found" in the works up to that time. The Friedmans brought unusual gifts to the study; he headed the United States cryptanalytic team that cracked the Japanese diplomatic cipher just before Pearl Harbor; his wife was chosen by the International Monetary Fund after World War II to establish its system of secret communications. Observing that any legitimate cipher must have a key that will unlock its secret not only for the anti-Stratfordian with a cause but for everyone else, they demonstrated with crushing finality that none of the ciphers or cryptograms or codes suggested up to that time had any validity whatever. The Friedmans, in one devastating display, employed the system used by one Baconian to prove that they themselves wrote the plays!
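The Friedmans' point can be illustrated with a small sketch of my own (not theirs): if a "cipher" lets the decoder pick letters wherever he pleases, with no fixed key, then almost any name can be extracted from almost any passage, so the extraction proves nothing. The sketch reuses the Antony and Cleopatra lines quoted above (lowercased, punctuation dropped); the candidate names are chosen only for the demonstration.

def find_hidden_message(text, message):
    """Greedily pick the letters of `message`, in order, anywhere in `text`.
    Returns the positions used, or None if the letters cannot all be found."""
    positions, start = [], 0
    for ch in message.lower():
        if not ch.isalpha():
            continue
        idx = text.find(ch, start)
        if idx == -1:
            return None
        positions.append(idx)
        start = idx + 1
    return positions

PASSAGE = """she shall be buried by her anthony
no grave upon the earth shall clip in it
a payre so famous high events as these
strike upon those that made them"""

# With selection rules this loose, the four lines already "contain" more than
# one rival claimant; only the outright absence of a letter (no x for Oxford)
# ever blocks a "discovery."
for claimant in ("bacon", "dyer", "oxford"):
    hits = find_hidden_message(PASSAGE, claimant)
    print(claimant, "->", "found at" if hits else "not found", hits or "")

A legitimate cipher, by contrast, comes with a key that yields the same message for every decoder, which is exactly the test the suggested Shakespearean cryptograms fail.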
The second method of "proving" authorship is even more engaging than the first. The cryptologists at least believed, even if they were deluding themselves, that they had found hidden messages in the plays. The Historical Reconstructionists are above such childish behavior. They simply disregard or dismiss with a lofty contempt all documentary evidence of Shakespearean authorship and go right to the task of finding another Elizabethan who might have written the plays during his lifetime. If the circumstances of the Real Author's life happen to correspond to the accepted dating of the plays so much the better. If they do not the doubter is not troubled. He merely changes the dates, a device freely employed by the Oxfordians ("Romeo and Juliet was first written in 1581-83"). No documentary evidence is given for the changes; the reader is expected to assume that in some scholarly work someone has already established the new date as authoritative, even though nothing of the kind has ever been done.
HISTORICAL RECONSTRUCTION is the device of Alden Brooks in his Will Shakespeare and the Dyer's Hand. Brooks devotes the first 400 pages of his book to demolishing poor "Will Shakespere" of Stratford. (Brooks, like other anti-Stratfordians, sees a great difference between "Shakespere," the Stratford boy, and "Shakespeare," the Real Author; spelling becomes of enormous significance). The Stratford fellow, Brooks concludes, "had nothing of the poetical spirit. . . . He had no literary merit whatever. . . . He was primarily a money-lender and businessman with eye always on the main chance. . . . Tavern frequenter, his swagger and pretense were immense; his morals, of a low standard. According to those whose ill-will he aroused, he was fool, knave, usurer, vulgar showman, illiterate bluffer, philanderer, pander, and brothel keeper."
To a man who can tell us so much about Shakespeare on no visible evidence, no flight of illogical fancy is impossible. "Will" was not an "utterly bad fellow," Brooks tells us. For he had a lasting association with the Real Author, whose achievement he conspired in and helped to conceal. The breathless reader turns to the second part of Brooks's volume and discovers that "Shakespeare" was really Sir Edward Dyer, known to most readers as a major courtier and minor poet. His dates (1543-1608) might seem to disqualify him in the eyes of ordinary people, who note that the earliest published work attributed to Shakespeare was Venus and Adonis in 1593, when Dyer was 50. But Brooks is equal to the challenge. Obviously, the fact that a work was not printed until 1593 or 1595 or 1600 does not mean it could not have been written earlier; with this proposition no student of elementary logic could disagree. Buttressed by its irrefutability, Brooks simply fits the plays and poems as we know them into the known facts of Dyer's life.
Brooks shares two other dark suspicions with most of his fellow doubters. One is that Shakespeare of Stratford was a partner in a gigantic conspiracy of concealment. His name was allowed to be associated with the plays because those in on the secret wanted desperately not to have the Real Author's identity known. The reasons for such modesty vary according to the Real Author's identity; but whether Bacon, Marlowe, Derby, or Oxford is the candidate, the champions of each manage to create hypothetical situations in which exposure would be dangerous. What none has yet presented is any documentary proof to support the assumptions.
The other is the Blue Blood theory, the view that only someone of noble birth and breeding could have written the works. Shakespeare was too base; no one so lowly in origin could possibly have conceived the soaring poetry of the plays and poems. This view is at the root of the position of most of the principal anti-Stratfordians; Bacon, Rutland, Oxford, Essex, Southampton, Dyer, and Cecil were all close to the throne; and even Marlowe, if the son of a shoemaker, at least owned a degree from Cambridge which was granted out of season at the special request of the Queen. Elizabeth herself and Mary Queen of Scots, who have also had champions, are of course royal and could therefore easily have written the plays. (A story which we hope, without too much confidence, is apocryphal is that when a supporter of Queen Elizabeth expounded his views, a listener objected. "What!" he exclaimed, "The Shakespeare plays written by a woman?" "You miss my point," said the first quietly. "Queen Elizabeth was really a man.")
Nowhere is the Blue Blood theory better observed than in one of the most amazing and amusing examples of misguided if laborious scholarship in the history of human folly. This is the work of Charlton Ogburn, Sr., and his wife Dorothy entitled This Star of England ( with a foreword by Charlton, Jr.). The Ogburns believe in the theory first advanced by a man with the unfortunate name of Looney (a Battey and a Feeley are also numbered among the anti-Stratfordians) that Edward DeVere, the Seventeenth Earl of Oxford (1550-1604) was the Real Author either in his own right or as primus inter pares of a syndicate. They also claim he was the father of the Earl of Southampton by a secretly legitimized union with Queen Elizabeth. Since the English are understandably reluctant to accept such a novel view of their Virgin Queen, most of the Ogburns' disciples are American. Their work is of great interest to toilers in many vineyards: to the philosopher for their easy substitution of the declarative sentence for logical argumentation; to the literary historian for their cavalier treatment of dates; to the general student for their superb indifference toward all documentary evidence that might dispute their own superstitions. ("Shakespeare was never referred to, while living, as a writer," we are told; so Francis Meres and Thomas Thorpe, Gabriel Harvey and Sir John Davies and all the rest, are categorically disposed of.)
But it is the psychologist who has most to learn from their labors. The Ogburns and their supporters are preeminent among the anti-Stratfordians in their belief in the congenital literary ability of those of good birth. Any earl would make a good writer; but DeVere was the Seventeenth Earl, the premier earl of the kingdom. The imagination reels at the blueness of his blood. When the Ogburns reflect upon Oxford's position they cannot restrain their adulation: "heir of the ancient and honorable family of DeVere which was second in eminence only to the monarch"; "the proud young Earl, sensitive, generous, impetuous, bred in a conception of honor as absolute as a religious code." Sir Philip Sidney, even though he had demonstrated some poetical ability by writing one of the world's great sonnet sequences, could not have composed the poetry of Shakespeare because, the Ogburns sniff, he was not knighted "until three years before his death."
It is melancholy to report what the reader will already have suspected: that neither the Ogburns nor any other Oxfordian presents any documentary evidence of any kind to link Oxford with the writing of the plays and poems. If for amusement we were to presume that the Stratford Shakespeare didn't write the works, we should still have to conclude that Oxford is among the least likely of the Real Authors.
But a 1200-page volume can be impressive. It convinced some professional writers like Gelett Burgess (who once wrote that he never hoped to see a purple cow but who must have seen even odder fauna in the anti-Stratfordian menagerie). It is not surprising that other innocents have been taken in. And of course it is well-known that many men distinguished for achievements other than those of scholarship or logic have lent their names to the anti-Stratfordian cause and have undoubtedly contributed to the swelling ranks of doubters: Coleridge, Emerson, Mark Twain, Palmerston, Henry James. But the fact is that none of these men devoted any time to a consideration of the evidence that leads directly to Stratfordian authorship and that none was accustomed to dealing with the common and sensible, if rigid procedures by which authorship is determined.
It is clear that anti-Stratfordianism is a symptom of deeper disturbance. The fact has been recognized for almost a century but never thoroughly explored. In 1884, W. C. Wyman noted that the work of Delia Bacon, William Henry Smith, and Judge Nathaniel Holmes was important "not so much for the light which they throw on the question of authorship, as for their interest as examples of wrongheadedness."
If it is true that Delia Bacon died in a mental institution and that Mrs. Ashmead Windle belonged in one, it is also true that most anti-Stratfordians have reconciled their problems with the necessities of daily living and have learned to walk abroad among us. But the symptoms of their distress are still visible. Students of abnormal psychology have neglected a rich mother-lode of basic research by ignoring the anti-Stratfordians, who exhibit clearly defined symptoms that mark the paranoid mind. Among these may be noted the following:
(1) A belief in conspiracy. Most anti-Stratfordians believe that there is a vast conspiracy of silence by the members of what they call the "Stratford Establishment." In May, 1956, twenty-two Oxfordians, including nine lawyers, took a half-page ad in The Shakespeare Newsletter to berate members of the Establishment for refusing to give their case a fair hearing. The fact is, of course, that their case has been heard, thoroughly explored, and found without merit.
The modern conspiracy is simply a counterpart of the earlier one in which the Stratfordian Shakespeare connived with Bacon (or Oxford, or Dyer, or Derby) to keep the Real Author unknown. The fact that the conspiracy was so beautifully managed that no records were left to betray it to later generations is additional proof of its existence.
(2) An extreme hatred of an imaginary enemy. The enemy, of course, is William Shakespeare of Stratford-on-Avon.
(3) The invention of new logical systems to provide desired answers that fail to be revealed by older and more widely accepted ones. Here the Ogburns and their crew are supreme examples. But we should not fail to acknowledge their debt to Ignatius Donnelly and his ciphers, nor forego a word of praise for Mrs. Ashmead Windle and Alden Brooks for their splendid gifts.
(4) Preternatural persuasiveness. When the reader finishes a hundred pages of Delia Bacon or Ignatius Donnelly or Alden Brooks or the Ogburns he is so bedazzled by the outpouring of verbal argumentation that he may forget that none of the logicians has offered any sensible refutation of the positive documentary evidence of Shakespearean authorship.
(5) Unconscious self-identification of the afflicted with the heroic or the divine. The Real Author must be a person of unusual distinction, royal or noble; and the one who unmasks him shares his distinction because of his sole possession of the knowledge, or of his membership in a small but distinguished coterie that shares the knowledge.
(6) The intense hatred of other heretics and their false gods. If the anti-Stratfordians hate Shakespeare they despise with a raging fury other anti's who don't share their candidate. Perhaps a quarter of the enormous nervous and emotional energy expended on the subject has been devoted to the attempted extermination of rival heresies.
(7) An inability to keep their aberrations within bounds. This is one of the principal marks of the anti-Stratfordian disorder. It is not sufficient for the Ogburns to advance the highly improbable thesis that the Earl of Oxford wrote the works; he must also become the lover of Queen Elizabeth, her secret husband, and the father of the Earl of Southampton. And if Oxford wrote Hamlet why couldn't he have written The Spanish Tragedy too? And so poor Thomas Kyd is denied the one play which history has granted him. An extreme exemplar of this particular aberration is Parker Woodward, whose illness began when he identified Bacon as the author of Shakespeare's works; but he was unable to stop, and before long he had added to Bacon's canon the complete works of Stephen Gosson, Thomas Watson, John Lyly, George Peele, Robert Greene, Christopher Marlowe, Edmund Spenser, Thomas Kyd, Thomas Nashe, Geoffrey Whitney, William Webbe, and Robert Burton.
In the Diagnostic and Statistical Manual: Mental Disorders (American Psychiatric Association, 1952) under the main heading "Disorders of Psychogenic Origin or without clearly defined physical cause or structural change in the brain," and the subheading "Psychotic disorders," there is listed "Paranoid reactions . . . (b) . . . Paranoia." The description is given as follows: "This type of psychotic disorder is extremely rare. It is characterized by an intricate, complex, and slowly developing paranoid system, often logically elaborated after a false interpretation of an actual occurrence. Frequently, the patient considers himself endowed with superior or unique ability. The paranoid system is particularly isolated from much of the normal stream of consciousness, without hallucinations and with relative intactness and preservation of the remainder of their personality in spite of a chronic and prolonged course." (Italics mine.)
When the paranoia is accompanied by a separation of the personality from its surroundings, schizophrenic paranoia results, usually requiring institutionalization, as in the cases of Delia Bacon and, probably, Mrs. Ashmead Windle. The paranoid anti-Stratfordian, with instinctive shrewdness, knows he will be carted off to the booby-hatch if he claims to be George Washington or the Angel Gabriel. But if he only claims to know who really wrote Hamlet and can support his assertions with endless verbiage that dazzles as it dulls, who can prove him wrong?
It must be admitted that the paranoid anti-Stratfordians have been fantastically successful. Poor Mrs. Ashmead Windle may have been demented, but her name will live as long as men study Shakespearean esoterica. She and her kind are the Jack Rubys of literary history. At least we can be grateful that minds that expend their substance on Shakespearean authorship are likely to be harmless as long as they don't stray into other pastures. Imagine the damage they might cause if they were involved in real life. Translated from the world of biography into the world of politics, they become the John Birchers of our day who, by the exercise of some mental process not easily comprehended but closely akin to that of the anti-Stratfordians, believe in the teeth of all the evidence that Dwight Eisenhower and Earl Warren are "conscious agents of the communist conspiracy."
I contend that the irrationalities of the anti-Stratfordians are as harmless as thunder, a loud noise upon the air frightening some but hurting none; I contend that the passionate bickering and the outpouring of verbal vitriol that characterizes their dialogue is a healthy medicine for them and a source of endless amusement for their readers. I confess that I strongly prefer anti-Stratfordian literature to detective stories. Among the anti-Stratfordians one is in pure fairyland, where escape from the pressing problems of real life is complete. In each work is a villain named William Shakespeare who is completely different from the William Shakespeares in other anti-Stratfordian works and is, like them, totally unreal. One meets strange heroes named Bacon or Oxford or Manners who resemble nothing so much as an imaginary ideal in the head of the writer and who often prove to be only thinly disguised versions of the author's view of himself or of his imagined ancestors. The very richness of their dementia is one of their principal charms, as is their fertility. For it is almost as certain as daybreak that before long a now unknown member of the tribe will burst into print with an ingenious theory not previously dreamed of but more ridiculous than any yet proposed. I, for one, will welcome him. He and his kind have provided me with uncountable hours of pleasure, and in simple gratitude I wish for them and their movement a continued long and happy life.
|
After Shakespeare's death his greatest rival, Ben Jonson, not only commented on his poetry (including a specific reference to Julius Caesar) but also acknowledged that Shakespeare was a friend whom he admired "this side idolatry."
(4) In the most remarkable listing of Elizabethan works recorded by a contemporary, Palladis Tamia (1598), Francis Meres, a young clergyman who came up to London in the mid-1590s, mentions Shakespeare by name no fewer than nine times and names him as the author of twelve plays, two poems, and some sonnets.
(5) In 1623 appeared the First Folio, the title page of which has already been given. In addition to that, two facts are of interest to us: (i) that in a commendatory poem Ben Jonson referred to the author as "Sweet Swan of Avon"; (ii) that the volume was edited and published by John Heminge and Henry Condell, who tell us in a preface that they undertook the labor "only to keep the memory of so worthy a friend and fellow alive as was our Shakespeare." Common sense would suggest that the Shakespeare of whom they wrote was the one who left them money to buy memorial rings. Again, all the known evidence points to the Stratford Shakespeare as the writer of Hamlet, Macbeth, Henry V, and the other plays and poems that have kept the world at the author's knees for almost four hundred years.
(6) Equally important, in view of the foregoing five arguments, is the fact that none of the plays or poems was attributed to anyone but Shakespeare, not only during his lifetime but for a century and a half after his death. No document of the period has been found which connects any other person directly with the plays or poems. All such claims have been thoroughly exploded, but in a brief paper of this kind it is not possible to consider them in detail.
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://www.rsc.org.uk/macbeth/about-the-play/the-scottish-play
|
The Curse of the Scottish Play | Macbeth | Royal Shakespeare ...
|
The Curse of the Scottish Play
The Scottish Play. The Bard’s Play. Macbeth is surrounded by superstition and fear of the ‘curse’ – uttering the play’s name aloud in a theatre causes bad luck. But where did this superstition come from?
Macbeth, Act 4 Scene 1
‘Double, double toil and trouble;
Fire burn, and cauldron bubble…’
Sixteenth-century Scotland was notorious for its witch-hunts, mainly due to King James VI of Scotland’s obsession with witchcraft. The violent death of his mother, Mary, Queen of Scots, by execution in 1587 was said to have inspired James’ dark fascination with magic.
Later, in 1589, when James was sailing back to Scotland from Denmark with his new wife, Anne, their ship encountered violent storms at sea, and they were nearly drowned. The Scottish King blamed the evil spells of witches for conjuring the storm, and following his return to Scotland ordered a witch-hunt in the coastal town of North Berwick. He later wrote Daemonologie, a treatise on witchcraft, to further inspire persecution against witches.
Witchcraft to please the king
James became King James I of England in 1603, and his new subjects were keen to appease him and his views on the demonic. Christopher Marlowe’s Doctor Faustus was published in 1604, and its shocking portrayal of witchcraft and association with the devil intensified England’s fear of sorcery.
Shakespeare’s Macbeth followed in 1606 with direct references to James’ earlier misfortune at sea: ‘Though his bark cannot be lost, Yet it shall be tempest-tost’. Shakespeare was also said to have researched the weird sisters in depth; their chants in Macbeth, and ingredients of fenny snake, eye of newt and toe of frog, are supposedly real spells.
Ken Wynne, Joan MacArthur and Edward Atienza as the three witches in Macbeth (1952), directed by John Gielgud.
Accidents, injuries and deaths - the curse of Macbeth
According to folklore, Macbeth was cursed from the beginning. A coven of witches objected to Shakespeare using real incantations, so they put a curse on the play.
Legend has it the play’s first performance (around 1606) was riddled with disaster. The actor playing Lady Macbeth died suddenly, so Shakespeare himself had to take on the part. Other rumoured mishaps include real daggers being used in place of stage props for the murder of King Duncan (resulting in the actor’s death).
The play hasn’t had much luck since. The famous Astor Place Riot in New York in 1849, caused by rivalry between American actor Edwin Forrest and English actor William Charles Macready, resulted in at least 20 deaths and over 100 injuries. Both Forrest and Macready were playing Macbeth in opposing productions at the time.
Other productions have been plagued with accidents, including actors falling off the stage, mysterious deaths, and even narrow misses by falling stage weights, as happened to Laurence Olivier at the Old Vic in 1937.
The cause of the curse
Macbeth was also seen as unlucky by theatre companies because scheduling it usually meant that the theatre was in financial trouble. Macbeth was (and still is) a popular play that was guaranteed an income, so if it was suddenly announced it could mean that the theatre was struggling. Equally, the high production costs of staging the play could bankrupt a theatre, as referenced in Martin Harrison’s 1998 book, The Language of Theatre.
Breaking the curse
So how can you avoid catastrophe if you utter the play that shall not be named? Exit the theatre, spin around three times, spit, curse and then knock on the theatre door to be allowed back in…
The curse at the RSC
The actor Diana Wynyard (pictured) fell off the stage in Stratford’s 1948 production during the sleepwalking scene, having decided to play it with her eyes closed. Apparently, the night before she had told a reporter that she thought the curse was ridiculous.
A few sources report that she ended up "plunging 15 feet into the pit when she walked off the stage in the sleepwalking scene." She was unhurt.
|
A coven of witches objected to Shakespeare using real incantations, so they put a curse on the play.
Legend has it the play’s first performance (around 1606) was riddled with disaster. The actor playing Lady Macbeth died suddenly, so Shakespeare himself had to take on the part. Other rumoured mishaps include real daggers being used in place of stage props for the murder of King Duncan (resulting in the actor’s death).
The play hasn’t had much luck since. The famous Astor Place Riot in New York in 1849, caused by rivalry between American actor Edwin Forrest and English actor William Charles Macready, resulted in at least 20 deaths and over 100 injuries. Both Forrest and Macready were playing Macbeth in opposing productions at the time.
Other productions have been plagued with accidents, including actors falling off the stage, mysterious deaths, and even narrow misses by falling stage weights, as happened to Laurence Olivier at the Old Vic in 1937.
The cause of the curse
Macbeth was also seen as unlucky by theatre companies because scheduling it usually meant that the theatre was in financial trouble. Macbeth was (and still is) a popular play that was guaranteed an income, so if it was suddenly announced it could mean that the theatre was struggling. Equally, the high production costs of staging the play could bankrupt a theatre, as referenced in Martin Harrison’s 1998 book, The Language of Theatre.
Breaking the curse
So how can you avoid catastrophe if you utter the play that shall not be named? Exit the theatre, spin around three times, spit, curse and then knock on the theatre door to be allowed back in…
The curse at the RSC
The actor Diana Wynyard (pictured) fell off the stage in Stratford’s 1948 production during the sleepwalking scene, having decided to play it with her eyes closed. Apparently, the night before she had told a reporter that she thought the curse was ridiculous.
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://study.com/learn/lesson/curse-macbeth-superstition-incidents.html
|
Macbeth Curse | Superstition, Incidents & Remedy - Video & Lesson ...
|
Macbeth Curse: Superstition, Incidents & Remedy
Kristin has taught English to children and adults for over two years. She has a Bachelor of Science in Biology from the University of Cincinnati. She also has a TEFL (Teaching English as a Foreign Language) Certificate and experience leading university-level classes in several subjects.
How do you reverse the Macbeth curse?
There are a few remedies that have been created by actors over the years. However, the most widely accepted one is to leave the theater, spin around 3 times, spit (typically over your shoulder), recite a line from another work of Shakespeare's, and knock to be let back in. Variations include uttering a bad word instead of a line from Shakespeare, or not saying anything at all after the movements.
Why is it unlucky to say Macbeth?
There have been numerous accidents, incidents, and mishaps attributed to the many productions of "Macbeth" over the years. These include deaths, injuries, fires, storms, and a few other inexplicable events. Because of this, it is considered unlucky to even say the name of the play, and the actors say the "M" word or a nickname for the play instead.
Macbeth, fully titled The Tragedie of Macbeth, is a play written by William Shakespeare and performed for the first time in the early 1600s, although the exact year is unknown. It is a commentary on the lust for power, especially in regards to the political environment in the time that the play was written.
The Curse of Macbeth
For people in theater, William Shakespeare's play ''Macbeth'' holds a long legend of curses and bad luck. From its opening night in 1611, many people have been superstitious of the play. Because of this, actors believe they should not say the name ''Macbeth'' in a theater unless they are rehearsing or performing the play. While we're still safe to talk about the play in classrooms, many people believe that mentioning ''Macbeth'' by name will lead to poor production, injuries, and just overall bad luck. This is the ''Curse of Macbeth.'' In the theater, people will only refer to ''Macbeth'' as the ''Scottish Play,'' ''that play,'' or ''the Glamis Comedy.''
Witchcraft and spells are used in Macbeth, which many people often believe is the cause of the Curse of Macbeth.
Though many people in the theater community believe that the curse is just a superstition, it does have a few suspected origins. The play is believed to contain real spells that are used by the three witches in the play. It is thought that either the use of the "real" spells was enough to curse the play or that the use of the spells angered the witches so much at the time that they decided to curse the play themselves. In the 1600s, representing witchcraft in a play was considered very taboo, which likely encouraged the idea of the curse in the first place.
It is common for theaters to have practices to protect themselves from The Curse of Macbeth during productions.
It would be unrealistic to expect theaters across the world to not perform one of Shakespeare's most popular plays, even if there is a curse attributed to the play. Because of this, many theaters have adopted some practices to counteract or overcome this curse and protect themselves.
What Is the M Word in Theater?
To some theater people, the curse of Macbeth does not just apply when there is a production of the play. The superstition actually applies to the name of the play itself, suggesting that saying the "M" word inside a theater can bring bad luck and utter disaster to any production. Because of this theory, the play has been given many nicknames, including the "Scottish Play," the "Bard's Play," "that play," and the "Glamis Comedy."
Macbeth is a tragedy written by William Shakespeare and first performed in the 1600s. Though it is a popular play that has been performed all around the world for four hundred years, it has also been credited with sparking a curse for theaters, known as "The Curse of Macbeth." The first performance of the play was a disaster from the beginning, as the young boy cast to play Lady Macbeth died on the opening night of the play, and Shakespeare had to step in to replace him. King James I, the inspiration for the play, hated the violence so much that he banned the play for over a century.
The Witches
William Shakespeare wrote ''Macbeth'' around the same time that King James I began to rule England and Scotland. When King James I was named the new king of England, Shakespeare wanted to secure his role in the court. Before King James, Shakespeare would perform his plays for Queen Elizabeth and he wanted to be sure that he could continue this role with King James.
While writing ''Macbeth,'' Shakespeare included many elements in the plot line that would have been interesting to King James I. One of these elements was the use of the supernatural.
When the play opens, Macbeth is greeted by the three witches who tell Macbeth that he will one day be king of Scotland. The witches make appearances throughout the play, continually casting spells to make prophecies. Those who believe in the ''Curse of Macbeth'' believe that Shakespeare included authentic spells from witches, and because the witches' spells are real, they're awakened when the play is performed. The witches aren't happy that their black magic was used, so they curse the play's performance.
A Long History
There's a long history of productions and people who were affected by the ''Curse of Macbeth.''
On its opening night, the young boy who was to play Lady Macbeth developed a fever and died suddenly. Shakespeare had to take over his role. History says that King James was not happy with the bloodshed in ''Macbeth'' so the play was not performed again in England until 1703, a century later. On the night of its first performance in a hundred years, England had one of its worst storms in history.
Although smaller incidents attributed to the curse continued, such as real daggers being used instead of fakes or even crowds attacking the actors, the next major episode occurred in 1849 at the Astor Place Opera House. A protest being held outside the Opera House escalated into a riot in which 23 people died and hundreds were injured.
In the 20th century, the Curse of Macbeth continued. During productions, sets fell down, fires broke out, an actress playing Lady Macbeth died suddenly, an actor playing Macbeth suddenly could not speak when on stage, actors were in car accidents on the way to the theater, an actress playing Lady Macbeth fell off the stage, actors were stabbed by real swords, and one proprietor and actor even had a heart attack.
In 1953, famous actor Charlton Heston was even a victim of the curse. On his opening night, the castle was to be set on fire as part of the production. A wind blew and the fire spread towards the audience. Heston suffered burns on his legs.
Another famous actor that suffered the curse is Alec Baldwin. During his production of the play, he accidentally injured the actor playing Macduff with his sword, cutting open his hand.
Instances of actors being injured or productions gone awry continue through today. As recently as 2013, actor Kenneth Branagh injured another actor in an opening fight scene. For all of these well-documented stories, there are many smaller theaters performing ''Macbeth'' that also claim to have been victims of the curse.
Is There a Curse?
Among actors and actresses, there is a strong belief that the ''Curse of Macbeth'' is real. For others, the curse is easily explained. ''Macbeth'' has a lot of bloodshed and sword fights. The play itself has many scenes set at night or under dark skies. Those who do not believe in the curse argue that the dark environment and the number of swords on the stage at once create conditions in which someone could easily be injured. Not just that, but the play is over 400 years old and has been performed almost countless times. With that many productions, there are bound to be injuries or just weird things that happen.
Be assured, there is a remedy for the curse. If someone says the name ''Macbeth'' in a theater, he or she should leave the theater, spin around three times, spit over his or her left shoulder, and recite a line from Shakespeare.
Lesson Summary
So the question must be asked again: Is ''Macbeth'' a cursed play? Many people believe so. The ''Curse of Macbeth'' involves the belief that if an actor says the name ''Macbeth'' while in the theater, the production will be cursed. Because of this, actors will only reference the play as the ''Scottish Play'' or ''that play.''
The history of the curse began on the night of its first performance. The young boy playing Lady Macbeth suddenly became ill and died. Since then, many productions and actors have claimed to be a victim of the curse. There are many stories of injuries, accidents, stage sets falling apart, people falling off stages, real daggers and swords being used, and even death.
For those who do not believe in the curse, the explanation is simple: the play is over 400 years old, so accidents will happen. But those who believe will not risk the curse falling on them. Instead, if an actor accidentally says ''Macbeth,'' he or she should immediately leave the theater, turn around three times, spit over his or her shoulder, and quote a line from Shakespeare.
|
Macbeth is a tragedy written by William Shakespeare and first performed in the 1600s. Though it is a popular play that has been performed all around the world for four hundred years, it has also been credited with sparking a curse for theaters, known as "The Curse of Macbeth." The first performance of the play was a disaster from the beginning, as the young boy cast to play Lady Macbeth died on the opening night of the play, and Shakespeare had to step in to replace him. King James I, the inspiration for the play, hated the violence so much that he banned the play for over a century.
The Witches
William Shakespeare wrote ''Macbeth'' around the same time that King James I began to rule England and Scotland. When King James I was named the new king of England, Shakespeare wanted to secure his role in the court. Before King James, Shakespeare would perform his plays for Queen Elizabeth and he wanted to be sure that he could continue this role with King James.
While writing ''Macbeth,'' Shakespeare included many elements in the plot line that would have been interesting to King James I. One of these elements was the use of the supernatural.
When the play opens, Macbeth is greeted by the three witches who tell Macbeth that he will one day be king of Scotland. The witches make appearances throughout the play, continually casting spells to make prophecies. Those who believe in the ''Curse of Macbeth'' believe that Shakespeare included authentic spells from witches, and because the witches' spells are real, they're awakened when the play is performed. The witches aren't happy that their black magic was used, so they curse the play's performance.
A Long History
There's a long history of productions and people who were affected by the ''Curse of Macbeth.''
On its opening night, the young boy who was to play Lady Macbeth developed a fever and died suddenly. Shakespeare had to take over his role. History says that King James was not happy with the bloodshed in ''Macbeth'' so the play was not performed again in England until 1703, a century later.
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://www.folger.edu/podcasts/shakespeare-unlimited/shakespeare-unlimited-episode-57/
|
Anecdotal Shakespeare | Folger Shakespeare Library
|
Anecdotal Shakespeare
Shakespeare Unlimited: Episode 57
The curses associated with the Scottish play. Using a real skull for the Yorick scene in Hamlet. Over the centuries, these and other fascinating theatrical anecdotes have attached themselves to the plays of William Shakespeare.
Many of these stories have been told and retold, over and over, century after century – with each new generation inserting the names of new actors into the story and telling the story as if it just occurred. “So, one night, David Garrick was backstage” becomes “So, one night, Edmund Kean was backstage,” which then becomes, “So, one night, Richard Burton was backstage.” And so on.
Our guest, Paul Menzer, is a professor and the director of the Shakespeare and Performance graduate program at Mary Baldwin College in Staunton, Virginia. His book Anecdotal Shakespeare: A New Performance History was published by Bloomsbury Arden Shakespeare in 2015. He was interviewed by Neva Grant.
Transcript
This podcast is called “Truths Would Be Tales Where Now Half-Tales Be Truths.” Theater exists to tell stories. And while this podcast is about theater, and it’s about stories, it’s not about the scripted drama onstage. Instead, it’s about the other stories. The ones about what happens when actors onstage go off script, what goes on backstage, and what theater people do after the show ends each night.
Paul Menzer of Mary Baldwin College in Staunton, Virginia, has written a delightful new book about the anecdotes that, over centuries, have attached themselves to the plays of William Shakespeare. What he’s found is kind of amazing. Many of these stories have been told and retold over and over, century after century, with each new generation inserting the names of new actors into the story and telling the story as if it just occurred. “So, one night, David Garrick was backstage” becomes “So, one night, Edmund Kean was backstage,” which then becomes “So, one night, Richard Burton was backstage,” and so on.
Paul’s book is titled Anecdotal Shakespeare: A New Performance History, and he came in to talk about it with Neva Grant.
NEVA GRANT: I think the best way to start this conversation, which is a conversation all about anecdotes, is with a story.
PAUL MENZER: I will start with the story that started it all, for me at least. An anecdote I’ve heard probably over 25 or 30 years, but I will just give you one version of it.
GRANT: Okay.
MENZER: In the 1950s, two actors named Robert Newton and Wilfrid Lawson were performing in Richard III at the Lyric Theatre in Hammersmith. And one Saturday, for a matinee, their agents came to town before the show, and the four men had a kind of pre-show lunch, during which they put a few bottles away.
And come show time, Robert Newton, who was playing Richard III, and so therefore has to open the show, walks onstage, or staggers onstage, rather, followed by a vapor trail of wine. And he approaches the edge of the stage and begins the famous opening lines to Richard III and he says, “Now is… Now is the winter… Of our discon…”
And before he can butcher another iamb, a voice, a woman’s voice, rings out from the audience and says, “You, sir, are drunk.”
And Robert Newton stares out over the footlights into the audience, comes down to the edge of the stage, and says, “Madam, if you think I’m drunk, just wait till you see the Duke of Buckingham.”
GRANT: But, as I understand it, you caught wind of this story by watching the Johnny Carson show.
MENZER: That’s right.
GRANT: And it wasn’t that actor at all, but an entirely different actor that this had happened to, someone that our audience probably knows better, which is the great British actor Peter O’Toole.
MENZER: That’s right.
GRANT: Same story.
MENZER: That’s right.
GRANT: So explain that a little bit.
MENZER: Well, I became engrossed by that story, and more generally by theater anecdotes, by watching the Johnny Carson show back when I was 10 or 11, and would stay up late when I shouldn’t have been watching that.
And I was absolutely enraptured by guys like Peter O’Toole, Richard Harris, Richard Burton—even Oliver Reed, who would come out and tell these, what were to me, just brilliant, hilarious, original theater stories, including the one that I just recounted. Though of course, when I heard it, at the age of, say, 10 or 11, it was about Peter O’Toole and Richard Harris.
GRANT: Right.
MENZER: Then, flash forward to just a few years ago, an actor I work with told me the version that I just told you about Robert Newton and Wilfrid Lawson, and he insisted upon its singularity.
And at that point it struck me that I had heard this story over and over again over the years with different actors slotted into the template of its narrative.
And indeed, I went and did some research and found, maybe over the last 250 years, a dozen different versions of that same story, with maybe a dozen different actors in it.
GRANT: Dating back how far?
MENZER: Well, you know, the earliest version I found is a, sort of proto version of it, say in the seven, I think 1767.
And it’s a story that David Garrick tells, that he was performing, not Richard III, but another history play, Henry VIII, and sent a note to the guy who’s playing the Bishop of Winchester, that it was show time. And the actor playing the Bishop of Winchester sent him back a note saying “the Bishop of Winchester is getting drunk at The Bear, and damn your eyes if he will appear tonight.”
Now that’s a somewhat different version of it, but then it shows up with a version with Edmund Kean in it. It shows up with a more obscure actor in the 18th century named Bailie Nicoll Jarvie. It shows up with Olivier. On and on and on, these actors continue to tell the same story over and over again.
And interesting, about Peter O’Toole. On the night Peter O’Toole died, not too terribly long ago, I think December 15, 2013, maybe, there was a production of Twelfth Night at the Delacorte Theater in New York. And Stephen Fry, a great writer, anecdotalist himself, and a great actor, was playing Malvolio in that production. And he came out onstage after the show, to offer a sort of ad hoc eulogy for Peter O’Toole, who had just passed away. And he memorialized O’Toole by telling a number of theater anecdotes about him, including one beginning, “One time when Peter O’Toole was playing Richard III…” and ending in a punch line that you have already heard.
GRANT: Right.
MENZER: Yeah.
GRANT: “If you think I’m drunk, you should see Lord Buckingham.”
MENZER: Exactly.
GRANT: So, I guess, as you’re beginning to piece this together, as you become a teacher and a scholar, you decide, “Well, wait. If this is just the one, there have to be more of these. There have to be more of these stories that have kind of worked their way through theater lore, over time, on up to the modern day.” Right?
MENZER: Correct, yeah. And I started to collect them. And what I found was that particular plays by Shakespeare, particularly the most popular, maybe even canonical or hypercanonical, plays by Shakespeare, each of them have one or two anecdotes that have followed it across the years. Dates change, names shift, but the story stays the same.
And I got very interested in thinking about, “Can we tell the history of Shakespeare in performance through the anecdotes that most durably attached themselves to those plays?” And then, “What is it about that particular anecdote that attaches itself to that particular play?”
Because I’ve become… I came to think that the attachment is not arbitrary, that perhaps this anecdote is ferreting out something that’s, sort of, burrowed down in the body of the play.
GRANT: It’s commenting on the play in a way.
MENZER: I think so, too. And so what I realized, as I began writing a book that I thought was a performance history, told through these anecdotes, what I’ve come to realize it was also… These anecdotes are a form of what I call “vernacular criticism” by the actors that appear in the plays. In other words, this is a form of literary criticism told through anecdotes by the actors who appear in the plays and notice something about the play, that the play can’t quite express itself, but that the anecdote does.
GRANT: I want to dive into the anecdotes really soon, but before we do, I just want to talk about a couple more theories about why these anecdotes might even exist. And the first one being people are just naturally curious about what happens behind the scenes in the theater, right?
MENZER: Absolutely. I mean nothing is more tantalizing than a closed curtain. You know, if you took a heat map vision from the air of the stage, you’d see there’s a lot more heat going on backstage, than often there is onstage.
And like the anecdote I just told, I opened with, it begins in the bar and moves to the stage, and so therefore reveals something about the off-stage life of these actors before they step onstage and become a character.
GRANT: And, of course, these actors in their off-stage lives, are often larger than life characters.
MENZER: And that is one thing that these anecdotes are retailing, are giving us. They are a form of celebrity gossip, of course, but they extend the actor from just a character into a legend. And what makes an actor a legend is often what goes on offstage, not just what goes on…
GRANT: Sure, sure. Then you have another really interesting theory about why these anecdotes evolved, and that is that theater, by its very nature, is repetitive and so, naturally, we want an anecdote like this. Something surprising, something unexpected, to kind of jazz it up.
MENZER: I mean, I think the reason that actors tell them is that for all of its reputation for “different every night,” liveness, ephemerality—theater is, as you say, a very repetitive endeavor.
I mean actors live literally prescripted lives. They have to live out that whole journey eight days a week—twice on Saturdays. And therefore, it is a sort of annihilatingly mundane, repetitious thing to do. And so I think what the anecdotes do, is introduce difference into repetition, offer the possibility that something else might happen tonight, than what’s in the script.
GRANT: Can you give me an example of that? Something that interrupts the action in an unexpected way, takes it off in an unusual direction.
MENZER: There is an oft-told anecdote about Hamlet, particularly at the moment where Hamlet asks Rosencrantz to “play upon this pipe” and Rosencrantz insists that he cannot. And this anecdote shows up with John Philip Kemble, the Keans, Booths, on and on and on.
And the anecdote runs that during a provincial performance, it’s always a provincial performance, when some star actor is touring the provinces and has taken on some supernumerary to play Rosencrantz and Guildenstern…
GRANT: An amateur.
MENZER: An amateur, right. This is a key feature of the anecdote.
Hamlet insists that the amateur actor “play upon this pipe,” and he does it with such vehemence, that the amateur actor finally says, “Well, okay, I will” and plays God Save the King or Lady Coventry’s Minuet, or something like that.
GRANT: Which totally stops things in its tracks.
MENZER: Totally stops things in its tracks, and I’m fascinated that in the earliest version of this anecdote that I found, the tune that the amateur actor plays is God Save the King.
GRANT: Right.
MENZER: Which is the last song that Hamlet wants to hear. [LAUGH]
GRANT: Right, right, and what a layered message there.
MENZER: Yeah.
GRANT: But it must have, I think, as you say in your book, it would have caused the audience to, you know…
MENZER: It would have caused this English audience to stand, which is what one does during God Save the King, and then replay exactly what Claudius has done just moments earlier, rise during a performance of The Mouse Trap, and walk out.
GRANT: And as you point out in the book, often these kind of anecdotes of surprise come up in the tragedies because the comedies leave a little more room for improvisation.
MENZER: That’s right. It was not my design, but I was surprised as I began this research that most of the materials that were coming up were from tragedies.
And I don’t think it’s an accident that the anecdotes gather around tragedies. And I think it has to do with those interruptions that, as a play is moving towards its tragic ends, there’s a desperate need to insert a wedge of unpredictability into the play, to prevent it from completing in the way that we all know it’s going to complete.
GRANT: You know one of the anecdotes that our audience probably knows the best is the story, or, I guess I should say the curse, that hangs over the play Macbeth.
I think this is the one that has sort of made it out into the popular culture. How did that evolve? What’s the story behind that?
MENZER: Gosh, it’s very interesting. This is where the project diverts a little bit from its template, in that with the curse of Macbeth, I ended up doing some debunking. Whereas most of the rest of the book is bunking. [LAUGH] Right? I wanted to sort of prolong and extend these anecdotes, but in the case of Macbeth, I went at it from on the other end, which was to ask “How did this particular anecdote evolve and endure?” Because, as you say, it probably is the best known anecdote about Shakespeare.
GRANT: But for those people who don’t know, let’s just explain, really briefly.
MENZER: Absolutely.
GRANT: This is a curse where if you are, if you are in the theater, you are not to say the name of that play inside the theater. You call it The Scottish Play.
MENZER: Or Mackers, or The M Play, or The Scot, yeah.
GRANT: Right. Because if you say the name Macbeth, what will happen?
MENZER: All sorts of things. All sorts of accidents will attend upon you. Sandbags will fall from the heavens, actors will fall through traps, people will break their legs, etcetera, etcetera, on and on.
GRANT: Whether they’re in that play or any other play, right?
MENZER: That’s right. But particularly, the production of Macbeth will become doomed if you say Macbeth inside the theater, other than, of course, with your scripted dialogue, which insists that you do.
And so, therefore, all sorts of counter rituals have been evolved to undo that curse. If you do say Macbeth in the theater, you can go outside, turn around three times, spit, and knock for readmittance. There’s another theory that if you say, if you recite Portia’s “quality of mercy” speech from The Merchant of Venice, that will undo the curse, etcetera, etcetera.
So it’s evolved this entire sort of folklore, fake lore, if you will, around the idea of the curse.
GRANT: Why did it happen?
MENZER: It’s very interesting. In my book, one thing that I discovered is that the idea of the curse… When people talk about the curse, they always refer to it as “the ancient curse of Macbeth,” and they date it back to one of its very first performances, in the early 17th century. I could not find in my research any mention of the curse until about the 1930s. But from the 1930s onward, we always refer to it as “an ancient curse,” even though it appears to be an early 20th century invention. In fact, in some of my research, I discovered that in the 18th century, the cursed play, the bad-luck play, was All’s Well That Ends Well, not Macbeth whatsoever.
GRANT: Yeah.
MENZER: But I think that I have a little bit of a half-baked, maybe even quarter-baked, theory about how the curse theory evolved. I think in some ways it’s a form of, you know, it’s a form of publicity that arose during a particular production of Macbeth in the 1930s, during a particular production where a lot of things were, in fact, going wrong.
GRANT: Yeah.
MENZER: And this idea of the curse emerged. But I think, you know, and this is where… this is why I think that anecdotes are a form of dramatic criticism.
Macbeth, after all, is a play about disillusionment. Ultimately, we find out near the end of the play that all the things that felt mysterious about the play, not “of woman born,” Dunsinane, the forest coming “to Dunsinane,” that, in fact, they’re quite banal.
GRANT: Yeah.
MENZER: Right? Macduff was not “of woman born”; he was the product of a caesarean birth. You know, it’s not a marching forest, it’s a bunch of soldiers with branches, right?
GRANT: Yeah.
MENZER: So the play disillusions us. It turns out the witches are not prophets, they’re historians, right?
GRANT: Right. The play strips away all that mystique.
MENZER: The play strips all that, punctures all the magic that we believed in. And I think the curse is a way of reinflating the play.
GRANT: So let’s move on to another anecdote, or really, just a series of anecdotes. Back to Hamlet and the skulls.
MENZER: The skulls.
GRANT: Yes, or the skull, or over time, the many skulls.
MENZER: The many skulls…
GRANT: That appear in the scene with Yorick in the graveyard.
MENZER: One of the most enduring anecdotes about Hamlet, productions of Hamlet, concerns the realness of the skull.
Very, very early on in the play’s history, there began to be criticism of actors using real live skulls, or real dead skulls, right? Rather than a prop.
And this story of the real skull in Hamlet endures, endures, endures. A recent example is a 2008 production of Hamlet at the Royal Shakespeare Company starring David Tennant, in which a story ultimately began to circulate that David Tennant was not using a fake skull, but using a real skull, for Yorick. And it was the skull of a pianist named Tchaikovsky, Andrei Tchaikovsky, who had bequeathed his skull to the RSC, to be used for productions of Hamlet in the 1980s. And other actors had rehearsed with the skull before, including Mark Rylance, but David Tennant was apparently the first actor to use this skull onstage.
Now, once news got out, it created kind of a stir and headlines and huge kerfuffle. And the director of Hamlet, who’s now the executive director, the artistic director of the RSC, Gregory Doran, said, “Well, we’ve replaced it with a fake skull.” Just to sort of quiet the hubbub.
So when the show moved to London, they had supposedly replaced the real skull with a fake one, except that, when the show finally closed, Doran revealed they actually had never replaced it. But, of course, the point here is that the audiences don’t know the difference.
GRANT: Right.
MENZER: You cannot tell the difference, as an audience member, between a real skull and a fake one. And for me, what that story, what that anecdote, rehearses is Hamlet’s preoccupation with the difference between seems and is.
He says, No, madam, I do not “seem” sad. I am sad. Right? I don’t seem melancholy or mourning for my father. I am melancholy and in mourning for my father.
But the very fact that he draws attention to the fact that mourning can be performed gets at his problem, of sincerity verses insincerity.
GRANT: So the authenticity of the skull on the stage, becomes a way of… Like almost like a footnote, a way of commenting about that very phenomenon.
MENZER: Beautifully put, yeah. It is a footnote. It’s a way for actors… that anecdote sort of becomes a way, it’s a kind of glow at the edge of the play that sort of expresses this kind of anecdotal unconscious that the play has. This concern over the realness of the prop that stands in for Yorick’s skull.
GRANT: But you know, what’s so interesting about this anecdote is that it sounds like unless the news gets out, that it is a real skull, as happened in that account you just gave, it’s really more for the actors than for the audience. Right?
MENZER: That’s right. That’s right, I mean when Mark Rylance, who I said rehearsed with the skull, ultimately rejected the idea of using it in a performance, he said that he couldn’t get past the idea that it was a real skull, and that it was meant to play Yorick. Which means that in some ways skulls can’t even play skulls onstage, right?
But it is for the actor, because as an audience member, the audience has to be told that something is real, to know that it is real. Right?
GRANT: Yeah.
MENZER: Otherwise, it’s just a prop.
GRANT: Right.
MENZER: Right? So again it’s a way of dilating over this problem in the play, between the real and its resemblance.
GRANT: And again, as you point out, this is not the first instance of a live, or of a real skull appearing in the play. That, too, dates almost all the way back to Shakespeare’s time, right?
MENZER: Yes, that’s exactly right. I mean there’s many, many instances of real skulls being used in performance. And there are many instances too, interestingly, of people bequeathing their own heads to play Yorick.
At the University of Pennsylvania, in their rare book room, they have the skull of a man named John Reed. Now, John Reed was a gaslighter, a lamplighter, at the Walnut Street Theatre in Philadelphia, which is the longest continuously operating American theater. And in his will, he very specifically bequeathed his skull to play Yorick, and so it did for many, many years.
You can now look at the skull in the rare book room, of all places. It’s strange that a skull would be in a rare book room, except that it makes for a good read, because when the skull comes to your table, the skull has writing on the top of it. And what is written on the top of it are the names of famous American and British luminary actors who performed Hamlet to this skull over the years. Charles Kean, Edwin Booth, Edwin Forrest, on and on. So these stars have sort of literally overwritten the skull that was meant to play Yorick.
But this goes on and on. There’s many instances of people bequeathing their skulls to play Yorick.
GRANT: And these are all such fascinating stories. I mean, how did you find all this information? Where did you do your research? What kind of sources did you find?
MENZER: Well there’s a number of places, because if you’re not worried about the facticity of them, the factualness of them, it doesn’t matter where you find them. So, I found them in actor memoirs, biographies of actors, letters from actors. The Folger Shakespeare Library has a huge cache of theatrical scrapbooks.
Now, these theatrical scrapbooks fall into several categories. Many theaters in the 18th and 19th century, in, I guess, a kind of early form of a clip service, seem to have employed somebody to go through the daily newspapers, which if you’re Drury Lane in London in the 18th century, probably means seven to eight different newspapers. And some functionary’s job was obviously to go through the newspapers and clip out everything about the theater on any given day.
GRANT: That’s not the worst job in the world.
MENZER: It’s not the worst job in the world. And so these scrapbooks that have been kept for decades, and decades, and decades for Drury Lane, the Theatre Royal, the Haymarket, etcetera, are just a fascinating compendium. Many of them theater reviews, of course, but they’re also sort of tidbits, gossips, greenroom chatter, that sort of thing about the lives of actors, and I found a lot of anecdotes there.
GRANT: As you just pointed out, you were not doing serious fact checking. What was the word you used? The “facticity.”
MENZER: Yes.
GRANT: Right. So, as you pointed out, you were not on a mission of determining whether all of these stories were factual. Because I think, for starters, that would have probably made your head explode. But I think, even more importantly, I mean that really wasn’t your purpose here to say, “Well, this one happened, and this one didn’t.”
MENZER: No, I was… I had no interest whatsoever in verifying the factualness or the facticity of an anecdote, which, in fact, would seem to violate the very idea of the anecdote. I mean fact checking an anecdote would have seemed beside the point.
GRANT: But at the same time as you pored over all these diaries and scrapbooks and things, you know, certain things must have become clear to you over time. Like, “Well, that one probably never happened. This one probably happened once and then was just embellished and inflated over time.” I mean you must’ve, kind of, even in your own mind, started to form categories of these stories.
MENZER: There is a spectrum of plausibility here. I mean, you know, I believe that people have bequeathed their skulls to play Yorick in Hamlet. I mean, I have held the skull of John Reed and seen Edwin Booth’s name written on the top of it, right? I mean I think that actually happened.
GRANT: Right.
MENZER: You know, one of the more coherent bodies of anecdotes that I explore have to do with Othello. Which, when Othello was played by, you know, as it was for hundreds of years, a white man in blackface, the anecdotes that attend upon Othello are all about the transfer of the blackface makeup from the actor playing Othello to the woman, or the young boy, playing Desdemona.
Now for me… Now, first of all I believe that that actually happened, right? I mean that is a cosmetic difficulty that attends upon blackface performance. But there are many, many anecdotes about it that do extrapolate upon it, and you’ll read anecdotes that “Well, by the end of the play, Desdemona was nearly as black as Othello.”
Which, of course, when you sort of start putting some pressure on that anecdote and sort of palpating a little bit, it’s pretty clear what’s going on there, right? This is an anecdote that is interested in racial mixture, miscegenation. The transfer of the makeup becomes a proxy way of talking about racial exchange.
GRANT: But it’s fascinating, because again it gets back to your point about anecdotes being a commentary on the play. What could be a more rich metaphorical image than that, right?
MENZER: You know, and that, that was one of the first body of anecdotes I started working on. And as I said, that is a very coherent, and I think pretty clear, example of a body of anecdotes that have endured over hundreds and hundreds of years, that are a way of talking about something that is thematically central to the play, but that is also a theatrical technical problem.
And in fact, what happens, too, is in some strange way, the blackface makeup becomes a way of keeping the bodies of Othello and Desdemona separate. There’s a famous anecdote that Ellen Terry tells about playing Desdemona, when she was alternating with Booth and Henry Irving, playing Othello and Iago. And Irving and Booth would alternate.
And she talks about when she played it with Irving, she says, “I was, by the end of show, I was nearly as black as he.” But, she says, when Edwin Booth played it, he would, as he put it, hold a piece of fabric or tapestry in his hand. So, as he says, “I shall never make you black,” you know, in a sort of decorous way. But the idea of a blackface actor saying to his Desdemona, “I shall never make you black,” is a way that the anecdote is retelling the play. It’s an interesting way.
GRANT: As long as we’re on the subject of veracity, can we cycle back to the drunken Richard III story, and what’s your sense about that? I mean, which of those many stories? I mean, it does kind of have an element of truth to it. You can certainly imagine it happening, right?
MENZER: Absolutely. And you know, tales of drunken actors range across the canon. You know, not restricted to Shakespeare or his tragedies, whatsoever. And so it’s certainly worth thinking about. Like, why do we want this to be true of actors? Like, why do we want the idea that the actors are getting drunk before the matinee, or you know, or during intermission?
Many, many anecdotes of actors being able to nip out during intermission to the pub next door, put down a couple of pints, and be back for the second act after intermission. And I think, sort of, one of the questions is, “Why do we want that to be true?” And I think probably it speaks to our awe at their sort of effortless mastery, of their ability to switch on and switch off out of their actorly persona, and into their character.
GRANT: But it also gets to this notion that if you go to a play knowing already that everybody’s going to die at the end, if it’s a tragedy, it’s nice to know that something might happen during intermission that, you know, makes things a little more unpredictable, right?
MENZER: That’s absolutely right. I mean, you know, 99 percent of the time, the play is going to go the way the play is designed to go. I mean, we rehearse very hard, we block things, we get things set, we have technical rehearsals, actually to make sure something doesn’t go wrong. And so, the idea that the actors are having a drink in intermission does again introduce that wild card.
GRANT: Are we still making anecdotes? Or at least embellishing and adding to the ones that already exist?
MENZER: Absolutely, I mean you know, the latest body of anecdotes that are beginning to, sort of, rise up out of the theater are, of course, anecdotes about cell phone usage, right? So a new form of interruption is now being, sort of, retailed and retold through anecdotes. And so yes, this is actually happening, but the anecdotes will begin to, or have already begun to, emerge of actors who answered the phone in character, etcetera, etcetera.
And you know, I mean, you know for a play like Richard III, which Kevin Spacey recently toured the world in and played Richard, quite famously. There’s a lot of anecdotes about Richard III, not just about drunken Richards, but about injuries caused by performing the hump, or performing the limp.
And so, when Kevin Spacey was touring with his Richard III, he would go on Oprah. He would go on Ellen, and tell these anecdotes about how famously, at the end of Spacey’s Richard III, he was hoist to the heavens by his ankles during his slaying at the end of the play. And he would go on these talk shows and tell this anecdote where the audience would gasp as he was hoist to the heavens by his ankles, but he said that was for him, finally, a sort of chiropractic opportunity to straighten his spine out, because he had been hunched over for two and a half hours.
And so this is a classic anecdote. At the moment when the audience think the actor is actually imperiled, that’s the moment where he’s finally relaxed, right? But that story of actors injuring themselves playing Richard III goes way, way back.
GRANT: But there, too. It makes total sense that that would happen. These poor guys who would have been, you know, doubled over.
MENZER: It’s got a ring of plausibility to it. Just enough, just enough. It’s not a myth; it’s not obviously not true.
GRANT: Right, because again, those anecdotes don’t attach themselves to Hamlet. They don’t attach themselves to Romeo and Juliet. They attach themselves to the play where it would be most plausible that that would happen, quite possible.
MENZER: That’s right. I mean, there’s sort of two things to say about that, right?
I mean, obviously, in some ways, the anecdotes that attach themselves to a particular play have to do with the opportunities that the play affords. There are skull anecdotes in Hamlet, because there are skulls in Hamlet. It would be surprising, although wonderful, to find skull anecdotes about A Midsummer Night’s Dream, but you don’t. There are stories about actors injuring themselves playing Richard because of the nature of the play.
At the same time, though, well, we have to kind of say, “Well, obviously these anecdotes arise because of the opportunities the play affords.” It is still the case that certain elements, certain qualities of the play, produce anecdotes. I mean, Hamlet, for instance, at one point in the play calls for his “tables”—his commonplace book that he wants to write something down in. There are no anecdotes, for instance, about an overly literal prop master who pushes a table out onstage or something. I mean, you can make up anecdotes about these plays, but, you know, they don’t seem to have endured. So I’m really interested in those anecdotes that endure.
GRANT: Well, thank you so much for a fascinating conversation.
MENZER: Oh, it has been my pleasure, and as you can probably tell, the material is marvelous, and I have literally hundreds and thousands of words of anecdotes still sitting in my laptop, waiting for some form of expression.
GRANT: It’s another book.
MENZER: It is another book.
WITMORE: Paul Menzer is a professor and the director of the Shakespeare in Performance graduate program at Mary Baldwin College in Staunton, Virginia. His book, Anecdotal Shakespeare: A New Performance History, was published by Bloomsbury Arden Shakespeare in 2015.
He was interviewed by Neva Grant. “Truths Would Be Tales Where Now Half-Tales Be Truths” was produced by Richard Paul. Garland Scott is the associate producer. It was edited by Gail Kern Paster and Esther Ferington. We had technical help from the news operations staff at NPR in Washington, DC.
Shakespeare Unlimited comes to you from the Folger Shakespeare Library. Home to the world’s largest Shakespeare collection, the Folger is dedicated to advancing knowledge and the arts. You can find more about the Folger at our website, folger.edu. For the Folger Shakespeare Library, I’m Folger Director Michael Witmore.
|
GRANT: Right. Because if you say the name Macbeth, what will happen?
MENZER: All sorts of things. All sorts of accidents will attend upon you. Sandbags will fall from the heavens, actors will fall through traps, people will break their legs, etcetera, etcetera, on and on.
GRANT: Whether they’re in that play or any other play, right?
MENZER: That’s right. But particularly, the production of Macbeth will become doomed if you say Macbeth inside the theater, other than, of course, with your scripted dialogue, which insists that you do.
And so, therefore, all sorts of counter rituals have been evolved to undo that curse. If you do say Macbeth in the theater, you can go outside, turn around three times, spit, and knock for readmittance. There’s another theory that if you say, if you recite Portia’s “quality of mercy” speech from The Merchant of Venice, that will undo the curse, etcetera, etcetera.
So it’s evolved this entire sort of folklore, fake lore, if you will, around the idea of the curse.
GRANT: Why did it happen?
MENZER: It’s very interesting. In my book, one thing that I discovered is that the idea of the curse… When people talk about the curse, they always refer to it as “the ancient curse of Macbeth,” and they date it back to one of its very first performances, in the early 17th century. I could not find in my research any mention of the curse until about the 1930s. But from the 1930s onward, we always refer to it as “an ancient curse,” even though it appears to be an early 20th century invention. In fact, in some of my research, I discovered that in the 18th century, the cursed play, the bad-luck play, was All’s Well That Ends Well, not Macbeth whatsoever.
GRANT: Yeah.
MENZER:
|
no
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://www.history.com/news/curses-king-tut-tippecanoe-origins
|
6 Famous Curses and Their Origins | HISTORY
|
6 Famous Curses and Their Origins
Throughout history, people have promoted stories of curses for a variety of reasons. To sports fans, curses can help explain their favorite team’s loss. When a cause of death is misunderstood, curses can provide an explanation. For an imperial nation, curses can betray anxiety about being punished for colonizing and taking artifacts. And sometimes, curses come about because someone just wanted to make up a story.
Here are some prominent curses in history.
1. King Tut’s Curse (and Other ‘Mummy’s Curses’)
The burial mask of Egyptian Pharaoh Tutankhamun.
In February 1923, a British archaeological team opened the tomb of Tutankhamun, or “King Tut,” an Egyptian pharaoh during the 14th century B.C. Two months later, when the team’s sponsor died from a bacterial infection, British newspapers claimed without evidence that he’d died because of “King Tut’s curse.” Whenever subsequent members of the team died, the media dredged up the alleged curse again.
King Tut’s curse and other famous “mummy’s curses” were invented by Europeans and Americans while their countries removed priceless artifacts from Egypt. After the Titanic sank in 1912, some newspapers even promoted a conspiracy theory that the ship had sunk because of a “mummy’s curse.”
Though it’s not clear how many people actually took these “curses” seriously, these stories became extremely popular subjects for horror movies like The Mummy (1932) and its many iterations, as well as comedies like Mummy’s Boys (1936) and Abbott and Costello Meet the Mummy (1955).
2. The Curse of the Polish King’s Tomb
Casimir IV Jagiellon.
In 1973, a group of archaeologists opened the tomb of the 15th-century Polish king Casimir IV Jagiellon in Kraków, Poland. As with the opening of King Tut’s tomb 50 years before, European media hyped up the event, and the researchers involved allegedly joked that they were risking a curse on the tomb by opening it.
When some of the team members began to die shortly after, some media outlets speculated it was due to a curse. Later, experts discovered traces of deadly fungi inside the tomb that can cause lung illnesses when breathed in. This was the cause of their deaths.
3. The Hope Diamond Curse
Evelyn Walsh McLean, one of the owners of the famous Hope diamond, c. 1915.
In the 1660s, the French gem dealer Jean-Baptiste Tavernier purchased a large diamond of unknown origin during a trip to India. Yet by the 20th century, a myth had sprung up in the United States and Europe that Tavernier had stolen the diamond from the statue of a Hindu goddess. The newspapers and jewelers who spread this story claimed the diamond was cursed and brought bad luck to those who owned it.
By 1839, the diamond supposedly ended up with Henry Philip Hope, a Dutch collector based in London and the source of the stone’s modern name—the Hope Diamond. Sometime after this, European and American newspapers began claiming that the Hope Diamond carried a curse.
The French jeweler Pierre Cartier reportedly used these stories to enhance the diamond’s value when he sold it to American heiress Evelyn Walsh McLean in the early 1910s. After she died, it went to a U.S. jewelry company, which exhibited it before donating it in 1958 to the Smithsonian Institution, where it remains today.
5. The Curse of Tippecanoe (or Tecumseh’s Curse)
The Battle of Tippecanoe, where General Harrison fought Tecumseh on Nov 7, 1811.
In the mid-20th century, U.S. media began to note a pattern in presidential deaths. Starting with William Henry Harrison and ending with John F. Kennedy, every 20 years the country elected a president who would die in office.
In the 1930s, Ripley’s Believe It or Not claimed the “pattern” was due to a curse Shawnee Chief Tecumseh placed on Harrison and future presidents after Harrison’s troops defeated Tecumseh’s at the Battle of Tippecanoe in 1811. (Tecumseh died two years later in another battle against Harrison’s troops.) This story likely originated with non-Native Americans and bears a similarity to other “curses” in U.S. books and movies about disturbing Native burial grounds.
6. The Curse of Macbeth
There are lots of superstitions in the world of theatre. It’s bad luck to wish actors good luck, hence the reason people instead tell them to “break a leg.” And it’s also bad luck to say the word “Macbeth” in the theatre except during a performance of the Shakespeare play. Supposedly, this is because tragedy has historically befallen productions of the play. In reality, these stories are a mix of fabrication and selective evidence-picking.
The legend about the play seems to have started with Max Beerbohm, a British cartoonist and critic born in the 1870s, nearly three centuries after Macbeth’s first performance. Beerbohm—possibly annoyed that Macbeth was such a popular play—made up a story that the first actor cast to play Lady Macbeth died right before the play’s opening night.
Since then, this story has become part of a myth that the play is cursed and has brought bad luck to those involved with it. Though there have been real accidents during runs of Macbeth over its more than 400-year history, these accidents gain more attention than accidents during other plays because of the supposed “curse.”
7. The Billy Goat Curse on the Chicago Cubs
A fan pushes a goat in a cart outside of Wrigley Field before the start of the 2017 home opener against the Los Angeles Dodgers on April 10, 2017 in Chicago, Illinois.
As with theatre, there are also a lot of superstitions in the world of sports. One of the most famous is the supposed “billy goat curse” on the Chicago Cubs.
In 1945, a tavern owner named William “Billy Goat” Sianis was reportedly prevented from bringing his pet goat, Murphy, into Chicago’s Wrigley Field to see the Cubs play the Detroit Tigers in the World Series. Supposedly, Sianis put a curse on the Cubs, saying they wouldn’t win this or any other World Series ever again.
Before this, the Cubs had won the World Series only twice, in 1907 and 1908. When they lost the World Series in 1945, the curse gained credence. In 2016, when the Cubs won the World Series for the first time in over a century, U.S. media promoted the idea that the curse was broken.
|
The legend about the play seems to have started with Max Beerbohm, a British cartoonist and critic born in the 1870s, nearly three centuries after Macbeth’s first performance. Beerbohm—possibly annoyed that Macbeth was such a popular play—made up a story that the first actor cast to play Lady Macbeth died right before the play’s opening night.
Since then, this story has become part of a myth that the play is cursed and has brought bad luck to those involved with it. Though there have been real accidents during runs of Macbeth over its more than 400-year history, these accidents gain more attention than accidents during other plays because of the supposed “curse.”
7. The Billy Goat Curse on the Chicago Cubs
A fan pushes a goat in a cart outside of Wrigley Field before the start of the 2017 home opener against the Los Angeles Dodgers on April 10, 2017 in Chicago, Illinois.
As with theatre, there are also a lot of superstitions in the world of sports. One of the most famous is the supposed “billy goat curse” on the Chicago Cubs.
In 1945, a tavern owner named William “Billy Goat” Sianis was reportedly prevented from bringing his pet goat, Murphy, into Chicago’s Wrigley Field to see the Cubs play the Detroit Tigers in the World Series. Supposedly, Sianis put a curse on the Cubs, saying they wouldn’t win this or any other World Series ever again.
Before this, the Cubs had won the World Series only twice, in 1907 and 1908. When they lost the World Series in 1945, the curse gained credence. In 2016, when the Cubs won the World Series for the first time in over a century, U.S. media promoted the idea that the curse was broken.
|
no
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://eastrockawaygull.com/4102/a-e/the-curse-of-macbeth/
|
The Curse of Macbeth – The East Rockaway Gull
|
The Curse of Macbeth
William Shakespeare (also known as The Bard) created a plethora of famous plays for our enjoyment. From Hamlet to Romeo and Juliet, you’ve all heard of one of these plays or their spinoffs. But one show that is not talked about as much is the Scottish tragedy known as Macbeth. I have recently read the play and I was very entertained by the plot and characters. And it got me thinking, why isn’t this play mentioned or performed more often? I have to place the blame on the Curse of Macbeth.
History:
It all started with the first performance of the show. Opening night, before the actors went on, the actor set to play Lady Macbeth died, leaving Shakespeare himself to take on the role. This wasn’t the only case of a person dying during a run of this show. In another early production (around the early 17th century), the actor playing King Duncan was killed live on stage when the fake dagger was replaced with a real one. Around 1950, another death occurred when Harold Norman, playing Macbeth, was killed during a reenactment of the final battle of “the Scottish play”.
This play was also the source of many violent audience riots, including the 1721 riot at Lincoln’s Inn Fields Theatre and the 1772 riot at Covent Garden. A more famous Macbeth riot is the 1849 riot in New York. A long-standing rivalry between fans of two actors turned violent at a showing of Macbeth at New York’s Astor Place Opera House. The riot left 22 dead and over a hundred injured.
Why is the show cursed?:
There is no real answer as to why the show is cursed. Many believe it is because Shakespeare used real spells as the witches’ dialogue. As a result, real witches cursed his show for stealing their spells to use for their persecutors’ entertainment. Other people believe that a show running for almost 500 years is bound to have a fair share of accidents. No one knows for sure but no one is taking any chances.
How to become un-cursed:
Due to all the accidents during this production, actors are not allowed to speak the name of this play, to avoid the risk of cursing their production. And not just Macbeth: any play, musical, or theater you’re visiting can get cursed just by saying its name. Which is why it is mostly known as “The Bard’s Play” or “The Scottish Play”. If you do utter its name, you must follow these steps to un-curse yourself:
Exit the theater
Spin 3 times
Spit over your left shoulder
Utter a Shakespeare line or utter a profanity
Follow these instructions and you and your show will be un-cursed. But to be on the safe side, just don’t mention it at all…unless a death scene gone wrong is what you want in your next show. Macbeth could sleep when he committed murder. Would you risk the same?
|
The Curse of Macbeth
William Shakespeare (also known as The Bard) created a plethora of famous plays for our enjoyment. From Hamlet to Romeo and Juliet, you’ve all heard of one of these plays or their spinoffs. But one show that is not talked about as much is the Scottish tragedy known as Macbeth. I have recently read the play and I was very entertained by the plot and characters. And it got me thinking, why isn’t this play mentioned or performed more often? I have to place the blame on the Curse of Macbeth.
History:
It all started with the first performance of the show. Opening night, before the actors went on, the actor set to play Lady Macbeth died, leaving Shakespeare himself to take on the role. This wasn’t the only case of a person dying during a run of this show. In another early production (around the early 17th century), the actor playing King Duncan was killed live on stage when the fake dagger was replaced with a real one. Around 1950, another death occurred when Harold Norman, playing Macbeth, was killed during a reenactment of the final battle of “the Scottish play”.
This play was also the source of many violent audience riots, including the 1721 riot at Lincoln’s Inn Fields Theatre and the 1772 riot at Covent Garden. A more famous Macbeth riot is the 1849 riot in New York. A long-standing rivalry between fans of two actors turned violent at a showing of Macbeth at New York’s Astor Place Opera House. The riot left 22 dead and over a hundred injured.
Why is the show cursed?:
There is no real answer as to why the show is cursed. Many believe it is because Shakespeare used real spells as the witches’ dialogue. As a result, real witches cursed his show for stealing their spells to use for their persecutors’ entertainment. Other people believe that a show running for almost 500 years is bound to have a fair share of accidents. No one knows for sure but no one is taking any chances.
How to become un-cursed:
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://www.ipl.org/essay/The-Curse-Of-Macbeth-In-William-Shakespeares-FK2B8NQMU5FV
|
The Curse Of Macbeth In William Shakespeare's Play | ipl.org
|
The Curse Of Macbeth In William Shakespeare's Play
The play Macbeth has been known worldwide as one of the most hapless plays in theatre. Macbeth is filled with tragedy, betrayal, and evil. The play has been cancelled multiple times because of the “Macbeth Curse”. The “curse” is believed to bring adversities throughout rehearsals and performances of Macbeth. William Shakespeare’s Macbeth is believed to be cursed in particular when exploring the origins of the play, the countless recorded tragedies connected to the play, and the presence of witchcraft. The “Macbeth Curse” refers to the mishaps that occur during the show. The belief goes that William Shakespeare included real black magic spells in the play, so any actors who take a role in the play risk having evil brought down on them. Since it is considered bad luck for an actor to say the play’s name, they refer to it as “The Scottish Play.” Macbeth also has multiple scenes of physical action, so eventually you would expect someone to get injured. Furthermore, the play was given the nickname “Curse of Macbeth” after several accidents. King James I was the King of England and had wanted Macbeth…
For example, the first performance, which occurred in 1606, resulted in the boy playing Lady Macbeth dying from a fever backstage. Another example involves a rivalry between two actors, Edwin Forrest and William Macready, which sparked a riot in which 31 people died. The riot took place in front of the theatre while Macready was performing in Macbeth. Lastly, the Stratford Festival began with a performance of Macbeth during which an old man broke both his legs in a parking lot when struck by a car, and the actor playing Lady Macbeth drove her car into a store. There are many more unexplained events attributed to the “Macbeth Curse” that have not been recorded.
In this essay, the author
Explains that macbeth is one of the most hapless plays in theatre. it is filled with tragedy, betrayal, and evil.
Explains that the "macbeth curse" refers to the mistakes that occur during the show. william shakespeare included substantive black magic spells in the play.
Explains that king james i was obsessed with witches and demonology, and wanted macbeth to entertain him. william shakespeare researched witchcraft to please the king.
Explains that the main storyline in macbeth was extreme ambition will have horrible consequences. the play was written in two monarchs, queen elizabeth i and king james i.
Explains that macbeth has faced endless numbers of deaths and accidents throughout the years of the plays production.
Explains that the singing coach for macbeth, bantcho banchevsky, fell from the top balcony of the metropolitan opera. police have ruled the death as an apparent suicide.
Explains that many actors have become superstitious about the play macbeth for having a myth. they will never say the name of the scottish play, instead referring to it as "the scottish play."
Explains that william shakespeare incorporated popular traditions and beliefs about witches and witchcraft in macbeth, which was written for king james i.
|
The Curse Of Macbeth In William Shakespeare's Play
The play Macbeth has been known worldwide as one of the most hapless plays in theatre. Macbeth is filled with tragedy, betrayal, and evil. The play has been cancelled multiple times because of the “Macbeth Curse”. The “curse” is believed to bring adversities throughout rehearsals and performances of Macbeth. William Shakespeare’s Macbeth is believed to be cursed in particular when exploring the origins of the play, the countless recorded tragedies connected to the play, and the presence of witchcraft. The “Macbeth Curse” refers to the mishaps that occur during the show. The belief goes that William Shakespeare included real black magic spells in the play, so any actors who take a role in the play risk having evil brought down on them. Since it is considered bad luck for an actor to say the play’s name, they refer to it as “The Scottish Play.” Macbeth also has multiple scenes of physical action, so eventually you would expect someone to get injured. Furthermore, the play was given the nickname “Curse of Macbeth” after several accidents. King James I was the King of England and had wanted Macbeth…
For example, the first performance, which occurred in 1606, resulted in the boy playing Lady Macbeth dying from a fever backstage. Another example involves a rivalry between two actors, Edwin Forrest and William Macready, which sparked a riot in which 31 people died. The riot took place in front of the theatre while Macready was performing in Macbeth. Lastly, the Stratford Festival began with a performance of Macbeth during which an old man broke both his legs in a parking lot when struck by a car, and the actor playing Lady Macbeth drove her car into a store. There are many more unexplained events attributed to the “Macbeth Curse” that have not been recorded.
In this essay, the author
Explains that macbeth is one of the most hapless plays in theatre. it is filled with tragedy,
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://skeptics.stackexchange.com/questions/4707/is-macbeth-cursed
|
history - Is Macbeth cursed? - Skeptics Stack Exchange
|
I recently read Macbeth and I looked up its history, and apparently there is such a thing as "The Curse of Macbeth". The story is as follows:
Shakespeare, in writing the play, included a lot of details about witchcraft and witches' methods, e.g. the excerpt below:
Round about the cauldron go;
In the poison'd entrails throw.
Toad, that under cold stone
Days and nights has thirty-one
Swelter'd venom sleeping got.
Boil thou first i' the charmed pot…
The story then goes on to say that the witches and sorcerers of the day were furious at the publicising of their deeds, so they cast a curse on the play.
The uncanny thing is, there has been an abnormal number of incidents involving this play throughout history:
1st performance, 1606 -- Shakespeare himself was forced to play Lady Macbeth when Hal Berridge, the boy designated to play the lady with a peculiar notion of hospitality, became inexplicably feverish and died. Moreover, the bloody play so displeased King James I that he banned it for five years.
Amsterdam, 1672 -- the actor playing Macbeth substituted a real dagger for the blunted stage one and with it killed Duncan in full view of the entranced audience.
London, 1703 -- on the day the production opened, England was hit with one of the most violent storms in its history.
1721 – during a performance, a nobleman who was watching the show from the stage decided to get up in the middle of a scene, walk across the stage, and talk to a friend. The actors, upset by this, drew their swords and drove the nobleman and his friends from the theatre. Unfortunately for them, the noblemen returned with the militia and burned the theatre down.
New York’s Astor Place, 1849 -- a riot broke out when a crowd of more than 10,000 New Yorkers gathered to protest the appearance of British actor William Charles Macready, who was engaged in a bitter public feud with an American actor, Edwin Forrest. The protest escalated into a riot, leading the militia to fire into the crowd. Twenty-three people were killed, 36 were wounded, and hundreds were injured.
April 9, 1865 -- Abraham Lincoln chose to take Macbeth with him on board the River Queen on the Potomac River. The president was reading passages, which happened to follow the scene in which Duncan is assassinated, aloud to a party of friends. Within a week, Lincoln himself was dead by a murderer's hand.
1882 -- on the closing night of one production, an actor named J. H. Barnes was engaged in a scene of swordplay with an actor named William Rignold when Barnes accidentally thrust his sword directly into Rignold's chest. Fortunately a doctor was in attendance, but the wound was supposedly rather serious.
1926 -- Sybil Thorndike was almost strangled by a burly actor.
Royal Court Theatre, London, 1928 -- during the first modern-dress production at the theatre, a large set fell down, injuring some members of the cast seriously, and a fire broke out in the dress circle.
3 Answers
I'll just go ahead and start with: no. Unfortunately, this can't really be backed up scientifically... but that's because it's not being put forth scientifically. It could theoretically be backed up statistically, by showing the proportion of Macbeth performances with issues compared to those without, and then by comparing that to the amount of other plays with issues to those without globally, but... that's... well, just too much data. And as far as I can tell (and as far as Skeptoid can tell), it doesn't exist. It would be a huge time investment to disprove something that there's no scientific evidence of in the first place.
The problem is that you're asking about... well, a curse. There's a big presumption here, and it's that curses exist. To prove something does or doesn't exist, one must be able to point at an observable phenomenon and attribute it to a natural mechanism. We have neither an observable phenomenon here nor a mechanism with which to understand it. It's the same problem with questions on God or other deities: we can have faith and believe, but there is no scientific, empirical study or experiment that can be done to prove or disprove their existence. Something like this is a proof by tautology: it is because it is.
Related is the assumption that witches of the time--in whatever form they did or didn't exist--could actually cast such a curse. Again, we have no way of proving this (or knowing that there were witches present, or that Shakespeare got his info from witches, or whatever story one wants to believe. Very few of these earlier tales can be cited, as Skeptoid explains above). There's a good question here on Skeptics about black magic that concludes it's not real, but again for this kind of claim, it's nearly impossible to "really" disprove it since one is disproving not-phenomena with more not-phenomena.
If this is really a curse, though, let's call it what it is: a curse of bad luck (both of which are pretty synonymous anyway). But luck is just a human construct that we create to try to find patterns and order between otherwise unrelated events. And in the end, that's what all of these are: unrelated events.
When a rumor goes on this long, as Lagerbaer said in the comments, it creates a pretty powerful self-fulfilling prophecy. People are going to look in one place for problems, ignore them elsewhere, and cry loudly when they arise. What really needs to be asked is, "Are these events, true or false, atypical?" We're talking about one of the most popular plays/stories in the world, one that's been performed for over 400 years now. How many thousands, hundreds of thousands of performances is that? Google "Macbeth performances" and find almost nine million hits. Of course they're not all unique or useful, but this is an exercise in massive scale. Let's say Macbeth is only performed once per year. Your source document has 27 instances noted, but even 27 terrible incidents out of 400 is just 6.75%. Is the margin for accidents in stage performances so drastically less than 6.75% worldwide that this deserves to be considered bad luck?
Furthermore, in looking for a curse, one finds oneself attaching mishaps to an event that would otherwise have no relevant place in that event's context (and are certainly in no way provably causal). England was hit with an extremely violent storm on the day production opened. So? What other billions of actions were taken that day, and why didn't they cause the storm? Lincoln was reading Macbeth a week before he was killed... and? That's not even talking about stage productions, that's implying any one of us could be stricken down for a 9th grade reading assignment.
We're looking for a pattern that's not there, that's all.
In more fun news, in searching for the curse of Macbeth, I first accidentally looked for the "curse of hamlet," and came upon this little gem, which seems appropriate here in name (given the asker's name), though certainly not in meaning (though if one does choose to believe in curses, it certainly helps in setting a very old precedent of sorts).
Note: This answer was submitted very early in the life of Skeptic.SE, before we had established our current community standards. If it was submitted today, it would likely be removed or downvoted for being a theoretical answer, rather than empirically based.
Here is a link to a Skeptoid episode that investigates this. As @Lagerbaer mentioned in a comment on the question, if you look into any play that's been performed as many times as Macbeth has over the last 400 years, you're bound to find a long list of accidents and problems.
The play is cursed by tradition, not by labor statistics. Not that I could find any labor statistics or peer-reviewed studies on the subject.
The curse in a nutshell...
Theater people are superstitious. There are lists of things that are prohibited when you are in a theater, things you must not do, otherwise the performance will go terribly wrong. For example, no actor would ever say the word Macbeth in a theater – it would bring certain disaster. Actors, instead, call it “The Scottish Play” and the title character “the Scottish Lord” in order to avoid pronouncing the word.
-Theatre Superstitions
Why are theatre people superstitious?
... on why it shouldn't surprise anyone that actors hold superstitious beliefs. In the best case, when you put on a show you choose to place yourself on the knife-edge between life and death. Yes, the death is metaphoric (nearly always), but that changes nothing. People who exist on a knife-edge are bound to pay close heed to anything they think will tip them one way or the other. Put that together with an innate organizing faculty and what results? A belief in gods. So actors follow a simple religion: Don't piss off the theater gods. (emphasis mine)
- Review of Supernatural on Stage: Ghosts and Superstitions of the Theatre.
Probably the best explanation...
Nonbelievers in the curse hold that aspects of the tragedy make it accident-prone. The chief culprits are dim lighting and stage combat, especially when performed with heavy, unwieldy broadswords. Also, since Macbeth is a popular and comparatively short play, it has frequently served as a late addition to a theater season if the company is struggling financially. Therefore, productions are under-rehearsed, resulting in on-stage calamity, and the curse gets blamed for an already-failing company’s subsequent closing.
-Why You Shouldn’t Say “Macbeth” in a Theatre
The Tragedy of Macbeth (commonly called Macbeth) is a play by William Shakespeare about a regicide and its aftermath. It is Shakespeare's shortest tragedy* and is believed to have been written sometime between 1603 and 1607. - Wikipedia
Life's but a walking shadow, a poor player
That struts and frets his hour upon the stage,
And then is heard no more. It is a tale
Told by an idiot, full of sound and fury,
Signifying nothing. - Macbeth, Act V, scene v.
*Tragedy: A dramatic form (structure) first defined in Aristotle's Poetics (c.335 BCE).
|
I recently read Macbeth and I looked up its history, and apparently there is such a thing as "The Curse of Macbeth". The story is as follows:
Shakespeare, in writing the play, included a lot of details about witchcraft and witches' methods, e.g. the excerpt below:
Round about the cauldron go;
In the poison'd entrails throw.
Toad, that under cold stone
Days and nights has thirty-one
Swelter'd venom sleeping got.
Boil thou first i' the charmed pot…
The story then goes on to say that the witches and sorcerers of the day were furious at the publicising of their deeds, so they cast a curse on the play.
The uncanny thing is, there has been an abnormal number of incidents involving this play throughout history:
1st performance, 1606 -- Shakespeare himself was forced to play Lady Macbeth when Hal Berridge, the boy designated to play the lady with a peculiar notion of hospitality, became inexplicably feverish and died. Moreover, the bloody play so displeased King James I that he banned it for five years.
Amsterdam, 1672 -- the actor playing Macbeth substituted a real dagger for the blunted stage one and with it killed Duncan in full view of the entranced audience.
London, 1703 -- on the day the production opened, England was hit with one of the most violent storms in its history.
1721 – during a performance, a nobleman who was watching the show from the stage decided to get up in the middle of a scene, walk across the stage, and talk to a friend. The actors, upset by this, drew their swords and drove the nobleman and his friends from the theatre. Unfortunately for them, the noblemen returned with the militia and burned the theatre down.
New York’s Astor Place, 1849 -- a riot broke out when a crowd of more than 10,000 New Yorkers gathered to protest the appearance of British actor William Charles Macready, who was engaged in a bitter public feud with an American actor, Edwin Forrest.
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
yes_statement
|
"shakespeare"'s "macbeth" was "cursed" from its first "performance".. the "curse" of "macbeth" began with its first "performance".
|
https://www.mentalfloss.com/article/83857/riot-caused-performance-macbeth
|
The Riot Caused By A Performance of 'Macbeth' | Mental Floss
|
The Riot Caused By A Performance of Macbeth
Is Macbeth really Shakespeare's most cursed play? (Even more cursed than All's Well That Ends Well?) Perhaps.
Terrible occurrences have dogged performances of Macbeth ever since its premiere, when, according to theatrical legend, either an actor was killed onstage during a sword fight, or else a young boy playing Lady Macbeth died in an accident behind the scenes. (The cause of the curse, some say, is that Shakespeare supposedly included real witches' spells in the play's script.) But perhaps the darkest incident in the play's long and murky history occurred on May 10, 1849, when a bitter rivalry between two competing Shakespearean actors sparked a devastating riot in downtown Manhattan.
The two actors in question were England's William Macready and America's Edwin Forrest. Both men were at the height of their game at the time, and having twice toured the other's country, had established names for themselves on both sides of the Atlantic. But while Macready represented the styles and traditions of Great Britain and classical British theater, Forrest, 13 years his junior, represented a fresh and exciting new wave of homegrown performers, born and bred in a recently independent America.
Each actor had ultimately amassed an ardent and bitterly opposed following: Macready appealed to wealthy, upper-class Anglophile audiences, while Forrest was idolized by the pro-American working classes as a symbol of anti-authority and, barely two generations after the Revolutionary War, anti-British sentiment.
According to the 1849 account of the riot, owners of theaters rivaling the ones where Macready was set to play decided to book Forrest, who was billed as the "American Tragedian," with the result that Macready's tour was unsuccessful. Whether deliberate or not, when news broke of Forrest's ploy back in Britain, it did not go down well with British theatregoers, and during Forrest's second tour of England the roles were reversed: This time, Forrest's performances failed to attract large audiences, and were widely savaged by the critics.
Forrest openly blamed Macready for manipulating both the press and the people against him, and accused Macready's followers of arranging a widespread boycott of his tour among British high society. Seeking revenge, Forrest attended a performance of Hamlet in Edinburgh with Macready in the title role, and loudly jeered and hissed throughout. (Forrest would later claim that he hissed in protest of a "fancy dance" that Macready acted, and said "as to the pitiful charge of professional jealousy preferred against me, I dismiss it with the contempt it merits.") The feud was officially on.
In early 1849, Macready was on a third tour of America and arranged a performance of Macbeth at New York's Astor Opera House. The theater was one of the city's most opulent and a popular hangout for 19th-century New York's emerging upper classes, but at the premiere performance on March 7, far from attracting a high-society audience, the entire upper tier of the theater had been bought out by a huge number of Forrest's fanatical working class supporters.
As Macready entered the stage to deliver his first lines, he was resoundingly booed. Jeers of "three groans for the English bulldog!" and "huzza for native talent!" echoed down from the gallery; during the previous scene just minutes earlier, according to reports, the same crowd had wildly cheered the entrance of Macduff, the character who eventually kills Macbeth.
As Macready waited for the commotion to subside, the stage was pelted with eggs, bottles, and rotten fruit and vegetables, and with little alternative, the performance was brought to a humiliatingly premature end.
True to form, across town Forrest was meanwhile staging his own simultaneous rival performance of Macbeth, to a packed house of his own supporters; at the line, âWhat rhubarb, what senna, or what purgative drug / Would scour these English hence?â his audience erupted into cheers and applause.
For Macready, enough was enough. He vowed to cancel his tour and leave the country for good. Only the coercion of New Yorkâs keenest theatergoers and literary giants (as well as a petition, signed by the likes of Washington Irving and Herman Melville) succeeded in changing his mind, and a second premiere performance was arranged for three days later.
The delay gave the authorities time to prepare: Everyone from the staff at the Astor Opera House (who barricaded the theaterâs windows) to the cityâs Whig mayor (who, sensing a shortfall in police numbers, arranged for a 350-man militia to be stationed in nearby Washington Square Park) expected more trouble. But not even they could have expected just how bad Macready and Forrestâs feud was to becomeâjust as the authorities had time to prepare, so did Forrestâs increasingly impassioned followers.
Ahead of the May 10 performance, flyers denouncing Macready and his Anglophile supportersââShall Americans or English rule this city?â one readâwere widely circulated across the city, and by the day of the performance had succeeded in amassing a huge crowd of both disgruntled working class New Yorkers, and newly arrived Irish immigrants, resentful of Great Britainâs failure to act on the famine they had endured back home.
As Macready took to the stage at the Astor on the night of the show, a huge crowd gathered outside, determined to storm the theater, but were beaten back by scores of police. The protest became increasingly heated, and as the fighting continued, a company of soldiers joined the fray. But when the rioters began pelting them with rocks and bottles, the militia were given the order to use their rifles. At least 23 people, and perhaps as many as 31, were shot dead, several of whom were innocent bystanders.
"As one window after another cracked, the pieces of bricks and paving stones rattled in on the terraces and lobbies, the confusion increased, till the Opera House resembled a fortress besieged by an invading army rather than a place meant for the peaceful amusement of civilized community."
– The New York Tribune
The Astor Place Riot, as it became known, stunned the city. The authorities' violent response to the situation, and the steady realization of how a seemingly lighthearted rivalry had been allowed to spiral so far out of control, sparked considerable introspection and debate.
In the aftermath, the Astor Opera House suffered financially and eventually closed its doors, later becoming the New York Mercantile Library before being demolished in 1891. Forrest's reputation, too, was damaged, though not destroyed altogether: he continued to perform (amassing a considerable fortune in the process) ahead of his sudden death in 1872 at the age of just 66. He left much of his great wealth to philanthropic causes, including a home for retired actors he had founded in his native Philadelphia.
Macready, meanwhile, had escaped the Astor by the back door (having reportedly somehow finished the performance) and made it safely back to his hotel. Amid talk that the rioters would seek to track him down there and attack him, he fled the city for Boston, from where he took the first ship back to England. He never returned to America again.
|
The Riot Caused By A Performance of Macbeth
Is Macbeth really Shakespeare's most cursed play? (Even more cursed than All's Well That Ends Well?) Perhaps.
Terrible occurrences have dogged performances of Macbeth ever since its premiere, when, according to theatrical legend, either an actor was killed onstage during a sword fight, or else a young boy playing Lady Macbeth died in an accident behind the scenes. (The cause of the curse, some say, is that Shakespeare supposedly included real witches' spells in the play's script.) But perhaps the darkest incident in the play's long and murky history occurred on May 10, 1849, when a bitter rivalry between two competing Shakespearean actors sparked a devastating riot in downtown Manhattan.
The two actors in question were England's William Macready and America's Edwin Forrest. Both men were at the height of their game at the time, and having twice toured the other's country, had established names for themselves on both sides of the Atlantic. But while Macready represented the styles and traditions of Great Britain and classical British theater, Forrest, 13 years his junior, represented a fresh and exciting new wave of homegrown performers, born and bred in a recently independent America.
Each actor had ultimately amassed an ardent and bitterly opposed following: Macready appealed to wealthy, upper-class Anglophile audiences while Forrest was idolized by the pro-American working classes as a symbol of anti-authority and, barely two generations after the Revolutionary War, anti-British sentiment.
According to the 1849 account of the riot, the owners of theaters rivaling those at which Macready was set to play decided to book Forrest, who was billed as the "American Tragedian," with the result that Macready's tour was unsuccessful.
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
no_statement
|
"shakespeare"'s "macbeth" was not "cursed" from its first "performance".. the "curse" of "macbeth" did not start with its first "performance".
|
https://www.austinchronicle.com/arts/2000-10-13/the-curse-of-the-play/
|
The Curse of the Play - Arts - The Austin Chronicle
|
The Curse of the Play
Robert Faires updated this story in November of 2018. Read that story for the most recent developments in the saga of this cursed play.
The lore surrounding Macbeth and its supernatural power begins with the play's creation in 1606. According to some, Shakespeare wrote the tragedy to ingratiate himself to King James I, who had succeeded Elizabeth I only a few years before. In addition to setting the play on James' home turf, Scotland, Will chose to give a nod to one of the monarch's pet subjects, demonology (James had written a book on the subject that became a popular tool for identifying witches in the 17th century). Shakespeare incorporated a trio of spell-casting women into the drama and gave them a set of spooky incantations to recite. Alas, the story goes that the spells Will included in Macbeth were lifted from an authentic black-magic ritual and that their public display did not please the folks for whom these incantations were sacred. Therefore, they retaliated with a curse on the show and all its productions.
Those doing the cursing must have gotten an advance copy of the script or caught a rehearsal because legend has it that the play's infamous ill luck set in with its very first performance. John Aubrey, who supposedly knew some of the men who performed with Shakespeare in those days, has left us with the report that a boy named Hal Berridge was to play Lady Macbeth at the play's opening on August 7, 1606. Unfortunately, he was stricken with a sudden fever and died. It fell to the playwright himself to step into the role.
It's been suggested that James was not that thrilled with the play, as it was not performed much in the century after. Whether or not that's the case, when it was performed, the results were often calamitous. In a performance in Amsterdam in 1672, the actor in the title role is said to have used a real dagger for the scene in which he murders Duncan and done the deed for real. The play was revived in London in 1703, and on the day the production opened, England was hit with one of the most violent storms in its history.
As time wore on, the catastrophes associated with the play just kept piling up like Macbeth's victims. At a performance of the play in 1721, a nobleman who was watching the show from the stage decided to get up in the middle of a scene, walk across the stage, and talk to a friend. The actors, upset by this, drew their swords and drove the nobleman and his friends from the theatre. Unfortunately for them, the noblemen returned with the militia and burned the theatre down. In 1775, Sarah Siddons took on the role of Lady Macbeth and was nearly ravaged by a disapproving audience. It was Macbeth that was being performed inside the Astor Place Opera House the night of May 10, 1849, when a crowd of more than 10,000 New Yorkers gathered to protest the appearance of British actor William Charles Macready. (He was engaged in a bitter public feud with an American actor, Edwin Forrest.) The protest escalated into a riot, leading the militia to fire into the crowd. Twenty-three people were killed, 36 were wounded, and hundreds were injured. And it was Macbeth that Abraham Lincoln chose to take with him on board the River Queen on the Potomac River on the afternoon of April 9, 1865. The president was reading passages aloud to a party of friends, passages which happened to follow the scene in which Duncan is assassinated. Within a week, Lincoln himself was dead by a murderer's hand.
In the last 135 years, the curse seems to have confined its mayhem to theatre people engaged in productions of the play.
In 1882, on the closing night of one production, an actor named J. H. Barnes was engaged in a scene of swordplay with an actor named William Rignold when Barnes accidentally thrust his sword directly into Rignold's chest. Fortunately a doctor was in attendance, but the wound was supposedly rather serious.
In 1926, Sybil Thorndike was almost strangled by an actor.
During the first modern-dress production at the Royal Court Theatre in London in 1928, a large set fell down, injuring some members of the cast seriously, and a fire broke out in the dress circle.
In the early Thirties, theatrical grande dame Lilian Baylis took on the role of Lady Macbeth but died on the day of final dress rehearsal. Her portrait was hung in the theatre and some time later, when another production of the play was having its opening, the portrait fell from the wall.
In 1934, actor Malcolm Keen turned mute onstage, and his replacement, Alistair Sim, like Hal Berridge before him, developed a high fever and had to be hospitalized.
In 1936, when Orson Welles produced his "voodoo Macbeth," set in 19th-century Haiti, his cast included some African drummers and a genuine witch doctor who were not happy when critic Percy Hammond blasted the show. It is rumored that they placed a curse on him. Hammond died within a couple of weeks.
In 1937, a 30-year-old Laurence Olivier was rehearsing the play at the Old Vic when a 25-pound stage weight crashed down from the flies, missing him by inches. In addition, the director and the actress playing Lady Macduff were involved in a car accident on the way to the theatre, and the proprietor of the theatre died of a heart attack during the dress rehearsal.
In 1942, a production headed by John Gielgud suffered three deaths in the cast -- the actor playing Duncan and two of the actresses playing the Weird Sisters -- and the suicide of the costume and set designer.
In 1947, actor Harold Norman was stabbed in the swordfight that ends the play and died as a result of his wounds. His ghost is said to haunt the Coliseum Theatre in Oldham, where the fatal blow was struck. Supposedly, his spirit appears on Thursdays, the day he was killed.
In 1948, Diana Wynyard was playing Lady Macbeth at Stratford and decided to play the sleepwalking scene with her eyes closed; on opening night, before a full audience, she walked right off the stage, falling 15 feet. Amazingly, she picked herself up and finished the show.
In 1953, Charlton Heston starred in an open-air production in Bermuda. On opening night, when the soldiers storming Macbeth's castle were to burn it to the ground onstage, the wind blew the smoke and flames into the audience, which ran away. Heston himself suffered severe burns in his groin and leg area from tights that were accidentally soaked in kerosene.
In 1955, Olivier was starring in the title role in a pioneering production at Stratford and during the big fight with Macduff almost blinded fellow actor Keith Michell.
In a production in St. Paul, Minnesota, the actor playing Macbeth dropped dead of heart failure during the first scene of Act III.
In 1988, the Broadway production starring Glenda Jackson and Christopher Plummer is supposed to have gone through three directors, five Macduffs, six cast changes, six stage managers, two set designers, two lighting designers, 26 bouts of flu, torn ligaments, and groin injuries. (The numbers vary in some reports.)
In 1998, in the Off-Broadway production starring Alec Baldwin and Angela Bassett, Baldwin somehow sliced open the hand of his Macduff.
Add to these the long list of actors, from Lionel Barrymore in the 1920s to Kelsey Grammer just this year, who have attempted the play only to be savaged by critics as merciless as the Scottish lord himself.
To many theatre people, the curse extends beyond productions of the play itself. Simply saying the name of the play in a theatre invites disaster. (You're free to say it all you want outside theatres; the curse doesn't apply.) The traditional way around this is to refer to the play by one of its many nicknames: "the Scottish Play," "the Scottish Tragedy," "the Scottish Business," "the Comedy of Glamis," "the Unmentionable," or just "That Play." If you do happen to speak the unspeakable title while in a theatre, you are supposed to take immediate action to dispel the curse lest it bring ruin on whatever production is up or about to go up. The most familiar way, as seen in the Ronald Harwood play and film The Dresser, is for the person who spoke the offending word to leave the room, turn around three times to the right, spit on the ground or over each shoulder, then knock on the door of the room and ask for permission to re-enter it. Variations involve leaving the theatre completely to perform the ritual and saying the foulest word you can think of before knocking and asking for permission to re-enter. Some say you can also banish the evils brought on by the curse simply by yelling a stream of obscenities or mumbling the phrase "Thrice around the circle bound, Evil sink into the ground." Or you can turn to Will himself for assistance and cleanse the air with a quotation from Hamlet:
"Angels and Ministers of Grace defend us!
Be thou a spirit of health or goblin damn'd,
Bring with thee airs from heaven or blasts from hell,
Be thy intents wicked or charitable,
Thou comest in such a questionable shape that I will speak to thee."
Neither director of the current Austin productions has encountered the Macbeth curse personally, although Guy Roberts says that he did "produce a very bad version of the play when I was the artistic director of the Mermaid Theatre Company in New York. But in that case I think we were only cursed by our own inability." Marshall Maresca says that when he was in the 1998 production of Julius Caesar at the Vortex, "Mick D'arcy and I would taunt the curse, call it on. Before the show, everyone would shake hands, say, 'Good show' or 'Break a leg' or the like. Mick and I would look right at each other and just say, 'Macbeth.'"
For additional reference on the Macbeth curse, see Richard Huggett's Supernatural on Stage: Ghosts and Superstitions in the Theatre (NY, Taplinger, 1975).
More of the Story
Dueling Macbeths on the stages of Austin prove once again this Shakespearean killer's undying popularity.
|
The Curse of the Play
Robert Faires updated this story in November of 2018. Read that story for the most recent developments in the saga of this cursed play.
The lore surrounding Macbeth and its supernatural power begins with the play's creation in 1606. According to some, Shakespeare wrote the tragedy to ingratiate himself to King James I, who had succeeded Elizabeth I only a few years before. In addition to setting the play on James' home turf, Scotland, Will chose to give a nod to one of the monarch's pet subjects, demonology (James had written a book on the subject that became a popular tool for identifying witches in the 17th century). Shakespeare incorporated a trio of spell-casting women into the drama and gave them a set of spooky incantations to recite. Alas, the story goes that the spells Will included in Macbeth were lifted from an authentic black-magic ritual and that their public display did not please the folks for whom these incantations were sacred. Therefore, they retaliated with a curse on the show and all its productions.
Those doing the cursing must have gotten an advance copy of the script or caught a rehearsal because legend has it that the play's infamous ill luck set in with its very first performance. John Aubrey, who supposedly knew some of the men who performed with Shakespeare in those days, has left us with the report that a boy named Hal Berridge was to play Lady Macbeth at the play's opening on August 7, 1606. Unfortunately, he was stricken with a sudden fever and died. It fell to the playwright himself to step into the role.
It's been suggested that James was not that thrilled with the play, as it was not performed much in the century after. Whether or not that's the case, when it was performed, the results were often calamitous. In a performance in Amsterdam in 1672, the actor in the title role is said to have used a real dagger for the scene in which he murders Duncan and done the deed for real.
|
yes
|
Theater
|
Was Shakespeare's "Macbeth" cursed from its first performance?
|
no_statement
|
"shakespeare"'s "macbeth" was not "cursed" from its first "performance".. the "curse" of "macbeth" did not start with its first "performance".
|
https://performingarts.nd.edu/news-announcements/something-wicked-this-way-comes-a-look-at-the-curse-of-macbeth/
|
Something Wicked This Way Comes––A Look at the Curse of ...
|
Something Wicked This Way Comes––A Look at the Curse of “Macbeth”
By Connor Reilly '20 | English and Classics Major, October 23, 2019
[About a 4 MIN read]
Warning: DPAC may be inviting a horrific curse upon itself this Halloween.
On October 30–31, Shakespeare at Notre Dame presents a 2-man production of Shakespeare’s scariest play, Macbeth, at the DeBartolo Performing Arts Center. The play has a long reputation as being cursed, and many actors refuse to even pronounce the word “Macbeth” in a theater for fear of bringing down supernatural effects on themselves, calling it instead “The Scottish Play” or “Maccers.” (For an example, see the unimpeachable source of The Simpsons, with special guest Ian McKellen.)
The origin of the curse supposedly comes from Shakespeare including actual dark magic rituals in the witches’ lines in the play. In revenge for spilling their secrets, a coven of witches cursed the play. According to the Royal Shakespeare Company, “Legend has it the play’s first performance (around 1606) was riddled with disaster. The actor playing Lady Macbeth died suddenly, so Shakespeare himself had to take on the part. Other rumored mishaps include real daggers being used in place of stage props for the murder of King Duncan (resulting in the actor’s death).” (This interestingly implies that Duncan’s death was initially portrayed on stage, while the script leaves it out.)
Many misfortunes, injuries, and even deaths have been reported surrounding productions of the play ever since. In the Astor Place Riot in New York in 1849, a dispute between two actors playing Macbeth in rival productions inflamed anti-British tensions at a performance that left at least 22 people dead. In one remarkable staging starring Sir Ian McKellen and Dame Judi Dench, a priest sat in the theater every night with a crucifix to protect the actors from the evil forces conjured in the show.
Because of this, many actors avoid saying the name of the play in a theater. I asked a few Shakespeareans at Notre Dame about their thoughts on the superstition. Grant Mudge, the Ryan Producing Artistic Director of the Notre Dame Shakespeare Festival, isn’t taking any chances. “I will err on the side of “The Scottish Play” because it’s fun, and I enjoy the maintenance of the tradition. It’s a fun insider thing for theater people––other people are confused and you get to talk about how Shakespeare maybe put black magic incantations in the play.”
The origin of the curse supposedly comes from Shakespeare’s including actual dark magic rituals in the witches’ lines in the play.
Mary Elsa Henrichs, the Executive Producer of the student-run Not-So-Royal Shakespeare Company, may not believe in the curse of the name, but she avoids it as well. “My personal opinion is that if you end up working in the theater, best practice is to avoid saying the name, not because saying the name will curse the production, but because it could seriously upset a fellow actor who believes in the curse. It’s generally best to avoid upsetting an actor prior to a performance.” However, she does believe in the power of the play. “Do I believe there’s darkness in the play Macbeth, itself? Absolutely. Shakespeare researched real witchcraft to put in his production. To me, it’s a bit terrifying to play around with that stuff.”
For my own part, I believe in the curse of the play. I was the light board operator for the Not-So-Royal Shakespeare Company’s production of Macbeth in Fall 2017, and the play seemed to live up to its reputation. Although Washington Hall’s Lab Theatre, where we perform, is notoriously haunted, the ghost there seemed to kick it up a notch for the play, as though energized by the play’s magic.
One time, as I discussed with the director whether we would need to readjust some of the lights, the light above me swung down in the way I had just described. Another time, coiled wire unraveled in front of the director just as he discussed wanting to move the coil. Finally, and most spectacularly, a lightbulb exploded on opening night during the banquet scene, at which Banquo’s ghost appears to torment Macbeth. It all sounds silly, but I swear that something was up around this play that wasn’t quite normal. I avoid saying “Macbeth” in a theater unless absolutely necessary. We can only hope that this week’s production ends well for Shakespeare at Notre Dame, the Center, and the incredible actors who will be in the show.
Related:
MACBETH
This extraordinary staging features the entire play performed by two actors, Troels Hagen Findsen and Paul O’Mahony, in a dynamic contemporary production that reveals fresh new layers in the timeless story. Highlighting Shakespeare’s themes of manipulation, guilt, and power with boundless energy and surprising wit, Macbeth is both enormously entertaining and chillingly relevant.
|
According to the Royal Shakespeare Company, “Legend has it the play’s first performance (around 1606) was riddled with disaster. The actor playing Lady Macbeth died suddenly, so Shakespeare himself had to take on the part. Other rumored mishaps include real daggers being used in place of stage props for the murder of King Duncan (resulting in the actor’s death).” (This interestingly implies that Duncan’s death was initially portrayed on stage, while the script leaves it out.)
Many misfortunes, injuries, and even deaths have been reported surrounding productions of the play ever since. In the Astor Place Riot in New York in 1849, a dispute between two actors playing Macbeth in rival productions inflamed anti-British tensions at a performance that left at least 22 people dead. In one remarkable staging starring Sir Ian McKellen and Dame Judi Dench, a priest sat in the theater every night with a crucifix to protect the actors from the evil forces conjured in the show.
Because of this, many actors avoid saying the name of the play in a theater. I asked a few Shakespeareans at Notre Dame about their thoughts on the superstition. Grant Mudge, the Ryan Producing Artistic Director of the Notre Dame Shakespeare Festival, isn’t taking any chances. “I will err on the side of “The Scottish Play” because it’s fun, and I enjoy the maintenance of the tradition. It’s a fun insider thing for theater people––other people are confused and you get to talk about how Shakespeare maybe put black magic incantations in the play.”
The origin of the curse supposedly comes from Shakespeare’s including actual dark magic rituals in the witches’ lines in the play.
Mary Elsa Henrichs, the Executive Producer of the student-run Not-So-Royal Shakespeare Company, may not believe in the curse of the name, but she avoids it as well. “My personal opinion is that if you end up working in the theater, best practice is to avoid saying the name, not because saying the name will curse the production, but because it could seriously upset a fellow actor who believes in the curse. It’
|
yes
|
Probabilistics
|
Was it possible for Bumblebees to fly according to the laws of aerodynamics?
|
yes_statement
|
"bumblebees" were able to "fly" "according" to the "laws" of "aerodynamics".. "bumblebees" were capable of "flying" based on the principles of "aerodynamics".
|
http://www.todayifoundout.com/index.php/2013/08/bumblebee-flight-does-not-violate-the-laws-of-physics/
|
Bumblebee Flight Does Not Violate the Laws of Physics
|
Bumblebee Flight Does Not Violate the Laws of Physics
There’s an oft repeated “fact” that the humble bumblebee defies all known laws of physics every time it flaps its tiny little bee wings and ascends to the sky. Now obviously this is false, since, well, bumblebees fly all the time and if every time a bee took off it was tearing physics apart, we’d probably realize that was the case when two thirds of our population disappeared after being pulled into tiny, bee-shaped black holes. And, certainly if this was the case, every physicist dreaming of a Nobel Prize would be devoting all their time to breaking the code of bumblebee flight in order to disprove some bit of our understanding of physics. That being said, if you work out the math behind the flight of the bumblebee, you’ll find that it actually shouldn’t be able to fly… so long as you don’t take into account all the relevant factors, which seems to be how this myth got started. Basically, if you calculate it all assuming bumblebees fly like airplanes, then sure, the bumblebee shouldn’t be able to fly. But, of course, bumblebees don’t fly like airplanes.
So where and when did this myth start? The often repeated story goes that many years ago an engineer and a biologist were having dinner and a few drinks, after the topic of conversation turned to each person’s respective field. The biologist asked the engineer to work out how a bee flew- scientists partied wicked hard back in those days. The engineer, keen to show off his skills, quickly jotted down a few calculations and came to the conclusion, that holy crap, a bee shouldn’t be able to fly.
Today, the story is fully ingrained in pop culture and many sites and people, without looking into the matter, repeat it as fact, even though one wonders how such a drunken mathematician had the pertinent numbers on hand to perform such calculations on the spot… Hell, the Dreamworks Animation film, Bee Movie, with a budget of $150 million apparently couldn’t spare a few bucks to consult a physicist on the matter, and opened with a variation of the “bees shouldn’t be able to fly” myth on a title card, and that’s a film aimed at children, in 2007! Man, we really should be investing more money in schools or at least more factually accurate bee-based movies.
As to the origin, it’s always possible, albeit somewhat unlikely, that a drunken scientist did indeed make a “back of an envelope (in some versions it’s a napkin) calculation” that proved bees shouldn’t be able to fly. An origin theory with a tad more documented evidence behind it pins it on a French book published in 1934, Le vol des insectes, which makes passing reference to the fact that simple calculations yield a result that suggests insects, not just bumblebees, shouldn’t be able to fly. Some say it was German physicist Ludwig Prandtl who was responsible for popularising and spreading the myth amongst his peers, whereas others claim that the original calculations were made by one Jacob Ackeret, a Swiss gas dynamicist.
In the aforementioned earliest known reference to such an idea, Le vol des insectes, Antoine Magnan, the author, claims the calculations, in regards to insects disobeying the laws of physics, were made by his friend and assistant, André Sainte-Laguë. Of course, the author should have been skeptical on the accuracy of his friend’s calculations and assumptions given that many insects can fly, but here we are. So while we can’t be sure he was truly the first, the first known calculations on the subject were made by Sainte-Laguë, though this fact doesn’t necessarily mean that another physicist didn’t do similar calculations during a drunken argument, which is good because we like that part of the story. What isn’t known is how the fact first eked into the public consciousness, and it’s likely we’ll never find out due to it being so long ago.
As for the calculations themselves, scientists, engineers and entomologists have gone to great lengths to discredit them, as the original calculations failed to take into account a number of facts about the bee. Most pertinent of these is that bumblebees don’t fly like a plane and they don’t have stiff, rigid wings. With that in mind, the original calculations, which were based mostly on the surface area of the bee’s wings and its weight, aren’t really applicable, since they neglect several factors that need to be taken into account for an accurate calculation. For example, “the effect of dynamic stall“, which would take too long to explain in this article, which is already creeping up on “too long”. So I’ll just briefly say that “Aerodynamic bodies subjected to pitching motions or oscillations exhibit a stalling behavior different from that observed when the flow over a wing at a fixed angle of attack separates” and then refer you to the following if you’re interested in reading up on the subject, which is actually pretty surprisingly interesting; although I was technically being paid to read it, so perhaps that coloured my view on it: Dynamic Stall
The reality is that bees and comparable insects fly in an incredibly complex way that utilises, get this, mini hurricanes! We’ll link all this stuff at the bottom in the references if you’re interesting in the nitty gritty physics, but in lay terms, bees fly by rotating their wings, which creates pockets of low air pressure, which in turn create small eddies above the bee’s wing which lift it into the air and, thus, grant it the ability to fly.
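To see concretely why the airplane-style arithmetic fails, here is a minimal back-of-the-envelope sketch in Python. Every input is an assumed, illustrative order-of-magnitude figure for a bumblebee (mass, wing area, cruising speed, wingbeat frequency, stroke amplitude, wing length, lift coefficients) rather than a measurement from the studies mentioned in this article; the point is the comparison, not the exact numbers.

# Back-of-the-envelope comparison: fixed-wing estimate vs. crude quasi-steady flapping estimate.
# All parameters are assumed, illustrative bumblebee values, not measured data.
import math

RHO = 1.2   # air density near sea level [kg/m^3]
G = 9.81    # gravitational acceleration [m/s^2]

mass = 0.5e-3        # assumed body mass [kg] (~0.5 g)
wing_area = 1.0e-4   # assumed total wing area [m^2] (~1 cm^2)
weight = mass * G    # force the wings must support [N]

# 1) "Airplane" estimate: steady lift L = 0.5 * rho * v^2 * S * C_L at cruising speed.
forward_speed = 4.0  # assumed forward flight speed [m/s]
cl_steady = 1.0      # typical maximum steady-flow lift coefficient for a thin wing
lift_fixed_wing = 0.5 * RHO * forward_speed**2 * wing_area * cl_steady

# 2) Flapping estimate: the airspeed the wing actually sees is set by the wingbeat,
#    and the leading-edge vortex roughly doubles the achievable lift coefficient.
freq = 180.0                           # assumed wingbeat frequency [Hz]
stroke_amplitude = math.radians(120)   # assumed stroke amplitude [rad]
wing_length = 13e-3                    # assumed wing length [m]
tip_speed = 2 * stroke_amplitude * freq * wing_length  # mean wingtip speed [m/s]
u_ref = 0.7 * tip_speed                # reference speed at ~70% of the span
cl_unsteady = 2.0                      # enhanced lift coefficient with a leading-edge vortex
lift_flapping = 0.5 * RHO * u_ref**2 * wing_area * cl_unsteady

print(f"weight to support  : {weight * 1e3:.2f} mN")
print(f"fixed-wing estimate: {lift_fixed_wing * 1e3:.2f} mN ({lift_fixed_wing / weight:.0%} of weight)")
print(f"flapping estimate  : {lift_flapping * 1e3:.2f} mN ({lift_flapping / weight:.0%} of weight)")

With these rough inputs the fixed-wing estimate covers only about a fifth of the weight, while the flapping estimate comfortably covers all of it. Treating the wings as rigid airfoils in a steady stream undercounts both the airspeed the wings actually see and the lift coefficient they can reach once unsteady effects such as the leading-edge vortex kick in, which is roughly where the original calculation went wrong.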
To find this out, scientists have conducted a variety of tests using bees, the most awesome one being by Chinese scientist Lijang Zeng and his team, who devised a system of lasers and tiny mirrors glued to bees back in 2001. This experiment was deemed superior to previous tests, as it didn’t need to use tethered bees (which fly differently) and because it contained lasers, which is of course super cool. We’re fairly certain that a laboratory full of Asian scientists firing tiny laser beams at bees covered in shiny body armour is going to be the next big Syfy channel hit, so remember that you heard about it here first.
In fact, the way bees and other comparable creatures fly is so efficient and causes so little drag, that research into the subject has been backed by various militaries in an attempt to mimic this method of flight with our own tiny insect-like robots, which is just a recipe for another Syfy hit.
So, around 80 years ago a scientist or mathematician of some sort made a rough, mistake-filled calculation that claimed bees couldn’t fly. Fast forward almost a century and scientists today are still trying to erase that mistake from the public consciousness with increasingly complex experiments to prove the simple fact that bumblebees can, in fact, fly, and that this doesn’t violate any of our understanding of the laws of physics. The fact that they even had to bother doing this when they could have simply pointed out of the nearest window, with their palms firmly planted on their foreheads, at bees flying around, perhaps says a lot about the gullibility of our species. In the end, as I make my living off dispelling such myths, I’m not complaining. 😉
Even more over, it was according to human laws of physics that bees couldn’t fly. This goes to show, who are we, as a species, to decide what is and isn’t possible. We’re such a high and mighty society that we actually believe that if we can’t explain everything about something it must be a higher power. Here’s an idea, maybe higher powers were created to explain what we couldn’t and provide some way of governing society. Either way you look at it you can tell that not everything needs to be explained in order to be possible. I’ll take a line from good old G.I. Joe and say knowing is half the battle, the other half is knowing we might just be wrong.
Well, it was according to what they knew about aerodynamics at the time, and as the article points out, they were approaching the study from the perspective of fixed-wing aircraft. The laws of physics are the same for all living things.
I agree not everything needs to be explained, but understanding how things work leads to applications that can enhance our lives. It is the entire basis of our current technological civilization.
I think you misunderstand what people use this for. People aren’t arguing that the laws of nature are wrong so pointing out that bee’s do in fact fly wouldn’t defeat their argument. David is much closer. The idea is that scientists make claims all the time, but people who use this logic often use it because they want people to get that our understanding of reality is not accurate. As good as our science is, there are still lots of things we can’t explain and don’t understand completely if at all. People who use this bee fact are usually trying to break peoples infatuation with believing anything that comes out of a scientists mouth. Sometimes it is also used to explain pseudosciences.
@dangthing Indeed! Just today (12 March 2016) I overheard a grandmother using this to explain [her] god”; she was telling her granddaughter at the park that bees only fly through the ‘the miracle power of God!'”
I really wish that it would have been socially appropriate to share this with her. Thank you for taking the time to write it!
FACT: Scientists do not know everything, and that is a good thing. Keeps us learning and scientists employed. The real message of the story or myth is that there is a danger in over applying any thought, theory, or philosophy. Many people make science into a religion rather than a discipline. The story of bumblebees illustrated that the law of aerodynamics was (and perhaps still is) incomplete. People are warned against over applying this story in the same way we must be warned about over applying science. Science has demonstrated value and many who know this tend not to recognize the demonstrated valued in questioning science. It is in questioning and challenging that science is moved forward. Those who defend science too strongly perform a dis-service to science and validate the nut jobs who deny science. There is a healthy balance, a middle ground, that recognizes the value of things understood and not understood. Proven science is true until it is dis-proven. This has happens many times in post and modern science. Don’t be a science, religious, political, or any kind of fanatic.
Isn’t it amazing how many things science tries to explain. Many times it is to disprove God’s existence. So for decades people have discussed how bumblebees fly, the origin of the universe. Some things we are not meant to understand. I love when I hear that something considered fact by Science is disproved. For example: carbon dating, the smallest particle of matter and recently Einstein’s theory of relativity. If you believe in God you can be sure he occasionally says, “Ha, explain this!” We DUMB Christians love to laugh also. Leave the poor bumblebee alone, He doesn’t know he is causing so much consternation. Tell me how to get my left sock out of the dryer when it disappears or how to get the last bit of peanut butter out of the jar. That would be important and helpful.
exactly. And just what are your sources for claiming that carbon dating, the smallest particle of matter and recently Einstein’s theory of relativity” have been disproven?
What, incidentally, is the smallest particle? I know that no decent modern physicist would say we’d •certainly• found the smallest particle. (See Lisa Randall’s excellent “Knocking in Heaven’s Door” for a good explanation of this common misconception.)
Speaking to the gullible nature of humanity.. The laws of physics have been revised to correct errors in the past so if indeed there was a legitimate error that made it so insect flying was a violation of the laws of physics.. it would be important to the scientific community to learn what the error is and to correct it. Repeating it as a fact is not gullibility, it’s jumping on the bandwagon at worst., scientific method at best, with a LOT of grey area in between.
Science has proven many times that what it once claimed to be proven true was never proven, or true. Hence, saying that “proven science is true until it is dis-proven” is an obvious over-the-top flight of fancy in itself, and clearly a statement made by a “nut job” who religiously worships science.
As long as you continue to think why are bees able to fly, you’ll always be stumped. So look at a bee in the air and stop thinking it’s in flight.
If it’s been a mystery for such a long time, then obviously it’s time to think out of the box.
Bees don’t fly, they levitate. And only now since more focus on Hyper Atomic Resonance and Cavity Structural Effect that’s possible with Chitin will science realize why bees levitate.
I believe very much in the truth of science. The laugh for me is how anyone can sit mindfully contemplating the complexity of the “simple” bee and still doubt a deeper wisdom – “Creative Intelligence” if you will. Bees mindlessly hovering (not flying) is too lengthy and complicated a topic to explain in an already very long blog. “Incredibly complex” the author calls it. But the brainless cells that evolved into such a creature accidentally learned the way “that utilises, get this, mini hurricanes.”
If you really want to engage in some intellectual honesty however, a better study of bees and natural selection should not be how they fly/hover but how the species ever lasted these past 100 million years (according to a 2006 Cornell study). You see, worker bees are the most important bees in the colony, but they are sterile and cant reproduce at all. Evolution should predict that mutation should be passed along to the next generation of workers, but as you know, only the queen bee has the babies. Only hundreds if not thousands of scientific research is in circulation to explain this not-so-minor issue.
Bees are really not “simple” at all, they are fascinating and beyond complex. So is the rest of nature. Every cell of it. But in the final analysis it seems that just as I find it beyond humorous and astounding how anyone studying science can shy away from the overwhelming circumstantial evidence to suggest Creative Intelligence (as “circumstantial” as believing that a paved road was not accidentally created by way of an overturned cement truck, though you cant prove it), so too, those who insist the universe is all just one big mistake are likely finding my position just as humorous and preposterous. Apparently we humans are gifted with our strong convictions and we stick to them. Either God has chosen this to be our nature, or natural selection has found this to be crucial for our species. Whichever way you slice it, it’s here to stay. Thank God/evolution for that!
Well, maybe we’re just at the tail end of out evolving the need for systematized, dogmatic religion and other irrational. Maybe we did need it. But the numbers are turning away from it, Jim. I know I’m perfectly happy to get my curiosity about the world satisfied without any religious trappings. And my daughters! Forget it. They are perfectly wonderful, utterly moral creatures, full of compassion, and no god needed.
I’m a scientist. I teach critical thought and take wisdom wherever I find it. I find value and joy in compassion. No sublime Intelligence needed.
Workers are unfertilized eggs. Mutations are passed down from the breeding bees, which generate the workers. Mutations that originate in the workers are not passed down. If it impacts colony survival, it’ll impact the queen’s survival, & thus be selected for or against. That wasn’t even a difficult question.
That whole “everyone has their convictions” thing is a false equivalence designed to make people who lack evidence feel better. Even if someone points out you’re wrong (as I did), you can console yourself by saying it’s “just their opinion” & feel free to ignore them anyway.
That you find the scientific answer “ridiculous” is what intelligent design propaganda set out to accomplish. Fundamentally, it relies on you accepting a short answer that gets you to stop thinking about the question. It’s easy to say “the really smart magic man made everything, & he’s the exception to the rule that complex things need to be created.” It’s much harder to actually explain the myriad of scientific laws that support a self-sustaining universe, particularly to people determined not to hear it.
One extensive sentence of actual explanation – bees fly by rotating their wings, which creates pockets of low air pressure, which in turn create small eddies above the bee’s wing which lift it into the air and, thus, grant it the ability to fly..
The flying bumble bee would break the laws of physics if the wings did not do their job of allowing flight……not so fast there, no laws of physics would ever be broken even if it flight was assisted via levitation, since
it could be using laws that was not known to science today!
I think Chris is right .Dragonfly wings have tubes in them. They are not for blood flow. Evidently they are for sound vibration. They Buzz. I would put forward that Bees ride on sound waves and create low pressure above themselves creating lift.
I certainly enjoyed the banter between my esteemed searchers and would hope that I can add a possible connection to the “Coral Castle” in FL built by a man who apparently possessed special knowledge about moving large pieces of coral by himself by what means we can only guess but there may be a resonance between his knowledge and the design of the bumblebee.
We do not fully understand how high frequencies interact with gravity. Observe the flight patterns of most animals with wings able to fly and you will see a distinct difference between how things like bees and humming birds fly compared to others which flap wings much slower. Bees and like creatures can change their directions in the blink of an eye, pulling many times the “g” forces conventional pilots can. We don’t truly understand everything. You didn’t even link an article, video, nothing disproving. I’m not saying your point isn’t valid, but it’s backed up with mockery of Asian scientists and not fact.
“In the aforementioned earliest known reference to such an idea, Le vol des insectes, Antoine Magnan, the author, claims the calculations, in regards to insects disobeying the laws of physics, were made by his friend and assistant, André Sainte-Laguë.”
Two things. First, well done on catching the grammatical error. Things like that, from people claiming to be writers, make me grit my teeth. And second, nice TARDIS & Doctor pic, there! #4 is my favorite, #12 is my second favorite, and #10 s my third, but there’s plenty for everyone to choose from, LOL!
“Science does not discover “truths”. It builds models of nature and reality based on our current best evidence. So it is always rational and justified to accept scientific findings.”
That conclusion is a non-sequitur. The point of having a skeptical view of “scientific” claims is precisely because so much of it is based on models and not on reality. Models can be programmed to yield any result, so they’re not a de facto accurate representation of reality. As we’ve recently witnessed with all the false predictions peddled by the psudeo-science of anthropogenic global warming alarmism, those predictions failed because the models were wrong (whether the models were intentionally programmed to promote climate-doomsday ideology or whether it was just sloppy science, one can only conjecture), but clearly their false predictions were not rationally justified.
I’m sorry, but I think you’re all wrong. Well, missing the point, more exactly. At least the way I’ve always heard it, the reason the “myth” has stuck around is because it’s actually meant more as a motivational or inspirational parable. Scientific fact or laws of physics have nothing to do with the point that is attempting to be made. You’re all forgetting the last part of the saying:
Aerodynamically the bumblebee shouldn’t be able to fly. So how does he? Nobody ever told him he can’t.
The message is that you can do anything you set your mind to doing. If you believe you can, you can. Kinda like “whether you think you can or can’t, you’re right.”
Forget inverse harmonic frequencies resonating with gravitational fields, or the coefficient of drag, the angle of attack, and high & low pressures creating lift. It’s much more simple. Much more perfect. Whether an incredible product of all the trials and errors of evolution, or a testament to intelligent and divine design (those two are NOT mutually exclusive BTW), the simple truth is the bumblebee flies because he believes he can. Period.
The origins very well may be, or as this article showed, most likely are rooted in some academic challenge, but the saying has stayed in the popular conscious because, I believe, it has a good message.
That said, I hope that when all of y’all were putting out some cookies for Santa last night, you remembered to leave a few carrots for the reindeer. Because it has been scientifically proven that reindeer, not unlike ravens, crows, and wives, can seriously hold a grudge!
Obviously bees aren’t defying the laws of physics and flying miraculously. That’s not the point. The point is that they fly regardless of whether or not scientists comprehend how. Such is the case with myriads of unsolved miracles in our universe – and why science needs to renounce its claim of infallibility! Science deniers are a little goofy, but science skeptics are the genuine realists. Anyone truly grasping the scientific method should note that science will always be wrong more frequently than right.
A lot of truth here, especially pertaining to the debate regarding global warming. The statement by the advocates for the proposition stating – “science has decided it is so, so it is a scientific fact and a decided issue”. Well the truth is as your article points out the scientists are more often wrong than right. And always premature in their judgements. Renders judgments prior to collecting all the evidence.
|
-like robots, which is just a recipe for another Syfy hit.
So, around 80 years ago a scientist or mathematician of some sort made a rough, mistake-filled calculation that claimed bees couldn’t fly. Fast forward almost a century and scientists today are still trying to erase that mistake from the public consciousness with increasingly complex experiments to prove the simple fact that bumblebees can, in fact, fly, and that this doesn’t violate any of our understanding of the laws of physics. The fact that they even had to bother doing this when they could have simply pointed out of the nearest window, with their palms firmly planted on their foreheads, at bees flying around, perhaps says a lot about the gullibility of our species. In the end, as I make my living off dispelling such myths, I’m not complaining. 😉
Even more over, it was according to human laws of physics that bees couldn’t fly. This goes to show, who are we, as a species, to decide what is and isn’t possible. We’re such a high and mighty society that we actually believe that if we can’t explain everything about something it must be a higher power. Here’s an idea, maybe higher powers were created to explain what we couldn’t and provide some way of governing society. Either way you look at it you can tell that not everything needs to be explained in order to be possible. I’ll take a line from good old G.I. Joe and say knowing is half the battle, the other half is knowing we might just be wrong.
Well, it was according to what they knew about aerodynamics at the time, and as the article points out, they were approaching the study from the perspective of fixed-wing aircraft. The laws of physics are the same for all living things.
I agree not everything needs to be explained, but understanding how things work leads to applications that can enhance our lives. It is the entire basis of our current technological civilization.
I think you misunderstand what people use this for. People aren’t arguing that the laws of nature are wrong so pointing out that bee’s do in fact fly wouldn’t defeat their argument. David is much closer.
|
yes
|
Probabilistics
|
Was it possible for Bumblebees to fly according to the laws of aerodynamics?
|
yes_statement
|
"bumblebees" were able to "fly" "according" to the "laws" of "aerodynamics".. "bumblebees" were capable of "flying" based on the principles of "aerodynamics".
|
https://euro.eseuro.com/trends/314957.html
|
NASA did not affirm that the flight of bees goes against aerodynamic ...
|
NASA did not affirm that the flight of bees goes against aerodynamic principles
Actually, the text belongs to a motivational talk taken out of context in which a member of NASA sought to motivate a student to pursue her dreams.
A text circulates on social networks, accompanied by an image, claiming that the National Aeronautics and Space Administration (NASA) shared a poster that says: “Aerodynamically, the body of a bee is not made to fly; the good thing is that the bee does not know it” (sic).
The text is justified by saying that the insect's wings are too small to support its body in the air, and that bees can fly only because they ignore the laws of physics and their logic.
This is false: there is a physical explanation for the flight of bees, and there is no evidence of the alleged poster. Moreover, in many of the shared posts the image does not show a bee, as the post says, but a bumblebee.
Twitter is where it has gone viral the most: the post had received more than 152,000 “likes” and 17,000 retweets at the time of verification, according to the social network itself.
Among the interactions you can read reactions of all kinds: “God breaking the rules of physics with his designs. Spectacular”, “No, No, No. NASA is the one that doesn’t know physics. If the bee flies, it is because they believe that they know all the laws and thus it shows them that it is not true. The question water would be. Do you believe in NASA?” (sic).
Using the verification tool Google Reverse Image Search, the origin of the image shared along with the text was searched for. When it was inserted into the search engine, Google detected that the picture showed not a bee but Bombus terrestris, a bumblebee. However, it was impossible to find the origin of the photograph.
Although bees and bumblebees look similar and share a taxonomic family, the latter is larger and belongs to the genus Bombus, which has more than 250 species.
Along with this, the tool RevEye Reverse Image Search was also used: through different search engines in different languages, it was found that the text accompanying the image has also been reproduced in Portuguese and English; none of those posts shows evidence of the poster.
The probable origin of the myth of the bees
Once this was done, we searched Google for “NASA Bumble bee can’t fly”. The only result associated with the official website of the space agency is an entry in the “Education” section entitled “Legends, Trailblazers Inspire NASA’s Future.”
The post, dated August 4, 2010, talks about a scholarship awarded by the NASA Offices of Education to hundreds of minority students, which consisted of visiting some facilities and meeting legends and pioneers of the agency.
With the motto: “it is never too late to follow your dreams”, the students received motivational messages from NASA members in a forum as part of the activities.
One of these messages, delivered by Dr. Julian Earls, is precisely the one that has been replicated on social networks, although slightly modified. It was part of an answer to Keosha, a 10th-grade girl who dreamed of becoming a gynecologist.
--
The text makes it clear that it was done in a motivational and non-scientific context. A paragraph above mentions: “It may seem like a distant dream, but the goal of the forum was to dream big, achieve the impossible, spread your wings and fly.” Even Earls himself said: “She just flies all over the place and that’s what you (Keosha) have to do.”
The example used is no coincidence, since this idea seems to date back to 1934, when the French entomologist Antoine Magnan and his assistant André Sainte-Laguë calculated that the flight of bees was “aerodynamically impossible,” according to Caltech.
How do bumblebees fly?
The flight of bees has been studied by the scientific community for more than 70 years. In 2005, Michael Dickinson, a biology professor and insect flight expert at the University of Washington, published a study on bumblebee flight in the journal Proceedings of the National Academy of Sciences.
According to Dickinson, the secret of these insects' flight runs contrary to the false belief that bumblebees simply flap their wings up and down; they do, but to a lesser extent.
In fact, the movement is somewhat more complex: the wings sweep forwards and backwards, at a certain angle of inclination.
LiveScience offers a video on YouTube where you can see the particular movement with which bumblebees are capable of flying.
The combination of the above, plus the speed of execution, helps create vortices in the air, meaning that the pressure inside them is lower than outside, which allows the insect to stay in the air.
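As a rough sanity check on the regime in which this mechanism operates, the short Python sketch below estimates the wingtip speed produced by flapping and the corresponding chord-based Reynolds number. The wingbeat frequency, stroke amplitude, wing length, and mean chord are assumed, order-of-magnitude bumblebee values chosen for illustration; they are not figures reported by the study cited above.

# Rough estimate of the flow regime of a flapping bumblebee wing.
# All inputs are assumed, illustrative values.
import math

NU_AIR = 1.5e-5   # kinematic viscosity of air [m^2/s]

freq = 180.0                          # assumed wingbeat frequency [Hz]
stroke_amplitude = math.radians(120)  # assumed stroke amplitude [rad]
wing_length = 13e-3                   # assumed wing length [m]
mean_chord = 4e-3                     # assumed mean wing chord [m]

# The wingtip sweeps a distance of 2 * amplitude * wing_length per wingbeat, freq times per second.
tip_speed = 2 * stroke_amplitude * freq * wing_length

# Chord-based Reynolds number using the tip speed as the characteristic velocity.
reynolds = tip_speed * mean_chord / NU_AIR

print(f"mean wingtip speed: {tip_speed:.1f} m/s")
print(f"Reynolds number   : {reynolds:.0f}")

A Reynolds number of a few thousand sits squarely in the range where flapping wings are known to form the stable, lift-enhancing leading-edge vortices and low-pressure cores described here, and far from the regime in which steady, airplane-style assumptions hold.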
In conclusion
Therefore, it can be stated that the post is false. Part of the information has been taken out of context: there is no such poster, nor was it announced by NASA; rather, the quote came from a motivational talk shared in a program for the promotion of science.
Although bumblebees ‘defy’ aerodynamics in the sense that their evolution has produced an unconventional flight mechanism, there is a concise explanation for it that does not refute or compromise any principle of aerodynamics.
Not to mention that the vast majority of posts spreading this hoax are accompanied by a photo that does not correspond to the text.
The Brazilian site E-Farsas came to the same conclusion in 2020.
This article originally appeared in the Mexican fact-checking outlet Verificado, which, like Mala Espina, is part of the Latam Chequea network.
|
It was part of an answer to Keosha, a 10th-grade girl who dreamed of becoming a gynecologist.
--
The text makes it clear that it was done in a motivational and non-scientific context. A paragraph above mentions: “It may seem like a distant dream, but the goal of the forum was to dream big, achieve the impossible, spread your wings and fly.” Even Earls himself said: “She just flies all over the place and that’s what you (Keosha) have to do.”
The example used is no coincidence, since this idea seems to date back to 1934, when the French entomologist Antoine Magnan and his assistant André Sainte-Laguë calculated that the flight of bees was “aerodynamically impossible,” according to Caltech.
How do bumblebees fly?
The flight of bees has been studied by the scientific community for more than 70 years. In 2005, Michael Dickinson, a biology professor and insect flight expert at the University of Washington, published a study on bumblebee flight in the journal Proceedings of the National Academy of Sciences.
According to Dickinson, the secret of these insects' flight runs contrary to the false belief that bumblebees simply flap their wings up and down; they do, but to a lesser extent.
In fact, the movement is somewhat more complex: the wings sweep forwards and backwards, at a certain angle of inclination.
LiveScience offers a video on YouTube where you can see the particular movement with which bumblebees are capable of flying.
The combination of the above, plus the speed of execution, helps create vortices in the air, meaning that the pressure inside them is lower than outside, which allows the insect to stay in the air.
In conclusion
Therefore, it can be stated that the post is false. Part of the information has been taken out of context: there is no such poster, nor was it announced by NASA; rather, the quote came from a motivational talk shared in a program for the promotion of science.
|
yes
|
Probabilistics
|
Was it possible for Bumblebees to fly according to the laws of aerodynamics?
|
yes_statement
|
"bumblebees" were able to "fly" "according" to the "laws" of "aerodynamics".. "bumblebees" were capable of "flying" based on the principles of "aerodynamics".
|
https://www.mdpi.com/2072-666X/12/5/511
|
Study of Mosquito Aerodynamics for Imitation as a Small Robot and ...
|
Abstract
In terms of their flight and unusual aerodynamic characteristics, mosquitoes have become a new insect of interest. Despite transmitting the most significant infectious diseases globally, mosquitoes are still among the great flyers. Depending on their size, they typically beat at a high flapping frequency in the range of 600 to 800 Hz. Flapping also lets them conceal their presence, court mates, and remain aloft. Their long, slender wings navigate between the most anterior and posterior wing positions through a stroke amplitude of about 40 to 45°, far smaller than that of comparable insects (>120°). Most insects rely on the leading-edge vortex for lift, but mosquitoes have additional aerodynamic characteristics: rotational drag, wake-capture reinforcement of the trailing-edge vortex, and the added mass effect. A comprehensive look at the use of these three mechanisms is needed: the pros and cons of high-frequency, low-stroke-angle flapping, operating far beyond the normal kinematic boundary of other insects, and the implications for the design of miniature drones and for flight in low-density atmospheres such as Mars. This paper systematically reviews these unique unsteady aerodynamic characteristics of mosquito flight, responding to the potential questions raised by some of these discoveries as per the existing literature. This paper also reviews state-of-the-art insect-inspired robots that are close in design to mosquitoes. The findings suggest that mosquito-based small robots can be an excellent choice for flight in a low-density environment such as Mars.
1. Introduction
The role of robotic systems, including miniature unmanned autonomous ones over the years, has expanded considerably. With the rapid advancement in sensor and robotic technologies, these robotic vehicles are envisaged to be assigned various tasks, including vector disease control, atmospheric analysis, disaster monitoring, product delivery, surveillance, and reconnaissance. While there has been rapid progress in micro aerial vehicle design, locomotion challenges at the nano- and pico-aerial level, as in Figure 1, and low altitudes remain relevant and hinder the proliferation of this technology. Aerial locomotion is particularly difficult close to the Earth’s surface, where the winds can very rapidly change in speed and direction, rendering the conditions unfavorable for flight and for a planetary atmosphere such as Mars, where the density is very low compared to Earth. High levels of turbulence in the wind and even very low density such as that of Mars can be adverse for flight and poses severe flight-control challenges. In seeking solutions to these challenges, researchers have sought inspiration in biological flying systems such as insects and birds since they possess excellent flight prowess and can outperform current robots in nearly every facet of autonomous locomotion. Despite possessing miniature brains, natural flyers can solve complex tasks associated with navigation and flight control in an inherently complex environment. Flapping flight offers advantages over other platforms, especially at small scales, and it is the preferred mode of flight for natural flyers. Flapping flight provides high maneuverability while being collision-tolerant—traits that are critical for successful flight in highly cluttered terrain. However, due to the vastly different and highly dynamic nature of aerodynamic force production, wing actuation and flight control are incredibly challenging.
As per the latest research [2,3,4,5], the mosquito’s long, slender wings flap at a moderately high frequency relative to similar insects. Experimental and numerical studies reveal aerodynamic processes that have not been seen before in this type of flight environment. Additionally, with interest in developing insect-inspired robots for planetary studies, it is worth looking at the mosquito’s unique aerodynamic features for such an environment. These things make it necessary to look at this insect’s detailed aerodynamics and simulate its flight for Earth and a less dense Martian atmosphere. The application of biomimetics and bioinspired solutions to miniaturized-aerial-vehicle production has evolved to incredibly small pico- and nano-aerial vehicle (PAV–NAV) sizes that exceed their immediate predecessors, leaving a wide range of technology choices for MAV development. This review’s primary goal is to impart essential knowledge of the kinematics and aerodynamics of the mosquito’s insect-type flapping wing and membrane wing composition and their suitability for planetary research.
An adult mosquito has three segments: head, thorax, and abdomen. The pair of wings and balancing organs, called halteres, are essential flight elements. Halteres, small ball-like structures, help mosquitoes to maintain balance during flight. For the imitation of insect-based aerial robots, the accurate identification of insect species is essential for wing design and venation. Identification of the organism is completed based on the wing pattern, and in mosquitoes, the species can be identified based on the size, shape, and color of the scales on the wings. The major dorsal part of the mosquito is called the scutum. In many mosquito species, the scutum might have very distinct scaled patterns used for identification [6]. Techniques such as wing geometric morphometrics (GM), artificial neural network (ANN) can be used for the identification and categorization of mosquito species based on wing shape characters [7,8,9]. The mass distribution in real wings is associated with venation patterns. Artificial wings having a pattern of veins are likely to be the ones that are biologically influenced and can be optimized to produce complex deviations close to those observed in natural ones [10]. Although wings are responsible for lift, haltere also plays an essential role in flight stability. If both halteres are immobilized, insects cannot remain aloft in flight. On all three axes, haltere has gyroscopic oscillations. By making rotations around three orthogonal axes, they produce distinctive angular velocity-dependent Coriolis forces. A single halter could therefore detect all rotations in space. In dipteran, halteres function as a micro-scale vibratory gyroscope [11,12,13].
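Since the haltere is described above as a micro-scale vibratory gyroscope that senses angular-velocity-dependent Coriolis forces, the following minimal Python sketch illustrates the underlying relation F = −2m(Ω × v) for an oscillating end mass. All masses, lengths, frequencies, and body rates below are assumed illustrative values, not measured mosquito parameters.

```python
import numpy as np

# Minimal sketch of the Coriolis force sensed by an oscillating haltere.
# All numbers below are illustrative assumptions, not measured mosquito data.

m_tip = 2e-9               # effective end-knob mass [kg] (assumed)
L = 0.4e-3                 # haltere length [m] (assumed)
f_beat = 700.0             # haltere beat frequency [Hz] (assumed ~ wingbeat)
theta0 = np.deg2rad(60.0)  # half-amplitude of the haltere stroke [rad] (assumed)

omega_body = np.array([0.0, 0.0, 10.0])  # body yaw rate [rad/s] (assumed)

t = np.linspace(0.0, 2.0 / f_beat, 400)  # two beat periods
# Haltere tip sweeps back and forth in the x-y plane of the body frame.
theta = theta0 * np.sin(2.0 * np.pi * f_beat * t)
theta_dot = 2.0 * np.pi * f_beat * theta0 * np.cos(2.0 * np.pi * f_beat * t)

# Tip velocity vector in the body frame (tangential to the stroke arc).
v_tip = np.stack([-L * theta_dot * np.sin(theta),
                  L * theta_dot * np.cos(theta),
                  np.zeros_like(t)], axis=1)

# Coriolis force on the tip mass: F = -2 m (omega x v).
F_coriolis = -2.0 * m_tip * np.cross(omega_body, v_tip)

print(f"peak Coriolis force: {np.abs(F_coriolis).max():.3e} N")
```

Because the Coriolis force scales with both the body rotation rate and the instantaneous haltere speed, a single oscillating haltere can, in principle, encode rotations about more than one axis, as the review states.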
Experimentally, it has been shown that the elasticity in the sensing direction of both the wings and the haltere structure is due to the presence of resilin, which restores bending deformation. This rubber-like material covers many mobile joints and vein boundaries linked to the wing membranes. From a design point of view, recognizing the potential of resilin, and the entire resilome, in biomaterial sciences such as micro-robotics, where its elasticity is an advantage, would be very beneficial. The basic haltere structure of the soldier fly is shown in Figure 2a, and the localization of resilin proteins in various parts of the fruit fly is shown in Figure 2b [13,14].
2. Wing Beat Frequency and Stroke Amplitude
2.1. Flapping and Actuation
Mosquitoes beat their long slender wings at an enormous speed with a flapping frequency in the range of 600 to 800 Hz, compared to their natural counterparts [2,3]. Computational and experimental studies have helped us understand the sources of high-force development in insect flapping wings so far, such as the flapping and deforming of fish fins and the integration of that knowledge into bio-inspired vehicle designs along with trade-off studies carried out during the bio-inspired design between efficiency and productivity [15,16]. The energy of a flapping wing MAV is vital in order to build flapping-wing aerial vehicles. It involves the design of an insect thorax-like energy storage mechanism in the flying vehicle, aerodynamic wing models based on blade element theory, optimizing the energy storage mechanism parameters using dynamic models to minimize the peak input power from the outer actuators throughout the flapping period [17]. For example, when designing a compact flimsy mechanism used for wing flapping, stability, controllability, and power dispensing are the main issues when size is reduced. The wing efficiency and advance ratio, controlled either by extending the stroke amplitude or increasing the flapping frequency, are the two most essential factors used in mimicking an insect. [18,19]. Wing flexibility also plays a role here. Compared to the rigid wing, it must be recognized that wing flexibility increases the capacity for thrust generation and performance for all kinematic patterns [20]. However, there is the development of insect-inspired robots with a frequency of up to 250 Hz on the micro-nano scale with different actuation systems such as piezoelectric, electromagnetic, and other actuators (see Table 1 above). The development of an actuation system that can generate a flapping frequency (600 to 800 Hz) close to mosquitoes is still a challenge. In this case, unstable aerodynamic processes need to be quantitatively determined over a broad Reynolds number scale to validate the morphological model test method [21]. For example, the mass and stiffness disparities along the wing of the blowfly give directions for designing a biomimetic structure in the case of insect-scale flapping wings [22]. Artificial wings must have biomimetic wing features similar to their natural counterparts to have substantial lift force. The artificial wing’s mechanical characteristics rely heavily on venation thickness, retaining a fully stringent arrangement during flapping motion and helping to generate appropriate thrust [23,24].
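As a rough illustration of the thorax-like elastic energy storage idea mentioned above, the sketch below sizes a torsional spring so that a wing-hinge oscillator resonates at a mosquito-like flapping frequency, the condition under which the spring can recycle inertial power and reduce the peak actuator load. The wing inertia, target frequency, and stroke amplitude are assumptions chosen only for order-of-magnitude illustration.

```python
import math

# Hedged sketch: sizing an elastic (thorax-like) element so that the
# wing-hinge system resonates at the target flapping frequency.  At
# resonance the spring stores and returns the wing's inertial power,
# lowering the peak power the actuator must supply.

I_wing = 5e-13        # wing moment of inertia about the hinge [kg m^2] (assumed)
f_target = 700.0      # target flapping frequency [Hz] (mosquito-like)

# Undamped torsional resonance: f = (1 / 2*pi) * sqrt(k / I)  ->  k = I * (2*pi*f)^2
k_spring = I_wing * (2.0 * math.pi * f_target) ** 2
print(f"required torsional stiffness: {k_spring:.3e} N*m/rad")

# Peak inertial torque for a sinusoidal stroke of half-amplitude phi0:
phi0 = math.radians(22.5)   # half of a ~45 deg stroke amplitude (assumed)
tau_peak = I_wing * (2.0 * math.pi * f_target) ** 2 * phi0
print(f"peak inertial torque to be recycled: {tau_peak:.3e} N*m")
```

The quadratic dependence of the required stiffness and inertial torque on frequency is one way to see why actuation at 600 to 800 Hz remains an open challenge compared with the 30 to 250 Hz robots listed in Table 1.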
2.2. Stroke Amplitude
Mosquitoes have other unique flight characteristics in addition to beating their wings at a very high pace. For instance, their long, slender wings move between the most anterior and posterior wing positions through a stroke amplitude of about 40 to 45° [3]. They have diminished reliance on LEV, which is opposite to the lift force generation mechanisms of other insects and animals such as birds and bats during wing translation [2]. Generally, for all hovering insects, including mosquitoes, there are four wing stroke phases: translational states called upstroke and downstroke, and rotational states called pronation and supination. Figure 3a–c show these phases as well as the complex aerodynamic mechanisms associated with mosquitoes. The accelerating wing experiences the accelerating fluid nearby as “added mass”. Table 2 shows the wing stroke amplitude and Reynolds number of hovering insects, including mosquitoes. Figure 4 gives the relationship of the lift and drag coefficients and lift-to-drag ratio with the angle of attack (AoA) and stroke amplitude. A and B give the quasi-steady mean lift coefficient (CL) values, C and D give the mean drag coefficient (CD) values, whereas E and F represent the lift-to-drag (L/D) ratio [25]. The mean lift rises with enhanced flap frequency, which proportionally increases the average wing speed and therefore lift, but as the amplitude of the stroke elevates, the mean wing speed and the lift also increase.
Nevertheless, the mean lift to the mean win tip speed squared ratio falls as the stroke rate rises, which affects the mean lift coefficient. It might be possible that the mean lift will escalate with a stroke amplitude square, vortex shedding impedes the potential outcomes of the mean wing speed increase [26]. Therefore, higher flapping frequencies are advantageous because there is an increase in the lift without a potential improvement in the lift-to-torque ratio. Although the mean lift and mean drag are influenced by varying the stroke amplitude, their relationship is linear [27]. Since the whole flow mechanics tend to alternate themselves as the stroke amplitude is varied, it anticipates that the mean lift-to-drag (L/D) and mean lift-to-torque (L/Q) characteristics would have a corresponding effect [27]. In mosquitoes, the low amplitude means 75 percent radial position of the lifting surface moves two chord lengths between the stroke reversals, which fails the fluid mechanic’s assumption about lifting surfaces acting as sweeping helicopter blades [2].
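The scaling described above can be made concrete with a quasi-steady estimate in which the cycle-averaged lift grows with the square of the mean wing speed, and hence with the squares of both flapping frequency and stroke amplitude (ignoring the vortex-shedding losses noted above). The geometry, air density, and mean lift coefficient below are assumed placeholder values, not fitted data.

```python
import math

# Quasi-steady sketch of how mean lift scales with flapping frequency f and
# stroke amplitude Phi (full sweep angle).

rho = 1.225           # air density [kg/m^3]
R = 3.0e-3            # wing length [m] (assumed mosquito-like)
c_mean = 0.7e-3       # mean chord [m] (assumed)
S = R * c_mean        # single-wing planform area [m^2]
CL_mean = 1.5         # assumed cycle-averaged lift coefficient

def mean_lift(f_hz, phi_deg, r_ref=0.75):
    """Cycle-averaged lift of one wing from a sweeping-blade estimate.

    Uses the mean speed of a reference spanwise station r_ref*R:
    U = 2 * Phi * f * (r_ref * R).
    """
    phi = math.radians(phi_deg)
    U = 2.0 * phi * f_hz * (r_ref * R)
    return 0.5 * rho * CL_mean * S * U * U

base = mean_lift(700.0, 45.0)
print(f"baseline mean lift per wing : {base * 1e6:.2f} uN")
print(f"double frequency            : {mean_lift(1400.0, 45.0) / base:.1f}x lift")
print(f"double stroke amplitude     : {mean_lift(700.0, 90.0) / base:.1f}x lift")
```

In this idealized picture doubling either the frequency or the amplitude quadruples the mean lift; the text above explains why, in practice, vortex shedding erodes part of the amplitude benefit, making higher flapping frequencies the more attractive route.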
Mechanical hysteresis is another functional constraint; with the increase in flapping frequency, hysteresis can lead to a substantial loss of power [27]. Together with microelectromechanical systems (MEMS), the use of computational fluid dynamics (CFD) is a positive step towards understanding insect-scale flying robots [33].
Table 1 gives information about different bio-robotic models successfully developed in the frequency range of 30 Hz to 200 Hz. Vertical force enhancement is an essential factor in insect-like tailless flapping-wing micro air vehicles (FW-MAVs), and one such key challenge is that a lack of feedback control leads to instability after take-off. Insect mimicked aerial bodies are more challenging than birds due to different control principles [34]. Mosquitoes use their system as a motor during the leg push-off to control pitching torque [35]. To correct bias torques produced due to irregularities in the complex flapping mechanics of small insects, researchers have introduced a trimming system to correct bias torques that often lead to rapid free flight rotation if not adequately trimmed [36]. There is an observation that the bio-inspired honeybee and bumblebee wing configurations show optimal performances with similar wingspan and wing surface [37]. Although high-frequency flapping incurs higher inertial power requirements, it is vital for acoustic communication in mosquitoes [2]. A short rotation period or low stroke angle is generally associated with increased performance, a unique flying characteristic of the mosquito. The wing design and mechanism strongly influence the chosen aerodynamic features and performance [38]; for instance, the dry film based on a negative epoxy photoresist (SUEX) compliant flapping mechanism of the pico-air vehicle (PAV) airframe demonstrates excellent agreement with experimental results [39]. Fog or dew significantly affects flexible mosquito wings. Water accumulates on mosquito wings, folds them, and makes them useless for flight [40]. Essential observations from mosquito-related studies, such as in reference [4], showed that in the case of downstroke and upstroke, when the stroke amplitude is low, LEV does not play a role there, and delayed-stall could not contribute to force generation. Furthermore, wake from a previous stroke could be harmful to force generation. Reference [5] suggested that force peaks and stroke-related descriptions in Bomphrey’s [2] study were not explained clearly. A thorough investigation is needed further to understand the full benefits of high-frequency flapping and low stroke amplitudes in mosquitoes. Progression in the necessary technologies associated with insect-scale robots is mandatory despite many complex challenges [41]
3. Lift Generation Mechanisms
3.1. TEV by Wake Capture, Added Mass Effect, and Rotational Drag
Wing-wake interaction or wake capture, a nonlinear, unsteady aerodynamic effect, significantly impacts the lift, required power, and dynamics of flight [42]. Mosquitoes possess several distinct aerodynamic characteristics among other counterparts: TEV due to wake capture; LEV, generally used for all insects of the same class; and most importantly, rotational drag [2]. TEV capture during the stroke cycle is a kind of wake capture because it depends upon induced flow during the half stroke. The parameter representing the unsteady effect is the Strouhal number, which is inversely proportional to stroke amplitude [5]. The Strouhal number (St) optimal range is: 0.2 < St < 0.4 for efficient flying [43]. The stroke amplitude of mosquitoes is low, which means an extremely high Strouhal number, so the flow throughout the wing is substantially unsteady. Reference [5] pointed out that in Bomphrey’s study in reference [2], the force peaks and their explanation were based on instantaneous streamline patterns that are very different from vortic patterns if the flow is substantially unsteady [5]. Furthermore, added mass, which plays a vital role in unsteady flow, was not explained well in Bomphrey’s study in reference [2]. As in Figure 5a, since the wing’s linear velocity is steady, added mass inertia seems negligible during much of the stroke. It contributes very little to the estimated aerodynamic forces [25]. In Figure 5b, t/T = 0–0.12, the inference is that there is a significant presence of force peaks, though there is no prior wake, which means that it is not wake-capture but an added-mass effect [5]. The rapid wing oscillation also contributes to significant added mass forces in the case of insects. Several studies related to fruit flies reveal that the added mass effect is a crucial aerodynamic mechanism. The Strouhal number, St, representing unsteady flows, explains such a mismatch. As the St surpasses 0.5, the added mass becomes dominant. For example, a Hawkmoth’s St is close to 0.315, lower than that of small insects, which does not dominate the added mass force but can interfere with the wing circulation [44]. Several studies demonstrate unsteady models that include quasi-steady, unsteady, circulatory, and non-circulatory flight, have limitations in the sense that they presume that their models comply with the condition of Kutta–Joukowski, which is not possible when the flow is all over the trailing edge as in the case of mosquito flight [45,46]. The advantage of rotational drag by mosquitoes during their flight is its intense contribution to lift. This lift contribution by the rotational drag should not be confused by the rotational lift, which is different.
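To illustrate the inverse relation between the Strouhal number and stroke amplitude stated above, the following sketch uses one common chord-based definition, St = f·c/U_tip with mean wingtip speed U_tip = 2Φ·f·R, under which the flapping frequency cancels and St = c/(2ΦR). Definitions of St differ between studies, so only the trend, not the absolute values, should be read from this.

```python
import math

# Hedged sketch of the inverse relation between Strouhal number and stroke
# amplitude in hover.  With St = f*c/U_tip and U_tip = 2*Phi*f*R, the
# flapping frequency cancels and St = c / (2*Phi*R), i.e. St ~ 1/Phi.

def strouhal_hover(chord_m, wing_len_m, stroke_amp_deg):
    phi = math.radians(stroke_amp_deg)
    return chord_m / (2.0 * phi * wing_len_m)

c, R = 0.7e-3, 3.0e-3                    # assumed wing chord and length [m]
st_small = strouhal_hover(c, R, 40.0)    # mosquito-like amplitude
st_large = strouhal_hover(c, R, 140.0)   # typical-insect amplitude
print(f"St(40 deg) / St(140 deg) = {st_small / st_large:.2f}  (= 140/40)")
```

The same wing flapped through a 40° stroke therefore sees a Strouhal number several times larger than one flapped through 140°, which is the sense in which the mosquito's flow is "substantially unsteady".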
Flapping frequencies have optimum ranges with two parameters depending upon higher AoA, low Reynolds number and leading-edge vortex development and shedding [47]. Wing rotation is significant in developing and reducing lift in flapping wing motion during the stroke period [48]. As opposed to traditional airfoils, insects usually flap their wings at higher AoA. A high Reynolds number can lead to spanwise flow within the vortex core which relies on wing shape and kinematics [49]. In some insects, such as dragonfly, while hovering, an unsteady force mechanism helps generate a new vortex ring, a downward momentum in the downstroke of either the hindwing or forewing, giving up upward force. The complicated relationship between the wing deformation and surrounding airflow has long prohibited the flexibility effect from being understood [50,51,52]. The time-varying vorticity flux distribution articulates the relation between the shedding vortex and force generation [53]. As seen in Figure 6, researchers used the immersed boundary method to model the 3-D flow field around a mosquito in hover based on direct computational analysis. The numerical computation findings were validated for the same results with the particle image velocimetry (PIV) measurements from reference [2]. At various intervals within a flapping phase, all the wing vorticity contours display good agreement [4], resulting in fully understanding the aerodynamic features such as TEV and wake capture. Sometimes, the forward force and a lift force component integrate to create the turning moment; meanwhile, the side force generates the restoring torque all over the manoeuvre alone [54]. Camber deformation is also essential. The dynamic stall can be delayed significantly, with the airfoil’s role having time-varying camber deformation, thereby delaying LEV production and shedding [55]. Camber deformation affects the aerodynamic forces on the flapping wing much more compared to a substantial twist [56]. Just after stroke reversal, the wake-capture mechanism accountable for a rise in thrust output decreased with rising downward velocity and fades away as soon as this velocity exceeds the mean wing-tip velocity [57]. Delayed stall, the rotational effect, and wake capture influence the surface aerodynamic properties of the flapping wings in forwarding flight. For instance, with an improved advanced ratio, the delayed stall effect deals with a rise in downstroke and fall in the upstroke. The rotational effect also relies on the advanced ratio and angle of the stroke. Wake capture is effective at an early upstroke rather than downstroke [58].
3.2. Wing Corrugation, Versatility and Other Factors
Flapping wing motion is often correlated with separated flow patterns, as flow separation on the wing surfaces has often existed [59]. The lift and thrust forces are also substantially responsive to flexural stiffness distributions, with optimum execution in various phase sectors [60]. In flexible airfoils/wings, the relative convection rates of positive and negative vorticity influence the thrust generation. The asymmetry between the LE and TE is also useful for producing thrust [61]. Wing corrugation is often assumed to play an important role as well. However, studies show that corrugation of the wing is intended for structural purposes, not aerodynamic ones. Corrugated wings have the advantages of being light and sturdy. These advantages are related to their aerodynamic properties and can be answered by further work in this field [62,63,64]. The contact between vortexes is the critical attribute that enables insects to produce sufficient lift to remain aloft. The standard unsteady vortex-lattice approach and the general kinematic model could be extremely reliable and systematic tools for aeroelastic studies in the future [65,66]. It is evident that the spanwise versatility of the wings improves the thrust marginally but reduces the performance [67].
It is important to note that wing-body interaction in flapping insects also substantially enhances the total lift production [68]. Wing deformation patterns induce a 30 percent increase in lifting force throughout the upstroke than the rigid wing model. For the flapping-wing flight, wing elasticity thus plays a fundamental role [69]. Compared to traditional rotary and insect-like flapping wings, flapping wing rotor (FWR) can perform reasonably well by generating a notably higher aerodynamic lift coefficient with power efficiency [70,71]. Mosquitoes produce the aerodynamic force to support their weight differently from their general counterparts, even though they use familiar separate flow patterns [2]. Wing aspect ratio, high flapping frequency, and small stroke amplitude of mosquitoes also allow high-intensity wing tones to be produced efficiently for acoustic communication [72]. Therefore, it is evident that due to vortices at the tip and root that interfere with the wing during the flapping period, unsteady lift and drag are produced. For assessing the performance of geometry and kinematic parameters of flapping-wing vehicles, power loading is more appropriate than the lift-to-drag ratio [73].
4. Unique Kinematic Patterns and Wing Flexibility
TEV through the wake capture method is a significant characteristic observed in mosquitoes at low Reynolds numbers as described in the previous section. A recent study reveals the total added mass as a factor here [5]. Mosquitoes advanced the traditional boundary of kinematic patterns. A substantial angular rate and exceptional stroke reversal timing are the two critical parameters that help the mosquito to generate the necessary force to support its weight during flight [2].
4.1. Kinematic Patterns
Flapping kinematics, aerodynamic modeling, and body dynamics are three primary building blocks of the dynamic flight framework [74]. To improve aerodynamic energy development, insects use time-varying pattern mechanisms during the flapping process [75]. Mosquitoes have a very precise axis of rotation despite lift produced due to rotational drag being dependent on the angular pitching rate by square. The wing’s pitching rotational axis moves from LE to TE amid pronation after the upstroke [2]. It is often better to combine aerodynamic studies with behavioral ones to understand flight locomotion in insects [76]. The study indicates that at different Re, vortex movement during its movement from LE to the wake, which allows for sustained vortex attachment, takes various forms [77]. Therefore, significant improvements have been made in predicting aerodynamic force mechanisms and power requirements in insect flight [78]. Methods of aerodynamic modeling are the most enticing for iterating rapidly across various design configurations. The span-wise movement of the LEV is a very significant function that most models do not notice. In order to understand the wing flapping mechanism’s efficiency, the modified quasi-steady 2D modeling is a good approach [46,79]. Insects can increase their flight strength by interacting with the contralateral wing during the dorsal stroke’s reversal (‘clap-and-fling’), affects the power loading, propeller efficiency and the metabolic activity in the aerial body [80]. At low Re, the spanwise flow appears almost more pronounced. An increase in the Reynolds number does not have severe effects on the LEV, so scaling up insect flapping is possible if the aspect ratio is below 10. Additionally, elastic deformation-based circulatory lift increase is the combined effect of an in-phase rise in wing velocity and wing camber shifts. [81,82,83]. The wing’s elastic deformation kinematics reveals that the incidence angle and the camber both display a reversal effect as they suddenly shift at the reversal of stroke. Even a primitive wing vein architecture is enough to reinstate the flexible wings’ capacity to produce forces at very close-rigid values. It is important to note that flapping produces stable LE vorticity at high angles of attack, continues over the stroke period, and raises mean aerodynamic forces. By modulating the TE’s flexibility and thus controlling the enormity of the vorticity of the LE, the magnitude of the generation of force can be regulated. [84,85,86]. Takahashi et al. measured the differential pressure distribution of different insect ornithopters and free-flying insect wings during flight phases such as take-off. Using micro differential pressure sensor developed using microelectromechanical systems (MEMS) technology they found that this measured distribution is characteristic aerodynamic force during the flight phase and proposed that this method combined other experimental techniques such as digital particle image velocimetry helps understand the unsteady aerodynamic forces. [87,88,89,90,91]. Experiments suggest that for substantial performance, the combination of a flapping phase with a feathering phase is significant in hovering and forward flight [92]. From the aerodynamic perspective, passive feathering gives lift development the necessary capacity at a very reasonable energy cost [93]. Digital particle image velocimetry, therefore, helps illustrate how flexible wings achieve aerodynamic strength. 
The phase delays in the stroke movement of flexible wings impact the generation of vortices, especially the leading-edge vortex (LEV), thereby supporting weight. The wake capture force is entirely unsteady during stroke reversal. For example, in the dragonfly, the forewing LEV provides weight support throughout routine flapping flight [94,95,96,97,98], and such insects show a high level of dexterity in wing motion [99]. This summarizes the LEV as the most significant aerodynamic element for most insects.
LEV development is a function of the span, which means that separated flow in the wings’ outer regions or boundary exists, thereby clarifying free vortex simulation of wake progression [100]. Actual LEV configurations may be more complex [101]. At low Re, flapping type lifting devices have very high performance aerodynamically. Typically, the decrease in AoA during the upstroke with fixed AoA at the downstroke decreases the wake upstroke, notably reducing the effect of downstroke LEV production through wake capture [102]. To understand mosquito’s reliance on TEV and the role of delayed-stall, recent research on the mosquito kinematic model and a unique computational study with the immersed boundary method for mosquitoes is shown in Figure 7 [4]. In this study, for only one wing, the aerodynamic time history of the forces is plotted in Figure 7a, obtained directly from Lagrangian force integrated about the IB taken from [4,103]
F_{Aero} = -\int_{\Omega} F(s,t)\, ds \qquad (1)
where Ω is the body surface represented by Lagrangian points, reference [4] observed some differences in lift force while having similar drag and side forces as compared to reference [2]. There are three lift peaks in the lift, shown by t1, t2, and t3 in Figure 7a, compared to four in reference [2]. The TEV developed at t1 binds itself and produces a broad negative pressure area at the upper surface TE, which leads to the primary lift point. t2 and t3 show both LEV and TEV’s presence, but pressure contours tell different stories due to distinct patterns on both the wing sides, which needs further examination, as in Figure 7b–d [4]. After the reference [2] study on mosquitoes in 2017, references [4,5] did some tremendous work on the unsteady aerodynamics of mosquitoes. These studies have explained the flying pattern and essential factors related to mosquito flight with computational and experimental aerodynamics. The study indicated that the delayed-stall mechanism has no direct relationship with aerodynamic force development in mosquitoes [5]. Early studies [49,104] also pointed to the delayed-stall mechanism’s irrelevancy for a lifting surface, having a low amplitude stroke [5].
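In practice, Equation (1) is evaluated by summing the Lagrangian forcing over the wing's immersed-boundary markers, weighted by the surface area each marker represents. The following sketch shows only that bookkeeping step; the marker forces are random placeholders standing in for the output of an actual immersed-boundary flow solver.

```python
import numpy as np

# Minimal sketch of how Equation (1) is evaluated in an immersed-boundary
# (IB) solver: the aerodynamic force on the wing is minus the sum of the
# Lagrangian forcing spread over the wing's marker points, weighted by the
# surface area each marker represents.  The marker data are placeholders.

rng = np.random.default_rng(0)
n_markers = 500

# F_lag[i] = Lagrangian force density [N/m^2] applied to the fluid at marker i.
F_lag = rng.normal(scale=1e-2, size=(n_markers, 3))
dA = np.full(n_markers, 1e-8)        # surface area per marker [m^2] (assumed)

# Equation (1), discretized: F_aero = - sum_i F(s_i, t) * dA_i
F_aero = -(F_lag * dA[:, None]).sum(axis=0)
lift = F_aero[2]                      # z-component taken as lift here
print("aerodynamic force on the wing [N]:", F_aero)
print("instantaneous lift [N]:", lift)
```

Repeating this sum at every time step produces force time histories such as those in Figure 7a, from which the lift peaks t1, t2, and t3 are identified.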
4.2. Role of Wing Flexibility
The flexibility of wings primarily leads to substantial lift generation, and the flight speed is significantly improved by gliding forces, indicating that the optimum layout of the wing structure and flapping motion may increase the efficiency of these vehicles. The size also affects insect hovering aerodynamics. Wing deformation is critical to the mosquito because it helps retain the LEV, i.e., delayed stall, therefore significantly reducing the overall aerodynamic strength needed for the insect to hover. The Reynolds number increases with increasing size, and so do the lift and power efficiency, which is why larger mosquitoes are more effective in searching and feeding [4]. A flexible wing can reshape its form, adjusting its camber to make the surrounding flow more effective. Figure 8 shows wing features at Re = 100. The translational lift with higher efficiency is used for flexible flapping, with low rotational force at stroke reversals. Figure 8c shows the lift’s variation from two peaks to one-peak shape at the stroke center. As the flexibility increases, there is lift enhancement characterized by γ, which is non-dimensional wing-tip displacement w.r.t. leading edge (Figure 8d) [105,106]. Flexible wings help in increasing the L/D ratio for superior performance. Though it generates lower lift and drag than the rigid, the chordwise deformation ceases the increase in effective geometric AoA, thereby changing the total resultant force direction upwards, increasing the L/D [107]. To better control and play with the dynamic characteristic modes, insects can utilize wing base flexibility [108]. Study shows that aerodynamic force affects the deformation of the insect flapping flexible wing, whereas inertial force controls deformation [109]. So it is clear that aerodynamic performance and wing flexibility have a peculiar relationship where the latter helps lift enhancement.
Flexibility also improves the propulsion efficiency by significantly reducing the rotation sequence losses [110,111,112]. In a flapping wing, wing flexibility is crucial as it demonstrates propulsive performance. [113]. Flexibility increases downwash in the wake, and therefore force. For kinematics, the reduction in tip LEV breakdown due to dynamic bending enhances force production before stroke reversal [114]. Flexibility, along with passive deformation, also significantly influences force production. High flapping frequency as observed in mosquitoes and high LE flexibility results in phase lag in the tip’s displacement and thus less vertical thrust production. Flexibility also affects trim conditions [115,116,117]. Structural mechanics and aeroelasticity are vital tools to understand insect wing flexibility [118]. The wing performs exceptionally well, considering wing flaps are resonant and density similar to insects’ natural wings for flexibility in hovering performance [119]. In its specific scope of chordwise flexibility, the flexible wing with proper LE venation can have more incredible aerodynamic performance [120]. So wing flexibility in bio-imitation and insect flight is an essential factor for modeling and imitating the bio-inspired robotic locomotion using soft organs, such as a general framework centered on a mobile multibody systems (MMS) model [121]. By utilizing a novel process called vortex trapping, elastic wings recycle energy from separated LEVs [122]. Using stereoscopic PIV on fruit fly, researchers found powerful axial flow components on the top wing surface and the axial flow in the vortex core of LE [123]. The general clap-and-fling effect fails to contribute to lift development and enhancement [124]. It has been found recently that elastic wing deformation also helps mitigate asymmetry in flapping in case of maneuvers [125]
4.3. Other Essential Factors for Kinematics
Compliant transmission mechanisms are a better replacement for rigid transmission systems to minimize total weight, reduce energy losses, and accumulate and liberate mechanical power during the flapping process [126]. The impact of the Reynolds number on LEVs around a wide range of size scales and modes, flexible structure performance with realistic models, turbulence studies in unsteady environments, and a systematic analysis on functional morphology to create real-life bio-inspired lifting surfaces and structures are some of the challenges associated with multimodal locomotion [127]. In imitating a mosquito-based robot, fabricating an actuation mechanism with a frequency range of 600 to 800 Hz with size limitations is a gruesome task. The insect kinematics that characterizes the natural insect flight is very complicated. The kinematic model enables this study using both the body and the stroke plane orientation of the insect in 3D space [128]. Spanwise wing deformation at stroke reversals often leads to mechanical energy loss in flight, even if aerodynamic power outshines inertial force [129]. To control aerodynamic forces and power, it is always better to take control over the angle of attack during the flapping process [130], for instance, a dual-differentiated four-bar flapping system for a lightweight vehicle with a tethered hover [131]. Regardless of the type, some insects, such as butterflies, have vortex rings developed over the wing while downward flapping, which grows from LE to TE [132].
From the previous section, we learned that dynamic wing pitching would significantly raise the thrust and thrust-to-power ratio while retaining the lift and lift-to-power ratio or increasing simultaneously [133]. Appropriate insect asymmetric strokes may boost the wing’s aerodynamic performance at low Reynolds numbers but may not function at moderate and high Reynolds numbers [134]. Wake deformation is often most extreme behind small lower aspect ratio wings, meaning that the insects that fall in this category are reflected as substantially risky in terms of measurement error when there is a shortfall of the distance between the wings [135]. Here, the non-uniform downwash effect leads to induced power factor, k, contributed by chord distribution and the advanced ratio [136,137]. Interestingly, if the insect wing beats at a high frequency, such as mosquitoes, and has a short wing length, the wing’s relative velocity is minimal. As a result, the moderate wing lift coefficient is relatively high to balance the weight, far higher than that of cruising aircraft [138].
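The weight-balance argument above can be checked by inverting the quasi-steady lift estimate: given an assumed body mass, wing geometry, flapping frequency, and stroke amplitude, the cycle-averaged lift coefficient required for hover follows directly, and for mosquito-like inputs it comes out well above typical cruise values for aircraft. All inputs are assumed order-of-magnitude values.

```python
import math

# Sketch of the weight-balance argument: for a small insect the mean wing
# speed is low, so the cycle-averaged lift coefficient needed to support
# the body weight is high compared with a cruising aircraft.

rho = 1.225
g = 9.81

def required_CL(mass_kg, wing_len_m, chord_m, f_hz, phi_deg, r_ref=0.75):
    """Mean lift coefficient needed so that two wings balance the weight."""
    phi = math.radians(phi_deg)
    U = 2.0 * phi * f_hz * (r_ref * wing_len_m)   # mean speed of reference station
    S_total = 2.0 * wing_len_m * chord_m          # both wings
    return (mass_kg * g) / (0.5 * rho * U**2 * S_total)

# Mosquito-like inputs (assumed): 2 mg body, 3 mm wings, 0.7 mm chord,
# 700 Hz wingbeat, 45 deg stroke amplitude.
print("required mean CL (mosquito-like):",
      round(required_CL(2e-6, 3e-3, 0.7e-3, 700.0, 45.0), 2))
```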
In the case of insect-based robotics, UBET (unsteady blade element theory) provides reasonably good estimates of the thrust developed by the wing flapping systems by comparing estimated thrust with measured thrust [139]. Using unsteady blade element theory, researchers have shown that for the evolution and building of FM-MAV, perfect twist configuration can be obtained from wing root offset of 0.20c¯ as far as flapping wings are concerned. The power loading is just two percent greater for the positively twisted wing than the full force-generating flat wing. For immense power loading, force/power ratios, it is desirable to opt for a high-frequency flapping wing using the geometric AoAs [140,141,142]. As per the quasi-steady wing (or blade) element theory-based aerodynamic model given in reference [143,144], various forces act on the wing, which can be taken into account during modeling. The total force acting on the blade (a spanwise division of insect wing into finite blade elements) can be estimated as the addition of rotational, steady-state, and added mass forces, as shown in Figure 9.
F_T = F_L + F_D + F_R + F_A + F_{wake\,capture} \qquad (2)
F_L = \tfrac{1}{2}\,\rho\,\bar{c}\,\lVert v_F \rVert^{2}\, C_L(\alpha)\,\Delta R \qquad (3)
F_D = \tfrac{1}{2}\,\rho\,\bar{c}\,\lVert v_F \rVert^{2}\, C_D(\alpha)\,\Delta R \qquad (4)
F_R = C_R\,\rho\,\dot{\alpha}\,\bar{c}^{2}\,\lVert v_F \rVert\,\Delta R \qquad (5)
F_A = \rho\,\frac{\pi \bar{c}^{2}}{4}\left(\frac{v_F \cdot \dot{v}_F}{\lVert v_F \rVert}\sin\alpha + \lVert v_F \rVert\,\dot{\alpha}\cos\alpha\right)\Delta R \qquad (6)
where CR is the rotational force coefficient having specific values for each insect wing and vF is the instantaneous flow speed in the elemental plane [143]. It is essential to consider how the aerodynamic model assumptions affect optimal kinematics of the wings during hovering. Rotational motion produces lift with low power consumption compared to translatory [145]. The contribution of the vertical force mostly produced during the downstroke becomes more dominant as the flight speed increases [146]. The flight efficiency of insects in free flight is very significant for the research of bionic fluid mechanics. [147]. In big animals such as the calliope hummingbird, during the wing-beat process, both the downstroke and upstroke produce significant thrust for drag reduction, but such thrust output comes at the price of induced adverse lift at the time of upstroke [148]. Rotational acceleration developed at the end stroke during the flapping (LEV) significantly reduces lift [149]. Optimization of the power output during floating flight can be achieved by knowing the mandatory optimum pitching axis for flapping wings, which saves around 33 percent of the power during hovering [150]. For design purposes, remember that soft vein joints in passive deformation improve the chordwise flexibility and work well [151]. As far as propulsive performance is concerned, propulsive characteristics are significantly affected by the phase angle and mean wing spacing in the flapping wing [152]. Most insects combine their fore and hind wings to produce substantial lift. However, synchronously flapping two wing pairs together tends to create extra lift force [153]. Compared to gliding flight, the anticipated power savings can be influenced by flapping wings in the ground effect, depending on the wing motion [154]. As the size of the insect decreases, the impact of air viscosity on insect wing movement increases [155]. The scale-dependent distribution of energy in the turbulent ambient flow is an important element in how aerial insects such as bumblebees regulate their body orientation. Similar to mosquitoes, bumblebees use unsteady aerodynamic mechanisms, for example, LEV generation, wake capture, and rapid end-of-stroke rotation to enable them to fly [156,157]. Bumblebee-based miniaturized drones are all ready to fly to Mars very soon.
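A minimal implementation sketch of the quasi-steady blade-element sum in Equations (2)-(6) is given below: each spanwise element contributes translational lift and drag, a rotational force, and an added-mass force, while the wake-capture term is omitted because it has no closed quasi-steady form. The lift and drag coefficient fits and all wing parameters are assumptions used as stand-ins, not values taken from references [143,144].

```python
import numpy as np

# Hedged blade-element sketch of Equations (2)-(6), magnitudes only.
rho = 1.225            # air density [kg/m^3]

def CL(alpha):
    # Empirical fit reported for dynamically scaled fruit-fly wings,
    # used here only as an assumed stand-in coefficient model.
    a_deg = np.degrees(alpha)
    return 0.225 + 1.58 * np.sin(np.radians(2.13 * a_deg - 7.20))

def CD(alpha):
    a_deg = np.degrees(alpha)
    return 1.92 - 1.55 * np.cos(np.radians(2.04 * a_deg - 9.82))

def blade_element_force(c, v, v_dot, alpha, alpha_dot, dR, C_R=1.55):
    """Quasi-steady forces of one blade element (Eqs. 3-6)."""
    speed = np.abs(v)
    F_L = 0.5 * rho * c * speed**2 * CL(alpha) * dR          # Eq. (3)
    F_D = 0.5 * rho * c * speed**2 * CD(alpha) * dR          # Eq. (4)
    F_R = C_R * rho * alpha_dot * c**2 * speed * dR          # Eq. (5)
    F_A = (rho * np.pi * c**2 / 4.0) * (                     # Eq. (6)
        (v * v_dot / max(speed, 1e-12)) * np.sin(alpha)
        + speed * alpha_dot * np.cos(alpha)) * dR
    return F_L, F_D, F_R, F_A

# Discretize an assumed 3 mm wing into 20 elements at one instant of the stroke.
R, c_mean = 3.0e-3, 0.7e-3
n = 20
r = (np.arange(n) + 0.5) * R / n
dR = R / n
omega = 2.0 * np.pi * 700.0 * np.radians(22.5)  # peak rate of a 45 deg, 700 Hz stroke
v = omega * r                                    # element speeds
v_dot = np.zeros_like(r)                         # mid-stroke: no linear acceleration
alpha, alpha_dot = np.radians(45.0), 0.0         # fixed pitch at mid-stroke

totals = np.zeros(4)
for vi, vdi in zip(v, v_dot):
    totals += blade_element_force(c_mean, vi, vdi, alpha, alpha_dot, dR)
print("per-wing totals [N] (F_L, F_D, F_R, F_A):", totals)
```

At mid-stroke the rotational and added-mass terms vanish by construction; evaluating the same sum near stroke reversal, where the pitching rate and linear acceleration peak, is where Equations (5) and (6) contribute.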
For tiny aerial insects with a low Reynolds number (Re), such as mosquitoes, viscous effects are more. These insects typically switch their flapping mode to overcome this problem; for instance, the planar-type upstroke to deeper U-shaped upstroke is used to generate large vertical forces [158]. During hovering, the wing with phase flap (alula) provides the maximum lift but the lowest performance and a stabilizing effect on the LEV [159]. For a flapping drosophila, at high stroke amplitude, a hairpin-like vortex loop and stroke reversal affect the instant time the wake capture materializes [160]. Insects even compensate for wing damage. Researchers discovered that insects such as the phorid fly compensate for the loss by increasing the amplitude of the stroke and the angle of deviation [161]. The lift is generated during upstroke for hovering flight due to stable LEVs and a stronger downwash at downstroke [162]. Finally, this entire section summarizes that TEV is a dominant part of mosquito flight, while similar insects, e.g., fruit flies, have comparable sizes, and Reynolds numbers depend on the LEV. This distinction is due to the mosquito’s significantly low stroke amplitude as a unique kinematic characteristic. As described above, passive deformation affects the maximum aerodynamic power but takes care of the delayed stall. With the increase in Reynolds number, the efficiency and performance of mosquitoes also increase [4]. At Re below 70 for miniaturized insects, there is a rapid effect on lift and drag due to the viscous effect being very high [163]. The Reynolds number, which has an inverse relation with k=πfc/Uref, reduces the frequency. The aspect ratio and the flapping amplitude are factors affecting k, and the physical size variance of mosquitoes is a significant factor in the variations of their Reynolds number. As a result, with a Reynolds number increase, the lift coefficient and therefore flight efficiency is enhanced, i.e., larger, which may be the explanation for why larger mosquitoes are particularly good feeders [4].
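The reduced frequency k = πfc/U_ref and the Reynolds number discussed above can be evaluated together for two mosquito sizes; with U_ref taken as the mean wingtip speed 2Φ·f·R, the frequency cancels in k while Re grows with size, consistent with the scaling argument in this paragraph. The dimensions and frequencies below are assumed values.

```python
import math

# Reduced frequency k = pi*f*c/U_ref and Reynolds number Re = U_ref*c/nu,
# with U_ref = 2*Phi*f*R.  Wing sizes and frequencies are assumed values
# chosen only to show the trend that Re grows with body size.

nu = 1.5e-5                      # kinematic viscosity of air [m^2/s]

def k_and_Re(f_hz, chord_m, wing_len_m, phi_deg):
    phi = math.radians(phi_deg)
    U_ref = 2.0 * phi * f_hz * wing_len_m
    k = math.pi * f_hz * chord_m / U_ref
    Re = U_ref * chord_m / nu
    return k, Re

for label, (f, c, R) in {"small mosquito": (800.0, 0.5e-3, 2.2e-3),
                         "large mosquito": (600.0, 0.9e-3, 3.5e-3)}.items():
    k, Re = k_and_Re(f, c, R, 42.0)
    print(f"{label}: k = {k:.2f}, Re = {Re:.0f}")
```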
Recent advances in small-scale manufacturing and control have made it possible to build insect-scale robots. Nevertheless, there are still numerous constraints on component technologies, such as scalable high-energy storage, which restrict their functionality and propulsion, power, and control architecture [164,165]. Insect robots have not yet demonstrated characteristics such as the ability to traverse complex and substantially dynamic habitats, rapidly adjust flight speeds and even directions, the robustness to environmental threats, and the ability to travel long distances autonomously instead of their natural counterparts. Can mosquitos’ specific aerodynamic features help improve wing architecture and miniaturized drone designs in the future? The answer to this question needs more intensive research, but as per the existing literature, we observe:
5.1. The Importance of Dynamic Stability, Mechanisms, and Mathematical Modeling
Free of a radial position, the aerodynamic force is extremely positive regarding lift interceded by rotational drag. It covers the entire wingspan, incredibly close to the root, where the lift is negligible. This specialty, including lower inertial costs and smaller pitching torques, is possibly crucial in creating the mosquito’s high aspect ratio wings [2] and is one of the design factor in imitation. Apart from having excellent aerodynamic characteristics, mosquitoes also possess good dynamic stability, a boon for the future design of miniaturized drones. Mosquitoes have a unique behavioral characteristic related to the response to avoid any obstacle during the short-range, very blurred vision, and change direction in flight called mechanosensory collision-avoidance mechanism. Researchers use CFD-based dynamic kinematic analysis to measure corresponding changes that appear at the pressure and velocity cues at this mechanosensory antennae. Figure 10 shows the model quadcopter fitted with such sensory equipment. By detecting nearby obstacles during flight, the model system successfully emulated the mosquito model’s behavior [166]. Insects typically take-off from the ground using a catapult technique to impel legs against the ground surface while propelling them into the air using their pairs of flapping wings [167]. Although mosquitoes use a slightly different technique, such as flapping their wings before jumping, this combination is an effective way to get rid-off unspecified terrain or steer clear of enormous obstacles, and even the host will not detect them.
Not just wings but fog and dense gas affect stability by increasing the aerodynamic drag on halteres [168], so of course, the knowledge of mechanisms is a must at these scales. As it is evident that smaller stroke amplitudes such as that of mosquitoes have strong unsteady effects, high flapping frequency in insects can be explained appropriately using the rigid body assumption and vice versa [78,169]. To understand the dynamics of these miniaturized insects and use it for mimicked robots, mathematical modeling or CFD is a great tool. The aerodynamic force and moment generation in insects is oscillatory due to unsteady flow, so mathematical methods that deal with nonlinearity should be used [170]. Kinematics can be easily imitated for real insects, but artificial insect-based robots are not easy to build. Mathematical models must also be simplified [171].
5.2. Transmission Systems, Suitable Controllers, and Acoustics
Sensory and biomechanical systems must be taken care of in order to emulate responsive insect bio-robots. Current gliding animals have used these to track and guide their descent and are an essential factor in the evolution of modern biomimicked flight [172]. One such key challenge for designing and producing insect flapping-wing robots is generating a productive and efficient transmission system to control flapping wing movements. Insect thoracic based system actuated by electrostatic force [173] or bio-inspired thoracic robotic designs for generating kinematics of asymmetric wing almost similar insects found in nature are some examples [174]. Table 1 gives light detail about some of these mechanisms. Even suitable controllers are needed to control the motion and look into stability and performance. The development of real-time controllers that can use the concept of an input–output linear time-invariant (LTI) equivalent system to apply the desired trajectories to micro-robotic insects is an example [175]. As artificial muscles can withstand the stresses caused by collision impacts, they are a good alternative to actuation. However, due to nonlinearity and limited bandwidth, these soft actuators have yet to show adequate power density for lift and are not suitable for flight control [176]. Researchers have designed stable robots using soft artificial muscles made from multi-layered dielectric elastomer with a resonant frequency of 500 Hz, recently in 2020 [176]. In a low Reynolds number flapping wing, the dynamic stall is a vibrant, dynamic problem. It accompanies many characteristics such as dynamic stall vortex, large aerodynamic loads that can mix with structural dynamics, and even negative damping [177]. The transmission system’s total weight was substantially decreased by piezoelectric transmission [178], which is still being experimented with so far in several versions. As discussed in the previous section, elastic wing properties are also very significant. Material analysis such as flexibility and compatibility in designing realistic wings, along with the transmission mechanism, and good elastic properties are fundamental [179]. Research also shows that bio-inspired insect robots undergo a stabilization technique called vibrational stabilization, with exceptionally high frequencies [180]. Without the need to use a thrust force, some insect species generate the flapping lift sufficient to retain their body weight [181]. Researchers found that the CG repercussion on longitudinal flight stability is a common characteristic of all tailless flapping insect species, with research restricted to the longitudinal direction [182]. It is necessary to know that the active control mechanism, partnered with light microcontroller-based actuators, can produce significant control torques to keep the robot airborne [183]. High sweep amplitude is more beneficial for power requirements than low amplitudes that need higher frequencies, resulting in higher inertial forces to generate a similar vertical force [184]. The modern Robobee from Harvard University or Robofly by UoW had been manufactured with proven MEMS piezo-based mechanisms, powerful miniaturized controllers, and laser power and at a large scale, reconfigurable multi-rotor using a novel active-passive motor scheme have already been proposed [185].
Have you ever wondered if there is an aerodynamic buzz or pitched whine associated with mosquito flight? This is due to the high flapping frequency. When it comes to insects such as mosquitoes, the acoustics are crucial. Researchers have tried to figure out why insects, particularly mosquitos, have this peculiar trait. In order to better understand the sound generation mechanism of flapping wings, Sueur et al. (2005) discovered that the flapping sound is directional, with the wing beat frequency dominating the front and the second harmonic dominating the two sides [186]. Although it is clear from the preceding sections that flexibility plays an important role in lift generation, its impact on aerodynamic sound is not well understood. The fluid–structure–acoustics interaction of flexible flapping wings was numerically investigated using an immersed boundary method at a Mach number of 0.1 in a recent study published in 2019 and 2020. There were three important observations made; (1) Coupled (translating and rotating) motion produces smaller sounds than the translating wing, and greater rotational angles convert the dipole sound to a monopole sound. Sound fields are present, but they shift downstream for large flexible wings. (2) The flapping frequency dominates the sound; (3) when the wing is flapping with a stroke plane angle less than 90°, the sound on the windward side is noticeably louder [187,188]. This explains, to some extent, the extreme buzz produced by mosquitoes with stroke amplitudes in the range of 40 to 45° and high flapping frequency (~800 Hz).
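As a toy illustration of the wingbeat-tone observation above, the snippet below builds two synthetic signals, one dominated by the flapping frequency (front) and one by its second harmonic (side), and confirms where their spectral peaks fall. The relative amplitudes are arbitrary assumptions and carry no acoustic modeling content.

```python
import numpy as np

# Toy illustration of the wingbeat-tone idea: a signal dominated by the
# flapping frequency plus a weaker second harmonic.  Amplitudes are
# arbitrary assumptions; this only shows where the spectral peaks fall.

fs = 44100.0                       # sample rate [Hz]
f0 = 800.0                         # assumed mosquito wingbeat frequency [Hz]
t = np.arange(0, 0.5, 1.0 / fs)

front = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
side = 0.3 * np.sin(2 * np.pi * f0 * t) + 0.8 * np.sin(2 * np.pi * 2 * f0 * t)

for name, sig in (("front", front), ("side", side)):
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    peak = freqs[np.argmax(spec)]
    print(f"{name}: dominant spectral peak near {peak:.0f} Hz")
```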
For the information of our readers, the immersed boundary method (IBM) is a numerical technique introduced by Peskin in 1972 [189] that handles boundary conditions on grids that do not conform to the immersed boundary shape [190]. Mittal et al., for example, developed a sharp-interface IBM to analyze incompressible viscous flow past 3D immersed bodies; it manages complex immersed surfaces on Cartesian grids using a ghost-cell methodology to satisfy the boundary conditions [191]. Researchers are also developing the immersed boundary surface (IBS) approach, based on an active-cell concept. For the numerical investigation of the aerodynamics of insect-like complex geometries, IBM techniques are effective.
6. Flight in a Low-Density Environment Such as Mars
A low-density atmosphere such as that of the red planet Mars has an average pressure of 0.6% of Earth’s. Mars also has unearthly features, such as carbon dioxide (CO2), the primary atmospheric variable, which condenses in the Martian polar regions and the middle atmosphere [192].
6.1. Prior Work
According to the Russian Academy of Sciences, during an experiment on the ISS, natural mosquitoes survived in outer space for an incredible 18 months and could be brought back to Earth alive. Studies indicate that fixed-wing drones that may fly on Mars in the future will perform more poorly than on other solar system bodies because of the low Reynolds number values; drones flying in a lower-density environment such as Mars, relative to Earth, stop providing the needed performance [193]. As far as planets such as Mars are concerned, insect-like compliant wings for low-density environments improve aerodynamics and low-power design. High lift coefficients can be obtained by looking into strict dynamic similarities between the bio-inspired insect flight regime and the Martian climate. Due to the extremely low density on the red planet, inertial power becomes influential. Minimal flight time is the greatest challenge for flapping-wing micro-air vehicles because of restricted onboard energy storage space; given the average ground solar spectral irradiance and the energy-conversion efficiency of solar cells, the difference in the energy supply rate by size has been assessed [194]. A flapping-wing micromechanical flying insect (MFI) called the Entomopter has been modeled and simulated for continuous and autonomous flight in the Martian atmosphere; its overall geometry is based on hummingbirds and large insects. Due to the very low density, the unsteady aerodynamics of its flapping wings need investigation [195]. DelFly and ExoFly are two examples of flapping-wing flying robots designed particularly for low-density flight. Although various physical characteristics require adaptation to Mars conditions, studies reveal that this should not be a significant obstacle to feasibility [196]. Marsbee is a bumblebee-imitating robot with enlarged wings that allow the vehicle to sustain its weight in the Martian atmosphere. As in the bumblebee, the wing-to-body mass ratio of the mosquito is just 0.52 percent, so even a large change in the wing area raises the overall weight by only a fraction. The excellent ability to hover, fly efficiently even under the impact of external forces, and operate at fast forward speeds with exceptional aerodynamic capability makes it an appealing biomimicking candidate in both cases [197]. In 2020, NASA sent the Perseverance rover on a Mars mission with the Ingenuity helicopter, which landed safely on February 18, 2021. This rotorcraft has a rotor diameter of 1.21 m, and its performance at a low Reynolds number has to be assessed. The insect-based flapping wing can more easily overcome these difficulties in this rarefied atmosphere, since insects have an excellent capability to use unsteady aerodynamic mechanisms at such low values of Re [198].
6.2. Flight Feasibility in a Low-Density Environment
For mosquito-based robots’ suitability for flight in low-density environments such as Mars, it is essential to account for the lift enhancing unsteady mechanisms such as wake capture, rotational drag, and added mass effect, perfect for the flight on Earth. This part of the review is purely based on aerodynamics, but it is essential to note that the actuator dynamics and materials used for such mimicked robots in such an environment also play a significant role and are a subject of research. To date, the study has only focused on wing beat motion to generate enough lift to sustain weight in such an environment. Once this question is appropriately answered, one can design the sensors, actuators, controllers, and power sources for success on Mars. The findings reveal that there are four major challenges to overcome in order to successfully fly through the Martian atmosphere. Due to the high concentration of CO2 in the atmosphere, traditional oxygen air breathing motors cannot be used; instead, we must rely on chemical or electric propulsion, which is difficult with insect-sized robots. Second, because of the low density, it is difficult to generate enough lift to fly. The third issue is Martian gravity, which is one-third that of Earth, and the fourth and most intense issue is temperature, particularly at night, where it plunges to around as low as −90 °C, making it difficult for components to survive if left unheated. As per references [197,198], who have extensively studied this part of the flight, the following can be possible, which authors of this review have compared to mosquito flight too.
Studies of bumblebee-like insect characteristics for Martian flight show that, to achieve hover on the red planet, it is crucial to offset the reduced density and reduced gravity by adjusting the flapping motion. This offset can be achieved by scaling the wings without changing the aspect ratio (though mosquitoes have a higher aspect ratio than bumblebees). Since the flapping amplitude is not changed, the reduced frequency k is not affected; the Reynolds number is affected, but it remains low in any case because of the low density. Studies also suggest that a high flapping frequency is needed to offset the density and gravity factors, and the wingtip Mach number also plays a significant role here. Mosquito-inspired robots are excellent candidates in this respect, because mosquitoes combine a very high flapping frequency with a low stroke amplitude, both essential factors for Martian flight that need further investigation. The wing scaling studied for bumblebees comes with high power requirements, but mosquito-mimicked robots may offer some relief in that regard, because the mosquito already flaps at more than 800 Hz, close to the roughly 990 Hz that reference [197] indicates is required to offset the reduced density and gravity. Wing scaling therefore need not be as aggressive as for the bumblebee; research on this point is in progress.
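To make the density/gravity offset concrete, here is a minimal quasi-steady scaling sketch in Python. It assumes lift scales as ρ f² Φ² R⁴ at fixed stroke amplitude, lift coefficient, and aspect ratio, with the body mass unchanged, and solves the hover condition for the required frequency ratio between Mars and Earth. The densities, gravities, wing-scale factors, and the 800 Hz baseline are illustrative assumptions; the sketch reproduces only the trend, not the exact figures of reference [197].

```python
# Minimal quasi-steady hover-scaling sketch (illustrative assumptions only).
# Assumes lift ~ rho * f^2 * Phi^2 * R^4 (fixed C_L, stroke amplitude, and
# aspect ratio) and that hover requires lift = m * g with the body mass fixed.

RHO_EARTH = 1.225   # kg/m^3, sea-level air density on Earth (assumed)
RHO_MARS = 0.017    # kg/m^3, approximate Martian surface density (assumed)
G_EARTH = 9.81      # m/s^2
G_MARS = 3.71       # m/s^2

def mars_frequency_ratio(wing_scale_n: float = 1.0) -> float:
    """Required ratio f_Mars / f_Earth for hover when the wing length is
    scaled by a factor n at fixed amplitude and aspect ratio."""
    return ((G_MARS / G_EARTH) * (RHO_EARTH / RHO_MARS) / wing_scale_n**4) ** 0.5

f_earth = 800.0  # Hz, order of magnitude of a mosquito wingbeat (assumed)
for n in (1.0, 1.5, 2.0, 3.0):
    print(f"n = {n:.1f}: required f_Mars ~ {f_earth * mars_frequency_ratio(n):.0f} Hz")
```

With the wing roughly doubled in length, the required frequency comes out near 1 kHz, the same order as the mosquito's natural wingbeat, which is the qualitative point made above.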
Reference [197] also sheds light on the aerodynamics of MAVs on Mars: despite the low Reynolds number, lift-enhancing mechanisms such as delayed stall, rotational lift, and the added-mass effect can help produce sufficient lift. In reference [198], a study published in 2021, the vorticity contours and lift time histories shown in Figure 11, established using the Q-criterion and the coherence of the vortex structures (low-pressure regions), show that the average lift is high enough to balance the weight and that the time histories are similar to those of insects on Earth, as shown in Figure 11a. Due to the LEV, the CL value is high, with a lift peak during each half stroke, as in Figure 11b; this can be linked to the negative pressure gradient caused by the high vorticity near the LE. When the stroke ends, the reduction in CL due to the shedding of the LEV is compensated by rotational lift using the attached TEV, as shown in Figure 11c,d. Mosquito flight works in almost the same way, owing to the unique aerodynamic mechanisms discussed in previous sections and the use of the TEV for lift enhancement.
It is essential to note that wing scaling carries a penalty in actuation power, and in this regime the inertial power is more significant than the aerodynamic power because of the ultra-low density. Figure 12 shows the flap and pitch power time histories required for hovering on the red planet for different wing sizes n. Because the flapping amplitude and frequency change to maintain equilibrium as n increases, the wing kinematics must be optimized to minimize the power requirement [197].
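The dominance of inertial power can be illustrated with a few lines of Python. The sketch below evaluates the inertial flap power I * phi_ddot * phi_dot for a sinusoidal stroke; the moment of inertia, amplitude, and frequency are rough mosquito-scale assumptions, not values from reference [197] or Figure 12.

```python
# Inertial flap power for a sinusoidal stroke phi(t) = A*cos(2*pi*f*t):
# P_inertial(t) = I * phi_ddot(t) * phi_dot(t). With negligible aerodynamic
# torque (ultra-low density) this is essentially the power the actuator sees.
# The wing inertia, amplitude, and frequency below are rough assumptions.

import math

def inertial_flap_power(t, freq_hz, amplitude_rad, inertia_kg_m2):
    w = 2.0 * math.pi * freq_hz
    phi_dot = -amplitude_rad * w * math.sin(w * t)
    phi_ddot = -amplitude_rad * w * w * math.cos(w * t)
    return inertia_kg_m2 * phi_ddot * phi_dot

f, A, I = 800.0, math.radians(40), 1.5e-14   # Hz, rad, kg m^2 (assumed)
for k in range(9):                           # sample one flapping period
    t = k / (8.0 * f)
    p_uw = inertial_flap_power(t, f, A, I) * 1e6
    print(f"t/T = {k/8:.3f}   P_inertial = {p_uw:+8.1f} uW")
```

The sign alternates every quarter stroke: the actuator must absorb as much energy as it delivers, which is the negative-power feature noted below and the motivation for elastic energy storage such as a torsion spring.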
Interestingly, the power time history contains significant negative values during the stroke, and its amplitude rises with the wing size [199]. Missions such as Marsbee, with a broad operating range from 10 to 100 km and beyond, will enable low-altitude, substantially higher-resolution imagery of Mars, while in-depth long-range Martian exploration will allow the observation and study of the Martian atmosphere and of phenomena such as dust storms [200].
Insect-inspired robots for planetary research are a booming new area of study; work is in progress, and a small but growing body of literature is available to draw on. Weight also affects performance: because of the weaker gravity on the red planet, a drone's weight on Mars differs from its weight on Earth, decreasing by about 61.5 percent, while its mass is unchanged [1]. Mosquito-based robots have many attributes that favor effectiveness in low-density atmospheres such as that of Mars; for instance, their low stroke amplitude, high flapping frequency, and considerable relative wing size, which are tied to the required power and to stability, allow them to hover stably.
It is therefore clear that the low Martian atmospheric density makes flight on Mars difficult. Aerodynamic forces depend strongly on the atmosphere's density, which rules out conventional aerial configurations on Mars. In a simulated Martian environment, trimmed flight and hovering are only feasible if dynamic similarity with insect flight on Earth is achieved. This can be accomplished by maintaining the necessary dimensionless parameters and scaling the wings to three to four times their standard size, as described above. Because of the ultra-low density, most of the required power goes into overcoming the inertia of the wings; by using a torsion spring, the inertial flap power can be significantly reduced [197]. All these considerations need to be taken into account when developing potential bio-inspired robots for planetary studies. Mosquito-based characteristics, modified for the Martian atmosphere, nevertheless offer an excellent alternative.
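The torsion-spring idea can be sketched in a few lines: for a sinusoidal stroke the inertial torque is proportional to the stroke angle itself, so a spring of stiffness k = I * w^2 (resonant tuning) cancels it and the actuator only has to supply the aerodynamic torque, which is ignored here. The inertia, amplitude, and frequency below are the same illustrative assumptions used earlier, not values from reference [197].

```python
# Resonant torsion-spring sketch: for phi(t) = A*cos(w*t) the inertial torque
# is I*phi_ddot = -I*w^2*phi, so a spring torque of stiffness k = I*w^2 cancels
# it exactly. Aerodynamic torque is neglected; all values are assumptions.

import math

I_WING = 1.5e-14          # kg m^2, assumed wing moment of inertia
FREQ = 800.0              # Hz, assumed flapping frequency
A = math.radians(40)      # assumed stroke amplitude

w = 2.0 * math.pi * FREQ
k_res = I_WING * w**2     # resonant spring stiffness, N m / rad

def actuator_torque(t, k_spring):
    """Torque the actuator must supply to follow the stroke: wing inertia
    minus what the spring provides (aerodynamic torque neglected)."""
    phi = A * math.cos(w * t)
    phi_ddot = -A * w * w * math.cos(w * t)
    return I_WING * phi_ddot + k_spring * phi

t = 3.0 / (16.0 * FREQ)   # an arbitrary instant within the cycle
print("without spring   :", actuator_torque(t, 0.0), "N m")
print("with tuned spring:", actuator_torque(t, k_res), "N m")
```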
7. Conclusions and Outlook
Mosquitoes are without doubt among the best flyers and most intelligent insects on the planet. The study of this insect's anatomy, kinematics, aerodynamics, and stability has so far revealed the value of imitating it, of creating smart tiny drones, and of using them for anti-disease work, for atmospheric studies in low-density environments such as Mars, and for many other applications. The conclusions of this review are as follows:
Since the flow around mosquito wings is extremely unsteady, their aerodynamic mechanisms differ markedly from those of other insects in the same group. A high flapping frequency and a low-amplitude stroke are beneficial in terms of flight, attraction, and feeding. Mosquitoes extensively exploit rotational drag and the TEV, through wake capture, for lift enhancement and even for sustaining their weight during flight.
Wing deformation and the Reynolds number are two crucial factors influencing the flight. Wing deformation is essential for the mosquito because it takes care of delayed stall, reducing the overall aerodynamic energy required for hovering flight. Compared with insects that use large stroke amplitudes, the mosquito's lift production is governed by different aerodynamic processes. Most such insects create lift with the support of the LEV, which stays bound to the wing and travels with it, together with the corresponding vortex motion. The flow around mosquito wings, by contrast, is exceptionally unsteady; lift is generated by mechanisms such as the TEV exploited through wake capture, which, for mosquitoes in particular, newer work reinterprets in terms of the added-mass effect, along with the 'rapid pitching-up rotation' mechanism.
According to the analysis, a sizeable aerodynamic force is produced when the time rate of change of the first moment of vorticity is dominated by the rapid production of opposite-sign vorticity at distinct wing locations. Large mosquitoes are more effective in feeding on hosts because increasing size brings an increased lift coefficient and power efficiency and a higher Reynolds number, and thus better aerodynamic performance.
The mosquito's excellent ability to hover, to fly efficiently even under the impact of external forces, and to operate at fast forward speeds with exceptional aerodynamic capability makes it an appealing biomimicking candidate for investigating low-density planetary atmospheres such as that of Mars. Mimicking a mosquito in a miniaturized flapping-wing insect robot for planetary studies comes with several challenges; for example, an extremely low-density atmosphere drastically affects the flight's efficiency and sustainability. Wing length is also one of the factors that plays an essential role in flight stability.
All the aerodynamic features examined in this review support the ability of mosquito-imitated robots to fly at low density, an ability that needs to be explored thoroughly in experiments and demonstrated in controlled flight. Despite the challenges, mosquito-imitated robots promise a bright and auspicious future.
This review hopefully provides valuable information for further investigation and for a more elaborate study of the unsteady aerodynamics of mosquito flight, its kinematics, and low-density environments, which will help in successfully imitating these insects for various applications. Future work will address the production of a mosquito-inspired robot with suitable materials and improved aerodynamics.
Author Contributions
Formulation, preparation, writing—original draft preparation and review and editing, B.S.; formulation, supervision and writing—review and editing, N.Y. and K.A.A.; writing—review and editing, N.Y., A.A.B., R.P., and K.A.A. All authors have read and agreed to the published version of the manuscript.
Funding
This study is supported by the Universiti Putra Malaysia Geran Putra Berinpak (GPB) research grant; UPM/800-3/3/1/GPB/2019/9677600.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors gratefully acknowledge Universiti Putra Malaysia (UPM) for providing opportunities for biomimicry research to flourish and for making this mosquito-mimicked small robot research a reality. The authors would also like to convey their gratitude to UPM for granting them the necessities required to advance in biomimicry research through the University’s Geran Putra Berinpak (GPB) research grant; UPM/800-3/3/1/GPB/2019/9677600.
Figure 3.
(a) Aerodynamic representation of a mosquito during flight; (b) representation of the phases of the mosquito flapping cycle; (c) graphical representation of the formation of leading- and trailing-edge vortices during the phases of the mosquito wingbeat cycle.
Figure 4.
Stroke-based CL, CD, and L/D ratio in relation to AoA and stroke amplitude. (A,C,E) show contour maps of values measured with the mechanical model. (B,D,F) show values obtained from the translational quasi-steady model using empirically measured force coefficients. Reproduced with permission from Ref. [25]. Copyright 2001, Company of Biologists Ltd. (Cambridge, UK).
Figure 5.
(a) How added-mass inertia contributes to an estimated total aerodynamic force with a specific kinematic pattern at 45 degrees AoA. Reproduced with permission from Ref. [25]. Copyright 2001, Company of Biologists Ltd. (Cambridge, UK). (b) Coefficients of lift and drag in a single flapping cycle with a mosquito sample in flight. Reproduced with permission from Ref. [5]. Copyright 2020, Cambridge University Press (Cambridge, UK).
Figure 6.
Instantaneous vorticity distribution of the y-component at different t/T and the velocity in the symmetric XZ plane at the center point of the wingspan: (a) the numerical simulation results; (b) the PIV results from [2]. Reproduced with permission from Ref. [4]. Copyright 2019, AIP Publishing (Melville, NY, USA).
Figure 12.
Time histories of the (a) flap power and (b) pitch power required for hovering on the red planet with different wing sizes n. Reproduced with permission from Ref. [197]. Copyright 2018, IOP Publishing (Bristol, UK).
|
yes
|
Probabilistics
|
Was it possible for Bumblebees to fly according to the laws of aerodynamics?
|
no_statement
|
"bumblebees" were unable to "fly" "according" to the "laws" of "aerodynamics".. "bumblebees" did not adhere to the principles of "aerodynamics" in their flight.
|
https://aphidsrus.wordpress.com/2016/11/09/how-do-bumblebees-fly/
|
How do bumblebees fly? | Mastering Entomology
|
How do bumblebees fly?
The flight of the bumblebee is not only an excellent classical piece composed by Rimsky-Korsakov, but also the subject of another ‘fact’ about insects, which usually goes something like: “According to the laws of physics, bumblebees shouldn’t be able to fly,” or a phrase of similar meaning. Indeed, the claim that bumblebees, and only bumblebees, violate the observably consistent Newtonian laws of motion is not a very strong position to hold, and it suggests that the advocate of this belief should more fully examine how bumblebees generate lift. I shall try to provide such an examination. But first, let us delve into insect flight more generally!
A bumblebee ‘defying the laws of physics’
There are two main ways insects control their wings. The assumed primitive condition for insects only exists in two living groups, the mayflies (Ephemeroptera) and the dragonflies (Odonata). They are the last surviving members of the group Palaeoptera (appropriately meaning ‘ancient wing’) and conduct ‘direct flight’. These groups directly control muscles that are attached to the base of each wing, allowing for powerful downward and upward contractions, as well as smaller muscles that can alter the angle or tilt of the wing. The true dragonflies (Odonata: Anisoptera) maximise on this, controlling almost every aspect of their wing strokes, including angle, speed, and power ratio between each wing, whilst wing beat frequency remains largely constant (Alexander, 1986). This method is energetically expensive because of the large muscles involved, but does allow them to out-pace and out-manoeuvre their prey (usually dipterans).
The second method of innervating the wings is found in all other winged insects, the Neopterans (you guessed it, ‘new wing’!), and is called ‘indirect flight’. Although there are muscles attached to the base of the wing, they only control its tilt and angle; the power is generated by muscles that deform the thorax. Contraction of muscles that attach to the top (dorsal) and bottom (ventral) internal surfaces of the thorax – called the tergosternal muscles – pull the thorax down and lever the wings upwards because of the ‘double jointed’ nature of how each wing is attached. The wings are lowered for the downstroke when dorsally located longitudinal muscles compress the thorax front-to-back and the dorsal surface (notum) becomes elevated. The way the wings are moved up and down due to levering effects is comparable to the way rowers pull their oars to generate force. That’s right, just imagine the wings are tiny oars and the notum is the torso of the rower.
In this second group, wing beat frequencies can be substantially higher than in the first. Whilst dragonfly wingbeat frequency may peak at about 40 Hz (beats per second per wing), a hummingbird hawkmoth peaks at 90 Hz, a Bombus terrestris worker bumblebee at 156 Hz, a Chalcid wasp at 400 Hz, and Forcipomyia midges have been shown to peak at 1000 Hz (Sotavalta, 1953; King et al., 1996; Wang, 2005)*. We are referring to beats per second remember! The wingbeat frequencies for the bumblebee, wasp, and midge all exceed the rate at which nerves are able to fire impulses, so their muscles must be adapted to contract several times for each nerve impulse. This type of muscle is called ‘asynchronous flight muscle’.
Our friendly neighbourhood bumblebee has asynchronous flight muscles (i.e. one nerve impulse for several muscle contractions) that are innervated by tergosternal muscles (upstroke) and dorsal longitudinal muscles (downstroke) i.e. it utilises the second method – indirect flight (King et al., 1996). Bumblebees’ wings may be small and heavily loaded, but they are still able to generate significant lift through several cumulative aerodynamic principles. The most important of these is the production of a ‘leading edge vortex’ on each wing (Dickinson et al., 1999).
As the bee’s wing moves through the air (translation), the air flow separates upon crossing the leading edge, but reattaches before reaching the trailing edge. This leaves a sort of air bubble, stimulating the formation of a vortex (a circulating ‘parcel’ of air) that is situated on the dorsal surface of the wing near the leading edge. This vortex generates lift perpendicular to the plane of the wing (‘normal’ force). It sucks in air, accelerating it downwards at high velocity; however, the area it is being sucked into is at low pressure (Bernoulli’s Principle), and it is this low pressure above the wing that provides the lift**.
A physical visualisation of the leading edge vortex, showing the ‘normal’ nature of the force generated (from Sane, 2003)
Not only that, but these vortices detach, persist, and generate further lift by interacting with the wing on the next stroke (Dickinson and Gotz, 1993). The shed vortices enhance the wing velocity and acceleration upon meeting it, resulting in greater aerodynamic forces immediately following stroke reversal – this is called a wing-wake interaction. The reason the airflow separates and a vortex forms is because insect wings are thin and translate at a high angle relative to the oncoming air (angle of attack). This high angle allows a greater downward momentum to be imparted on the air below it, resulting in further lift and thrust generation (Ellington, 1999). Thus, far from defying the laws of physics, bumblebees utilise several complex principles of fluid-dynamics to gloriously propel themselves along in their own bumbling way.
The leading edge vortex and wake creation, which enhances lift when the wing interacts with it on the following stroke in a generic dipteran (from Lauder, 2001).
But how does a bumblebee generate this lift with such small wings relative to their size? A typical B. terrestris bumblebee of 0.88 grams has a total wing area of 1.97 cm² and therefore a wing loading of 0.447 g cm⁻² (0.224 g cm⁻² for right and left pairs of wings separately) (Sotavalta, 1952). This figure represents the amount of force the wings need to generate in relation to their area, and bumblebees tend to have a higher wing loading than their mass would suggest (Byrne et al., 1988). The way they generate the required lift with their relatively small wings is by beating them much faster than most other insects of a comparable mass (Byrne et al., 1988). Whilst B. terrestris beats its wings at 156 Hz, the 5% lighter Saturniid moth Adeloneivaia boisduvalii beats its wings at only 30 Hz (Bartholomew and Casey, 1978). Despite being only 0.04 g lighter at 0.84 g, its wing area is a whopping 5.564 cm² and therefore its wing loading is kept at a low 0.151 g cm⁻². Yet both can fly proficiently. The same is true for numerous other bumblebees (Bombus spp.), as high wing loading seems to be one of their traits as a genus (see Table 1 in Byrne et al., 1986).
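As a quick arithmetic check of the wing-loading figures quoted above, the following few lines of Python reproduce them from the stated masses and wing areas (the function name is purely illustrative):

```python
# Reproduces the wing-loading figures quoted in the text above
# (mass in grams divided by total wing area in cm^2).

def wing_loading(mass_g, wing_area_cm2):
    return mass_g / wing_area_cm2  # g per cm^2

print("B. terrestris:  ", round(wing_loading(0.88, 1.97), 3), "g/cm^2")   # ~0.447
print("A. boisduvalii: ", round(wing_loading(0.84, 5.564), 3), "g/cm^2")  # ~0.151
```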
The adaptation of indirect flight muscles increases the rate at which bumblebees can flap their wings, which is further augmented by usage of asynchronous flight muscles that remove constraints to wingbeat frequency enforced by the nervous system. The result is a very high wingbeat frequency for their weight, providing them with enough lift, primarily via the leading edge vortex and wing-wake interaction processes. So, according to physical principles, bumblebees are able to fly in a very interesting way.
Until next time.
Blog article written by Max Tercel ([email protected])
* These values are rough averages for each taxon
** Interestingly, although the vortex generates lift, it also generates drag due to the normal nature of the force it generates.
|
yes
|
Probabilistics
|
Was it possible for Bumblebees to fly according to the laws of aerodynamics?
|
no_statement
|
"bumblebees" were unable to "fly" "according" to the "laws" of "aerodynamics".. "bumblebees" did not adhere to the principles of "aerodynamics" in their flight.
|
https://faculty.washington.edu/callis/Flight/Insect_Flight_A-99.htm
|
Insect Flight Research
|
According to the laws of quasi steady-state aerodynamics, insects cannot produce enough lift pressure to fly. The mechanism whereby they achieve flight must involve unsteady flows interacting with the dynamically changing wing surfaces. Quantitative experimental data on this issue is not currently available. We have recently invented a new flow visualization technique, luminescent barometry. We will apply this technique to measuring the time dependent surface lift pressure produced by a bumblebee in flight.
Introduction and Rationale
Insects represent some of the most versatile and maneuverable of all flying machines. Many of them can hover, turn in their own length, decelerate rapidly, roll over, loop, and even land upside down on a ceiling. Yet insects cannot fly, at least according to the conventional laws of aerodynamics, which rely on quasi steady-state approximations. Clearly, the extra lift required must be generated from the complex (unsteady) flapping motions generated during the wing-beat cycle. Until recently, the exact nature of this extra source of lift remained a mystery.
In three remarkable papers (1-3), Ellington and colleagues have provided insight into the high-lift mechanism used by the hawkmoth, Manduca sexta. These researchers obtained details of the flow around the wings and the overall wake structure by use of conventional flow visualization techniques (carefully placed smoke rakes and stereophotography). Both tethered insects and a hydrodynamically scaled dynamic model were studied. Observations of the insect indicated the presence of a leading edge vortex and a highly three-dimensional flow pattern. To further investigate this intriguing discovery, a scaled-up robotic insect was constructed and subjected to more exacting flow visualization. In this system, further evidence was obtained for a leading edge vortex generated on the downstroke, which moreover was shown to be associated with a strong axial circulation. In addition, the axial vortex was connected to a large tangled tip vortex, extending back to a combined stopping and starting vortex from pronation. Similar experiments have been carried out by Dickinson (4) on fruitflies.
The above works are admirable for the new insights they provide for insect flight, and they encourage further study. For example, the conditions for optimizing vortex stability are at present unknown. In addition, such questions as what is truly the maximum angle of attack and how does the angle of attack vary during the stroke remain to be answered. Also, flow techniques are largely qualitative in nature. Thus, calculations of the mean lift force (2) and span-wise lift force (3) generated by the vortex are highly speculative. Clearly, quantitative studies of the lift force generated as a function of position on the wing surface would be of great value, especially if they could be generated at arbitrary positions of the wing during its wing-beat cycle.
Some years ago, the author of this proposal, together with Professor Martin Gouterman and his students, invented luminescent barometry (5, 6). This technique provides a flexible and relatively inexpensive method and apparatus for continuous pressure mapping of aerodynamic surfaces. It is based on the use of a luminescent paint which consists of a phosphorescent compound, a platinum porphyrin dissolved in an oxygen permeable polymer. When the surface to be studied is coated with the paint and illuminated with ultraviolet light, it is observed to give a beautiful red luminescence. The intensity of the emitted light is found to be proportional to the inverse of the pressure at the surface because the air contains a constant fraction of oxygen independent of the total pressure. Luminescent images are captured with a CCD camera with computer interface. Calibration is accomplished by obtaining a reference image with the wind tunnel fans off. Compared to conventional pressure tap methods, luminescent barometry does not require drilling holes in the surface, provides a much faster response and maps the pressure over the entire surface. Over the past decade, this technique has been refined and used in wind tunnels around the world (7). (see Figure)
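As a sketch of how such images might be reduced to pressures, the short Python function below converts a luminescence image into a pressure map using a wind-off reference image, following the simple inverse-pressure relation stated above. Real pressure-sensitive paints are usually calibrated with a Stern-Volmer-type fit; the coefficients, grid, and numbers here are hypothetical.

```python
# Minimal sketch of wind-off calibration for a pressure-sensitive coating,
# assuming the text's relation that emitted intensity is proportional to 1/P.
# A Stern-Volmer-style fit I_ref/I = a + b*(P/P_ref) is included for
# generality; with a = 0 and b = 1 it reduces to P = P_ref * I_ref / I.

import numpy as np

def pressure_map(image, reference_image, p_ref, a=0.0, b=1.0):
    """Convert a luminescence image to a pressure map via a reference image."""
    ratio = reference_image / image       # brighter than reference => lower pressure
    return p_ref * (ratio - a) / b

# Wind-off reference (uniform) versus a wind-on image that is 5% brighter
# over part of the wing, i.e. locally lower pressure. Hypothetical data.
ref = np.full((4, 4), 1000.0)
wind_on = ref.copy()
wind_on[:2, :] *= 1.05
print(pressure_map(wind_on, ref, p_ref=101.3))   # kPa, assumed ambient pressure
```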
Figure Pressure profiles over the plane surface. Left half of plane describes the computational fluid dynamics model results. The right half of the plane reflects the pressures determined by pressure sensitive paint luminescence. Note the excellent agreement between experimental and theoretical results. Adapted from a presentation (7) by Marvin Sellers, Sverdrup Technology Group, Arnold AFB, TN.
It is our belief that the technique of luminescent barometry could be profitably applied to the problem of insect flight. Our desire to carry out such studies has been given impetus by recent advances in computational fluid dynamics and computer power that make possible simulations of unsteady three dimensional flows interacting with moving boundaries. New computer programs, being developed at the Courant Institute will make possible a level of comparison of theory (8) and experiment that is unprecedented.
However, further thought shows that such an application of luminescent barometry is not to be undertaken lightly. Clearly, working with insects will require some knowledge of their physiology. A collaborator with this expertise should be sought. In addition, insect wings cannot simply be painted. The present coatings will load the wings far too much and render them far too stiff. Finally, if theory and experiment are to be quantitatively compared, then it will be most helpful to have the collaboration of a computational fluid dynamicist.
B. Objectives
The overall goal of this proposal is to develop methodology, chemistry and instrumentation suitable for making dynamical measurements of the surface pressure of insect wings during flight. As an experimental subject, we will begin with the bumblebee, Bombus lucorum. This insect has transparent wings and the wing-beat frequency is sufficiently high to generate observable pressure gradients. In order to accomplish the above goal, the following tasks must be accomplished:
A suitable technique must be developed for attaching the oxygen sensitive probes to the surface.
The insects must be tethered to a support in such a manner that their flying ability is not compromised.
A flash illumination method must be devised which is simultaneously synchronized to the camera shutter and wing beat cycle.
C. Procedure
(1) Attaching Oxygen Probes to the Surface. Since the wings are made of chitin, and therefore terminated on the surface with hydroxide groups, we may be able to use conventional silane chemistry to attach the platinum porphyrin groups to the surface. Such molecules are available for reasonable cost from Porphyrin Products, Inc. Alternatively, we can use the fact that the wings are coated with a thin (one micrometer) layer of hydrocarbon. We may be able to directly dissolve the oxygen probes in this material. A third possibility is to remove the hydrocarbon layer and replace it with a custom porphyrin-containing hydrophobic layer.
We will begin our coating experiments with detached wings. These will be tested for response characteristics using instrumentation devised for testing the conventional pressure sensitive paints. Once suitable candidates are discovered, we will evaluate them on live insects.
(2) Tethering the Insects. We hope to develop a probe application technique that allows the insect to fly as well as it did before applying the probes. Next, with the assistance of our collaborators in Zoology, we will evaluate technology for tethering the insects. For these preliminary studies we will judge the success of our tethering methods by the wing beat frequency. Methods that alter this parameter by more than ten per cent will be rejected.
Another problem is that pulsed light sources may alter insect behavior. We will be careful to monitor the wing beat cycle using different rates of pulsing, different pulse durations and finally with different pulse energies.
(3) Instrumentation. Obviously, we must be able to capture blur-free images of the insects at multiple positions of their wings during the wing-beat cycle. While this is in principle possible with steady-state light sources using the CCD camera shutter, in practice, we will need to produce images as bright as possible. Thus, we propose to use a xenon flash lamp in conjunction with camera shuttering to yield the most intense phosphorescence images possible.
A major difficulty will be in synchronizing the wing beat with the camera and pulsed light source. We propose to use a microphone to obtain an audio waveform from the wings and trigger the camera at a chosen phase of the waveform. This will coincide with a specific wing position.
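As an illustration of this phase-locking idea, the sketch below estimates the wing-beat period from zero crossings of a (here synthetic) microphone signal and places a trigger at a chosen phase of each cycle. The sampling rate, wing-tone frequency, and phase fraction are arbitrary assumptions, not parameters of the proposed apparatus.

```python
# Sketch of phase-locked triggering from a wing-tone microphone signal:
# find rising zero crossings, estimate each cycle's period, and schedule a
# trigger a fixed fraction of a period later. All values are illustrative.

import numpy as np

def trigger_times(audio, sample_rate_hz, phase_fraction=0.25):
    """Return trigger times (s) at the chosen phase of each wing-beat cycle."""
    signal = audio - np.mean(audio)
    rising = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    crossings = rising / sample_rate_hz
    periods = np.diff(crossings)
    return crossings[:-1] + phase_fraction * periods

# Synthetic 156 Hz "wing tone" sampled at 50 kHz stands in for the real signal.
fs, f_wing = 50_000, 156.0
t = np.arange(0.0, 0.05, 1.0 / fs)
audio = np.sin(2.0 * np.pi * f_wing * t)
print(trigger_times(audio, fs)[:5])   # first few trigger instants, in seconds
```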
In an experiment on the pressure gradients on the surface of a propeller powered by an electric motor equipped with a shaft encoder, we have managed to synchronize the motion of the propeller and the firing of the flashlamp. The image of the propeller is sharp and appears to remain still in space. Next we will attempt to synchronize the propeller with sound from a microphone.
(4) Experiments. With accomplishment of the above, we will be ready to perform experiments. Phosphorescent probes will be attached to the upper surface of live bumblebees and they will be tethered. The system will be equipped with a pedestal so the bees can be rested at intervals. Data will be retained for images taken when the bees are operating within 10% of the normal wing-beat frequency. We anticipate that the pressure changes will be small. Thus we will record many images at the same position in the wing-beat cycle and co-add them. Hopefully, the position of the wings may be reproducible from cycle to cycle. If there are irreproducibilities, we will superimpose the images by coordinate mapping. We have already developed such software for another image processing application.
If the experiments are successful, we will have a pressure map of the wing surface throughout the wing-beat cycle. These data can be integrated in space to give the integrated lift as a function of time. Then such data can be integrated in time to give the average lift. If the data show that the average lift force is greater than the weight of the bee, we can have a modicum of confidence in the data. In addition, the results can be compared with predictions from various theories of insect flight. These will range from simple calculations of the mean lift force for the downstroke as per Ellington (2) to estimates of the lift force and circulation (3) to more sophisticated two-dimensional unsteady flow calculations (8), and finally to the ultimate of three-dimensional unsteady flows interacting with dynamically altering boundaries.
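A minimal sketch of that space-and-time integration is shown below: each pressure map of the upper (suction) surface is integrated over the wing area to give an instantaneous lift, the cycle average is taken, and the result is compared with the bee's weight. The grid, pressures, and the 0.88 g mass (a typical B. terrestris figure, assumed here) are placeholder values, and the lower surface is crudely treated as being at ambient pressure.

```python
# Sketch of the analysis described above: integrate each pressure map of the
# upper wing surface over the wing area to get lift(t), average over the
# cycle, and compare with the bee's weight. All numbers are placeholders.

import numpy as np

G = 9.81               # m/s^2
BEE_MASS_KG = 0.88e-3  # a typical B. terrestris mass, assumed here

def instantaneous_lift(pressure_map_pa, ambient_pa, cell_area_m2):
    """Lift from one map of the upper (suction) surface, with the lower
    surface crudely assumed to sit at ambient pressure."""
    return np.sum((ambient_pa - pressure_map_pa) * cell_area_m2)

# Fake time series: 20 maps over one cycle on a 10 x 10 grid covering ~2 cm^2.
rng = np.random.default_rng(0)
cell_area = 2.0e-4 / 100.0                   # m^2 per grid cell
ambient = 101_300.0                          # Pa
maps = ambient - rng.uniform(0.0, 60.0, size=(20, 10, 10))  # mild suction

lifts = np.array([instantaneous_lift(m, ambient, cell_area) for m in maps])
print("cycle-averaged lift:", round(float(lifts.mean()), 5), "N")
print("weight to support:  ", round(BEE_MASS_KG * G, 5), "N")
```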
E. The Research Team
The study of insect flight represents an ideal project for an interdisciplinary team. Fortunately, Professor Tom Daniels of the Zoology Department has agreed to collaborate with us. He has a group of people studying the flight mechanism of the hawkmoth. He will provide help with the insect physiology and conventional wind tunnel measurements.
In addition, we have secured the assistance of Dr. Johnathan Wettlaufer of the Applied Physics Laboratory, who is also Adjunct Associate Professor of Physics. He has agreed to help us with the aerodynamic simulations and data analysis. Dr. Wettlaufer, in turn, has connections with Professor S. Childress at the Courant Institute of Applied Mathematics at New York University. Dr. Childress is a leading authority on the aerodynamics of insect flight.
This research team has just been awarded a three-year grant from the National Science Foundation. The grant has support for two graduate students and one post-doctoral student.
|
no
|
Probabilistics
|
Was it possible for Bumblebees to fly according to the laws of aerodynamics?
|
no_statement
|
"bumblebees" were unable to "fly" "according" to the "laws" of "aerodynamics".. "bumblebees" did not adhere to the principles of "aerodynamics" in their flight.
|
https://byustudies.byu.edu/article/byu-and-religious-universities-in-a-secular-academic-world/
|
BYU and Religious Universities in a Secular Academic World - BYU ...
|
BYU and Religious Universities in a Secular Academic World
Most of the modern research universities in the United States began as Protestant colleges whose highest stated aspirations were to foster faith and the development of Christian character as well as higher learning. While some Christian colleges remain from that era, among the 207 universities in the Carnegie classification’s high and very high research universities, only nine claim a religious affiliation (seven Catholic institutions; Baylor University, with a Baptist affiliation; and Brigham Young University, operated by The Church of Jesus Christ of Latter-day Saints). We will briefly outline some of the primary reasons that religious research universities are such a small proportion of American research universities. However, our primary intent in this article is to examine Brigham Young University as a limit case of the religious research university. In many ways, BYU is an anomaly. At its founding in 1875, BYU was organized in ways that were almost identical to the early Protestant colleges. What is remarkable is that through the period of secularization that led most of those colleges to cut their ties with religion, BYU became more closely tied to its affiliated church and more intentionally religious than any of the remaining religious universities.1
A popular twentieth-century myth has it that aerodynamics experts have examined the bumblebee and determined that “that critter can’t fly,” because “it does not have the required capacity (in terms of wing area or flapping speed).” Nevertheless, the laws of physics do not prevent the bumblebee from flying. Research shows that “bumblebees simply flap harder than other insects, increasing the amplitude of their wing strokes to achieve more lift, and use a figure-of-eight wing motion to create low-pressure vortices to pull them up.”2 In other words, the bumblebee flies, but it does so differently than many other insects.
As organizational scholars, we ask similar questions of BYU. Our goal is to help those who are interested in universities, and particularly religious universities, to understand them better by comparing BYU to the others in this niche. We believe that by studying the limit case we can shed light on the nature of such organizational “critters” and how they can actually “fly,” sometimes, as it might appear, against all odds.
After reviewing the primary reasons for the secularization of American research universities, we consider BYU by contrasting it with other religious universities in its institutional niche. We then focus on trying to understand how BYU deals with the inherent dilemmas it has chosen quite consciously and the implications of these choices for its ability to “fly.” We conclude by considering implications for faculty, administrators, and scholars of universities that for a variety of reasons (some more conscious than others) incorporate such dilemmas as a core aspect of their identity.
The Secularization of American Higher Education
Given the history of secularization in institutions of higher education in America, some might wonder whether BYU is the last of its kind. Most American universities started out as church-related colleges, but by the 1920s the majority of them had been “secularized.” George Marsden provides some perspective about just how rapidly this secularization took place:
The American university system was built on a foundation of evangelical Protestant colleges. Most of the major universities evolved directly from such nineteenth-century colleges. As late as 1870 the vast majority of these were remarkably evangelical. Most of them had clergymen-presidents who taught courses defending biblicist Christianity and who encouraged periodic campus revivals. Yet within a half century . . . the evangelical Protestantism of the old-time colleges had been effectively excluded from leading university classrooms.3
Harvard’s Charles Eliot offered what Marsden describes as the “shibboleth of the movement” against the possibility of a church university: “A university cannot be built upon a sect.”4 A few years earlier, the founding president of Cornell University, Andrew White, said something similar in his inaugural address: “I deny that any university fully worthy of that great name can ever be founded upon the platform of any one sect or combination of sects.”5 Indeed, this feeling became so shared among American intellectuals that in 1905 Andrew Carnegie was persuaded to bankroll a foundation that would provide incentives for universities affiliated with denominations to sever their ties in exchange for participation in a generous faculty retirement program. The Carnegie Foundation for the Advancement of Teaching had on its board the president of almost every major university of the day.6
During this same period, a growing number of Protestants formed a loose coalition of northeastern states Congregationalists, Presbyterians, and Unitarians desiring to establish a nonsectarian though Christian (Protestant) educational system that could foster a moral order for American society in the absence of an established religion. Their view largely excluded Catholics and Jews as well as more conservative Protestants and sought to avoid divisive sectarian battles regarding doctrine. This coalition (largely Whigs and later Republicans in the north) gained significant influence during and following the Civil War because the most powerful opposition had largely been religious conservatives, often Democrats, in the southern states.7
Ironically, the Whig/Republican Protestant coalition felt at first that they had won the day over their more conservative Protestant brethren and over Catholics and Jews. Many of them felt that democratic values were compatible with an emphasis on the development of individual character (rather than on salvation explicitly) and freedom to pursue truth through science.8 However, drawing on the historical work of Burtchaell9 and Marsden,10 we note four structural factors that influenced the movement to secularize higher education or to formally separate its institutions from influence by any particular church or religious order:
1. In their attempt to appeal to a broad coalition of Protestants (to get more students and to influence a larger part of the country) and to avoid unseemly and energy-sapping sectarian debates, academic leaders “established” a secular moral approach to education emphasizing values such as free inquiry, democracy, service to humankind, and so forth. The values were so general that many eventually came to believe they did not require allegiance to a particular religious tradition. Curriculum came to focus on disciplinary subjects, and Bible classes along with the study of church history and doctrine were no longer required and eventually did not appear in class offerings. Curriculum has thus become almost entirely focused on scientific values and critical thinking.11
2. Faculty were hired to teach increasingly specialized subjects. At first, Christian (though nonsectarian) values were deemed important in faculty candidates, but soon universities began to focus, with support from these more specialized and nonsectarian faculty, almost entirely on a faculty member’s academic expertise.
3. Funding sources changed. Many religious proponents of this era assumed that the state would fund “public” universities whose approach coincided with their Christian interests, especially as these interests became less denomination- or theology-specific. However, primary funding sources for both private and public universities shifted from churches (which had never provided more than meager funding beyond donated scholarships for students in any case) to increased student tuition, private industry, foundations, and, eventually, to government sources (largely in the form of loans or grants to students and funding for faculty research). Those who provided these resources sought to influence universities to adopt their more practical, nonreligious values. The government (both state and local) often required universities to give up hiring preferences and specific religious requirements in order to receive particular forms of aid and forbade the use of religious texts or religious tests in public schools, many of which had been seen as Christian institutions even though they were funded by state funds.12
4. Membership in boards of trustees changed along with the funding sources. Increasingly present on these boards were people from the world of business, alumni, and other citizens representing diverse interests of the university. Church leaders were less often involved in interactions with administrators and faculty. Soon the affiliated church leaders had no involvement beyond occasionally continuing to work with a divinity school or theological seminary that persisted at some universities but increasingly became located at the periphery of campus.13
Why Are So Many Religious Universities Catholic, Given the Protestant Beginnings?
During this era when many liberal Protestants were seeking less sectarian and more generally acceptable educational approaches, Catholics had relatively little involvement in higher education. They were largely immigrants without a tradition of higher education, and at the turn of the century perhaps 4,200 Catholics were in the sixty-three schools of the Catholic higher-education network.14 Marsden points out that this was a period of Americanization, when many in the United States saw progress as dependent upon political freedom and free inquiry.15 Catholic leaders in Rome and Europe viewed this movement with great alarm. The Catholic University of America (CUA) was founded in 1889 by Catholic progressives who were interested in bringing together “Catholic teachings with cautious versions of the attitudes typical of American university founders.”16 Pope Leo XIII issued an encyclical in 1895 addressed to the American church, stating that the separation of church and state was not the desirable model for the church. While the Vatican had given approval to establish CUA as the only pontifical university in America, concerns about CUA and Americanization led the pope in 1896 to remove John Keane, the first rector of Catholic University of America.17 In 1910, a professor of scripture, Henry A. Poels, was dismissed because he held a multiauthorial view of the Pentateuch, contrary to the Pontifical Biblical Commission’s position that Moses was the substantial author of the first five books of the Bible.18
As interest in education grew, Catholics sought to protect themselves from what they saw as contradictions to their faith in the American culture and in its educational approaches. Catholic orders created educational institutions staffed largely by priests and nuns from the order. That approach was quite inexpensive and largely maintained a Catholic ideology. However, the quality of education suffered, and it was very difficult for these institutions to achieve accreditation by anyone beyond their own Catholic accrediting associations. Leahy suggests several reasons for the move away from priests as teachers: (a) increased post–WWII demand by Catholics for higher education, (b) increased desire to fit in with the American mainstream (fueled by a growing trust among Americans of Catholics, growing affluence of Catholics, and an increased desire to be a part of the economy), (c) an increased desire to be accredited and thus recognized more broadly, and (d) fewer Catholics becoming clergy and getting PhDs and therefore a lack of qualified priests.19
Midway through the twentieth century (in 1955), John Tracy Ellis summarized the intellectual situation among Catholic academics by writing that there was “general agreement as to the impoverishment of Catholic scholarship in this country.”20 Marsden’s conclusion regarding the first half of the twentieth century in Catholic higher education is: “Whatever the weaknesses of Catholic higher education during this era, and they were many, Catholics emerged from this era with one thing Protestants did not: universities with substantial religious identities.”21
James Burtchaell explained that in the 1950s many American Catholic educators were embarrassed at the lack of influence of Catholics in intellectual and scientific spheres. He studied a variety of American Catholic as well as Protestant institutions and concluded that from that time forward academic leaders of these Catholic colleges and universities sought independence from official church oversight because they felt it was too restrictive.22 In his massive study of the secularization of both Protestant and Catholic institutions of higher education, entitled The Dying of the Light, Burtchaell laments that just as Catholic intellectuals were becoming trained well enough to truly bring a unique light both to the secular world and to the church, Catholic institutions of higher education engaged in secularization that essentially made them look similar to all of the non-Catholic institutions of higher education.23 Elsewhere, he presents historical evidence demonstrating a secularization process among Catholic universities that closely parallels the Protestant secular movement at the turn of the twentieth century. While the process started a century later, it is heading in the same direction, according to Burtchaell, and is likely to have a similar result.24
Current Situation of Religious Universities in America
Given the history of secularization we have just reviewed, we were interested to learn that out of eight million students enrolled in undergraduate bachelor’s degree programs in the United States in 2004, over one million were attending religiously affiliated colleges or universities. Most of these institutions are quite small, as suggested by the fact that almost one-third (768 of 2,345) of higher-education institutions listed in the U.S. Department of Education database claim a religious affiliation.25 What we observe is that the Christian college (small, typically focused on the liberal arts, and either Protestant or Catholic) has persisted into the present. On the other hand, prominent universities with a clear dedication to research are almost completely secularized. Specifically, the Carnegie classification of universities (2012)26 that are high or very high in research provides the following:
Figure 1. Research Universities That Are Religiously Affiliated

Research classification    Number of institutions    Number of religious institutions
Very high                  108                       2
High                        99                       7
Total                      207                       9
As figure 1 indicates, less than 5 percent of these institutions claim a religious affiliation; BYU is among that minority. Of particular interest to us are questions about how BYU and other universities that clearly value research have been able to deal with significant institutional pressures to secularize. Further, how does BYU organize itself to attend to its avowed (and what many outsiders at least would see as contradictory) goals to foster both faith and reason? While we could look at the extent to which such potential tensions exist in “doctoral universities” in the Carnegie classification system, our choice is to focus on the niche that is least likely in this age of secularization, the religious universities most focused on research.
Following a brief description of BYU’s history relative to secularization forces during this same period, we will compare the religious commitment and institutional structures of the nine religiously affiliated research universities using the best data we have available.
BYU’s Beginnings in the Context of the Secularization of American Higher Education
BYU’s history is all the more remarkable against the backdrop we have just reviewed of secularization among major universities in the United States. Contrary to the trends, BYU has become more closely tied to its sponsoring church during the same period in which the Protestant and more recently Catholic universities were distancing themselves from their initial religious affiliation. Indeed, during the past half-century when pressures on Catholic universities to become more secular and intellectual have led to significant changes in their intentional religiosity, BYU has in many ways reemphasized and strengthened its commitment to its religious moorings. At the same time, BYU paralleled the efforts of both Protestant and Catholic institutions to become accredited and establish a reputation of educational excellence that would benefit its graduates. As we shall see, this move to become at the same time stronger both educationally and religiously is indeed unique among universities.
Brigham Young Academy was founded by Brigham Young in 1875. As he wrote to his son Alfales, then a student at the University of Michigan, he established a private trust to fund Brigham Young Academy “at which the children of the Latter-day Saints can receive a good education unmixed with the pernicious, atheistic influences that are found in so many of the higher schools of the country.”27 At first, the Academy was intended to provide elementary and secondary education and a “normal” school to prepare teachers for the public schools in the Utah Territory that no longer allowed the use of the Book of Mormon or the teaching of explicitly Mormon philosophies. Its initial institutional structure was patterned after most of the Protestant colleges of the day: funding through small amounts of tuition (in BYA’s case, $4 per term per student, which over 60 percent of the students paid in commodities) and modest income from property donated by Brigham Young. The board of trustees was composed of local political and church leaders, with teachers who were for the most part members of the affiliated faith.28
Brigham Young Academy was not initially thought of as the Church’s university or even the predecessor of such a university. In 1891, the First Presidency of the Church asked James E. Talmage to leave the presidency of LDS College in Salt Lake City to establish what his biographer called “a genuine Church University.”29 Talmage thrilled at the prospect of founding “an institution of wide scope and high standards that would merit recognition by the established centers of learning throughout the nation and the world. It was a dream he had cherished for many years.”30 The proposed name was Young University. However, the Panic of 1893 destroyed any hope of continuing plans for Young University.
Brigham Young Academy was renamed Brigham Young University in 1903, when the forces of secularization were gaining strength and influencing the formation of most modern American universities. The newly named BYU still did not have additional or significant Church funding, but its leaders in Provo believed that the new name signaled a direction toward more college-level work, even though the pace toward that end would be slow.31
The growing commitment of the Church to BYU can be seen in the decision of its leaders in 1918 to liquidate BYU’s debts in exchange for its assets.32 In the years that followed, the Church provided an increasingly significant proportion of BYU’s budget. The dream of a genuine Church university was thus kept alive and eventually applied to BYU, remarkably during a time when Church leaders were deciding that they could not support the Church’s breadth of educational offerings and were withdrawing for the most part from secular education. Indeed, in the 1920s and 1930s the Church withdrew almost completely from higher education. The result was that by 1934 only two higher education institutions were sponsored by the Church—Brigham Young University and Ricks College.33 A system of LDS Institutes of Religion was created.34 During this period, the Church appears to have committed to BYU the fulfillment of the dream of becoming a “real university”—one, however, that would remain committed to real faith in the restored gospel of Jesus Christ.35
Figure 2 summarizes the improbable direction and result of changes at BYU relative to principal organizational indicators of secularization among religious institutions of higher education mentioned previously. What we may observe in BYU is an institution that is unique among American universities in general. We turn next to the question of how unique BYU is within these same parameters when compared to the few remaining religiously affiliated universities.
Figure 2
Comparison of Secularization Choices from Founding to Present

Relationship to Church            Other Universities              BYU
Required religion courses         None                            Clarified and increased
Faculty from sponsoring Church    Decreased to no requirement     Increased, including worthiness requirement
Church funding                    Decreased to 0                  Increased, Church contribution
Church leaders on Board           Decreased to 0                  Increased, 100% Church leaders
Source: George M. Marsden, The Soul of the American University (Oxford: Oxford University Press, 1994), 155–56, 251, 270, 281–82, 300, 419–21, 438.
How Does BYU Compare with Other Religious Universities?
Burtchaell36 points to a secularization pattern that included faculty seeking professionalization through increased specialization and prestige-seeking university presidents pushing to hire new faculty experts who were not members of the affiliated church. He also chronicles the move by most higher education institutions to admit students with no religious requirement to increase revenues. Additional funding was eventually received from private donors and alumni but was more immediately available from foundations, business, and government (through scholarships, grants for research, and so forth). Through this period of change, most institutions continued to label themselves religious. The label was often the last vestige to go once secularization had run most of its course.37
We noted previously key indicators that reflect the separation of universities from religious influence. We now use these historical indices of secularization to compare the nine universities that claim religious affiliation. We begin, however, with the minimum criteria others have employed to determine whether a university has a credible claim to religious affiliation, indicating where each of these nine institutions falls with respect to those criteria.
Serious claim to a religious affiliation. All nine of the universities that claim a religious affiliation in the Carnegie classification of Research/High and Research/Very High universities pass a minimum criteria test devised by Lyon, Beaty, and Mixon to determine whether universities have a credible claim to religious affiliation: Does the university have a mission statement that (a) “acknowledges a specific linkage to a church or claims a religious heritage,” (b) “mentions at least one explicitly religious goal,” and does it have (c) “a core curriculum requiring religion courses that reflect and support the university’s religious identity”?38
Figure 3 lists these nine universities along with the number of hours of religion-related courses they require. Each of their mission statements contains an explicit acknowledgment of religious affiliation and at least one religious goal. Some variation in what might be termed a “religion” course exists among these institutions because of differences in how they define what is religious. Other differences exist because some of these universities require only a class about various religious traditions, while others (specifically Baylor, BYU, Notre Dame, and Catholic University of America) require study of the scripture or doctrine of their particular tradition. Thus, while the extent to which a religious commitment entails study of a specific tradition’s scripture or doctrine varies, all nine of these universities have at least a minimum commitment to identifying themselves with a religious tradition.
Faculty hiring. We are not aware that any of these religious universities requires that a faculty member or other employee of the university be a practicing member of a particular faith or religious order. Figure 4 provides a comparison of university hiring policies with respect to the religious character of the faculty candidates. BYU is the only one of these universities that has an explicit “preference” for members in good standing of the affiliated church. BYU advertises in its faculty position announcements that “preference is given to qualified candidates who are members in good standing of the affiliated church.”48 Most of the other universities have standard equal employment, affirmative action statements that claim they do not discriminate on the basis of religion or any other “excluded categories.” In addition, Notre Dame encourages women, minorities, and Catholics to apply, and Loyola of Chicago acknowledges, as does the Catholic University of America, that there are some theology degrees that must be offered by approved Catholic faculty members using approved content to receive pontifical sanction. Based on “The Application of Ex Corde Ecclesiae for the United States,” all Catholic colleges and universities must require that theology professors obtain a mandatum from the bishop of the local diocese in which the university or college is located.49 However, in most cases, Catholic universities and colleges do not reveal whether a particular professor has a mandatum, claiming that such information is private.50
We have a general sense based on conversations with colleagues at several of these universities that during hiring interviews some discussion occurs regarding the candidate’s willingness to respect the religious tradition (or at least its predominant values) with which the university is affiliated. On the other hand, Burtchaell claims that few if any Catholic universities insist on faculty loyalty to their faith traditions.60 A study by Lyon, Beaty, and Mixon presents faculty attitudes at four of the religious universities on our list (Baylor, Boston College, Notre Dame, and BYU), demonstrating that at each institution there are at least some faculty members who would be willing to wait for a significant period to find a candidate who is a member of the affiliated religion. Nevertheless, BYU’s faculty are significantly more supportive of this idea with 82 percent of the faculty being willing to go shorthanded for a significant period in order to hire an LDS candidate (compared with 55 percent at Baylor, 38 percent at Notre Dame, and 28 percent at Boston College).61
At Baylor, there has been significant debate about how Baptist the university should be and how much religiosity, especially religious fundamentalism, should be required of the faculty. Indeed, the two presidents who preceded the current president, Kenneth Starr, were fired by the board of regents over issues related to faculty hiring and the standards for granting tenure. Specifically, Robert Sloan was fired after a tenure of ten years because, according to critics, he was “devaluing teaching . . . and . . . edging the institution toward religious fundamentalism.”62
In their study, Lyon and his colleagues noted the very high percentage of BYU faculty who are LDS. They wondered whether the religious affiliation of faculty accounted for the differences in their attitudes about faculty hiring and academic freedom issues in general. They found that the Baptist professors at Baylor and the Catholic professors at Notre Dame and Boston College were significantly more committed to the religious mission of their institution than their colleagues who were not of the faith of the affiliated church. However, even comparing responses of members of the affiliated religions, BYU faculty were more religious in their attitudes.63
Indeed, hiring at BYU focuses on finding LDS candidates who are among the best in their field and who are judged by the leader of their local congregation (bishop) and by an interviewing General Authority of the Church to be faithful, even exemplary, members of the Church. In addition, on a regular basis the Commissioner of Church Education sends a letter to the local bishop of each LDS faculty member at BYU, asking whether he or she continues to abide by certain essential expectations of membership (as someone who is worthy of a temple recommend). Those who are not LDS are asked to abide by similar moral commitments and are reviewed regularly for compliance. These requirements would have been unusual for universities and even religious colleges in the late 1800s.64 The explicit goals of BYU for faculty members who are members of the sponsoring Church are that “they . . . live lives reflecting a love of God, a commitment to keeping his commandments, and loyalty to the Church. They are expected to be role models to students of people who are proficient in their discipline and faithful in the Church. All faculty are expected to be role models for a life that combines the quest for intellectual rigor with spiritual values and personal integrity.”65
Funding. BYU’s funding model demonstrates another clear difference in institutional governance and support compared with the approach taken by the other religious universities. Figure 5 suggests that a chief source of funding for the other universities is tuition, with average tuition and fees for the 2012–13 school year of $38,116, compared with $4,710 at BYU (for LDS undergraduates; $9,420 for non-LDS students). BYU’s board of trustees, by contrast, has chosen to provide a subsidy for students that is comparable to what many states provide to state residents who attend a state-supported university. The university’s president, Cecil Samuelson, has stated that Church leaders have determined that the Church would be the primary source of support for the university, contrary to the trend of declining church involvement in other universities, to make it “abundantly clear to whom we would look for our leadership and guidance.”66
Figure 5
Tuition and Other Funding of Religiously Affiliated Universities
* Tuition from the websites of each university for 2012–13 school year.
** Funding information from telephone call to financial VP or designee in that office during 2009, except for CUA.
When one of us called the financial vice presidents at each of these religiously affiliated universities to ask whether they received funding from the affiliated church or religious order, the response was often a chuckle and a clear no. In one case, the vice president of a Catholic university commented that it was in fact the other way around. He said that university administrators are so interested in maintaining a religious presence, in an era when the number of those entering the Catholic priesthood is diminishing, that they provide a full-time position (FTE) and salary to any department that will hire a priest of the affiliated religious order who also holds a terminal degree in the area. After six years, if the department decides to grant tenure to that priest/faculty member, the department has to come up with the FTE and funding itself. As a result of this process, the vice president said, the salaries for those FTEs across campus, which go first to the religious order and then in part to the priest, help to fund the order. Vice presidents at several other universities affiliated with the Catholic Church or one of its orders expressed a similar sense that the university actually helps the order in one way or another, rather than the university receiving financial support from the order.
Board membership. Figure 6 shows a comparison of these universities with respect to membership on a governing board or board of trustees. Only four of the universities have a requirement for a particular number of “religious” on the board (specifically: Baylor, BYU, Notre Dame, and Catholic University of America), and only BYU requires that all board members be General Authorities/Officers of the Church. Catholic University of America is the only other university that has more than 50 percent of the board made up of church representatives. Indeed, by the mid-1960s, Catholic university leaders came to believe that only by giving lay people (nonclerics) a “shared legal trusteeship” and a predominant role on boards of trustees would they get the financial resources needed to expand Catholic higher education. They were explicitly concerned that exclusive control of boards by priests, brothers, and nuns would limit or curtail state and federal monies. Most of the Catholic universities moved to increase the proportion of laity on their boards during this period.68
(At Notre Dame, for example, figure 6 notes that six board fellows must be members of the Congregation of Holy Cross and six must be lay persons; the fellows approve and appoint the board of trustees, whose members have no religious requirement. Currently 7 of the 47 trustees, or 15 percent, hold religious titles, and according to the bylaws the president must be a Holy Cross priest.)
In addition, Notre Dame and Catholic University of America both require that their chancellor/president be a Catholic from the particular order or sponsoring church conference. The past two presidents of BYU have come from among the General Authorities of the Church, although there is no requirement that this be the case. However, the board of trustees (all General Authorities or officers of the Church) conducts the search and appoints the president, who has always been a member of the sponsoring church.
Summary of comparisons. Given the history of secularization in higher education, we should perhaps be surprised that any large universities interested in serious research would claim a religious affiliation. We can observe nine universities, mostly Catholic, that have maintained an explicit religious affiliation and seek to foster campus cultures that are open to an association with a particular religious tradition (and, in several cases, with religious traditions in general). Five of the nine universities do not require a religious presence on the board. All nine require that at least six credit hours of the courses a student takes during his or her university experience be related to religious thought and lifestyles.
We agree, however, with Baylor scholars Lyon, Beaty, and Mixon that BYU is the most “intentionally religious” of the universities whose faculty they surveyed.78 As we compare BYU with the other religiously affiliated universities that qualify to be on our list, we also see evidence that BYU places greater emphasis on religiosity alongside academic excellence than those other universities do. Part of the difference must come from variation in what it means to be religious in each of the traditions represented, and that sort of comparison is beyond our current intentions and abilities. Nevertheless, what we can see clearly from our organizational theory perspective, which focuses on institutional and organizational structures, is that BYU is the only research university that has such a close relationship with a church. All of the others were founded by religiously minded individuals and have developed impressive trajectories of academic improvement while at the same time inviting their campus communities to acknowledge the role of faith in their lives and learning. BYU, however, is an integral part of its sponsoring church. Its board members are leaders of the Church, and significant Church funds are invested directly in the education of the youth of the Church. No other university is structured in that way. The effects on faculty hiring, faculty attitudes, and curricular requirements are clear.
Intentional Dilemmas:
BYU’s Strong Ties to the Church and Its Goal to Be a Major University
Obviously, the responses by BYU and its sponsoring church to secularization pressures have been significantly “against the grain” of general institutional trends in America. While BYU has been able to develop increased academic excellence and commitment to faith, faculty and administrators often, of necessity, address dilemmas that require special attention. The following questions are representative: How can we grow in academic quality and still hire primarily members of the Church? How will the university and faculty members protect free inquiry in the disciplines and honor scriptural truth as taught by the Church when these interests come in conflict? How can faculty members develop excellent scholarly programs and share their learning in the top journals and presses of their disciplines while working primarily with undergraduate students? Will faculty hold students accountable for obedience to Church standards (honor code and dress and grooming standards, for example) as well as academic performance?
These are the sorts of tensions that, according to both Burtchaell and Marsden, led the pace-setting universities of the late nineteenth and early twentieth centuries to seek to free themselves from their affiliated churches. These dilemmas are not the sort that will disappear. They come from the interplay of the reigning “script” about how to be a “real university” and the Church “script” about how to develop faith and character, as well as from the Church’s intention to influence primarily undergraduate students.
Scholarly work by Albert and Whetten provides a framework with which to understand some of the organizational tensions that BYU faculty and administrators face in this institutional environment. They argue that organizations are significantly more efficient when they do not have to specify all of their organizational elements, that is, when the elements are institutionalized and largely taken for granted.79 For example, if you work in a retail bank as opposed to a local grocery store, the organizational structure, reward system, and strategies of the business will differ significantly but will not be explained fully anywhere. In higher education, religious colleges are still taken for granted in this way. They focus on undergraduate teaching in a specific religious context and often hire faculty based on their faith as well as academic expertise. But universities, even private ones, as we have seen, are expected to avoid religious commitments and give primary attention to research.
When organizations violate such institutional expectations or seek to combine expectations from two different institutional environments (in this case, church and academic environments), they are “swimming against the current.” They must exert extra effort to find people willing to be different, educate them about the differences, and help them value the “hybrid” organizational life they must then lead. They must convince those outside the organization upon whom they depend for legitimacy and resources that this way of organizing is valuable, or at least allowable (think of accrediting bodies, graduate schools evaluating undergraduates, funding agencies, alumni, and students, whose approval and support of the university are critical for its ongoing existence and success).
Albert and Whetten, along with many others, suggest, contrary to what we might assume, that a large number of organizations are “hybrid” because they combine two or more organizing scripts.80 For example, one of the most ubiquitous organizational forms is the family business. Family businesses enjoy the commitment of family members to get the business started and do not have to pay them big salaries. However, families tend to operate on an organizing script that gives membership in the family privileges, and businesses tend to operate on the basis of meritocracy (and to establish policies against “nepotism”). Hence, there are usually inherent dilemmas to manage in such hybrid organizations, as well as potential benefits to gain.
BYU is a unique case of hybrid organization because, as President Cecil Samuelson has reaffirmed, “We have been defined by our board of trustees as a primarily undergraduate teaching university with some graduate programs of distinction and high quality.”81 Their intention is to provide the very best education possible, first to undergraduate students, and to offer graduate programs that support, or at least do not detract from, undergraduate education. As figure 7 suggests, the commonly accepted institutional scripts in modern American higher education anticipate that a university will have a strong emphasis on graduate students and research. A religious frame of reference would be expected in small colleges. By explicitly designing BYU as a large university focused on teaching undergraduates in an intentionally religious context, the board of trustees has created a “dual hybrid”: church university and teaching university. The church university raises questions in the institutional environment about how to maintain academic freedom. The teaching university raises questions about time, resources, and students who can join with faculty in research.
Figure 7
BYU as a “Dual Hybrid”

As a Church-University Hybrid
    Expected frame of reference for a top-tier research university: Secular
    BYU’s frame of reference as a research university: Religious

As a Teaching-University Hybrid
    Expected focus of effort for a research university: Graduate students
    BYU’s focus of effort: Undergraduate students
Most outsiders to BYU would expect the principal tensions to be found in the church-university portion of the hybrid. However, our experience at BYU listening to faculty across campus talk about their career concerns suggests that for most of them the teaching-university tensions are more prominent and ubiquitous. Compared with the number of BYU professors who have academic freedom concerns, significantly more wonder about the tension between the need to share their work in the top journals and venues of their discipline and the demands of teaching relatively high numbers of undergraduates with few or no doctoral students to involve in their research.
Church-university tensions. Our observation based on experience finds some confirmation in the research cited earlier by Lyon, Beaty, and Mixon.82 In this study, three Baylor professors compared the attitudes of professors at four of the nine major religious universities (Baylor, Boston College, Brigham Young University, and Notre Dame) regarding their approach to dealing with their religious and academic missions. They surveyed faculty at each of these institutions during the middle to late 1990s. Their questions focused on various aspects of practices and attitudes of these professors in such areas as university goals, classroom activities, extracurricular activities, faculty hiring, academic freedom, and integrating faith and learning. Figure 8 provides several examples of how the responses from faculty at the four institutions compare regarding the roles of faith, scholarship, and academic freedom.
Figure 8
Faculty Responses on Faith, Scholarship, and Academic Freedom

Survey statement: “Since we strive to be a Christian university, the encouragement of faith and learning are important tasks, but they should be separate and not integrated.” (Yes: strongly agree or agree)
    Brigham Young: 6%; Notre Dame: 38%; Baylor: 42%; Boston College: 52%

Survey statement: “We should guarantee faculty freedom to explore ideas or theories and publish the results even if they question the sponsoring church’s beliefs and practices.” (Yes: strongly agree or agree)
BYU faculty are more likely than are faculty at other religious universities to see faith and reason as companion approaches that should be integrated to arrive at understanding and truth.83 Figure 8 shows the comparison of faculty attitudes at BYU and three other universities regarding the idea that faith and learning should be kept separate. It also suggests that when there is conflict between Church doctrine and research findings, BYU faculty are significantly less likely to assume that reason always trumps faith.
The responses to the second question in figure 8 show BYU faculty as much less inclined than faculty at the other universities to guarantee freedom to publish research that questions the sponsoring church’s beliefs and practices. At the time this survey question was asked, BYU faculty members were considering issues raised by an investigation by the American Association of University Professors (AAUP) that many claimed was related to academic freedom. Since BYU’s academic freedom policy was under scrutiny at that time, and since the question asked in the Lyon, Beaty, and Mixon survey is similar to but distinct from the wording of the BYU policy, we provide a brief discussion of that policy.
BYU’s 1992 statement on academic freedom argues for both individual and institutional academic freedom. The intent of BYU’s policy is to grant the individual faculty member freedom to “teach and research without interference, to ask hard questions, to subject answers to rigorous examination, and to engage in scholarship and creative work.” However, it also argues that BYU must have institutional academic freedom to retain the benefits of its unique religious commitments (which benefits include preservation of pluralism in American higher education, antidogmatism, and religious freedom). Both individual and institutional academic freedom are critically important and may occasionally come into conflict. Neither freedom is unlimited. Further, individual academic freedom is limited to some extent in all institutions (for example, secular universities limit racist and anti-Semitic speech, and public institutions limit advocacy of religion to maintain a separation of church and state). Nevertheless, at BYU, “individual academic freedom is presumptive, while institutional intervention is exceptional.” Indeed, at BYU, limitations on individual academic freedom are deemed reasonable only “when the faculty behavior or expression seriously and adversely affects the University mission or the Church.” Such limitations include faculty member expression in public or with students that “contradicts or opposes, rather than analyzes or discusses, fundamental Church doctrine or policy; deliberately attacks or derides the Church or its general leaders; or violates the Honor Code because the expression is dishonest, illegal, unchaste, profane, or unduly disrespectful of others.”84
The Lyon, Beaty, and Mixon survey asks a question about whether faculty should be guaranteed the “freedom to explore any idea or theory and to publish the results of those inquiries, even if the ideas question some traditional (Catholic, Baptist, Mormon) beliefs and practices.”85 At BYU, exploring ideas and publishing results that question the sponsoring church’s beliefs and practices would not be cause for dismissal. Nevertheless, some BYU faculty members may feel that the spirit of such an enterprise would not be in harmony with the academic freedom policy or with the spirit of searching for truth through both rational methods as well as through revelation to prophets of God. Whatever the interpretation BYU faculty members made of these issues, their responses to these and similar questions in the survey suggest that they are more likely to bring together spiritual and rational pursuits of truth than to see tensions between the two approaches. Indeed, from analysis of the results of the BYU responses to the same survey data used by Lyon, Beaty, and Mixon, Wilson reports that “88 percent of the women and 89 percent of the men say that they ‘have more freedom at BYU to teach’ as they deem appropriate than they think they would have elsewhere.”86
Lyon and his colleagues noted that BYU had the highest university religiosity scores on every question by a sizeable margin. The most common rank order was BYU, Baylor, Notre Dame, and Boston College. The Baylor professors concluded their study by saying that “in contrast to the overlap among Baylor, Notre Dame, and Boston College, our data suggest that Brigham Young faculty are distinctively committed to their school’s religious tradition. . . . Brigham Young is more committed to their religious tradition in both organizational structure and faculty attitudes.”87
Of course, BYU faculty members do experience tensions around academic freedom, in some disciplines more than others. Lyon and his associates report that professors in the arts and sciences at all of the universities, including BYU, have greater concerns about academic freedom than their counterparts in other disciplines.88 Particularly among faculty at BYU in the arts and sciences we hear concerns about preparing undergraduates for doctoral work outside of BYU. How can they help students understand and contribute to academic discussions that do not allow for the existence of God or that contradict their faith? How can they help their students be open to important ideas that appear to contradict their faith but that may indeed be a useful corrective to cultural definitions of their faith that may need to be reconsidered? In our experience, these faculty members are in general both academically thoughtful and committed to BYU’s unique mission, and they experience the tensions that result from these dual commitments. Nevertheless, as the Lyon, Beaty, and Mixon survey demonstrates, BYU faculty members seem to feel much less “hybrid identity” tension in these areas than do those at other religious universities, and certainly less than the hybrid identity literature would suggest.
Thus, the hybrid tensions around academic freedom are much more evident in interactions with outside entities such as the AAUP, accrediting bodies, and some funding agencies. For example, of the nine major religious universities, only BYU and the Catholic University of America (CUA) have been censured by the AAUP, both for matters related to religion. CUA’s censure was related to a professor teaching in the university’s theology department in a degree program that requires papal support. The university and a papal board determined that this professor could not teach in that program because of his outspoken criticism of papal encyclicals regarding divorce, “artificial contraception,” “masturbation, pre-marital intercourse and homosexual acts.” The AAUP argued that this professor’s work had been well received in academic circles and that the university could not deprive him of his right to teach material that had received such supportive external peer review.89
In BYU’s case, the AAUP censure was triggered by the university’s decision to deny continuing faculty status (tenure) to a professor who, among other concerns, was unwilling to curb her discussion of prayer to Mother in Heaven (contrary to Church doctrine) after having been told that her expression was inappropriate. The AAUP argued that the university should not have denied this professor her academic freedom to engage in such expression.90
Others have noted that the AAUP is biased against religiously affiliated institutions and have pointed out that a large proportion of its censures have been given to such institutions.91 Many in the AAUP and in the academic world in general see no reason for any religious or faith-based limitations on what faculty members teach or write,92 and therefore universities or colleges that exercise any such limits at all are subject to critique or censure.
Some accrediting bodies for individual disciplines also raise issues related to the mission of religious colleges and universities. For example, in 2001, the American Psychological Association’s Committee on Accreditation conducted a six-month public comment period on footnote 4 of its Guidelines and Principles for Accreditation of Programs in Professional Psychology.93 This footnote allows programs with a religious affiliation or purpose to adopt and apply “admission and employment policies that directly relate to this affiliation or purpose,” including policies that “provide a preference for persons adhering to the religious purpose or affiliation,” if certain conditions are met. The concern was that religious universities and programs would use the exemption as a way to discriminate against students and faculty on the basis of their sexual orientation. After a long deliberation, Susan Zlotlow, then head of APA’s Office of Program Consultation and Accreditation, concluded: “The committee remains committed to valuing all kinds of cultural and individual diversity, including religion and sexual orientation. We will continue to work with individual psychology programs to foster diversity.”94 In other words, such tensions are not likely to dissipate for BYU and for other religiously affiliated institutions that take their affiliation seriously.
Based on our observations, we conclude that while there are tensions internally at BYU, the greater tensions faced by faculty and administrators at BYU are with external entities. We argue that institutional pluralism (including a variety of religious as well as secular universities and colleges) is important for the academic landscape just as is the rational approach to scholarship that encourages competition among ideas. We believe that such scholarly tensions in the pursuit of academic learning are, up to a certain point, good for BYU. They help us define our theories and subject our ideas to rigorous testing and peer review. On the other hand, we see a continuing bias against BYU because of its religious commitments that will require vigilance and, in some cases, increased academic rigor to earn respect from skeptical disciplinary colleagues who assume a religious bias.
Teaching-university tensions. The choice to focus on undergraduates is an important one for BYU. One reason is that it allows the Church to influence more students at what could be argued is a relatively more vulnerable life stage than would be the case for graduate students. However, BYU’s undergraduate emphasis suggests a relatively higher teaching load and a lower level of student specialization when compared with a graduate research university. In addition, doctoral programs at BYU are asked to be supportive of this undergraduate emphasis. Faculty groups proposing a new graduate program must show how it contributes to rather than detracts from undergraduate work.
Some faculty members feel the undergraduate focus thus significantly constrains their ability to produce a high quantity of good research. For example, faculty at BYU who have been educated at some of the finest research universities will occasionally question how BYU can involve them in such teaching loads and also expect them to contribute to the best academic journals and presses. In response to such questions, BYU’s president, Cecil Samuelson, has clarified that “we should not, and do not, have exactly the same quantitative standards for our people as another institution might have for its faculty who have little or no other responsibilities. . . . On the other hand, we cannot, and must not, compromise on the qualitative aspects of the creative work that we do here.”95 Indeed, a number of BYU’s faculty have been creative about this tension and have involved some very bright undergraduate students in their research. When done well, the result is a rather unique undergraduate teaching and research university, what President Samuelson has called a “learning university.”96
But Can This Critter Fly?
Trade-offs and Performance
Given such tensions, why would any university or board of trustees consciously choose to organize itself this way? In BYU’s case, we note that its board of trustees, essentially the leaders of its sponsoring church, believe that this is the best way to accomplish what are for them important religious priorities: to provide a first-rate educational experience for the Church’s youth in the context of faith.97 What should be clear from this article is that there are trade-offs associated with hybrid organizations. They are able to do some things remarkably and perhaps uniquely well; there are other things they do not do as well. Hybrid organizations also present unique challenges to those who inhabit them. In figure 9, we suggest some of the more obvious advantages and challenges faced by BYU faculty and administrators that derive from the particular choices made by the board to implement its vision of a church teaching university. We argue that, in this case, if you pick up one end of the stick, you pick up the other end too. From this point of view, we now consider how these conscious organizing choices create specific trade-offs. We also review available evidence on the extent to which these trade-offs produce the unique results sought by the university.
Figure 9
Advantages and Challenges Come Together for BYU
Advantages
• Stable source of funding
• Excellent teaching and research support
• Outstanding students (primarily undergraduate); low tuition; high grad school and job placement
• Distinctive mission and purpose
• Freedom to combine sacred and secular; most students feel inspired both intellectually and spiritually
Given BYU’s choice to be unique as a religious university, determining how well it is performing becomes more difficult. Admittedly, universities have a difficult time measuring success because they have so many publics who worry about quite different outcomes (for example, graduation rates, acceptance rates, win-loss records of athletic teams, amount of endowment, number of Nobel Prize winners, number of articles published in “A” journals, amount of government grants, impact on the local or national economy due to inventions by faculty and students, percentage of graduates employed, acceptance rates of graduates in quality graduate programs). In BYU’s case, these criteria are not all of equal importance. For example, its official policy is not to limit government funding, but it refuses to seek or receive funding that compromises its independence from certain government requirements that are incompatible with its religious commitments. As we have already seen, President Samuelson has invited faculty to engage in quality research in the best venues but perhaps not at the quantity level that some graduate research universities would require. In addition, BYU faculty focus significant attention on helping students develop in ways that go beyond intellectual ability, including being “spiritually strengthened,” developing Christian character, and living a life of continued learning and service.98
Because it is so closely aligned with the purposes of its sponsoring church, BYU receives uniquely stable funding. In what would seem an unusual move in a research university, the BYU board does not allow government research grant recipients to keep indirect funds to hire staff or to use in renting space. Rather, the board includes all indirect-cost money in the general budget of the university, where it is used to provide quite generous funding available to all faculty for travel, hiring of research assistants, and so forth.99 One result is that faculty members do not have the same incentive that faculty in other universities do to bid for more government grants and thus become relatively independent of the university. Indeed, BYU policy limits the number of faculty members who can buy out their time from teaching during the fall and winter semesters to six full-time faculty equivalents across the entire university.100 In terms of total research and development funds from federal sources expended each year, BYU ranks 226th in the U.S.101 We have also already noted the limitations on the number of graduate students and programs and the need to have them be supportive of rather than detrimental to BYU undergraduates. These trade-offs encourage the faculty to involve students (often undergraduate) in their research and to allow them to travel to conferences and research opportunities. They also provide opportunities for students to be involved as teaching assistants, for whom the university provides excellent teacher-development and online-learning supports. On the other hand, these conditions do not facilitate the flourishing of relatively independent “elite” researchers with their cadre of doctoral student followers.
As we mentioned earlier, BYU limits the number of graduate programs and the number of graduate students (to around 10 percent of the student body). Graduate programs must not detract from and should strengthen undergraduate programs. As a result, few departments outside of the STEM (science, technology, engineering, and math) areas have doctoral programs. Some faculty members in the areas without doctoral programs see the advantage of working with very bright undergraduate students and often treat them like doctoral students. Those with doctoral students also make significant efforts to include undergraduates in their research. Over $2 million a year is spent from a variety of funds to sponsor “undergraduate mentored research” efforts that provide a stipend for students and for faculty members who collaborate in this program. This effort, along with the caliber of BYU students, has been credited with the growing number of BYU undergraduates who have gone on to obtain PhDs. Indeed, BYU ranks tenth among U.S. universities in the past ten years and fifth in the past five years in the number of its undergraduates who go on to receive doctorates.102
In addition, a recent report from BYU’s office of research and creative activities shows that over the past forty years both the quantity and the quality (as indicated by citations) of scholarly work by faculty members have increased rather significantly. Figure 10 displays the increases in scholarly publications. Figure 11 shows the number of citations in each decade for articles published in that decade. Note the significant increases in publications and the accelerated rate of increase in citations, particularly in the past two decades. These are not comparisons with other universities, but they suggest a marked improvement.
Further, while assistant and associate professors tend to have salaries that are competitive with those of the same rank at comparable universities, full professors at BYU tend to receive lower than market salaries.103 That is likely most true in the areas where many other universities are willing to pay large salaries to professors who can teach in “executive education” programs or bring in large government contracts, thus generating additional funds by which their particular program provides a higher proportion of its own budget.
In terms of students, BYU is blessed with undergraduates who are, relative to other universities, very well prepared for college and who are attracted to the excellent academic programs taught in the context of their faith. They and their parents are attracted by the wholesome religious environment, but the relatively low tuition is undoubtedly an attraction as well. For the past two years, BYU has been the “most popular” national university in the United States, and this year (2012) it was second only to Harvard. The measure of popularity fashioned by U.S. News & World Report is essentially a “yield rate” that calculates the “percentage of applicants accepted by a college who end up enrolling at that institution in the fall.” BYU’s rate has been around 75 percent.104 Further, the top 1,500 students in the BYU freshman class, about the size of the entire freshman class at Harvard or Stanford, look equal on paper to students at those universities in terms of intellectual ability. For example, their ACT scores are 30 (96th percentile) or higher. The average ACT score for the whole incoming freshman class in 2012 (7,101 admitted) is 28.13 (91st percentile).105 Furthermore, 84 percent of them have completed a four-year Duty to God or Young Women’s award program, wherein they have engaged in significant service and talent development. Almost all of them (96 percent) have completed four years of seminary (eight semesters of studying the doctrine of the Church during high school; 47 percent of the students have taken this class at 5:30 or 6:00 a.m., before their regular high school classes started). In addition, 71 percent of incoming freshmen were involved in sports, 83 percent participated in performing arts, and 76 percent were employed during their high school years. By the time they complete their undergraduate experience, approximately 85 percent of the men and 15 percent of the women (about 50 percent of students) have completed full-time missionary service for the Church (two years for men and eighteen months for women). In large part because so many of these missions require learning a second language, approximately 70 percent of graduating seniors speak another language.106
Certainly, students and their parents are drawn to BYU by its religious environment and the opportunities to meet other youth of their faith, but they are also drawn by the academic quality and, increasingly, by the relatively low tuition (see figure 5). Tuition at BYU is even lower than tuition at many state-funded institutions (for example, University of Utah tuition for 2012–13 is $6,764 for in-state residents,107 compared to BYU’s tuition for LDS students of $4,710).108 Indeed, as state governments have been pressed to reduce their budgets, many have cut their contributions to public education, and for this reason, among others, universities have increasingly raised their tuition and fees at rates many times greater than yearly inflation to cover the lost revenue.109 Of course, private universities have to charge even more tuition to cover their costs, but most of them raise money through donations to provide scholarships and help students apply for government grants. CNNMoney has compared the total yearly costs of universities and colleges in the U.S. (including tuition, fees, room and board, and books, and excluding grants and scholarships).110 We present in figure 12 the comparative results for the nine religious universities we have been considering. The differences in costs are not as great as those seen in figure 5, but the average cost at the other universities is nevertheless more than 2.5 times BYU’s cost. In the current economic climate, BYU’s cost advantage, combined with its religious and social environment and the academic quality of its offerings, makes it indeed a desirable place. No wonder it rivals Harvard as the most popular university in the country.
Figure 12
Total Average Cost of College Per Year after Grants/Scholarships111
Some BYU faculty members have felt that while the quality of the faculty is good, the university could get better faster if it opened searches to consider non-LDS candidates more seriously. The board of trustees has determined that to pursue BYU’s mission faithfully requires the vast majority of faculty members to be committed members of the faith. We will examine later why this choice is so important, given the way BYU is designed. For now, we want to recognize the trade-off that this choice entails. Even before the current rather austere economic climate, in which positions at many universities have been cut and hiring was curtailed or ceased entirely for a time, faculty candidates of other faiths or of no particular faith tradition would often apply for positions at BYU. Some of them were very well prepared and clearly could have helped improve the intellectual quality of BYU’s teaching and research contributions. However, with rare exceptions, LDS candidates have been sought or a department has been encouraged to hire faculty temporarily until qualified LDS candidates could finish their terminal degrees. Indeed, several departments across campus have developed doctoral preparation programs (often teaching them as an overload) to give their undergraduate students the necessary background to be admitted into the best PhD programs, with the hope that some of them will come back in the future as faculty members. This approach requires significant patience and confidence in the idea that it is critical to have faculty members who are both academically alive and well grounded in the faith of the sponsoring church.
Certainly, the increasing number of BYU undergraduates who pursue a PhD is helping to create more robust and well-qualified faculty hiring pools. And many LDS faculty candidates are drawn to BYU because of its distinctive commitment to developing faith and intellect. On the other hand, the closeness to the Church and any limitations like those discussed earlier (such as contradicting or opposing fundamental Church doctrine or policy, or deliberately attacking or deriding the Church or its general leaders) can lead to criticism from those outside the university. One consequence of this situation is that in many disciplines BYU professors feel that they are scrutinized regarding potential religious bias and feel discriminated against in some journals, academic presses, or other outlets for faculty work. Some faculty members would like to engage in Mormon studies early in their careers but are advised to first establish credibility as a scholar in non-Mormon topics, for fear that (1) they will not develop the rigor and respect necessary to overcome a presumption of religious bias, and (2) they may become focused only on Mormon studies and fail to be current and growing in important disciplinary areas that need to be represented and taught at the university. Some faculty members have noted the irony that no other institution has the breadth and depth of research capacity combined with interest in Mormon themes, and yet BYU has relatively few faculty members who focus on Mormon studies. The reasons are complex and beyond our ability to address in this article but are related to the hybrid nature of BYU and its relationship to multiple institutional environments with often conflicting expectations.
As we demonstrated earlier, most BYU faculty members feel academically freer at BYU than they believe they would at any other university.113 They sincerely appreciate the freedom to discuss their motives (often related to their religious values) and their faith in conjunction with secular subjects. In recent surveys we have conducted with undergraduate students, the large majority respond that in their classroom involvement with BYU professors they expect to grow both intellectually and religiously (spiritually). Further, they believe that, by and large, they have such integrated experiences in many of their classes. Nevertheless, they would like to see even more opportunities for the serious and thoughtful integration of both aspects of learning promised by BYU’s mission statement.114 BYU professors are relatively supportive of this mission, as we have noted in the research by Lyon and his associates.115 However, we have observed several responses from BYU faculty members that preclude more serious reflection on such integration and effort to develop the ability to achieve it. Some assume that since we are primarily LDS faculty and students, we must all agree about any particular topic. These faculty make comments in class that take this presumed agreement for granted and tend to close down rather than open up exploration of potentially important insights. Others fear that examination of our differences will lead to contention and believe that we have a mandate to avoid contention at all costs (3 Ne. 11:29–30). Still others openly express the thought that because of these two tendencies, bringing faith-related ideas into a discussion of secular subjects will water down the learning and destroy real critical thinking.
We have interviewed, individually and in focus groups, many faculty members across the disciplines at BYU who are in the top 25 percent of their college or discipline in student ratings measuring how much the students learned in their class and how much they were strengthened spiritually. Interestingly, there are many things about integrating faith and learning on which these faculty do not agree (for example, whether prayer is necessary to begin class, whether the introduction of religious ideas should be spontaneous or planned, and whether the ideas have to be tightly integrated with the secular subject). Nevertheless, there was virtual unanimity about the idea that relationships of trust and sincere concern precede any genuine investigation of something so important as how faith and reason are related and how that intersection contributes to the growth of character. These faculty members employed a variety of ways, suited to their own personalities and disciplines, to demonstrate their concern for students and to consider faith and learning issues, but they almost universally embraced the concept of beginning with a relationship of Christian caring and high expectations for the potential and importance of each student. In addition, some were quite articulate about how they introduced potentially sensitive or complex areas of combining faith and learning.116
Because the Church and the university care so deeply about having faculty serve as role models of both academic excellence and faithfulness, the hiring process is very deliberate. Most faculty candidates are eager enough to be considered for a faculty position that they put up with the higher number of interviews (including by General Authorities) and the longer hiring process. Indeed, many have such respect for the General Authorities that they feel honored these men would take time to interview them personally and believe the interview is a statement of how much BYU is an integral part of the work of the Church. However, the slow process and its almost exclusive focus on candidates who are members of the sponsoring church limit the number and quality of candidates in the hiring pool. It may also lead some candidates to accept employment offers that come earlier in the hiring cycle with a deadline for responding that precedes BYU’s ability to make an offer.
For a number of reasons, once faculty members have been hired at BYU, they become part of an intellectual and faith community that many would not easily consider leaving. We are aware of many faculty members who have turned down opportunities at prestigious universities because of their commitment to the mission of BYU and to their colleagues and students here. At the Faculty Center, we sponsor an annual retirement dinner to celebrate those who are retiring from the university that year. As mentioned earlier, the average tenure at the university of those who retire is approximately twenty-five years, or most of a faculty career. That is, most faculty members are “lifers.” The good news is that their loyalty and desire to remain at the university can lead to great willingness to sacrifice and contribute in a variety of important but not always glamorous ways to the growth of the community. The challenge is that some of these faculty members may be so sacrificing that they do not remain current in their discipline and lose the ability to contribute as much intellectually.
These trade-offs are illustrative of the fact that BYU is uniquely designed to do some things better than others. Those who would improve the university must take into account how such “improvements” would affect the intentional tensions that make BYU uniquely able to teach and nurture undergraduates in the context of a specific faith.
The approach we have been using to understand hybrid organizations affords us a critical insight: participants in hybrid-identity organizations must learn to deal with inherent dilemmas or tensions, many of which cannot be definitively resolved. Attempts to completely resolve the dilemmas—by ignoring one aspect of the dilemma, for example—significantly change the nature of the organization and eliminate the benefits of that hybrid nature. In the case of BYU, the church-university dilemmas will most likely persist unless the American higher education institutional environment becomes more open to the possibility that religion and freedom of inquiry can coexist, or unless BYU and its sponsoring church become less concerned about the importance of faith. Alternatively, the Church and BYU could decide not to take seriously BYU’s academic reputation. Of course, such a direction would significantly reduce the value of an education for students and for the Church and university. Furthermore, Church leaders have routinely emphasized their expectation that BYU be a place where faculty members and students can and should succeed both academically and spiritually, and most faculty members and students agree with them and come to BYU with that hope in mind.
President Gordon B. Hinckley, at the time a member of the Church’s First Presidency, captured this sense of the need to deal well with intentional dilemmas in order to fulfill BYU’s unique mission when he said: “This institution is unique. It is remarkable. It is a continuing experiment on a great premise that a large and complex university can be first class academically while nurturing an environment of faith in God and the practice of Christian principles. You are testing whether academic excellence and belief in the Divine can walk hand in hand. And the wonderful thing is that you are succeeding in showing that this is possible.”117
Some Design Choices Are More Critical Than Others
Some of the design choices and resulting trade-offs that we have just reviewed seem more critical than others. Changing some of these policies might begin to erode the uniqueness of BYU, but changing three of them would likely destroy what makes BYU so remarkable: (1) the almost exclusive focus on hiring LDS faculty members and the heavy investment in their socialization, (2) the significant financial support from the Church, and (3) the related policy oversight by the board of trustees. Of course, not coincidentally, these were some of the most prominent factors whose change led to the secularization of religious universities and colleges.
Perhaps one more element from the Albert and Whetten study of hybrid organizations will help us understand why these factors are so important. The authors describe two alternative ways that a hybrid organization can deal with disparate organizing scripts: ideographic and holographic.118 The ideographic approach seeks to keep each organizing script located primarily in separate parts of the organization, whereas the holographic approach seeks to have each member of the organization embody and deal with the tensions personally. Figure 13 displays these alternatives and suggests how they are applied in different institutions and with respect to the two underlying dilemmas or tensions inherent in BYU’s unique approach to being a church-teaching university. Regarding the church-university dilemma, most religious research universities organize ideographically. They may have priests or other religious officials working as student-life advisers or teaching in a theology department, but the majority of the faculty are hired for their qualifications to teach a particular subject and are not necessarily expected to bring a Catholic or Protestant perspective into the classroom or their counseling of students. In this approach, students are exposed to faith in some settings and to reason in other settings, with little explicit overlap. Faculty and staff are also organized in ways that keep them in relatively homogenous subgroups, so that they do not often confront hybrid tensions.119
Figure 13. Alternative Approaches to Organizing Hybrids

                        Holographic                         Ideographic
                        (“compound in one”;                 (“separate but equal”;
                        within tensions)                    between tensions)

Church University       Faith and Reason                    Faith or Reason
                        (BYU)                               (Religious Universities)

Teaching University     Teaching and Scholarship            Teaching or Scholarship
                        (BYU)                               (Secular Universities)
By contrast, BYU organizes “holographically.” The founding charge from President Brigham Young, then the President of the Church, to the first principal of Brigham Young Academy was “not to teach even the alphabet or the multiplication tables without the Spirit of God.”120 Following this approach, faculty members are expected to find ways to combine faith and reason in their relationships with students. As another Church leader explained, it is not intended “that all of the faculty should be categorically teaching religion constantly in their classes, but . . . that every . . . teacher in this institution would keep his subject matter bathed in the light and color of the restored gospel.”121
Regarding the teaching-university dilemmas or tensions, some secular research universities tend to organize and reward in ways that keep the teaching and the research relatively separate. Indeed, graduate students are significantly involved in teaching undergraduates, and the greatest indication that a faculty member is valued is that he or she gets a reduced teaching load. Faculty members more often teach graduate students who work with them on their research. In contrast, at BYU, faculty members are expected to give significant attention to both teaching (particularly undergraduates) and research, and both activities count heavily in whether a faculty member is given continuing faculty status (tenure) or is promoted.
Selecting “hybrid” faculty. Such expectations put a premium on who is hired at BYU. Faculty are expected not merely to be civil to people in a different part of campus who respond to a “different drummer” institutionally (for example, those who work with honor-code violations or those who teach religion courses full time), but they are expected to embody the dilemmas and bring them together in their work. Faculty members who are uninterested in the particular dilemmas they will have to manage at BYU are not likely to enjoy their experience or want to perform well. On the other hand, most faculty report that they feel freer here than they would at any other university because of the unique environment that includes these dilemmas. Indeed, members of the Church who have gone through doctoral or other terminal-degree experiences outside of BYU have had to learn to manage their own personal dilemmas that may be inherent in the organizational dilemmas BYU is designed to create. Because of their religious commitments to marriage and family, for example, a relatively large proportion of them have been married with children during their postgraduate studies and have had to learn how to balance family, professional, Church, and other commitments. They have also been exposed to those whose academic and personal values are quite different from theirs, and many learn how to balance faithful commitment and tolerance. Many of them have had to work through the dilemmas of reconciling their faith with what they are learning about homosexuality, evolution, or other topics that have been historically problematic for some Christian groups. They also find in their religion many paradoxes, like justice and mercy, that are inherently similar to the dilemmas we have been discussing: essential, often apparently incompatible, and ultimately responsible for their sense of unique identity as well as for their growth, learning, and happiness.
In other words, time spent finding those who have already learned about dilemma management is likely to be a key determinant in the ability of BYU to create a holographic approach to teaching and learning. Such an approach requires much greater ability to deal with tensions of the sort we have been discussing but also promises a much richer outcome of understanding and furthering the university’s mission.
Developing “hybrid faculty” through socialization. In addition to carefully selecting those whose background has provided dilemma-management experience, BYU invests significant funds to help new faculty “learn the ropes” and make a quick start on their career. For example, new faculty members engage in an eighteen-month development program that introduces them to BYU’s mission, campus resources, and teaching, research, and citizenship requirements. This program also helps them find a mentor to work with on three projects (research, teaching, and service/citizenship) and gives them time with the BYU president and a member of the board of trustees for questions and answers. As one indication of their level of support and involvement, they spend half-days for two weeks at the end of their first school year engaged in workshops focused on the topics listed above, among other things. They are paid for attending this two-week seminar and receive additional remuneration when they complete the three projects. Beyond these formal university efforts to socialize new faculty, departments and colleges often sponsor their own “on-boarding” programs. These programs help new faculty address both the religious-academic and the teaching-research dilemmas that lie at the heart of BYU’s hybrid identity.
Some faculty members also become involved in additional socialization regarding the hybrid nature of BYU when they are called to serve in lay ministry positions in congregations of students. They often meet with students for church services on the weekends in the same rooms where they have taught secular subjects during the week. Furthermore, a significant proportion of faculty outside Religious Education (whose professors teach religion classes full time) have themselves taught a religion class.
Import of Church financial and policy support. Even with all of these efforts and the growing ability to find LDS faculty who are well prepared and faithful, the dilemmas and related tensions we have reviewed have led to pressures from outside and inside BYU to relieve them just as other religious educational institutions have done. As at other universities, some very wealthy donors have been willing to give more money if it funds their favorite emphasis. The board has routinely responded that the Church would provide the bulk of the funding and accept only those donations that help further the ends they have negotiated with the university and approved.122 Over the years, faculty and administrators have asked for permission to engage in greater efforts to obtain government funding and be allowed to keep the indirect cost allocations to build their own programs. As mentioned previously, the board has routinely removed much of the indirect-cost monies from the specific projects and provided generous research support across the university (though not at the level that some more research-oriented faculty might like). Others have asked for more graduate programs and graduate students, for fewer required religion courses, or for their courses to count as part of the religion requirement. These proposals usually meet with a negative response because they do not conform to the mission of BYU. In these and many other ways, the board of trustees has provided a steady hand along with stable funding, without which many of the dilemmas would likely have dissolved into following the more predominant academic organizing script.
Perhaps with this perspective we can see why so few religious universities remain and why BYU is unique among them in this niche. The particular hybrid dilemmas that BYU has chosen are not inevitable. That is, we can imagine other combinations of tensions or specific applications of them. However, any institution whose leaders and faculty set out to create a unique hybrid identity that combines faith and learning is likely to have to address the basic factors we have examined and to do so with unusual financial and policy support over a long period of time. As organizational scholars, we marvel at the unique combination of these factors at BYU.
About the author(s)
Alan L. Wilkins is Professor of Organizational Leadership and Strategy and Associate Director of the Faculty Center at Brigham Young University. He received his PhD in organizational behavior from Stanford University in 1979 and has been a faculty member at BYU since that time. He served as BYU’s academic vice president from 1996 to 2004. From 1993 to 1996, he served as associate academic vice president for faculty and was serving as chair of the Organizational Behavior Department when he was invited to serve in these university positions. His research has appeared in Administrative Science Quarterly, Academy of Management Review, Annual Review of Sociology, Human Resource Management, Journal of Applied Behavioral Science, and Organizational Dynamics.
David A. Whetten is the Jack Wheatley Professor of Organizational Studies and Director of the Faculty Center at Brigham Young University. He received his doctorate at Cornell University and was on the faculty at the University of Illinois for twenty years. He is a former editor of the Academy of Management Review and past president of the Academy of Management. His research has appeared in Administrative Science Quarterly, Academy of Management Journal, Organization Science, the Journal of Management Studies, and Management and Organizational Review.
5. Andrew White, “Inaugural Address,” in Account of the Proceedings of the Inauguration, October 7, 1868 (Ithaca: Cornell University, 1869), quoted in Marsden, The Soul of the American University, 116.
8. This is a primary theme in Marsden, Soul of the American University; see particularly 150–64.
9. James Tunstead Burtchaell, The Dying of the Light: The Disengagement of Colleges and Universities from Their Christian Churches (Grand Rapids, Mich.: Eerdmans Publishing, 1998); see 823–32 for a summary of factors that marked and influenced institutional secularization. We have selected four organizational elements that reflect changing formal connection to and control by religious institutions.
10. Marsden, Soul of the American University, see particularly 150–64, 265–87.
25. Douglas Jacobsen and Rhonda Hustedt Jacobsen, “The Ideals and Diversity of Church-Related Higher Education,” in The American University in a Postsecular Age, ed. Douglas Jacobsen and Rhonda Hustedt Jacobsen (Oxford: Oxford University Press, 2008), 63–80.
26. For Research University, Very High, see Search Results for Basic = “RU/VH,” Carnegie Foundation for the Advancement of Teaching, http://classifications.carnegiefoundation.org/lookup_listings/srp.php?clq={%22basic2005_ids%22%3A%2215%22}&start_page=standard.php&backurl=standard.php&limit=0,50; for Research University, High, see Search Results for Basic = “RU/H,” Carnegie Foundation for the Advancement of Teaching, http://classifications.carnegiefoundation.org/lookup_listings/srp.php?clq={%22basic2005_ids%22%3A%2216%22}&start_page=standard.php&backurl=standard.php&limit=0,50.
27. Brigham Young to Alfales Young, October 20, 1875, Brigham Young Papers, quoted in Ernest L. Wilkinson, ed., Brigham Young University: The First One Hundred Years, 4 vols. (Provo, Utah: Brigham Young University Press, 1975), 1:67–68.
28. Marsden, Soul of the American University, 38–42; Wilkinson, First One Hundred Years, 1:25, 63, 65, 74, 105–14, 162; 2:749–56.
39. “Chapel and two required religion courses have been part of Baylor’s curriculum since the University’s founding more than one hundred sixty-five years ago. Courses in Christian heritage and scripture provide students with the knowledge necessary to understand the Christian narrative, reflect on how this narrative has shaped human history, and consider how Christ’s message relates to each of us personally. These core requirements offer students the opportunity to grow in their faith and reflect on God’s calling for their lives.” “General Education Outcomes,” Baylor, http://www.baylor.edu/vpue/index.php?id=82141.
42. Students are required to take one course in the Christian Theological Tradition and two or three others from an array of courses largely based on scripture and Catholic theology; see “TRS Undergraduate Program,” School of Theology and Religious Studies, the Catholic University of America, http://trs.cua.edu/academic/undergrad/index.cfm; and “Course Descriptions,” School of Theology and Religious Studies, the Catholic University of America, http://trs.cua.edu/courses/courses.cfm.
46. Two required theology courses: (1) Foundations of Theology (Theology 10001/20001) and (2) an elective (Theology 20xxx) that takes up a major theme or set of themes in the Christian theological tradition. See “Rationale for University Theology Requirement,” University of Notre Dame, http://nd.edu/~corecrlm/rationales/theology.htm; and “Approved Courses,” University of Notre Dame, http://nd.edu/~corecrlm/approved/index.htm.
48. From examples of departmental invitations to apply for available positions at BYU. See, for example, “Faculty Positions—Brigham Young University, UT,” ArchaeologyFieldwork.com, http://www.archaeologyfieldwork.com/AFW/Message/Topic/12854/Employment-Listings/faculty-positions-brigham-young-university-ut.
51. Baylor recently announced a new vision statement, “Pro Futuris,” the result of a two-year process. In one section of that statement, the following is said regarding faculty hiring: “To these ends, we exercise care in hiring and developing faculty and staff who embrace our Christian identity and whose lives of faith manifest integrity, moral strength, generosity of spirit, and humility in their roles as ambassadors of Christ.” “Baylor’s Distinctive Role in Higher Education,” Baylor, http://www.baylor.edu/profuturis/index.php?id=88961. On its Human Resources page “Available Faculty Positions,” the following statement regarding religious requirements for faculty appears: “Faculty recruitment and retention is a top priority of the university. In particular, we seek to improve Baylor’s academic excellence while enhancing our integration of outstanding scholarly productivity and strong Christian faith.” See http://www.baylor.edu/hr/index.php?id=79678. A policy statement approved by Baylor’s president on August 1, 2006, states the following: “Based upon the religious exemption of Title VII of the Civil Rights Act of 1964, Baylor University has the right to discriminate on religious grounds in the hiring of its employees. It makes a good faith effort to administer all recruitment policies in a manner so as to maximize the diversity of the applicant pool.” See “BU-PP 110 Recruitment and Employment—Faculty,” http://www.baylor.edu/content/services/document.php?id=42352. The previous vision statement included the following: “Because the Church, the one truly democratic and multicultural community, is not identical with any denomination, we believe that Baylor will serve best, recruit more effectively, and both preserve and enrich its Baptist identity more profoundly, if we draw our faculty, staff, and students from the full range of Christian traditions.” “Baylor 2012: Our Heritage, Our Foundational Assumptions,” Baylor, http://www.baylor.edu/about/baylor2012/index.php?id=64338.
52. In its EEO statement, the university does not indicate any religious preference in its hiring: “Boston College is an Affirmative Action/Equal Opportunity Employer.” See “Faculty Openings,” Boston College, http://www.bc.edu/offices/avp/openings.html.
53. All faculty are required to abide by the university’s honor code and dress and grooming standards. The following statement found in a position announcement for chemical engineering is typical of all such announcements: “BYU, an equal opportunity employer, requires all faculty members to observe the university’s honor code and dress and grooming standards (see honorcode.byu.edu). Preference is given to qualified members in good standing of the affiliated church—The Church of Jesus Christ of Latter-day Saints.” “Faculty Application Details,” Chemical Engineering, Ira A. Fulton College, BYU, http://chemicalengineering.byu.edu/faculty-application-details.
54. “The Catholic University of America is an AA/EO employer and does not discriminate on the basis of race, color, national origin, age, sexual orientation, religion, veterans’ status, or physical or mental disabilities. The Catholic University of America was founded in the name of the Catholic Church as a national university and center of research and scholarship. Regardless of their religious affiliation, all faculty members are expected to respect and support the university’s mission.” See, for instance, Positions, Office of the Provost, the Catholic University of America, http://chemicalengineering.byu.edu/faculty-application-details.
55. “Fordham is an independent, Catholic university in the Jesuit tradition that welcomes applications from men and women of all backgrounds. Fordham is an EEO/AA institution.” “Mathematics Department, Fordham University,” MathJobs.org, https://www.mathjobs.org/jobs/Fordham/2330.
56. “Georgetown University provides equal opportunity in employment for all persons, and prohibits unlawful discrimination and harassment in all aspects of employment because of age, color, disability, family responsibilities, gender identity or expression, genetic information, marital status, matriculation, national origin, personal appearance, political affiliation, race, religion, sex, sexual orientation, veteran’s status or any other factor prohibited by law.” “Georgetown University Faculty Handbook,” Georgetown University, http://www1.georgetown.edu/facultyhandbook/.
58. EEO/AA: “Women, minorities, and Catholics are encouraged to apply.” See, for instance, “University of Notre Dame, Economics, Professional Specialist in Economics,” American Economic Association, http://www.aeaweb.org/joe/listing.php?JOE_ID=201204_397029. “Employment decisions are based on qualifications and are made without regard to race, color, national or ethnic origin, sex, disability, veteran status, or age except where a specific characteristic is considered a ‘bona fide occupational qualification’ for a specific position.” “Recruitment, Selection, and Hiring,” Office of Human Resources, University of Notre Dame, http://hr.nd.edu/nd-faculty-staff/forms-policies/recruitment-selection-and-hiring/. From the University of Notre Dame Mission Statement: “The intellectual interchange essential to a university requires, and is enriched by, the presence and voices of diverse scholars and students. The Catholic identity of the University depends upon, and is nurtured by, the continuing presence of a predominant number of Catholic intellectuals. This ideal has been consistently maintained by the University leadership throughout its history. What the University asks of all its scholars and students, however, is not a particular creedal affiliation, but a respect for the objectives of Notre Dame and a willingness to enter into the conversation that gives it life and character. Therefore, the University insists upon academic freedom that makes open discussion and inquiry possible.” “Mission Statement,” University of Notre Dame, http://www.nd.edu/about/mission-statement/.
62. “The president’s critics have focused on a mix of issues related to strategy and personal style. They have accused Sloan of intimidating his opponents and chilling academic freedom. But it was the president’s ambitious plan to drive Baylor up the national ranks of research universities, while reinforcing its mission as a Christian institution, that spurred much of the fighting.” Doug Lederman, “Trying to Calm the Storm,” January 24, 2005, Inside Higher Ed, http://www.insidehighered.com/news/2005/01/24/baylor1_24.
66. “A conscious decision was reached many years ago and regularly reaffirmed by our board of trustees that the primary source of support for BYU and other Church institutions would come from the appropriated funds of the Church. This is so not only because we have a very generous Church and leaders but also because the Brethren have always wanted it to be abundantly clear to whom we would look for our leadership and guidance.” Cecil O. Samuelson, “The BYU Way,” speech given on August 23, 2005, at the BYU Annual University Conference, available online at http://speeches.byu.edu/index.php?act=viewitem&id=1491.
69. “The Board of Regents is the official governing body of Baylor University. Regents are selected by election, with 75% of the membership elected by the Regents themselves and 25% elected by the Baptist General Convention of Texas. Regents serve a three-year term, and may serve up to three terms consecutively before they must rotate off the Board for at least one year.” “Board of Regents,” Office of the President, Baylor, http://www.baylor.edu/president/index.php?id=1457.
70. “The membership of the Board of Trustees shall consist of twenty-one or more persons, as may be determined from time to time by majority vote of the entire Board of Trustees. The President of Boston College shall be an ex officio member of the Board of Trustees.” “The Bylaws of the Trustees of Boston College,” art. 2, sec. 1, Boston College, http://www.bc.edu/content/bc/offices/bylaws/bylaws.html#art2sec1. There are no requirements for nor mention of a proportion of “religious” on the Board. The most current listing of board members we found included that of forty-nine members, five of whom were listed “S.J.” (Society of Jesus, or Jesuit priests). “Boston College Board of Trustees,” Boston College, https://www.bc.edu/content/bc-web/about/trustees.html.
71. “The make-up of the Board was slightly amended in 2002, and currently the Board of Trustees can be made up of between five and fifteen members. Since its organization, it has been stipulated that all members of the Board of Trustees must be members in good standing in the Church. Though the exact make up of the Board has changed over time, it currently consists of the entire First Presidency, three members of the Quorum of the Twelve Apostles, the member of the Presidency of the Seventy who oversees the Church in Utah, the Relief Society general president, the Young Women general president and the Assistant Commissioner of the Church Educational System as Secretary and Treasurer. Between Board meetings, an Executive Committee consisting of Board members handles the duties of the Board of Trustees, subject to the ratification of the Committee’s decisions by the Board.” “Assets and Administrative Structure” section of “Brigham Young University. Board of Trustees,” Brigham Young University, https://lib.byu.edu/byuorg/index.php/Brigham_Young_University._Board_of_Trustees.
72. CUA Board of Trustees: “The civil charter and the Bylaws place in the Board of Trustees ultimate responsibility for governance and sole responsibility for fiscal affairs of the University. The Board’s membership is limited to fifty persons of whom twenty-four must be clerics of the Roman Catholic Church. The Chancellor, who is the Archbishop of Washington, and the President are members ex officio.” “Board of Trustees” section of “Office of the President,” the Catholic University of America, http://president.cua.edu/staff/trustees.cfm. Eighteen of the twenty-four clerics of the Church must be members of the U.S. bishops’ conference. “CUA Today” section of “A Brief History of Catholic University,” http://www.cua.edu/about-cua/history-of-CUA.cfm.
75. “The Board of Trustees manages the affairs of Loyola University of Chicago . . . , including the election of the President and all vice presidents and other officers. The Board approves the budget and all major financial transactions, the University’s strategic plans, and all major acquisitions and disposals of capital assets. It is composed of up to 50 members, made up of both Jesuit and lay colleagues. Trustees ordinarily serve a term of three years.” “Faculty Handbook: Policies, Procedures, and Information for the Faculty of Loyola University of Chicago,” Loyola University of Chicago, June 5, 2009, 17, http://www.luc.edu/academicaffairs/pdfs/LUC_Fachbook_2009.pdf.
76. “The Fellows of the University shall be a self-perpetuating body and shall be twelve (12) in number, six (6) of whom shall at all times be clerical members of the Congregation of Holy Cross, United States Province of Priests and Brothers, and six (6) of whom shall be lay persons.” For more information, see “Statutes of the University,” sec. 2, in “Charter of the University of Notre Dame,” University of Notre Dame.
“Except to the extent of those powers specifically reserved to the Fellows of the University of Notre Dame du Lac (‘the University’) in the Statutes of the University, all powers for the governance of the University shall be vested in a Board of Trustees which shall consist of such number of Trustees not less than thirty (30) nor more than sixty (60) as shall from time to time be fixed by resolution of the Fellows.” For more information, see “Bylaws of the University,” sec. 1, no. 1, University of Notre Dame, May 23, 2012, https://www.nd.edu/assets/docs/bylaws.pdf and also Ed Cohen, “Next Leader of Notre Dame Chosen,” Notre Dame Magazine, summer 2004, https://magazine.nd.edu/stories/next-leader-of-notre-dame-chosen/; current bylaws do not require that the president be a priest of the Congregation of the Holy Cross.
“In 1967, Saint Louis University welcomed lay people to its Board of Trustees and became the first Catholic college or university to give the power of governance to a lay-dominated board. This pioneering action was soon emulated worldwide and is now the standard for most schools. Board members may serve three consecutive four-year terms, and the Board may have up to 55 members. According to the University’s Constitution and By-laws, the Chairman of the Board must be a lay person and the President can be either a lay person or a Jesuit.” See “Fact Book, 2009–2010,” Saint Louis University, February 12, 2010, 6, http://www.slu.edu/Documents/provost/oir/Fact%20Book%202009-2010%20Final%208-24-2010.pdf.
90. See “Academic Freedom and Tenure: Brigham Young University,” Academe, September–October 1997, 52–71, available online at http://www.aaup.org/NR/rdonlyres/27EB0A08-8D25-4415-9E55-8081CC874AC5/0/Brigham.pdf. Note also BYU’s response as an addendum to this report: “Comments from the Brigham Young University Administration,” 69–71. The response states: “Professor Houston engaged in an extensive pattern of publicly contradicting and opposing fundamental Church doctrine and deliberately attacking the Church. Professor Houston had ample notice that her public statements endorsing prayer to Heavenly Mother were inappropriate. President Hinckley made the matter crystal clear in 1991, and the Church’s scriptures clearly set forth the manner in which we are commanded to pray. In addition, Professor Houston received specific personal notice that her statements were inappropriate.”
91. See BYU defense in AAUP investigation of BYU in “Comments from the Brigham Young University Administration”; see also an examination of AAUP treatment of religious institutions in Michael W. McConnell, “Academic Freedom in Religious Colleges and Universities,” Law and Contemporary Problems 53, no. 3 (1990): 303–24, available online at http://www.jstor.org/stable/1191799.
92. The 1940 Statement of Principles on Academic Freedom and Tenure, issued jointly by the AAUP and the Association of American Colleges (now the Association of American Colleges and Universities), recognizes the right of religious bodies to establish limits on academic freedom if those limitations are clearly stated. However, in 1970 the AAUP questioned such limitations, arguing that they were no longer needed and stating that it no longer endorsed them. An interpretation made in 1988 of the 1970 statement suggests that any institution that requires allegiance to religious doctrine cannot call itself an “authentic seat of higher learning.” This 1988 interpretation was published by the AAUP’s Committee A, but the Committee did not endorse it. As a result, the matter appears to be unresolved. See Lee Hardy, “The Value of Limitations,” Academe Online, http://www.aaup.org/AAUP/pubsres/academe/2006/JF/Feat/hard.htm.
97. See, for example, Gordon B. Hinckley, “Why We Do Some of the Things We Do,” Ensign 29 (November 1999): 52–53; and Gordon B. Hinckley, “The BYU Experience,” devotional address given at BYU on November 4, 1997, available online at http://speeches.byu.edu/index.php?act=viewitem&id=761.
99. “Brigham Young University Sponsored Programs Handbook of Policies and Procedures,” Office of Research and Creative Activities, April 2012, 14: “At BYU, funds collected as indirect costs become part of the total university budget. They are thus used to support those functions identified earlier by the budget allocation process.”
112. “In 2005, entering freshmen came from households with a parental median income of $74,000, 60 percent higher than the national average of $46,326.” Kathy Wyer, “Today’s College Freshmen Have Family Income 60% above National Average, UCLA Survey Reveals,” UCLA News, http://heri.ucla.edu/PDFs/PR_TRENDS_40YR.pdf.
119. Boston College and other Catholic universities have been discussing Catholic identity and mission and how that is reflected in the hiring of Catholic faculty. See, for example, John Langan, “Reforging Catholic Identity,” Commonweal, April 21, 2000, 20–23. Such discussions are thoughtful and complex. They suggest that since the 1960s Catholic institutions of higher education have engaged in efforts to develop significant professionalization of their faculty that have been associated with increased independence from the Catholic Church, greater efforts to provide plurality of views within their institutions, and more focus on faculty in philosophy and theology carrying the discussion of faith and learning within a Catholic tradition. Several voices are calling for administrators to require at least some of the faculty who are hired (whether or not they are Catholic) to have the skill and interest to continue that conversation in scholarly ways across the other disciplines as appropriate. However, such discussions suggest that most, if not all, of these institutions have moved toward more ideographic approaches, where most faculty members are not expected to qualify for or engage in this dialogue or to involve their students in it.
|
BYU became more closely tied to its affiliated church and more intentionally religious than any of the remaining religious universities.1
A popular twentieth-century myth has it that aerodynamics experts have examined the bumblebee and determined that “that critter can’t fly,” because “it does not have the required capacity (in terms of wing area or flapping speed).” Nevertheless, the laws of physics do not prevent the bumblebee from flying. Research shows that “bumblebees simply flap harder than other insects, increasing the amplitude of their wing strokes to achieve more lift, and use a figure-of-eight wing motion to create low-pressure vortices to pull them up.”2 In other words, the bumblebee flies, but it does so differently than many other insects.
As organizational scholars, we ask similar questions of BYU. Our goal is to help those who are interested in universities, and particularly religious universities, to understand them better by comparing BYU to the others in this niche. We believe that by studying the limit case we can shed light on the nature of such organizational “critters” and how they can actually “fly,” sometimes, as it might appear, against all odds.
After reviewing the primary reasons for the secularization of American research universities, we consider BYU by contrasting it with other religious universities in its institutional niche. We then focus on trying to understand how BYU deals with the inherent dilemmas it has chosen quite consciously and the implications of these choices for its ability to “fly.” We conclude by considering implications for faculty, administrators, and scholars of universities that for a variety of reasons (some more conscious than others) incorporate such dilemmas as a core aspect of their identity.
The Secularization of American Higher Education
Given the history of secularization in institutions of higher education in America, some might wonder whether BYU is the last of its kind. Most American universities started out as church-related colleges, but by the 1920s the majority of them had been “secularized.”
|
yes
|
Paleoethnobotany
|
Was maize a staple food in prehistoric North American civilizations?
|
yes_statement
|
"maize" was a "staple" "food" in "prehistoric" north american "civilizations".. "prehistoric" north american "civilizations" relied on "maize" as a "staple" "food".
|
https://today.tamu.edu/2017/08/23/ancient-corn-reveals-clues-about-early-farming-says-texas-am-prof/
|
Ancient Corn Reveals Clues About Early Farming, Says Texas A&M ...
|
A team of researchers that includes a Texas A&M University anthropologist has analyzed a trove of ancient maize and their findings cast new light on the development of agriculture in Central America and the food that fueled the rise of the many Native American civilizations, including the Maya.
Heather Thakar, curator and instructional assistant professor in the Department of Anthropology, and colleagues from Penn State, UC-Santa Barbara, the Smithsonian Institution, and the University of Hawaii have had their work published in PNAS (Proceedings of the National Academy of Sciences).
The team examined maize (corn) cobs that were found in the El Gigante rockshelter in Honduras, where humans resided throughout the last 11,000 years. The researchers provide definitive evidence that fully domesticated maize was indeed productive enough to be a staple crop in the local residents’ diets by at least 4,300 years ago.
“Staple grain crops (like maize, wheat, or barley) are associated with a complete commitment to agriculture, which provided the basis for the development of many complex societies around the world that started developing after about 5,000 years ago. So here, the critical question is not so much why maize was so important for Native American civilizations but rather when maize became so important.”
Thakar says it is certain that maize was domesticated long before it became a primary dietary staple. “Based on genetic studies, most researchers agree that maize evolved from the teosinte plant somewhere in the Balsas area of southwestern Mexico around 9,000 years ago.” However, the full evolutionary history of maize is still poorly known because archaeological sites with well-preserved ancient maize are incredibly rare. Only a handful of specimens that are more than 4,000 years old have ever been recovered, and most of these ancient cobs come from just five caves located in the arid Mexican highlands.
“Recent studies on very early maize cobs from the Tehuacan Valley in Mexico (dating to approximately 5,300 years ago) indicate that those plants were only partially domesticated.” The small size of these and other early Mexican cobs suggests that maize may have been initially used as a green vegetable or for stalk sugar to produce alcoholic beverages rather than a food grain. The ancient maize cobs that Thakar and her team studied from El Gigante (dating to approximately 4,300 years ago) are interesting because they show how large maize had become once it dispersed from its primary center of domestication. She states, “the cobs that we analyzed are bigger than those known from other areas of Mexico for the same time period.”
The team’s high-precision dating based on accelerator mass spectrometry – commonly called radiocarbon dating — combined with detailed analysis of a well-preserved and extensive plant assemblage from El Gigante rock shelter in the highlands of western Honduras, significantly expands the number of known ancient maize specimens and their geographic distribution. “This unique archaeological site provides an unparalleled opportunity to reconstruct the diversification process outside the heartland of maize domestication and to assess its transition into a staple crop in the New World,” Thakar notes.
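For readers unfamiliar with the dating method mentioned above, the core arithmetic of radiocarbon dating can be sketched briefly. The short Python snippet below is illustrative only: it computes a conventional radiocarbon age from a measured fraction of modern carbon-14 using the standard Libby mean life of 8,033 years, and the input value is invented, not a measurement reported by the El Gigante study. Converting such an age to calendar years (as in the roughly 4,300-year figure above) additionally requires calibration against a calibration curve, which is not shown here.

import math

LIBBY_MEAN_LIFE_YEARS = 8033.0  # conventional constant used for radiocarbon ages

def conventional_radiocarbon_age(fraction_modern):
    # Conventional radiocarbon age (in radiocarbon years BP) for a measured
    # fraction of modern carbon-14 (F14C); illustrative and uncalibrated.
    return -LIBBY_MEAN_LIFE_YEARS * math.log(fraction_modern)

# Hypothetical measurement: a sample retaining 60% of modern carbon-14.
print(round(conventional_radiocarbon_age(0.60)))  # about 4,100 radiocarbon years BP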
“We believe that the domestication history of maize in Honduras is distinct from Mexico because Honduras is well outside the range of the wild plant (teosinte) that maize was domesticated from. In Mexico, hybridization and backcrossing between teosinte and maize could have slowed the domestication process.” Research on specimens from El Gigante reveals that ancient farmers contributed to significant post-domestication crop improvements and the development of many local maize varieties through selection for useful traits.
Further changes in cob size and architecture documented in the El Gigante assemblage appear contemporaneous with the widespread adoption of this important domesticate throughout Central America. These changes correspond with increased reliance on maize as a staple grain crop and the emergence of state-level societies between 3,500 and 1,000 years ago.
“Our project is actively investigating evolving food economies (11,000 to 1,500 years ago) associated with early management of native tree crops (like avocados) and the subsequent introduction of non-native field crops (such as maize, beans and squash) that shaped the tropical forests of Central America over millennia and supported the rise of classic Mayan society.”
|
A team of researchers that includes a Texas A&M University anthropologist has analyzed a trove of ancient maize and their findings cast new light on the development of agriculture in Central America and the food that fueled the rise of the many Native American civilizations, including the Maya.
Heather Thakar, curator and instructional assistant professor in the Department of Anthropology, and colleagues from Penn State, UC-Santa Barbara, the Smithsonian Institution, and the University of Hawaii have had their work published in PNAS (Proceedings of the National Academy of Sciences).
The team examined maize (corn) cobs that were found in the El Gigante rockshelter in Honduras, where humans resided throughout the last 11,000 years. The researchers provide definitive evidence that fully domesticated maize was indeed productive enough to be a staple crop in the local residents’ diets by at least 4,300 years ago.
“Staple grain crops (like maize, wheat, or barley) are associated with a complete commitment to agriculture, which provided the basis for the development of many complex societies around the world that started developing after about 5,000 years ago. So here, the critical question is not so much why maize was so important for Native American civilizations but rather when maize became so important.”
Thakar says it is certain that maize was domesticated long before it became a primary dietary staple. “Based on genetic studies, most researchers agree that maize evolved from the teosinte plant somewhere in the Balsas area of southwestern Mexico around 9,000 years ago.” However, the full evolutionary history of maize is still poorly known because archaeological sites with well-preserved ancient maize are incredibly rare. Only a handful of specimens that are more than 4,000 years old have ever been recovered, and most of these ancient cobs come from just five caves located in the arid Mexican highlands.
|
yes
|
Paleoethnobotany
|
Was maize a staple food in prehistoric North American civilizations?
|
yes_statement
|
"maize" was a "staple" "food" in "prehistoric" north american "civilizations".. "prehistoric" north american "civilizations" relied on "maize" as a "staple" "food".
|
https://en.wikipedia.org/wiki/Maize
|
Maize - Wikipedia
|
Maize is widely cultivated throughout the world, and a greater weight of maize is produced each year than of any other grain.[9] In 2021, total world production was 1.2 billion tonnes (1.2×10⁹ long tons; 1.3×10⁹ short tons). Maize is the most widely grown grain crop throughout the Americas, with 384 million tonnes (378,000,000 long tons; 423,000,000 short tons) grown in the United States alone in 2021.[citation needed] Genetically modified maize made up 85% of the maize planted in the United States in 2009.[10] Subsidies in the United States help to account for the country’s high level of maize cultivation and its position as the largest producer in the world.[11]
Maize is a cultigen; human intervention is required for it to propagate. Whether or not the kernels fall off the cob on their own is a key piece of evidence used in archaeology to distinguish domesticated maize from its naturally-propagating teosinte ancestor.[4] Genetic evidence can also be used to determine when various lineages split.[12]
Most historians believe maize was domesticated in the Tehuacán Valley of Mexico.[13] Recent research in the early 21st century has modified this view somewhat; scholars now indicate the adjacent Balsas River Valley of south-central Mexico as the center of domestication.[14]
A 2002 study by Matsuoka et al. demonstrated that, rather than arising from multiple independent domestications, all maize arose from a single domestication in southern Mexico about 9,000 years ago. The study also demonstrated that the oldest surviving maize types are those of the Mexican highlands. Later, maize spread from this region over the Americas along two major paths. This is consistent with a model based on the archaeological record suggesting that maize diversified in the highlands of Mexico before spreading to the lowlands.[15][16]
A large corpus of data indicates that [maize] was dispersed into lower Central America by 7600 BP [5600 BC] and had moved into the inter-Andean valleys of Colombia between 7000 and 6000 BP [5000–4000 BC].
— Dolores Piperno, The Origins of Plant Cultivation and Domestication in the New World Tropics: Patterns, Process, and New Developments[14]
According to a genetic study by the Brazilian Agricultural Research Corporation (Embrapa), corn cultivation was introduced in South America from Mexico, in two great waves: the first, more than 6000 years ago, spread through the Andes. Evidence of cultivation in Peru has been found dating to about 6700 years ago.[18] The second wave, about 2000 years ago, through the lowlands of South America.[19]
The earliest maize plants grew only small, 25-millimetre-long (1 in) corn ears, and only one per plant. In Jackson Spielvogel's view, many centuries of artificial selection (rather than the current view that maize was exploited by interplanting with teosinte) by the indigenous people of the Americas resulted in the development of maize plants capable of growing several ears per plant, which were usually several centimetres/inches long each.[20] The Olmec and Maya cultivated maize in numerous varieties throughout Mesoamerica; they cooked, ground and processed it through nixtamalization. It was believed that beginning about 2500 BC, the crop spread through much of the Americas.[21] Research of the 21st century has established even earlier dates. The region developed a trade network based on surplus and varieties of maize crops.[citation needed]
Mapuches of south-central Chile cultivated maize along with quinoa and potatoes in pre-Hispanic times; however, potato was the staple food of most Mapuches, "specially in the southern and coastal [Mapuche] territories where maize did not reach maturity".[22][23] Before the expansion of the Inca Empire maize was traded and transported as far south as 40°19' S in Melinquina, Lácar Department.[24] In that location maize remains were found inside pottery dated to 730 ± 80 BP and 920 ± 60 BP. Probably this maize was brought across the Andes from Chile.[24] The presence of maize in Guaitecas Archipelago (43°55' S), the southernmost outpost of pre-Hispanic agriculture,[25] is reported by early Spanish explorers.[26] However the Spanish may have misidentified the plant.[26]
By at least 1000 BCE, the Olmec in Mesoamerica had based their calendar, language, myths and worldview with maize at the center of their symbolism.[27]
Columbian exchange
After the arrival of Europeans in 1492, Spanish settlers consumed maize, and explorers and traders carried it back to Europe and introduced it to other countries. Spanish settlers much preferred wheat bread to maize, cassava, or potatoes. Maize flour could not be substituted for wheat for communion bread, since in Christian belief only wheat could undergo transubstantiation and be transformed into the body of Christ.[28] Some Spaniards worried that by eating indigenous foods, which they did not consider nutritious, they would weaken and risk turning into Indians. "In the view of Europeans, it was the food they ate, even more than the environment in which they lived, that gave Amerindians and Spaniards both their distinctive physical characteristics and their characteristic personalities."[29] Despite these worries, Spaniards did consume maize. Archeological evidence from Florida sites indicate they cultivated it as well.[30]
Maize spread to the rest of the world because of its ability to grow in diverse climates. It was cultivated in Spain just a few decades after Columbus's voyages and then spread to Italy, West Africa and elsewhere.[30]
Widespread cultivation most likely began in southern Spain in 1525, after which it quickly spread to the rest of the Spanish Empire including its territories in Italy (and, from there, to other Italian states). Maize had many advantages over wheat and barley; it yielded two and a half times the food energy per unit cultivated area,[31] could be harvested in successive years from the same plot of land, and grew in wildly varying altitudes and climates, from relatively dry regions with only 250 mm (10 in) of annual rainfall to damp regions with over 5,000 mm (200 in). By the 17th century it was a common peasant food in Southwestern Europe, including Portugal, Spain, southern France, and Italy. By the 18th century, it was the chief food of the southern French and Italian peasantry, especially in the form of polenta in Italy.[32]
Names
Many small male flowers make up the male inflorescence, called the tassel.
The word maize derives from the Spanish form of the indigenous Taíno word for the plant, mahiz.[33] Linnaeus included the common name maize as the species epithet in Zea mays.[34] It is known by other names including "corn" in some English speaking countries.[35]
Maize is preferred in formal, scientific, and international usage as a common name because it refers specifically to this one grain, unlike corn, which has a complex variety of meanings that vary by context and geographic region.[36] The US and a handful of other English-speaking countries primarily use corn, though most countries use the term maize.[37][8][38] The word maize is considered interchangeable in place of corn in the West; during early British and American trade, all grains were considered corn. Maize retained the name corn in the West as the primary grain in these trade relationships.[34]
The word "corn" outside the US, Canada, Australia, and New Zealand is synonymous with grain, referring to any cereal crop, with its meaning understood to vary geographically to refer to the local staple,[39] such as wheat in England and oats in Scotland or Ireland.[36] In the United States,[39] Canada,[40] Australia, and New Zealand, corn primarily means maize. This usage started as a shortening of "Indian corn" in 18th-century North America.[39][41] During European colonization of North America, confusion would occur between British and North American English speakers using the term corn, so North American speakers would need to clarify that they were talking about Indian corn or maize, such as in a conversation between the Massachusetts Bay governor Thomas Hutchinson and the British king George III.[41] "Indian corn" primarily means maize (the staple grain of indigenous Americans) but can also refer more specifically to multicolored "flint corn" used for decoration.[42] Other common names include barajovar, makka, silk maize, and zea.[43]
Betty Fussell writes in an article on the history of the word "corn" in North America that "[t]o say the word "corn" is to plunge into the tragi-farcical mistranslations of language and history".[27] Similar to the British, the Spanish referred to maize as panizo, a generic term for cereal grains, as did Italians with the term polenta. The British later referred to maize as Turkey wheat, Turkey corn, or Indian corn, with Fussell commenting that "they meant not a place but a condition, a savage rather than a civilized grain", especially with Turkish people later naming it kukuruz, or barbaric.[27]
In Southern Africa, maize is commonly called mielie (Afrikaans) or mealie (English), words possibly derived from the Portuguese word for maize, milho, but more probably from Dutch meel or English meal, meaning the edible part of a grain or pulse.[49]
Structure and physiology
The maize plant is often 3 m (10 ft) in height,[50] though some natural strains can grow 13 m (43 ft),[51] and the tallest recorded plant reached almost 14 metres (46 ft).[52] The stem is commonly composed of 20 internodes[53] of 18 cm (7 in) length.[50] The leaves arise from the nodes, alternately on opposite sides on the stalk,[54] and have entire margins.[55]
The apex of the stem ends in the tassel, an inflorescence of male flowers; these are separate from the female flowers but borne on the same plant (monoecy). When the tassel is mature and conditions are suitably warm and dry, anthers on the tassel dehisce and release pollen. Maize pollen is anemophilous (dispersed by wind), and because of its large settling velocity, most pollen falls within a few meters of the tassel.[56]
Ears develop above a few of the leaves in the midsection of the plant, between the stem and leaf sheath, elongating by around 3 mm (1⁄8 in) per day, to a length of 18 cm (7 in)[50] with 60 cm (24 in) being the maximum alleged in the subspecies.[57] They are female inflorescences, tightly enveloped by several layers of ear leaves commonly called husks.
Elongated stigmas, called silks, emerge from the whorl of husk leaves at the end of the ear. They are often pale yellow and 18 cm (7 in) in length, like tufts of hair in appearance. At the end of each is a carpel, which may develop into a "kernel" if fertilized by a pollen grain. The pericarp of the fruit is fused with the seed coat referred to as "caryopsis", typical of the grasses, and the entire kernel is often referred to as the "seed". The cob is close to a multiple fruit in structure, except that the individual fruits (the kernels) never fuse into a single mass. The grains are about the size of peas, and adhere in regular rows around a white, pithy substance, which forms the cob. The maximum size of kernels is reputedly 2.5 cm (1 in).[58] An ear commonly holds 600 kernels. They are of various colors: blackish, bluish-gray, purple, green, red, white and yellow. When ground into flour, maize yields more flour with much less bran than wheat does. It lacks the protein gluten of wheat and, therefore, makes baked goods with poor rising capability. A genetic variant that accumulates more sugar and less starch in the ear is consumed as a vegetable and is called sweet corn. Young ears can be consumed raw, with the cob and silk, but as the plant matures (usually during the summer months), the cob becomes tougher and the silk dries to inedibility. By the end of the growing season, the kernels dry out and become difficult to chew without cooking.[59]
Planting density affects multiple aspects of maize. Modern farming techniques in developed countries usually rely on dense planting, which produces one ear per stalk.[60] Stands of silage maize are yet denser,[citation needed] and achieve a lower percentage of ears and more plant matter.[citation needed]
Maize is a facultative short-day plant[61] and flowers in a certain number of growing degree days > 10 °C (50 °F) in the environment to which it is adapted.[62] The magnitude of the influence that long nights have on the number of days that must pass before maize flowers is genetically prescribed[63] and regulated by the phytochrome system.[64]Photoperiodicity can be eccentric in tropical cultivars such that the long days characteristic of higher latitudes allow the plants to grow so tall that they do not have enough time to produce seed before being killed by frost. These attributes, however, may prove useful in using tropical maize for biofuels.[65]
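As a concrete illustration of the growing-degree-day accounting referred to above, the short Python sketch below accumulates degree days using the common (Tmax + Tmin)/2 minus base-temperature formulation with the 10 °C base mentioned in the text. The daily temperatures and the one-week window are hypothetical and purely illustrative; actual thresholds and accumulation targets vary by cultivar and are not taken from this article.

BASE_TEMP_C = 10.0  # base temperature below which maize development is assumed to pause

def daily_gdd(t_max_c, t_min_c, base_c=BASE_TEMP_C):
    # Growing degree days for one day: daily mean temperature minus the base, floored at zero.
    return max(0.0, (t_max_c + t_min_c) / 2.0 - base_c)

# Hypothetical week of daily (max, min) temperatures in degrees Celsius.
week = [(28, 16), (30, 18), (25, 12), (22, 9), (31, 19), (27, 15), (29, 17)]
print(sum(daily_gdd(t_max, t_min) for t_max, t_min in week))  # accumulated GDD for the week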
Immature maize shoots accumulate a powerful antibiotic substance, 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one (DIMBOA). DIMBOA is a member of a group of hydroxamic acids (also known as benzoxazinoids) that serve as a natural defense against a wide range of pests, including insects, pathogenic fungi and bacteria. DIMBOA is also found in related grasses, particularly wheat. A maize mutant (bx) lacking DIMBOA is highly susceptible to attack by aphids and fungi. DIMBOA is also responsible for the relative resistance of immature maize to the European corn borer (family Crambidae). As maize matures, DIMBOA levels and resistance to the corn borer decline.[citation needed]
Because of its shallow roots, maize is susceptible to droughts, intolerant of nutrient-deficient soils, and prone to be uprooted by severe winds.[66]
Maize kernels
Ear of maize with irregular rows of kernels
While yellow maizes derive their color from lutein and zeaxanthin, in red-colored maizes the kernel coloration is due to anthocyanins and phlobaphenes. The latter are synthesized in the flavonoid biosynthetic pathway[67] by polymerization of flavan-4-ols.[68] This step depends on expression of the maize pericarp color1 (p1) gene,[69] which encodes an R2R3 Myb-like transcriptional activator[70] of the A1 gene; A1 encodes dihydroflavonol 4-reductase, which reduces dihydroflavonols to flavan-4-ols.[71] Another gene (Suppressor of Pericarp Pigmentation 1, or SPP1) acts as a suppressor.[72] The p1 gene encodes a Myb-homologous transcriptional activator of genes required for biosynthesis of red phlobaphene pigments, while the P1-wr allele specifies colorless kernel pericarp and red cobs; unstable factor for orange1 (Ufo1) modifies P1-wr expression to confer pigmentation in the kernel pericarp, as well as in vegetative tissues, which normally do not accumulate significant amounts of phlobaphene pigments.[69] The maize P gene encodes a Myb homolog that recognizes the sequence CCT/AACC, in sharp contrast with the C/TAACGG bound by vertebrate Myb proteins.[73]
The ear leaf is the leaf most closely associated with a particular developing ear. This leaf and those above it contribute most of the grain fill, with estimates ranging from 70%[74] to 75–90%.[75] Fungicide application is therefore most important in that region in most disease environments.[74][75]
Abnormal flowers
Maize flowers may sometimes exhibit mutations that lead to the formation of female flowers in the tassel. These mutations, ts4 and Ts6, prohibit the development of the stamen while simultaneously promoting pistil development.[76] This may cause inflorescences containing both male and female flowers, or hermaphrodite flowers.[77]
The earlier classification system has been replaced (though not entirely displaced) over the last 60 years by multivariable classifications based on ever more data. Agronomic data were supplemented by botanical traits for a robust initial classification, then genetic, cytological, protein and DNA evidence was added. Now, the categories are forms (little used), races, racial complexes, and recently branches.[citation needed]
Maize is a diploid with 20 chromosomes (n=10). The combined length of the chromosomes is 1500 cM. Some of the maize chromosomes have what are known as "chromosomal knobs": highly repetitive heterochromatic domains that stain darkly. Individual knobs are polymorphic among strains of both maize and teosinte.[citation needed] Hufford et al., 2012 finds that 83% of allelic variation within the genome derives from its teosinte ancestors, primarily due to the freedom of Zeas to outcross.[79]
The centromeres have two types of structural components, both found only in the centromeres: large arrays of CentC, a short satellite DNA, and members of a single family of retrotransposons. The B chromosome, unlike the others, contains an additional repeat which extends into neighboring regions of the chromosome. Centromeres can accidentally shrink during division and still function, although it is thought this will fail if they shrink below a few hundred kilobases. Kinetochores contain RNA originating from centromeres. Centromere regions can become inactive, and can continue in that state if the chromosome still has another active one.[81]
The Maize Genetics Cooperation Stock Center, funded by the USDA Agricultural Research Service and located in the Department of Crop Sciences at the University of Illinois at Urbana-Champaign, is a stock center of maize mutants. The total collection has nearly 80,000 samples. The bulk of the collection consists of several hundred named genes, plus additional gene combinations and other heritable variants. There are about 1000 chromosomal aberrations (e.g., translocations and inversions) and stocks with abnormal chromosome numbers (e.g., tetraploids). Genetic data describing the maize mutant stocks as well as myriad other data about maize genetics can be accessed at MaizeGDB, the Maize Genetics and Genomics Database.[82]
In 2005, the US National Science Foundation (NSF), Department of Agriculture (USDA) and the Department of Energy (DOE) formed a consortium to sequence the B73 maize genome. The resulting DNA sequence data was deposited immediately into GenBank, a public repository for genome-sequence data. Sequences and genome annotations have also been made available throughout the project's lifetime at the project's official site.[83]
Primary sequencing of the maize genome was completed in 2008.[84] On November 20, 2009, the consortium published results of its sequencing effort in Science.[85] The genome, 85% of which is composed of transposons, was found to contain 32,540 genes (by comparison, the human genome contains about 2.9 billion bases and 26,000 genes). Much of the maize genome has been duplicated and reshuffled by helitrons, a group of rolling circle transposons.[86]
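Since the sequence data were deposited in GenBank, a hedged sketch of how one might retrieve and inspect a maize record with Biopython's Entrez and SeqIO utilities is shown below. The accession is an assumption on my part (believed to be the Z. mays chloroplast genome, not a figure from the text) and the email address is a placeholder; substitute your own values before running.

    # Hedged sketch: fetch a maize GenBank record and count annotated gene features.
    # Requires Biopython and network access; accession and email are assumptions.
    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.org"   # NCBI asks for a contact address (placeholder)
    accession = "NC_001666"            # assumed example: Zea mays chloroplast genome

    handle = Entrez.efetch(db="nucleotide", id=accession, rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()

    genes = [f for f in record.features if f.type == "gene"]
    print(f"{record.id}: {len(record.seq)} bp, {len(genes)} annotated gene features")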
In Z. mays and various other angiosperms, the MADS-box motif is involved in floral development. Early studies in several angiosperm models, including Z. mays, marked the beginning of research into the molecular evolution of floral structure in general, as well as into the role of these genes in nonflowering plants.[87]
Recombination is a significant source of diversity in Z. mays; this finding supersedes previous studies that found no such correlation between recombination rate and diversity.[91]
This recombination/diversity effect is seen throughout plants, but it does not occur, or occurs less strongly, in regions of high gene density. This is likely why domesticated Z. mays has not seen as much of an increase in diversity in regions of higher gene density as in regions of lower density, although there is more evidence for this in other plants.[91]
Some lines of maize have undergone ancient polyploidy events, starting 11 million years ago. Over that time roughly 72% of the duplicated genes have been retained, which is higher than in other plants with older polyploidy events. Maize may therefore be expected to lose more duplicate genes as time goes on, following the course taken by the genomes of other plants. If gene loss has simply not occurred yet, that could explain the lack of observed positive selection and the lower negative selection relative to otherwise similar plants, i.e. plants that are also naturally outcrossing and have similar effective population sizes.[91]
Ploidy does not appear to influence effective population size or the magnitude of the selection effect in maize.[91]
Breeding
Maize reproduces sexually each year. This randomly selects half the genes from a given plant to propagate to the next generation, meaning that desirable traits found in the crop (like high yield or good nutrition) can be lost in subsequent generations unless certain techniques are used.[citation needed]
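As an illustration of how random segregation can erode an unselected trait, the following small simulation (not derived from the text; the population size, allele labels, and generation count are arbitrary assumptions) tracks the frequency of a desirable allele under random mating.

    # Illustrative sketch: how often does a desirable allele 'A' drift in frequency
    # in a small, randomly mating population with no selection?
    import random

    def next_generation(pop, size):
        # Each offspring receives one randomly chosen allele from each of two random parents.
        return [(random.choice(random.choice(pop)), random.choice(random.choice(pop)))
                for _ in range(size)]

    random.seed(1)
    population = [("A", "a")] * 20          # start: every plant heterozygous
    for gen in range(1, 11):
        population = next_generation(population, 20)
        freq_A = sum(g.count("A") for g in population) / (2 * len(population))
        print(f"generation {gen}: frequency of 'A' = {freq_A:.2f}")

Controlled crossing and selection, as described next, are what keep such traits from drifting away.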
Maize breeding in prehistory resulted in large plants producing large ears. Modern breeding began with individuals who selected highly productive varieties in their fields and then sold seed to other farmers. James L. Reid was one of the earliest and most successful, developing Reid's Yellow Dent in the 1860s. These early efforts were based on mass selection. Later breeding efforts included ear-to-row selection (C. G. Hopkins, c. 1896), hybrids made from selected inbred lines (G. H. Shull, 1909), and the highly successful double-cross hybrids using four inbred lines (D. F. Jones, c. 1918, 1922). University-supported breeding programs were especially important in developing and introducing modern hybrids.[92] By the 1930s, companies such as Pioneer, devoted to the production of hybrid maize, had begun to influence long-term development. Internationally important seed banks, such as the International Maize and Wheat Improvement Center (CIMMYT) and the US bank at the Maize Genetics Cooperation Stock Center at the University of Illinois at Urbana-Champaign, maintain germplasm important for future crop development.[citation needed]
Since the 1940s the best strains of maize have been first-generation hybrids made from inbred strains that have been optimized for specific traits, such as yield, nutrition, and drought, pest and disease tolerance. Both conventional cross-breeding and genetic engineering have succeeded in increasing output and reducing the need for cropland, pesticides, water and fertilizer.[93] Evidence that the yield potential of individual plants has increased over the past few decades is conflicting, which suggests that yield gains are instead associated with leaf angle, lodging resistance, tolerance of high plant density, disease/pest tolerance, and other agronomic traits.[94]
Certain varieties of maize have been bred to produce many ears which are the source of the "baby corn" used as a vegetable in Asian cuisine.[95]
One strain called olotón has evolved a symbiotic relationship with nitrogen-fixing microbes, which provides the plant with 29%–82% of its nitrogen.[96]
CIMMYT operates a conventional breeding program to provide optimized strains. The program began in the 1980s. Hybrid seeds are distributed in Africa by the Drought Tolerant Maize for Africa project.[93]
Genetic engineering
Genetically engineered (GE) maize was one of the 26 GE crops grown commercially in 2016.[97][98] The vast majority of this is Bt maize. Grown since 1997 in the United States and Canada,[99] GM varieties accounted for 92% of the US maize crop in 2016[97][100] and 33% of the worldwide maize crop.[97][101] As of 2011, herbicide-tolerant maize varieties were grown in Argentina, Australia, Brazil, Canada, China, Colombia, El Salvador, the European Union, Honduras, Japan, Korea, Malaysia, Mexico, New Zealand, Philippines, the Russian Federation, Singapore, South Africa, Taiwan, Thailand, and the United States. Insect-resistant maize was grown in Argentina, Australia, Brazil, Canada, Chile, China, Colombia, Egypt, the European Union, Honduras, Japan, Korea, Malaysia, Mexico, New Zealand, Philippines, South Africa, Switzerland, Taiwan, the United States, and Uruguay.[102]
In September 2000, up to $50 million worth of food products were recalled due to the presence of Starlink genetically modified corn, which had been approved only for animal consumption; the variety was subsequently withdrawn from the market.[103]
Origin
Maize is the domesticated variant of teosinte – teosintes are the crop wild relatives of this plant.[106] The two plants have dissimilar appearance, maize having a single tall stalk with multiple leaves and teosinte being a short, bushy plant. The difference between the two is largely controlled by differences in just two genes, called grassy tillers-1 (gt1, A0A317YEZ1) and teosinte branched-1 (tb1, Q93WI2).[106]
Several theories had been proposed about the specific origin of maize in Mesoamerica:[107][108]
In the late 1930s, Paul Mangelsdorf suggested that domesticated maize was the result of a hybridization event between an unknown wild maize and a species of Tripsacum, a related genus. This model has since been refuted by modern genetic testing.[107]: 40
Among the questions debated were how the tiny archaeological specimens of 3500–2700 BC could have been selected from a teosinte, and how domestication could have proceeded without leaving remains of teosinte or maize with teosintoid traits earlier than the earliest known until recently, dating from ca. 1100 BC.
The domestication of maize is of particular interest to researchers—archaeologists, geneticists, ethnobotanists, geographers, etc. The process is thought by some to have started 7,500 to 12,000 years ago. Research from the 1950s to 1970s originally focused on the hypothesis that maize domestication occurred in the highlands between the states of Oaxaca and Jalisco, because the oldest archaeological remains of maize known at the time were found there.
Connection with 'parviglumis' subspecies
Genetic studies, published in 2004 by John Doebley, identified Zea mays ssp. parviglumis, native to the Balsas River valley in Mexico's southwestern highlands, and also known as Balsas teosinte, as being the crop wild relative that is genetically most similar to modern maize.[110][109] This was confirmed by further studies, which refined this hypothesis somewhat. Archaeobotanical studies, published in 2009, point to the middle part of the Balsas River valley as the likely location of early domestication; this river is not very long, so these locations are not very distant. Stone milling tools with maize residue have been found in an 8,700 year old layer of deposits in a cave not far from Iguala, Guerrero.[111][112][113]
Doebley was part of the team that first published, in 2002, that maize had been domesticated only once, about 9,000 years ago, and then spread throughout the Americas.[15][114]
A primitive corn was being grown in southern Mexico, Central America, and northern South America 7,000 years ago. Archaeological remains of early maize ears, found at Guila Naquitz Cave in the Oaxaca Valley, date back roughly 6,250 years; the oldest ears from caves near Tehuacan, Puebla, date to about 5,450 BP.[21]
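Because this passage mixes "years ago" figures with BP and BC dates, a minimal sketch of the conventional conversion, treating BP as years before AD 1950 and ignoring radiocarbon calibration, may help line the numbers up.

    # Minimal sketch: convert "years before present" (BP, present = AD 1950)
    # to an approximate calendar year. Radiocarbon calibration is ignored here.
    def bp_to_calendar_year(bp):
        year = 1950 - bp
        return f"{abs(year)} {'BC' if year < 0 else 'AD'}"

    for bp in (6250, 5450):
        print(f"{bp} BP  ->  roughly {bp_to_calendar_year(bp)}")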
Jaina Island ceramic statuette of the young Maya Maize God emerging from an ear of corn, 600–900 A.D.
As maize was introduced to new cultures, new uses were developed and new varieties selected to better serve in those preparations. Maize was the staple food, or a major staple – along with squash, Andean region potato, quinoa, beans, and amaranth – of most pre-Columbian North American, Mesoamerican, South American, and Caribbean cultures. The Mesoamerican civilization, in particular, was deeply interrelated with maize. Its traditions and rituals involved all aspects of maize cultivation – from the planting to the food preparation. Maize formed the Mesoamerican people's identity.[citation needed]
It is unknown what precipitated its domestication, because the edible portion of the wild variety is too small, and hard to obtain, to be eaten directly, as each kernel is enclosed in a very hard bivalve shell.[citation needed]
In 1939, George Beadle demonstrated that the kernels of teosinte are readily "popped" for human consumption, like modern popcorn.[115] Some have argued it would have taken too many generations of selective breeding to produce large, compressed ears for efficient cultivation. However, studies of the hybrids readily made by intercrossing teosinte and modern maize suggest this objection is not well founded.[citation needed]
Spreading to the north
Around 4,500 years ago, maize began to spread to the north. Maize was first cultivated in what is now the United States at several sites in New Mexico and Arizona about 4,100 years ago.[116]
During the first millennium AD, maize cultivation spread more widely in the areas to the north. In particular, the large-scale adoption of maize agriculture and consumption in eastern North America took place about A.D. 900. Native Americans cleared large forest and grassland areas for the new crop.[117]
In 2005, research by the USDA Forest Service suggested that the rise in maize cultivation 500 to 1,000 years ago in what is now the southeastern United States corresponded with a decline of freshwater mussels, which are very sensitive to environmental changes.[118]
Cultivation
Planting
Seedlings three weeks after sowing; young stalks
Because it is cold-intolerant, in the temperate zones maize must be planted in the spring. Its root system is generally shallow, so the plant is dependent on soil moisture. As a plant that uses C4 carbon fixation, maize is a considerably more water-efficient crop than plants that use C3 carbon fixation such as alfalfa and soybeans. Maize is most sensitive to drought at the time of silk emergence, when the flowers are ready for pollination. In the United States, a good harvest was traditionally predicted if the maize was "knee-high by the Fourth of July", although modern hybrids generally exceed this growth rate. Maize used for silage is harvested while the plant is green and the fruit immature. Sweet corn is harvested in the "milk stage", after pollination but before starch has formed, between late summer and early to mid-autumn. Field maize is left in the field until very late in the autumn to thoroughly dry the grain, and may, in fact, sometimes not be harvested until winter or even early spring. The importance of sufficient soil moisture is shown in many parts of Africa, where periodic drought regularly causes maize crop failure and consequent famine. Although it is grown mainly in wet, hot climates, it has been said to thrive in cold, hot, dry or wet conditions, meaning that it is an extremely versatile crop.[119]
Mature plants showing ears
Maize was planted by the Native Americans in hills, in a complex system known to some as the Three Sisters.[120] Maize provided support for beans, and the beans provided nitrogen derived from nitrogen-fixing rhizobia bacteria which live on the roots of beans and other legumes; squashes provided ground cover to stop weeds and inhibit evaporation by providing shade over the soil.[121] This method was replaced by single-species hill planting, where each hill, spaced 60–120 cm (2 ft 0 in – 3 ft 11 in) apart, was planted with three or four seeds, a method still used by home gardeners. A later technique was "checked maize", where hills were placed 1 m (40 in) apart in each direction, allowing cultivators to run through the field in two directions. In more arid lands, this was altered and seeds were planted in the bottom of 10–12 cm (4–4+1⁄2 in) deep furrows to collect water. The modern technique plants maize in rows, which allows for cultivation while the plant is young, although the hill technique is still used in the maize fields of some Native American reservations. When maize is planted in rows, it also allows for planting of other crops between these rows to make more efficient use of land space.[122]
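For a rough sense of what these spacings imply, the following back-of-the-envelope sketch estimates plants per hectare; the square-grid layout and perfect germination are simplifying assumptions, not figures from the text.

    # Rough sketch: plants per hectare implied by the hill spacings described above.
    # Assumes a square grid and that every seed becomes a plant (an idealization).
    def plants_per_hectare(spacing_m, seeds_per_hill):
        hills = (100.0 / spacing_m) ** 2      # hills in a 100 m x 100 m field
        return hills * seeds_per_hill

    print(f"Hills 1.0 m apart, 3 seeds each: {plants_per_hectare(1.0, 3):,.0f} plants/ha")
    print(f"Hills 0.6 m apart, 4 seeds each: {plants_per_hectare(0.6, 4):,.0f} plants/ha")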
In most regions today, maize grown in residential gardens is still often planted manually with a hoe, whereas maize grown commercially is no longer planted manually but rather is planted with a planter. In North America, fields are often planted in a two-crop rotation with a nitrogen-fixing crop, often alfalfa in cooler climates and soybeans in regions with longer summers. Sometimes a third crop, winter wheat, is added to the rotation.[citation needed]
Many of the maize varieties grown in the United States and Canada are hybrids. Often the varieties have been genetically modified to tolerate glyphosate or to provide protection against natural pests. Glyphosate is an herbicide which kills all plants except those with genetic tolerance. This genetic tolerance is very rarely found in nature.[citation needed]
In the midwestern United States, low-till or no-till farming techniques are usually used. In low-till, fields are covered once, maybe twice, with a tillage implement either ahead of crop planting or after the previous harvest. The fields are planted and fertilized. Weeds are controlled through the use of herbicides, and no cultivation tillage is done during the growing season. This technique reduces moisture evaporation from the soil, and thus provides more moisture for the crop.
The herbicide and genetic technologies described above are what enable low-till and no-till farming. Weeds compete with the crop for moisture and nutrients, making them undesirable.[citation needed]
Harvesting
Maize harvested as a grain crop can be kept in the field a relatively long time, even months, after the crop is ready to harvest; it is also harvested and stored in the husk leaves if kept dry.[123]
Before the 20th century, all maize harvesting was by manual labour, by grazing, or by some combination of those. Whether the ears were hand-picked and the stover was grazed, or the whole plant was cut, gathered, and shocked, people and livestock did all the work. Between the 1890s and the 1970s, the technology of maize harvesting expanded greatly. Today, all such technologies, from entirely manual harvesting to entirely mechanized, are still in use to some degree, as appropriate to each farm's needs, although the thoroughly mechanized versions predominate, as they offer the lowest unit costs when scaled to large farm operations.
Before World War II, most maize in North America was harvested by hand. This involved a large number of workers and associated social events (husking or shucking bees). From the 1890s onward, some machinery became available to partially mechanize the processes, such as one- and two-row mechanical pickers (picking the ear, leaving the stover) and corn binders, which are reaper-binders designed specifically for maize. The latter produce sheaves that can be shocked. By hand or mechanical picker, the entire ear is harvested, which then requires a separate operation of a maize sheller to remove the kernels from the ear. Whole ears of maize were often stored in corn cribs, and these whole ears are a sufficient form for some livestock feeding use. Today corn cribs with whole ears, and corn binders, are less common because most modern farms harvest the grain from the field with a combine and store it in bins. The combine with a corn head (with points and snap rolls instead of a reel) does not cut the stalk; it simply pulls the stalk down. The stalk continues downward and is crumpled into a mangled pile on the ground, where it usually is left to become organic matter for the soil. The ear of maize is too large to pass between slots in a plate as the snap rolls pull the stalk away, leaving only the ear and husk to enter the machinery. The combine separates the husk and the cob, keeping only the kernels.[124]
For storing grain in bins, the moisture of the grain must be sufficiently low to avoid spoiling. If the moisture content of the harvested grain is too high, grain dryers are used to reduce the moisture content by blowing heated air through the grain. This can require large amounts of energy in the form of combustible gases (propane or natural gas) and electricity to power the blowers.[126]
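The drying requirement follows from a simple wet-basis moisture mass balance; the sketch below works through one illustrative case (the 20% and 15% moisture figures are assumptions, not values from the text).

    # Sketch of the wet-basis moisture mass balance behind grain drying.
    # Example figures (20% down to 15% moisture) are illustrative assumptions.
    def water_to_remove(wet_mass_kg, moisture_in, moisture_out):
        dry_matter = wet_mass_kg * (1.0 - moisture_in)
        final_mass = dry_matter / (1.0 - moisture_out)
        return wet_mass_kg - final_mass

    removed = water_to_remove(1000.0, 0.20, 0.15)
    print(f"Drying 1 t of grain from 20% to 15% moisture removes ~{removed:.0f} kg of water")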
Production
Maize is widely cultivated throughout the world, and a greater weight of maize is produced each year than any other grain.[9] In 2020, total world production was 1.16 billion tonnes, led by the United States with 31.0% of the total (table). China produced 22.4% of the global total.[128]
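Translating those shares into absolute tonnages is straightforward arithmetic; the following sketch uses only the figures quoted above.

    # Arithmetic check using only the figures quoted above (2020 world production).
    world_total_mt = 1_160  # million tonnes (1.16 billion t)
    shares = {"United States": 0.310, "China": 0.224}
    for country, share in shares.items():
        print(f"{country}: ~{world_total_mt * share:.0f} million tonnes")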
United States
In 2016, maize production was forecast to be over 380 million metric tons (15 billion bushels), an increase of 11% over 2014 American production.[130] Based on conditions as of August 2016, the expected yield would be the highest ever for the United States.[130] The area of harvested maize was forecast to be 35 million hectares (87 million acres), an increase of 7% over 2015.[130] Maize is especially popular in Midwestern states such as Indiana, Iowa, and Illinois; in the latter, it was named the state's official grain in 2017.[131]
For the crop year September 1, 2020 to August 31, 2021, an estimated 38.7 percent of US corn was used for feed, 34 percent for ethanol, 17.5 percent for export, and 9.8 percent for food.[132]
Trade
Corn futures are traded on several exchanges, including the Chicago Board of Trade (CBOT) and JSE Derivatives (JDERIV). The Chicago Board of Trade sells corn futures with a contract size of 5,000 bushels, quoted in cents per bushel, while the JDERIV contract has a size of 100 tonnes, quoted in Rand per tonne.[133][134] A worked example of converting a quoted price into a contract's notional value is sketched below.
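This is a minimal sketch of that conversion, using the contract sizes stated above; the quoted prices are hypothetical examples, not market data.

    # Sketch: turning a quoted futures price into a contract's notional value,
    # using the contract sizes given above. The quoted prices are hypothetical.
    def cbot_notional_usd(price_cents_per_bushel, bushels=5000):
        return price_cents_per_bushel / 100.0 * bushels

    def jderiv_notional_zar(price_rand_per_tonne, tonnes=100):
        return price_rand_per_tonne * tonnes

    print(f"CBOT at 450.25 c/bu:  ${cbot_notional_usd(450.25):,.2f} per contract")
    print(f"JDERIV at R3,500/t:   R{jderiv_notional_zar(3500):,.0f} per contract")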
The susceptibility of maize to the European corn borer and corn rootworms, and the resulting large crop losses which are estimated at a billion dollars worldwide for each pest,[137][138][139] led to the development of transgenics expressing the Bacillus thuringiensis toxin. "Bt maize" is widely grown in the United States and has been approved for release in Europe.
Storage
Drying is vital to prevent or at least reduce mycotoxin contamination. Aspergillus and Fusarium spp. are the most common mycotoxin sources, but there are others. Altogether maize contaminants are so common, and this crop is so economically important, that maize mycotoxins are among the most important in agriculture in general.[99]
In prehistoric times Mesoamerican women used a metate to process maize into ground cornmeal, allowing the preparation of foods that were more calorie dense than popcorn. After ceramic vessels were invented the Olmec people began to cook maize together with beans, improving the nutritional value of the staple meal. Although maize naturally contains niacin, an important nutrient, it was not bioavailable without the process of nixtamalization. The Maya used nixtamal meal to make varieties of porridges and tamales.[142] The process was later used in the cuisine of the American South to prepare corn for grits and hominy.[citation needed]
Maize can also be harvested and consumed in the unripe state, when the kernels are fully grown but still soft. Unripe maize must usually be cooked to become palatable; this may be done by simply boiling or roasting the whole ears and eating the kernels right off the cob. Sweet corn, a genetic variety that is high in sugars and low in starch, is usually consumed in the unripe state. Such corn on the cob is a common dish in the United States, Canada, United Kingdom, Cyprus, some parts of South America, and the Balkans, but virtually unheard of in some European countries.[citation needed] Corn on the cob was hawked on the streets of early 19th-century New York City by poor, barefoot "Hot Corn Girls", who were thus the precursors of hot dog carts, churro wagons, and fruit stands seen on the streets of big cities today.[144]
Within the United States, the usage of maize for human consumption constitutes only around 1/40th of the amount grown in the country. In the United States and Canada, maize is mostly grown to feed livestock, as forage, silage (made by fermentation of chopped green cornstalks), or grain. Maize meal is also a significant ingredient of some commercial animal food products.[citation needed]
Feed and fodder for livestock
Maize is a major source of both grain feed and fodder for livestock. It is fed to the livestock in various ways. When it is used as a grain crop, the dried kernels are used as feed. They are often kept on the cob for storage in a corn crib, or they may be shelled off for storage in a grain bin. The farm that consumes the feed may produce it, purchase it on the market, or some of both. When the grain is used for feed, the rest of the plant (the corn stover) can be used later as fodder, bedding (litter), or soil amendment. When the whole maize plant (grain plus stalks and leaves) is used for fodder, it is usually chopped all at once and ensilaged, as digestibility and palatability are higher in the ensilaged form than in the dried form. Maize silage is one of the most valuable forages for ruminants.[146] Before the advent of widespread ensilaging, it was traditional to gather the corn into shocks after harvesting, where it dried further. With or without a subsequent move to the cover of a barn, it was then stored for weeks to several months until fed to the livestock. Today ensilaging can occur not only in siloes but also in silage wrappers. However, in the tropics, maize can be harvested year-round and fed as green forage to the animals.[147]
Bio-fuel
"Feed maize" is being used increasingly for heating;[149] specialized corn stoves (similar to wood stoves) are available and use either feed maize or wood pellets to generate heat. Maize cobs are also used as a biomass fuel source. Maize is relatively cheap and home-heating furnaces have been developed which use maize kernels as a fuel. They feature a large hopper that feeds the uniformly sized maize kernels (or wood pellets or cherry pits) into the fire.[citation needed]
Maize is increasingly used as a feedstock for the production of ethanol fuel.[150] When considering where to construct an ethanol plant, one of the site selection criteria is to ensure there is locally available feedstock.[151] Ethanol is mixed with gasoline to decrease the amount of pollutants emitted when used to fuel motor vehicles. High fuel prices in mid-2007 led to higher demand for ethanol, which in turn led to higher prices paid to farmers for maize. This led to the 2007 harvest being one of the most profitable maize crops in modern history for farmers. Because of the relationship between fuel and maize, prices paid for the crop now tend to track the price of oil.[citation needed]
The price of food is affected to a certain degree by the use of maize for biofuel production. The costs of transportation, production, and marketing make up a large portion (80%) of the price of food in the United States. Higher energy costs affect these costs, especially transportation. The increase in food prices the consumer has been seeing is therefore mainly due to higher energy costs. The effect of biofuel production on other food crop prices is indirect: use of maize for biofuel production increases the demand for, and therefore the price of, maize. This, in turn, results in farm acreage being diverted from other food crops to maize production, which reduces the supply of the other food crops and increases their prices.[152][153]
Farm-based maize silage digester located near Neumünster in Germany, 2007. The green tarpaulin top cover is held up by the biogas stored in the digester.
Maize is widely used in Germany as a feedstock for biogas plants. Here the maize is harvested, shredded then placed in silage clamps from which it is fed into the biogas plants. This process makes use of the whole plant rather than simply using the kernels as in the production of fuel ethanol.[citation needed]
Increasingly, ethanol is used at low concentrations (10% or less) as an additive in gasoline (gasohol) for motor fuels to increase the octane rating, lower pollutants, and reduce petroleum use, an application now generally grouped under "biofuels". This has generated an intense debate between the need for new sources of energy on the one hand and, on the other, the need to maintain, in regions such as Latin America, the food habits and culture that have been central to civilizations such as the one that originated in Mesoamerica. The entry of maize into the commercial agreements of NAFTA in January 2008 intensified this debate, given the poor labor conditions of workers in the fields and, above all, the fact that NAFTA "opened the doors to the import of maize from the United States, where the farmers who grow it receive multimillion-dollar subsidies and other government supports. ... According to OXFAM UK, after NAFTA went into effect, the price of maize in Mexico fell 70% between 1994 and 2001. The number of farm jobs dropped as well: from 8.1 million in 1993 to 6.8 million in 2002. Many of those who found themselves without work were small-scale maize growers."[155] However, the introduction in the northern latitudes of the US of tropical maize intended for biofuels, and not for human or animal consumption, may potentially alleviate this.[citation needed]
Ornamental and other uses
Some forms of the plant are occasionally grown for ornamental use in the garden. For this purpose, variegated and colored leaf forms as well as those with colorful ears are used.[citation needed]
Corncobs can be hollowed out and treated to make inexpensive smoking pipes, first manufactured in the United States in 1869.[citation needed]
Children playing in a maize kernel box
An unusual use for maize is to create a "corn maze" (or "maize maze") as a tourist attraction. The idea of a maize maze was introduced by the American Maze Company, which created a maze in Pennsylvania in 1993.[157][better source needed] Traditional mazes are most commonly grown using yew hedges, but these take several years to mature. The rapid growth of a field of maize allows a maze to be laid out using GPS at the start of a growing season and for the maize to grow tall enough to obstruct a visitor's line of sight by the start of the summer. In Canada and the US, these are popular in many farming communities.[citation needed]
Maize kernels can be used in place of sand in a sandboxlike enclosure for children's play.[158]
In the US since 2009/2010, maize feedstock use for ethanol production has somewhat exceeded direct use for livestock feed; maize use for fuel ethanol was 5,130 million bushels (130 million tonnes) in the 2013/2014 marketing year.[160]
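The bushel and tonne figures above can be reconciled with the standard 56 lb (about 25.4 kg) bushel of shelled corn; that test weight is a trade convention assumed here, not a figure from the text.

    # Sketch: reconcile the bushel and tonne figures above using the standard
    # 56 lb (~25.4 kg) bushel of shelled corn (an assumed trade convention).
    KG_PER_BUSHEL = 56 * 0.45359237   # ~25.40 kg

    bushels = 5_130_000_000           # 5,130 million bushels (2013/2014 fuel ethanol use)
    tonnes = bushels * KG_PER_BUSHEL / 1000.0
    print(f"{bushels/1e9:.2f} billion bushels is about {tonnes/1e6:.0f} million tonnes")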
A fraction of the maize feedstock dry matter used for ethanol production is usefully recovered as DDGS (dried distillers grains with solubles). In the 2010/2011 marketing year, about 29.1 million tonnes of DDGS were fed to US livestock and poultry.[161] Because starch utilization in fermentation for ethanol production leaves other grain constituents more concentrated in the residue, the feed value per kg of DDGS, with regard to ruminant-metabolizable energy and protein, exceeds that of the grain. Feed value for monogastric animals, such as swine and poultry, is somewhat lower than for ruminants.[161]
Note: All nutrient values, including protein and fiber, are in %DV per 100 grams of the food item. Significant values are highlighted in light gray and bold.[162][163]
Cooking reduction = maximum typical percentage reduction in nutrients due to boiling without draining, for the ovo-lacto-vegetables group.[164][165]
Q = quality of protein in terms of completeness, without adjusting for digestibility.[165]
The following table shows the nutrient content of maize and major staple foods in a raw harvested form on a dry weight basis to account for their different water contents. Raw forms are not usually eaten and cannot be digested well. They are either sprouted, or prepared and cooked for human consumption. In sprouted or cooked form, the nutritional and anti-nutritional contents of each of these staples differ from that of raw form of these staples reported in the table below.
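A small sketch of the as-fed to dry-weight conversion that underlies such comparisons follows; the moisture and protein figures used are illustrative assumptions, since the table itself is not reproduced here.

    # Sketch: convert a nutrient value reported per 100 g of raw (as-fed) food to a
    # dry-weight basis, which is how the staples are compared above.
    # The example moisture and protein figures are illustrative assumptions.
    def to_dry_basis(value_per_100g_as_fed, moisture_fraction):
        return value_per_100g_as_fed / (1.0 - moisture_fraction)

    raw_protein_g = 9.4        # hypothetical protein per 100 g as-fed
    moisture = 0.10            # hypothetical 10% moisture content
    print(f"~{to_dry_basis(raw_protein_g, moisture):.1f} g protein per 100 g dry matter")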
Hazards
Pellagra
When maize was first introduced into farming systems other than those used by traditional Native American peoples, it was generally welcomed with enthusiasm for its productivity. However, a widespread problem of malnutrition soon arose wherever maize was introduced as a staple food. This was a mystery, since these types of malnutrition were not normally seen among the indigenous Americans, for whom maize was the principal staple food.[167]
It was eventually discovered that the indigenous Americans had learned to soak maize in alkali water, the process now known as nixtamalization, made with ashes and lime (calcium oxide), since at least 1200–1500 BC in Mesoamerica. They did this to loosen the corn hulls, but, unbeknownst to natives or colonists, it also liberates the B vitamin niacin, the lack of which was the underlying cause of the condition known as pellagra.[168]
Maize was introduced into the diet of non-indigenous Americans without the necessary cultural knowledge historically acquired in the Americas. In the late 19th century, pellagra reached epidemic proportions in parts of the southern US, as medical researchers debated two theories for its origin: the deficiency theory (which was eventually shown to be true) said that pellagra was due to a deficiency of some nutrient, and the germ theory said that pellagra was caused by a germ transmitted by stable flies. Another theory promoted by the eugenicist Charles Davenport held that people only contracted pellagra if they were susceptible to it due to certain "constitutional, inheritable" traits of the affected individual.[169]
Once alkali processing and dietary variety were understood and applied, pellagra disappeared in the developed world. The development of high lysine maize and the promotion of a more balanced diet have also contributed to its demise. Pellagra still exists today in food-poor areas and refugee camps where people survive on donated maize.[170]
The Z. mays plant has an OPALS allergy scale rating of 5 out of 10, indicating moderate potential to cause allergic reactions, exacerbated by over-use of the same plant throughout a garden. Corn pollen is heavy, large, and usually airborne in the early morning.[172]
Mycotoxins
Fungicide application does not dramatically reduce fungal growth or mycotoxin levels, although it can be part of a successful reduction strategy. Among the most common toxin producers are Aspergillus and Fusarium spp., and the most common toxins are aflatoxins, fumonisins, zearalenone, and ochratoxin A. Bt maize discourages insect vectors and by doing so dramatically reduces concentrations of fumonisins, significantly reduces aflatoxins, and only mildly reduces the others.[99]
Art
Maize has been an essential crop in the Andes since the pre-Columbian era. The Moche culture from Northern Peru made ceramics from earth, water, and fire. This pottery was a sacred substance, formed in significant shapes and used to represent important themes. Maize was represented anthropomorphically as well as naturally.[173]
In the United States, maize ears along with tobacco leaves are carved into the capitals of columns in the United States Capitol building. Maize itself is sometimes used for temporary architectural detailing when the intent is to celebrate the fall season, local agricultural productivity and culture. Bundles of dried maize stalks are often displayed along with pumpkins, gourds and straw in autumnal displays outside homes and businesses. A well-known example of architectural use is the Corn Palace in Mitchell, South Dakota, which uses cobs and ears of colored maize to implement a mural design that is recycled annually. Another well-known example is the Field of Corn sculpture in Dublin, Ohio, where hundreds of concrete ears of corn stand in a grassy field.[174]
A maize stalk with two ripe ears is depicted on the reverse of the Croatian 1 lipa coin, minted since 1993.[175]
Shucked, a 2022 musical that is currently running on Broadway, was described by Vulture as a "show about corn". Much of the show contains puns relating to corn[176][177] and the plot revolves around a blighted corn crop.[178]
^ a b McLellan Plaisted, Susan (2013). "Corn". In Smith, Andrew (ed.). The Oxford encyclopedia of food and drink in America (2nd ed.). New York, NY: Oxford University Press. ISBN 9780199739226. Retrieved February 15, 2023. The use of the word "corn" for what is termed "maize" by most other countries is peculiar to the United States. Europeans who were accustomed to the names "wheat corn," "barley corn," and "rye corn" for other small-seeded cereal grains referred to the unique American grain maize as "Indian corn." The term was shortened to just "corn," which has become the American word for the plant of American genesis.
^ a b c Piperno, Dolores R. (October 2011). "The Origins of Plant Cultivation and Domestication in the New World Tropics: Patterns, Process, and New Developments". Current Anthropology. 52 (S4): S453–S470. doi:10.1086/659998. S2CID 83061925. Recent studies in the Central Balsas River Valley of Mexico, maize's postulated cradle of origin, document the presence of maize phytoliths and starch grains at 8700 BP, the earliest date recorded for the crop (Piperno et al. 2009; Ranere et al. 2009). A large corpus of data indicates that it was dispersed into lower Central America by 7600 BP and had moved into the inter-Andean valleys of Colombia between 7000 and 6000 BP. Given the number of Cauca Valley, Colombia, sites that demonstrate early maize, it is likely that the inter-Andean valleys were a major dispersal route for the crop after it entered South America
^ a b c Fussell, Betty (1999). "Translating Maize into Corn: The Transformation of America's Native Grain". Social Research. 66 (1): 41–65. JSTOR 40971301. Gale A54668866. ProQuest 209670587. To say the word "corn" is to plunge into the tragi-farcical mistranslations of language and history. If only the British had followed Columbus in phoneticizing the Taino word mahiz, which the Arawaks named their staple grain, we wouldn't be in the same linguistic pickle we're in today, where I have to explain to someone every year that when Biblical Ruth "stood in tears amid the alien corn" she was standing in a wheat field. But it was a near thing even with the Spaniards, when we read in Columbus' Journals that the grain "which the Indians called maiz... the Spanish called panizo.' The Spanish term was generic for the cereal grains they knew - wheat, millet, barley, oats - as was the Italian term polenta, from Latin pub. As was the English term "corn," which covered grains of all kinds, including grains of salt, as in "corned beef. French linguistic imperialism, by way of a Parisian botanist in 1536, provided the term Turcicum frumentum, which the British quickly translated into "Turkey wheat," "Turkey corn," and "Indian corn." By Turkey or Indian, they meant not a place but a condition, a savage rather than a civilized grain, with which the Turks concurred, calling it kukuruz, meaning barbaric.
^Rebecca Earle, The Body of the Conquistador: Food, Race, and the Colonial Experience in Spanish America, 1492–1700. New York: Cambridge University Press 2012, pp. 17, 151.
^ a b Ensminger, Audrey H. (1994). Foods and Nutrition Encyclopedia, 2nd ed. CRC Press. p. 479. ISBN 978-0-8493-8980-1. The word "maize" is preferred in international usage because in many countries the term "corn", the name by which the plant is known in the United States, is synonymous with the leading cereal grain; thus, in England "corn" refers to wheat, and in Scotland and Ireland it refers to oats.
^"Grain". education.nationalgeographic.org. National Geographic. Retrieved February 27, 2023. In most countries, the grain of the Zea mays plant is called maize. In the United States, it's called corn.
^ a b Mencken, H. L. (1984). The American language: an inquiry into the development of English in the United States (4th, corrected, enl., and rewritten ed.). New York: Alfred A. Knopf. p. 122. ISBN 0394400755. Corn, in orthodox English, means grain for human consumption, especially wheat, e.g., the Corn Laws. The earliest settlers, following this usage, gave the name of Indian corn to what the Spaniards, following the Indians themselves, had called maiz. . . . But gradually the adjective fell off, and by the middle of the Eighteenth Century maize was simply called corn and grains in general were called breadstuffs. Thomas Hutchinson, discoursing to George III in 1774, used corn in this restricted sense speaking of "rye and corn mixed." "What corn?" asked George. "Indian corn," explained Hutchinson, "or as it is called in authors, maize."
^United States Department of Agriculture, Economic Research Service. Corn supply, disappearance, and share of total corn used for ethanol. www.ers.usda.gov/datafiles/US_Bioenergy/Feedstocks/table05.xls (Excel file, accessed June 29, 2015).
^ a b Hoffman, L. and A. Baker. 2011. Estimating the substitution of distillers' grains for corn and soybean meal in the U.S. feed complex. United States Department of Agriculture, Economic Research Service. FDS-11-l-01. 62 pp.
|
|
yes
|
Paleoethnobotany
|
Was maize a staple food in prehistoric North American civilizations?
|
yes_statement
|
"maize" was a "staple" "food" in "prehistoric" north american "civilizations".. "prehistoric" north american "civilizations" relied on "maize" as a "staple" "food".
|
https://oxfordre.com/environmentalscience/view/10.1093/acrefore/9780199389414.001.0001/acrefore-9780199389414-e-174
|
Prehistoric and Traditional Agriculture in Lowland Mesoamerica ...
|
Prehistoric and Traditional Agriculture in Lowland Mesoamerica
Summary
Mesoamerica is one of the world’s primary centers of domestication where agriculture arose independently. Paleoethnobotany (or archaeobotany), along with archaeology, epigraphy, and ethnohistorical and ethnobotanical data, provide increasingly important insights into the ancient agriculture of Lowland Mesoamerica (below 1000 m above sea level). Moreover, new advances in the analysis of microbotanical remains in the form of pollen, phytoliths, and starch-grain analysis and chemical analysis of organic residues have further contributed to our understanding of ancient plant use in this region. Prehistoric and traditional agriculture in the lowlands of Mesoamerica—notably the Maya lowlands, the Gulf Coast, and the Pacific Coast of southern Chiapas (Mexico) and Guatemala—from the Archaic (ca. 8000/7000–2000 bc) through the Preclassic/Formative (2000 bc–ad 250) and into the Classic (ad 250–900) period, are covered. During the late Archaic, these lowland regions were inhabited by people who took full advantage of the rich natural biodiversity but also grew domesticates before becoming fully sedentary. Through time, they developed diverse management strategies to produce food, from the forest management system (which includes swidden agriculture), to larger scale land modifications such as terraces, and continued to rely on semidomesticated and wild plant resources. Although lowland populations came to eventually rely on maize as a staple, other resources such as root crops and fruit trees were also cultivated, encouraged, and consumed. The need for additional research that includes systematic collection of paleoethnobotanical data, along with other lines of evidence, will be key to continue refining the understanding of ancient subsistence systems and how these changed through time and across lowland Mesoamerica.
Introduction
This article covers the emergence of plant management systems and their development through time in the lowland1 regions of Mesoamerica, notably the Maya lowlands, the Gulf Coast, and the Pacific Coast of southern Chiapas (Mexico) and Guatemala (Figure 1). The time periods covered include the Archaic (ca. 8000/7000–2000 bc) but mostly focus on the Preclassic/Formative (2000 bc–ad 250) and Classic (ad 250–900) periods of these three regions. Traditional milpa agriculture as practiced by contemporary indigenous groups, in particular the Maya, is also reviewed.
Figure 1. Source: Map modified from topographical map created by Sémhur/Wikipedia Commons/CC-BY-SA-3.0.
Mesoamerica is known as one of the world’s primary centers of domestication where agriculture arose independently (Piperno & Pearsall, 1998) and where dozens of plants were brought under cultivation and later domesticated (Piperno & Smith, 2012). There is no denying that the shift from hunting and gathering to agricultural economies was a major transition in the history of humankind (Smith, 2005). Yet, Mesoamerica is rather unique, and although beyond the scope of this article, some issues with terminology, which are particularly applicable to this region, have to be considered. Domestication is often equated with agriculture, but this is problematic (Vrydaghs & Denham, 2007). Agricultural models have often been based on Eurasian ones, which include sedentary lifestyles and livestock rearing, but clearly there is a need to move away from such concepts (Harris, 2007; Vrydaghs & Denham, 2007), especially considering that larger animals were not available for domestication in Mesoamerica. The term “cultivation” has also been widely used (Smith, 2001),2 and although some scholars view it as a broadly encompassing word that also includes activities involving wild plants (e.g., Harris, 2007), the problem persists with how to qualify these early populations that do not follow more traditional (Eurasian) models.
One of the main problems is that agriculture in the Americas has often been equated with the use of maize (Zea mays ssp. mays L.; Figure 2) as a staple (Iriarte, 2007). Yet, research in the lowlands has revealed that mobile populations already used some domesticates (besides maize), even before settling down into more permanent settlements (Lohse, 2010; Rosenswig, 2006, 2015). Thus, although Archaic populations in Mesoamerica are not what many scholars would consider traditional agriculturalists, they were nonetheless food producers, bringing plants under cultivation and eventually domesticating them during this period3 (Fritz, 1994; Piperno & Flannery, 2001; Piperno & Smith, 2012; Smith, 1997). Thus these populations are perhaps better viewed as low-level food producers with domesticates (Smith, 2001), who through time developed and intensified a multitude of forest management strategies (slash-and-burn farming/managed fallows,4 horticulture/homegardens, and managed forests) that took full advantage of their natural environment in order to grow plants (Gómez-Pompa, 1987; Gómez-Pompa & Kaus, 1990; Peters, 2000; VanDerwarker, 2006) that went beyond the triad of domesticated maize, beans (Phaseolus spp.), and squash (Cucurbita spp.) (see Table 1 for select major crops and earliest appearance in the archaeological record). In fact, as will become evident, Mesoamerican populations relied on a wide range of plants, available in their gardens, fields, and forests.
Table 1. Select Major Crops and Earliest Appearance in the Archaeobotanical Record of the Three Regions
Paleoethnobotanical Research in Lowland Mesoamerica
Paleoethnobotany (or archaeobotany), along with archaeology, epigraphy, and ethnohistorical and ethnobotanical data, provide increasingly important insights into the ancient agriculture of Lowland Mesoamerica. However, a lack of paleoethnobotanical data, the most direct form of evidence regarding ancient plant use, has limited the ability of scholars working in Mesoamerica to reach a full characterization of the subsistence economy of the various societies inhabiting these areas. Understanding the subsistence economy is important in order to address larger issues within the archaeology of these societies. For the Maya, this includes rethinking the ways that larger and denser ancient Maya populations subsisted in different ecological settings besides simply relying on swidden agriculture (Chase & Chase, 1998) and in turn if increased populations led to deforestation, loss of resources, and eventual collapse (Lentz & Hockaday, 2009; Santley, Killion, & Lycett, 1986). Combined with more specialized studies that largely focus on diet, it is obvious that the collapse was not uniform (Emery & Thornton, 2008; Wright, 2006). In the case of the people native to the Olmec heartland and the Pacific Coast, this means rethinking the role of maize in the development of complexity and the continued use of a wide spectrum of resources even when maize reliance increased (Arnold, 2009; Rosenswig, 2006; Rosenswig, VanDerwarker, Culleton, & Kennett, 2015; VanDerwarker, 2006).
Considering the deep roots of Mesoamerican archaeology in the lowlands, relatively few systematic studies to recover macrobotanical remains have been carried out. Notable exceptions include important early studies in the Maya area (Cliff & Crane, 1989; Lentz, 1999; Miksicek, 1983, 1986, 1990; Miksicek, Wing, & Scudder, 1991; Turner & Miksicek, 1984), and, to a lesser extent, the Soconusco (Feddema, 1993). Additionally, early microbotanical research in the Soconusco was carried out by Voorhies (1976), while Zurita Noguera (1997) conducted phytolith studies on samples from the Gulf Coast, and pollen data in the Maya lowlands were collected by Pohl et al. (1996).
While these and more recent studies that are detailed later have provided scholars a much-needed window into ancient Mesoamerican subsistence practices, paleoethnobotanical data from these sites reflect unique ecological, social, and historical variables that cannot be extrapolated across the entire Mesoamerican lowlands. One reason for the lack of systematic paleoethnobotanical research in these tropical regions was the belief that plant remains would not survive well due to extreme wet-dry cycles (Ford & Nigh, 2009; Miksicek, 1983). Although the relative quantity of surviving plant remains may be lower in these tropical regions than in other geographical areas, the potential of recovering evidence increases if archaeologists systematically adopt recovery methods (Cagnato, 2016; Hageman & Goldstein, 2009; Morehart, 2011; Vanderwarker, 2006). While the lack of substantial botanical data is in part due to technological considerations, it is also the result of archaeologists focusing on elite contexts and answering questions relating to social organization, demography, and status (Folan, Fletcher, & Kintz, 1979; LeCount, 2001; McNeil, 2006; Morehart, Lentz, & Prufer, 2005).
Ancient Maya plant use has been extensively studied through iconography. Plant iconography is mostly featured on painted ceramic vessels and murals, media that have more successfully withstood the test of time over organic materials such as cloth and paper (Reents-Budet, 1994). The iconographic corpus consists of mainly painted and sculpted images of plant parts, including flowers, leaves, and fruits, on a range of artifacts (Zidar & Elisens, 2009). Trees are also commonly represented, often depicted as anthropomorphic, which may stem from the idea that humans are reborn from fruit trees (Schele & Mathews, 1998). While some species have been easier to identify than others, for example waterlilies (Nymphaea sp.) and the calabash (Lagenaria sp.) and ceiba (Ceiba sp.) tree to name a few, identifying plants based on iconographic depictions is not an easy task (Reents-Budet, 1994, p. 79). Archaeologists working in the Maya region have long acknowledged the importance of two plants, maize and cacao (Theobroma cacao L., Figure 3), which together form a conceptual pairing (Martin, 2006). Both plants figure prominently in Maya iconography (McNeil, 2006) as well as having dedicatory hieroglyphic tags that specified the owner and contents of a vessel (Beliaev, Davletshin, & Tokovinine, 2010). Moreover, both plants are mentioned in the Popol Vuh, the sacred book of the Quiché Maya (Tedlock, 1996). There was also a flourishing of maize motifs during the Middle Formative period in Olmec art (Taube, 2000).
As a result of relatively few dedicated paleoethnobotanical studies and a focus on the study of maize, the role and importance of other plant species was long overshadowed, with populations believed to have relied heavily on a diet of the triad of maize, beans, and squash. While scholars have moved beyond these oversimplifications, establishing the relative importance of other plants is a work in progress (see Ford & Nigh, 2015; Sheets et al., 2012; Simms, 2014; VanDerwarker, 2006). Fortunately, growing interest among archaeologists in including a wider range of paleoethnobotanical analyses in their research, along with new advances in the analysis of microbotanical remains (pollen, phytoliths, and starch grains) and in the chemical analysis of organic residues, has recently contributed to our understanding of ancient plant use in the Mesoamerican lowlands (e.g., Cagnato, 2016; Loughmiller-Cardinal & Zagorevski, 2016; Powis, Cyphers, Gaikwad, Grivetti, & Cheong, 2011; Powis et al., 2013; Rosenswig, Pearsall, Masson, Culleton, & Kennett, 2014; Seinfeld, von Nagy, & Pohl, 2009; Simms, 2014). Moreover, the recovery of plant remains is figuring more prominently in the agendas of some archaeologists in lowland regions of Mesoamerica, resulting in a dramatic increase in paleoethnobotanical reports that, for the most part, combine various lines of data (Abramiuk, Dunham, Cummings, Yost, & Pesek, 2011; Cagnato, 2016; Cavallaro, 2013; Dedrick, 2014; Lentz et al., 2014; Morell-Hart, 2011; Simms, 2014). The results of these more recent studies, combined with past research, are discussed in more detail for each of the three regions.
Mexican Gulf Coast
Chiefdoms developed along the southern Mexican Gulf Coast during the Formative period (1400 bc–ad 300), with the establishment of large civic-ceremonial centers, namely San Lorenzo, La Venta, and Tres Zapotes (see Figure 1, blue-shaded area). Investigations have shown that Early Formative populations were only semi-sedentary, moving across the landscape seasonally or annually (Arnold, 2000) and likely relied on a mixed subsistence system, consisting of collecting and gardening5 (Killion et al., 2013). In time, as populations invested more in the swidden cycle “they created more gardens, more managed fallows, and more managed forests” (VanDerwarker, 2006, p. 110). Olmec farmers likely practiced forest- or bush-fallow shifting cultivation strategies, using the river levees and upland areas (VanDerwarker, 2006). While upland soils are less fertile, they have the advantage of being able to be cropped during both the dry and wet seasons, unlike the river levee soils that can only be cropped once but produce higher maize yields (VanDerwarker, 2006).
There is evidence that maize cultivation was taking place during the Early Formative period, corroborated by the recovery of maize phytoliths at San Lorenzo (Zurita-Noguera, 1997) and El Remolino (Wendt, 2005) and macrobotanical remains from San Lorenzo (Cyphers, 1996), Bezuapan, La Joya (VanDerwarker, 2006), and Tres Zapotes (Peres et al., 2010). Earlier data come from La Venta, where maize in macrobotanical form along with pollen has been reported (Rust & Leyden, 1994). These authors, based on the increase of pollen indicative of forest clearing, suggest that the levees were starting to be cleared during the Early Formative period and that, by the Middle Formative, maize cultivation became more intensive at the site. Yet the oldest data come from San Andres, located in Tabasco, Mexico, where Pope et al. (2001) report the earliest evidence of maize pollen, dated to 5000 bc. Other pre-Formative data come from the Tuxtla Mountain region, where pollen dating to 2780 bc has also been recovered (Goman, 1992). Interestingly, later periods (i.e., Early and early Middle Formative) did not contain maize pollen. It is only by the end of the Middle Formative period that maize is again picked up in the pollen record (Goman & Byrne, 1998).
In a pioneering multidisciplinary analysis of Olmec subsistence, VanDerwarker (2006) carried out paleoethnobotanical analysis at two Olmec settlements, Bezuapan and La Joya, located in the Tuxtla Mountains.6 Her research indicates that people at La Joya became more sedentary toward the end of the Early Formative and by the Terminal Formative had intensified maize production and harvested more fruits. In addition, she argues that greater reliance on infield production was likely, indicated by the labor invested in ridging fields and by increases in subsurface storage pits (Arnold, 2000). At Bezuapan (where ridges were also identified), maize also increased through time, with a slight decrease in the Terminal Formative period, which coincides with a slight increase in fruit consumption. The increase in the ratio of fruit trees such as avocado (Persea americana Mill.), coyol (Acrocomia mexicana),7 and sapote (Pouteria sapota (Jacq.) H.E. Moore & Stearn) is seen as evidence for possible intensification of tree crop management through time. Combined with the zooarchaeological data, VanDerwarker argues that increases in “garden hunting” suggest that people were intensifying their use of nearby gardens (p. 163). At Tres Zapotes, however, maize production and consumption seem to have been higher than at other sites, and during the Middle Formative period, coyol production seems to decrease (Peres, VanDerwarker, & Pool, 2010). At San Carlos, the inhabitants seem to have consumed moderate amounts of maize, while consuming more fruits such as coyol and sapote (VanDerwarker & Kruger, 2012). The presence of grinding stones (manos and metates) has also been used as an indicator of changes in maize processing. Grinding implements are reported from several Formative sites (Arnold, 2009; Coe & Diehl, 1980a). Arnold writes that by the Middle Formative, there was a “pronounced shift from multi-purpose metates (grinding slabs) to single-purpose metates, as revealed in distinct grinding patterns” (p. 404). The latter are “associated with targeted grinding, such as maize processing” (p. 404).
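Quantitative comparisons of this kind typically rest on simple, standardized measures such as ubiquity (the proportion of samples in which a taxon occurs) and count ratios between taxa or taxon groups. The short sketch below illustrates how such measures are commonly computed; it is a minimal, illustrative example with entirely hypothetical sample counts, not data from the studies cited here.

```python
# Illustrative calculation of two common paleoethnobotanical measures:
# ubiquity (presence across flotation samples) and a fruit-to-maize count ratio.
# All numbers below are hypothetical and serve only to show the arithmetic.

samples = [
    {"maize": 12, "coyol": 3, "avocado": 0, "sapote": 1},
    {"maize": 5,  "coyol": 0, "avocado": 2, "sapote": 0},
    {"maize": 0,  "coyol": 4, "avocado": 1, "sapote": 2},
    {"maize": 8,  "coyol": 1, "avocado": 0, "sapote": 0},
]

def ubiquity(samples, taxon):
    """Proportion of samples in which the taxon is present at all."""
    present = sum(1 for s in samples if s.get(taxon, 0) > 0)
    return present / len(samples)

def count_ratio(samples, numerator_taxa, denominator_taxon):
    """Total counts of a taxon group divided by total counts of a reference taxon."""
    num = sum(s.get(t, 0) for s in samples for t in numerator_taxa)
    den = sum(s.get(denominator_taxon, 0) for s in samples)
    return num / den if den else float("nan")

for taxon in ("maize", "coyol", "avocado", "sapote"):
    print(f"ubiquity of {taxon}: {ubiquity(samples, taxon):.2f}")

fruit_taxa = ("coyol", "avocado", "sapote")
print(f"fruit-to-maize ratio: {count_ratio(samples, fruit_taxa, 'maize'):.2f}")
```

Because raw counts are strongly affected by preservation and sample volume, analysts typically compare such ratios and ubiquities across phases or site areas rather than treating any single value as meaningful on its own.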
For some scholars, the paleoethnobotanical data modify the view that maize agriculture was the stimulus for complexity in the area (see Arnold [2009] for a good summary). VanDerwarker (2006) does not believe “that maize surpluses alone could have funded the Early Formative Olmec rise to power” (p. 39). In fact, Borstein (2001) suggests that populations were more focused on aquatic resources than agriculture prior to 1000 bc (see also Arnold, 2009). While Arnold certainly agrees that maize was cultivated by the Early Formative, it “does not appear to have become a staple food resource until after the development of significant socio-political differentiation at San Lorenzo. Nor can maize be linked to aggrandizing displays, as suggested in the Early Formative ceramic record of the Pacific coast” (p. 408). He also suggests that manioc (and not maize)8 may have been a staple during the Early Formative period, yet the current evidence is scant (see Pope et al., 2001). VanDerwarker and Kruger (2012) argue that the differences are more geographical than temporal. They note that the relatively high recovery of maize at lowland sites such as San Carlos and Tres Zapotes indicates that maize reliance increased with proximity to centers of developing sociopolitical power.9 Thus, for them, maize would have been a luxury food at first and would have been part of feasting events. This idea is reinforced by the elevated presence of maize residues on luxury wares compared to utility wares at Middle Formative San Andres, argued to indicate that maize beverages were consumed there (Seinfeld et al., 2009). Different data sets emerging from different sites across the Olmec region suggest that much variation existed and that different areas took different trajectories with regard to maize adoption (VanDerwarker & Kruger, 2012, p. 528); clearly more work is necessary to better characterize this process.
Other resources besides maize were cultivated and collected by populations living on the Gulf Coast. A variety of beans such as the common bean (Phaseolus vulgaris L. [Figure 4]), scarlet runner (P. coccineus L.), and tepary (P. acutifolius A. Gray), prickly pear (Opuntia sp.), guava (Psidium guajava L.), grape and sapote family remains, acorns (Quercus sp.), and miscellaneous plants such as possible achiote (Bixa orellana L.), morning glory family, and trianthema (Trianthema sp.) have all been reported from La Joya and Bezuapan (VanDerwarker, 2006). Additionally, Cyphers (1996) reports the presence of squash and beans in macrobotanical form from San Lorenzo. A single manioc (Manihot esculenta Crantz) pollen grain was reported from San Andres (Pope et al., 2001), and, more recently, phytoliths of this species have been reported from San Lorenzo (Cyphers & Zurita-Noguera, 2006). Cotton (Gossypium sp.) pollen has also been reported from San Andres (Pope et al., 2001). More recently, evidence of cacao preparation and consumption between 1800 and 1000 bc has been recovered in the form of theobromine residues on a range of vessels from an elite mortuary context at San Lorenzo (Powis et al., 2011). Additional studies of organic residues inside ceramics revealed the presence of chili peppers (Capsicum sp.) at San Lorenzo as early as 1800 cal bc (Lara, Cyphers, & Gaikwad, 2018). Macrobotanical data from San Carlos, 9 km to the southeast of San Lorenzo, have provided additional information, including the presence of evening primrose (Oenothera sp.), Forrestia sp., and possibly coco plum (Chrysobalanus icaco L.) and coyol real (Scheelea liebmannii).10 For the Classic period, plant remains from Bezuapan, as reported by VanDerwarker (2006), include maize, beans, avocado, coyol, sapote, guava, and grape family. However, besides this data set, there is currently little information regarding subsistence systems for the Classic11 and Postclassic (see Stark & Arnold [1997] for good summaries of the region).
Pacific Coast of Chiapas (Mexico) and Guatemala
The Soconusco region, known by the Aztecs as Xoconochco (Coe & Flannery, 1967), is located on a flat coastal plain between the Pacific Ocean and the Sierra Madre Mountains and spans two countries, Mexico and Guatemala (Figure 1, green-shaded area). The region, under Aztec control when Europeans arrived in the New World, was renowned for its agricultural productivity (Kennett et al., 2010, citing Gasco & Voorhies, 1989) and has “one of the most complete records of initial Early Formative period (1900–1400 cal bc) occupation in Middle America” (Rosenswig et al., 2015, p. 90).
The earliest human impacts detected in the region are in the Sipacate zone, at around 3500 cal bc12 (Neff et al., 2006, p. 304). More recently, Kennett et al. (2010), based on pollen, phytolith, and charcoal cores, suggest that “slash-and-burn farmers” were present by 6500 cal bp; however, other explanations for this Archaic period signature, including natural fires or foragers enriching the soil, have been proposed (see Rosenswig, 2015, p. 134).
It is hard to characterize these early societies, as some were quite sedentary (Rosenswig, 2006) while others were still largely mobile (Arnold, 1999), eventually settling down by the Locona phase (1700–1500 cal bc). Increased political complexity is first documented at Paso de la Amada in the Mazatan region (Clark & Blake, 1994; Lesure & Blake, 2002; but see Love, 2007). By around 1000 bc (Middle Formative), the focus of economic and political power shifted to the eastern side of the region, in particular to two sites: Takalik Abaj and La Blanca (Love, 2007). By the Late Formative (400 cal bc), urban societies came to be established in the region, but during the Terminal Formative, many of the coastal sites were abandoned (Love, 2012).
Evidence of cultivation and horticulture has been found from early periods: maize has been reported from as early as the Archaic period (Rosenswig et al., 2015). Two Archaic period individuals had high C4 levels, but it has been suggested that this may have been the result of a high reliance on marine resources (Blake, Chisholm, Clark, Voorhies, & Love, 1992). Thus, just as in the subsequent Early Formative period, maize did not play a major role in the diet of the locals. The earliest Formative phase is known as the Barra phase (1900–1700 cal bc) and saw the development of permanent villages and the adoption of ceramics, while by the Locona phase rank systems and craft specialization, among other features, had developed (Clark & Blake, 1994). Maize is not believed to have been an important crop in this early period, as substantiated by the isotopic evidence (Blake et al., 1992); instead it has been suggested that it was likely a status food (beverage), potentially having a role in competitive feasting (see Smalley et al., 2003; but also see Arnold, 2009).
Similar to the Gulf Coast, it was only during the Middle Formative that the inhabitants of the Soconusco region began to intensify maize cultivation and consumption (Rosenswig et al., 2015). Other lines of evidence suggesting that cultivation of maize and other crops was intensifying come from indirect data such as manos, metates, and ceramic graters, all starting around 1000 cal bc. In particular, it seems that manos and metates replaced mortars and pestles (Rosenswig, 2007); the former, according to Rosenswig, VanDerwarker, Culleton, and Kennett (2015), have a larger surface for grinding, which may indicate a “greater focus on processing a grain like maize as a dietary staple” (p. 99). Moreover, stable isotope studies on human bone collagen from various periods have shown that the Middle Formative population had higher levels of C4. Interestingly, even during the Middle Formative, the isotope results indicate that the population may have continued to rely on a mixed diet, even though maize levels were higher than in Early Formative period individuals (Blake et al., 1992).
The earliest published report of plant remains is probably that of Coe and Flannery (1967), who found impressions of maize cobs on and in floor layers at Salinas La Blanca, a small Early Formative hamlet in the Ocós region of Guatemala. Impressions of maize stalks and leaves were also recovered, along with those of larger-seeded fruits such as jocote (Spondias sp.), white sapote (Casimiroa sapota),13 and avocado. Based on the plant as well as the animal remains, the authors suggest that the occupants lived here year-round and were a “self-sufficient, and totally sedentary hamlet, well adapted to a coastal farming life” (Coe & Flannery, 1967, p. 71). Feddema (1993) reports on seven taxa recovered through flotation from diverse contexts at four sites in the Mazatan region—Aquiles Serdán, Paso de la Amada, Chilo, and San Carlos—which span 1900 cal bc to 1200 cal bc. Her work revealed that maize, beans, and avocado dominated the samples, with smaller quantities of carpetweed (Mollugo sp.), knotweed (Polygonum sp.), and Brassica sp. seeds, all of which she associates with areas under cultivation (i.e., weedy). Jones and Voorhies (2004) carried out extensive microbotanical analyses at various Early Formative sites in the region, and their work has revealed the presence of phytoliths of maize, cucurbits, and taxa in the Annonaceae and Marantaceae families, which may have had economic value (Blake & Neff, 2011). Macrobotanical remains from another site, El Varal, include maize, possible Fabaceae, and nut fragments (Popper & Lesure, 2009). Bellacero (2010) conducted archaeobotanical analyses at Cantón Corralito, an Early Formative site. Besides finding maize, she also documented the presence of wild plants, including solanaceous plants (Physalis sp.; Solanum hispidum Pers.), amaranth (Amaranthus sp.), waterleaf (Talinum triangulare),14 purslane (Portulaca sp.), and Potentilla sp. More recently, Powis et al. (2008) documented the presence of cacao in the region through residue analysis: a sherd recovered from construction fill at Paso de la Amada yielded compounds identifiable as cacao. Even more recently, investigations at Cuauhtémoc (Rosenswig et al., 2015) suggest an increase in maize densities between the Early and Middle Formative periods and have led the authors to propose that some of the maize was grown nearby in gardens. Compared to the Middle Formative, the plant remains are more diverse in the earlier period, including avocado pit fragments, spurge (Euphorbiaceae family), and pokeweed (Phytolacca americana L.) seeds. The authors suggest this is likely the result of humans foraging in undisturbed areas. Other remains recovered from both periods include probable beans (Phaseolus spp.) and cheno/am15 seeds. While some scholars have argued for the importance of root crops to the diet in this region (Davis, 1975; Lowe, 1975), there is currently no evidence to support these claims.
Overall, this area is particularly interesting as “there is little evidence of a dramatic change in the subsistence base or resource procurement practices during the Archaic to Formative transition” (Rosenswig, 2006, p. 338). Moreover, domesticated plants were consumed in the region before the adoption of ceramics and sedentism. Macro- and microbotanical data seem to suggest that nonsedentary populations cultivated several plants, including maize and beans, and consumed cacao, avocados, and other fruits from trees. By the Middle Formative, people living in this lowland region of Mesoamerica consumed greater quantities of maize, along with avocado, beans, cheno/am, and fruit crops. Although slightly outside of the Soconusco area, at Chiapa de Corzo (Chiapas, Mexico), chili pepper residues were found on five intact vessels, recovered from burials and caches, dating between the Middle and Late Preclassic periods (Powis et al., 2013).
Although the Formative through to the Late Postclassic is well represented in the Soconusco (Blake et al., 1992),16 there is little information regarding agricultural practices. At Chocola’, a site located on the transition between the coastal plain and the central highlands that reached its peak in the Late Formative, water management has been documented, but its use remains unclear (Love, 2007). However, the recent recovery of cacao residues on ceramic sherds suggests that this was indeed an area of cacao arboriculture (Kaplan et al., 2017), and the canals could therefore have functioned to irrigate the orchards. Another important site located on the piedmont, Takalik Abaj, has evidence for strong occupation during the Middle and Late Formative and into the Terminal period. However, unlike at Kaminaljuyu, located in the Guatemalan highlands, where irrigation systems for agricultural purposes have been reported starting as early as the Middle Preclassic (Popenoe de Hatch, 2002), the water features at Takalik Abaj are believed to have stored and drained water (Marroquin, 2005). Finally, a core taken near Cerro de las Conchas supports the idea that shifting agriculture was practiced in this region during the Late Formative (Blake, 2008). For the Classic period, even less is known with regard to agricultural practices, and for the Postclassic, limited isotopic data indicate that people in the region “maintained a more mixed diet, including more C3 plants (both directly and indirectly consumed) than highland Mesoamericans” (Blake et al., 1992, p. 91). It seems that cacao production increased during the Postclassic and likely continued after the Aztec Empire conquered the region in the Late Postclassic to gain more secure access to its cacao bean production (Gasco, 2006). It follows that cacao orchards were present in the region, as they had been for centuries.
The Maya Lowlands
The Maya lowlands are a large expanse of territory that includes the Yucatan peninsula in southern Mexico, Belize, Guatemala, northwestern Honduras, and northern El Salvador. Ecologically, the area consists of various zones, including tropical evergreen forests (Figure 5), tropical semi-deciduous forests, and zones of grasslands, wetlands, palm communities, and mangroves (Greller, 2000), and, except for the Maya Mountains, it lies below 800 m in elevation (Sharer & Traxler, 2006, p. 42). The northern Maya lowlands comprise the northern half of the Yucatan Peninsula, an area that is rather flat, is generally covered by scrub and low bush vegetation, and receives low rainfall, while the southern lowlands have deeper, fertile soils and greater rainfall, providing ideal conditions for tropical rainforest growth (Sharer & Traxler, 2006). Overall, the Maya lowlands can be considered “a complex mosaic of fine-grained heterogeneity at the local level, with significant variability in landscapes between sub-regions” (Fedick, 1996a, p. 347).
Figure 5. Views of the southern Maya lowlands (left) and of the evergreen forests (right).
Source: C. Cagnato.
Numerous scholars working in different parts of the Maya lowlands have documented evidence for cultigens and landscape disturbance as early as the Archaic period in what is now Belize. Maize, chili peppers, squash family, bean family, cotton (Gossypium sp.), along with fruit trees and tubers such as manioc, have been reported from Archaic period sites (Jones, 1994; Jones & Hallock, 2008; Pohl et al., 1996; Rosenswig et al., 2014). Notably, Rosenswig et al. (2014) report finding maize starch grains on tools from Archaic period contexts (8320–6560 cal bp). Other human activity on the landscape is also archaeologically detectable (see Beach et al., 2006). While the presence of humans in Belize during the Archaic is more or less secure, debate exists for the Central Petén Lakes region in Guatemala. Research has been ongoing for decades, and different researchers (Curtis et al., 1998; Mueller et al., 2009; Vaughan, Deevey, & Garrett-Jones, 1985; Wahl, Byrne, Schreiner, & Hansen, 2006) have reached diverging conclusions regarding the nature of forest disturbance and increased soil erosion (see Castellanos & Foias [2017] for a good review of the issue). What seems more or less certain is that horticulturalist-foragers were present in the region as “early as 3000–2500 bc in some areas and by 1500 bc in other parts of the Petén lowlands” (Castellanos & Foias, 2017, p. 2).17 Between the Early and Middle Preclassic (ca. 1100–900 bc), permanent settlements appear along with the use of ceramics in the central and southern Maya lowlands (Lohse, 2010). However, the adoption of sedentism was not uniform, as evidenced by research at the sites of Cuello and Ceibal (Hammond, 1991; Inomata et al., 2015). Agriculture intensified during this time, in particular along swamp edges (Pohl et al., 1996). The recovery of maize pollen, squash and bottle gourd (Lagenaria sp.) phytoliths, and grinding stones “provide supporting evidence for maize cultivation” (Pohl et al., 1996, p. 365). Yet Early Preclassic carbon isotopic data suggest that maize was not an important component of the human diet during this period (Pohl et al., 1996; Tykot, van der Merwe, & Hammond, 1996; van der Merwe, Tykot, Hammond, & Oakberg, 2000).
During the Middle Preclassic (1000–400 bc), ditches and canals were constructed in the Maya lowlands (Pohl et al., 1996). Macrobotanical data from this period, both seeds and wood charcoal,18 reveal the presence of a wide variety of plants, including maize, squash, beans, cotton, chili peppers, hogplum (jocote), cashew (Anacardium occidentale L.), nance (Byrsonima crassifolia [L.] H.B.K.), ramon (Brosimum alicastrum Sw.), guava, soursop (Annona spp.), avocado, and tubers such as sweet potato (Ipomoea batatas [L.] Lam.) and manioc (Lentz et al., 2014; Miksicek et al., 1991; Powis et al., 1999). Moreover, cacao residues on ceramic sherds have also been reported from this period (Powis, Valdez, Hester, Hurst, & Tarka, 2002). Late Preclassic macrobotanical data are represented by those from the site of Kokeal and include maize, allspice (Pimenta dioica [L.] Merr.), papaya (Carica papaya [L.]; Figure 6), sapodilla (Manilkara zapota [L.] P. Royen), hogplum, siricote (Cordia sp.), and cacao (Miksicek, 1983). Inhabitants at the site of Cuello continued to utilize resources similar to those of the previous period (Miksicek et al., 1991). In addition, palynological and macrobotanical data from Cerros, another site in Belize, suggest that the inhabitants had access to a range of plants that included maize, chili peppers, beans, squash, and a range of fruit trees, including nance, siricote, sapote, and cacao, which Cliff and Crane (1989) suggest were likely under cultivation. At Tikal, macrobotanical evidence of scarlet runner, coyol, nance, and jocote is reported, along with achira (Canna cf. indica) pollen (Lentz et al., 2014).
By the Classic period, considered by most to be the time during which the civilization reached its apogee (Sharer & Traxler, 2006), large urban cities were in place and different modes of food production existed. The site of Ceren, located in El Salvador, has yielded a rich assemblage of botanical remains that were extremely well preserved under volcanic ash from the Loma Caldera eruption in ad 540. The recovery of maize, squash, beans, cotton, guava, cacao, chili peppers, agave (Agave sp.), and root casts of manioc and malanga (Xanthosoma violaceum Schott.) indicates that the inhabitants probably relied on infield and outfield systems as well as arboriculture (Lentz & Ramirez-Sosa, 2002). The recovery at Ceren of cacao, a crop typically associated with elites and rulers during the Classic period (McNeil, 2006), indicates that non-elites also enjoyed this beverage (Sheets & Woodward, 2002, p. 189).
Ancient Maya Agriculture—Theoretical Considerations
For the most part, scholars believed that the Maya were an anomaly, developing in a resource-poor tropical setting, often close to bajos (low-lying, seasonal wetlands), yet sustaining a complex society based upon swidden agriculture (Adams, Brown, & Culbert, 1981; Sanders, 1962; see also Dunning & Beach, 2000). In light of what was known about environmental limitations of tropical forests, such as infertile soils (Meggers, 1954), it was argued that the ancient Maya practiced long-fallow swidden agriculture (Hester, 1954; Sabloff & Willey, 1967), which led to their eventual collapse (Cook, 1921; Sanders, 1962; Wiseman, 1985).
Swidden or extensive agricultural systems were for a long time synonymous with the ancient Maya, who were believed to survive on a diet of mainly maize, beans, and squash, grown in what is known locally as the milpa. Assumptions about swidden agriculture as the prevalent Maya mode of cultivation became ingrained in scholarship as early as the colonial period (McAnany, 2013), but Wilken (1971), for example, questioned whether practices observed in the 16th century and more recently could be extrapolated to Maya ancestors. Moreover, population estimates were greatly skewed following the Spanish Conquest (Drucker & Fox, 1982), and many Maya were relocated and forced to live in nucleated settlements (Atran et al., 1993; Sanders, 1962). Other factors have also played a role in how people have understood these systems, for example, how drawings in codices and frescoes have been interpreted (see Turner, 1978; Villacorta & Villacorta, 1976). Such narratives were strongly biased by stereotypes of modern agricultural systems. Reina and Hill (1980) discuss ethnohistorical information dating to the Colonial period, gathered from accounts or Relaciones written by Dominican friars who recorded traditional Maya lifeways before the Spanish relocated them. Such Relaciones seemingly indicate that maize fields provided the bulk of the food, although Reina and Hill (1980, p. 78) note that the historical accounts are devoid of any field measurements or information regarding the composition of the groups who worked these fields.
Although some early scholars had theorized that denser populations inhabited the Maya lowlands (see Cooke, 1931; Gann, 1929; Ricketson & Ricketson, 1937), the traditional view of the ancient Maya suggested low populations and, in turn, a decentralized social structure (Chase & Chase, 1998). Moreover, it implied small, temporary communities, and “it was this inherent dispersion and potential mobility that was thought to be incompatible with large and dense populations” (Rice & Culbert, 1990, p. 9). As stated by Sanders (1962), “Large urban communities cannot be maintained effectively with this type of subsistence base” (p. 287). As more archaeological research was carried out, a paradigm shift occurred during the late 1960s, with scholars supporting the idea that the Maya lowlands supported larger population numbers in both urban and rural settings (Andrews, 1965; Bullard, 1960; Haviland, 1970; Turner, 1978; Willey, Bullard, Glass, & Gifford, 1965). Rice and Culbert (1990) even described the Maya lowlands as “among the most densely populated regions of the preindustrial world” (p. 26). Population density estimates for various parts of the Maya lowlands have varied greatly over time, ranging from 30 to over 700 persons per square km, depending on whether the calculations were made for the urban centers or the hinterlands (Chase & Chase, 1996; Cowgill, 1962; Haviland, 1970; Sanders, 1962). Based on these revised numbers, scholars came to question whether milpa agriculture was productive enough to support a large civilization such as the Maya. Following Boserup’s (1965) theory, it was thus argued that the ancient Maya had to intensify their agricultural production in order to meet the demands of a growing population. To intensify food production, the long-fallow system was not tenable; instead, they had to adopt short-fallow systems and invest in modifying their landscape through canals, terraces, and raised and drained fields (Johnston, 2003; Lentz, Dunning, & Scarborough, 2015).
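Such density figures are usually built up from surveyed structure counts rather than observed directly. The sketch below is a minimal illustration of that arithmetic only; the structure density, household size, contemporaneity, and non-residential fraction are hypothetical values chosen for the example, not figures from the studies cited above.

```python
# Minimal illustration of how settlement-survey data are converted into
# population density estimates. All input values are hypothetical.

def population_density(structures_per_km2: float,
                       persons_per_household: float,
                       contemporaneity: float,
                       non_residential_fraction: float) -> float:
    """Estimate persons per square km from mapped structure density.

    contemporaneity: fraction of structures assumed occupied at the same time.
    non_residential_fraction: fraction of structures assumed not to be dwellings.
    """
    residential = structures_per_km2 * (1.0 - non_residential_fraction)
    occupied = residential * contemporaneity
    return occupied * persons_per_household

# Hypothetical example: 120 mapped structures per km2, 5 persons per household,
# 75% occupied contemporaneously, 15% of structures non-residential.
print(round(population_density(120, 5.0, 0.75, 0.15)))  # roughly 382 persons per km2
```

The wide 30–700 range quoted above reflects how sensitive such estimates are to these assumed multipliers and to whether urban cores or hinterlands are being measured.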
Not all early scholars were convinced that the ancient Maya relied solely on extensive cultivation techniques, with some arguing for other methods (e.g., Palerm & Wolf, 1957; Ricketson & Ricketson, 1937; Thompson, 1931, pp. 228–229). Wilken (1971) made the excellent point that there was no reason why the Maya had to rely on only one kind of farming system, considering that other civilizations in the New World practiced several forms of intensive agriculture, including wetland fields in South America and complex chinampas in Central Mexico (Coe, 1964). Earlier, Palerm and Wolf (1957) had already posited that geometric patterns identified at various lowland sites were in fact the relics of wetland agriculture. Although these observations had an impact on research in the Maya area, views on Maya agriculture changed dramatically only after archaeological evidence in the form of drained fields was discovered in the 1970s (Puleston, 1977; Siemens & Puleston, 1972; Turner, 1974).
Ecological conditions of tropical forests were also misunderstood. The Maya lowlands were previously thought to be a homogeneous ecological area, but this was not the case (Sanders, 1977; Turner & Harrison, 1978; Wiseman, 1978). More recently, scientists and scholars have argued against the idea that tropical soils are inherently nutrient-poor (see Fedick, 1996b; Johnston, 2003, pp. 142–143; Sanchez & Logan, 1992). In fact, many parts of the lowlands have mollisols, which “can be considered as one of the world’s most agriculturally important and naturally productive soils” (Fedick, 1996a, p. 341). The uplands, however, do have a problem in that the soils are vulnerable to erosion (Dunning et al., 1999). Research in the Maya lowlands also shows that intentional modification of the natural landscape was extensive—but not uniform—and included large-scale terrace systems (Chase & Chase, 1990; Chase, Chase, Fisher, Leisz, & Weishampel, 2012;19 Dunning, Beach, Farrell, & Luzzadder‐Beach, 1998; Healy, Lambert, Arnason, & Hebda, 1983; Rice, 1993; Scarborough, Becher, Baker, Harris, & Valdez, 1995; Turner, 1974, 1983), small-scale terraces (Dunning & Beach, 1994), wetland systems consisting of raised and drained fields (Adams et al., 1981; Siemens & Puleston, 1972; Turner & Harrison, 1983), and ditched fields (Guderjan, 2007). As Wyatt (2012) notes, some scholars argue that terraces were constructed in the Late Classic in response to an increase in population and were controlled and exploited by elites. For Wyatt, however, excavations of terraces at the site of Chan in Belize suggest these were built starting in the Middle Preclassic. Thus they were not necessarily built in response to population increases, nor were early farmers under elite control. Interestingly, terraces did not necessarily increase productivity but instead prevented decreases, and these large-scale modifications are not found where expected (i.e., near large centers) but instead were located in rural areas with smaller populations (Johnston, 2003, p. 139; Pohl et al., 1996). The control of water on a large scale has also been documented (Lentz, Magee, et al., 2015; Scarborough et al., 2012).
By the end of the 1970s and into the 1980s, scholars recognized that a range of land modifications was possible, allowing for different types of agricultural intensification, namely “uplands for mixed-crop farming, hillsides for terrace farming, and wetlands for raised- and drained-field cultivation” (Fedick, 1996b, p. 2). Yet the extent of raised fields has been debated, especially with regard to when they were used and how they were constructed (Fedick, 2003). For example, some argue that they were built in the Preclassic and later abandoned (e.g., Pohl & Bloom, 1996), while others argue that they were continuously used into the Late Classic (Turner & Harrison, 1983). Recent data indicate that raised fields and canals both appeared earlier and persisted longer, into the Terminal Classic period (Beach, Luzzadder-Beach, Guderjan, & Krause, 2015). Archaeological investigations have led scholars to suggest that wetland farming was not perennial; seasonal farming in wetlands, however, was possible (Lucero, 1999). Lucero also notes that there are debates on whether bajos were in fact used by the ancient Maya for seasonal wetland agriculture. Some scholars have argued that these areas are too clayey, although evidence suggests that bajos were in fact used for farming (Culbert, Fialko, McKee, Grazioso, & Kunen, 1997; Kunen, 2004; Lentz et al., 2014). Considering that wetlands and swamps make up about 40% of the southern Maya lowlands (Fedick, 2003; Lucero, 1999, p. 219), it is important to consider these areas as potential farming zones. Raised fields are noted as requiring a serious investment, both to build and to maintain (Wiseman, 1983, p. 117).
In the Maya lowlands, outfields were located at varying distances from the home (Ford & Nigh, 2015; McAnany, 2013); for those located too far away, a second base was established. Outfields are noted as being managed less intensively than infields (Ford & Nigh, 2015). Infield milpas situated closer to the home were “continuously or nearly continuously cultivated” (Stark & Ossa, 2007, p. 389). Homegardens, also known as houselots, kitchen gardens, or simply gardens, were close to the home and intensively managed, with economic, medicinal, and ornamental plants grown (Alcorn, 1984; Lundell, 1937; Peters, 2000; Stark & Ossa, 2007).20 Homegardens in Petén (the largest department in Guatemala) and in the general Maya region were noted by several early scholars (Lundell, 1938; Wisdom, 1940). Infields and outfields have been reported archaeologically, including at Tamarindito, Tikal, and Calakmul (Dunning & Beach, 2010). In terms of composition, fields are noted as being used more for planting staples, whereas gardens were typically more diverse (Stark & Ossa, 2007). It is important to note that ethnographic studies have led to the belief that homegardens of the 20th century did not contribute large amounts of food to a household (e.g., Redfield & Villa Rojas, 1934, p. 38). Guderjan (2007) states that the “impact of the overall agricultural productivity of Blue Creek would be in the range of one percent” (p. 65). Others have argued the opposite: kitchen gardens can provide substantial resources (Fisher, 2014; Netting, 1977), including maize, which was found to provide up to one-third of the food required for a family (Sanders & Killion, 1992, p. 18). Archaeological studies support the existence of homes with adjacent plots, with the land being extensively farmed, for example at Sayil in the Puuc region (Dunning & Beach, 2010), Blue Creek (Guderjan, 2007), and Ceren (Figure 7; Lentz & Ramirez-Sosa, 2002).
Figure 7. Structure 11 at Ceren, part of Household 1. Note the kitchen garden to the left of the structure.
Source: C. Cagnato.
Orchards, or areas with economic trees that were planted or encouraged, are reported for the Maya (Kintz, 1990; Miksicek, 1983, p. 103; Wisdom, 1940). Colonial chroniclers describe orchards located near settlements (Fisher, 2014; Turner & Miksicek, 1984), with important trees belonging to the elite or prominent lineages (McAnany, 2013), in particular cacao trees (Caso Barrera & Aliphat, 2006). Evidence of cacao in residue form in the Maya area was first reported from an Early Classic vessel at Rio Azul (Hall, Tarka, Hurst, Stuart, & Adams, 1990). Folan et al. (1979) have also argued that lords and priests at the site of Coba had access to and control of the majority of the fruit trees. The importance of arboriculture and careful forest management among contemporary Maya groups is well documented (Dussol, Elliott, Michelet, & Nondédéo, 2017; Lentz & Hockaday, 2009; Thompson, Hood, Cavallaro, & Lentz, 2015). It is believed that several species of palm trees were domesticated, or at least intensively managed, by the ancient Maya (Lentz, 2000). Palms are versatile as well as excellent sources of protein, calories, and fat (McKillop, 1996), which would explain their ubiquity in the archaeobotanical record (Cliff & Crane, 1989; Hageman & Goldstein, 2009; Lentz, 1991). Managed forests “represent the endpoint of the successional process on a site” (Peters, 2000, p. 211) and are easy to overlook, as they are less visible than other forms of plant management. However, extensive investigations indicate that the current landscape is partly, if not completely, the result of active management carried out by the ancient Maya (Ford & Nigh, 2015; Lentz & Hockaday, 2009; Thompson et al., 2015).
Besides fruit trees, evidence for the use of root crops has greatly increased in recent times (since Pohl et al., 1996), supporting early arguments for their importance (Bronson, 1966; Drucker & Fox, 1982; but see Cowgill, 1971). Achira pollen has been identified at Tikal, where the plant may have grown in the bajos (Lentz et al., 2014); achira has also been identified in starch grain form (Cagnato, 2016) and possibly in phytolith form (Abramiuk et al., 2011). Achira produces starch-rich edible rhizomes (Abramiuk et al., 2011), and its leaves are also used to wrap tamales, which are then placed in underground ovens (Salazar et al., 2012). Manioc has also been reported with greater frequency. Although scholars identified manioc in the Maya region decades ago (Hather & Hammond, 1994; Pohl et al., 1996), the potential of this plant as a staple was not firmly established until manioc root casts from Ceren were identified (Lentz & Ramirez-Sosa, 2002). Additional support for the importance of manioc is provided by the recovery of starch grains (Figure 8; Cagnato & Ponce, 2017; Rosenswig et al., 2014). Today, the Maya grow manioc in milpas and in cavities in limestone (Fedick et al., 2008; Nations & Nigh, 1980) and often cook it in underground ovens (piibs) (Salazar et al., 2012).
Sweet potatoes have also been reported, in tuber form from Tikal (Lentz et al., 2014) and in starch grain form from La Corona (Guatemala) and Los Naranjos (Honduras) (Cagnato, 2016; Morell-Hart, Joyce, & Henderson, 2014). Sweet potato pollen was also recovered from a site in Belize (Guderjan et al., 2010, p. 226). While arrowroot (Maranta sp.) is not typically recovered in macrobotanical form, starch grains of this species dominated an assemblage from Yucatan (Simms, 2014) and are also reported from Petén (Cagnato, 2016), with phytoliths recovered from middens in Belize (Abramiuk et al., 2011). Arrowroot is used to prepare foods cooked in Maya earthen ovens and as atole (Salazar et al., 2012; Simms, 2014). As noted, another tuber, malanga, has been reported from Ceren, but also from Late Classic contexts at Tikal (Lentz et al., 2014).
Other plants have remained rather underappreciated in the archaeological record, for example edible greens, of which a wide range are reported as having been used in the Colonial period, including epazote (goosefoot, Chenopodium sp.), chipilin (Crotalaria sp.), and purslane, although the latter was considered food for the poor (Coe, 1994). Twenty-first-century gardens across Mesoamerica (including the Maya area) have edible greens growing in them (Casas, Otero-Arnaiz, Pérez-Negrón, & Valiente-Banuet, 2007; Ross-Ibarra & Molina-Cruz, 2002; Vieyra-Odilon & Vibrans, 2001), yet archaeologically these plants have been rather elusive. However, recent studies (Cagnato, 2016, 2018a, 2018b; Dedrick, 2014) report the recovery of seeds of chenopods, amaranth, purslane, and Solanaceae family members. In addition, starch grains, recovered ubiquitously on artifacts from El Peru-Waka’ in the southern Maya lowlands to sites in Belize and the northern lowlands, support the idea that chili peppers (Figure 9) were as important in the past as they are today in Mexico and Guatemala (Cagnato, 2016, 2018a; Rosenswig et al., 2014; Simms, 2014). Their recovery suggests these plants were an important component of Mesoamerican diets.
Finally, organic residue analyses have also helped recover additional information regarding plants in the past. In addition to cacao and chili peppers, tobacco (Nicotiana sp.), which is rather elusive archaeobotanically (but see Dedrick, 2014), has been positively identified from the presence of nicotine alkaloids inside an 8th-century codex-style flask (Loughmiller-Cardinal & Zagorevski, 2016). Previously, murals with a glyph that reads “tobacco person” (Martin, 2012) and ceramic vessels depicting individuals smoking cigars (Coe & Kerr, 1982) had suggested the use of tobacco among the ancient Maya.21
While maize was the staple crop across the lowlands during the Classic period, its relative importance from region to region and site to site has yet to be determined. Isotopic analyses have a long history in the region (White & Schwarcz, 1989; see Rand, Healy, & Awe [2013] for a full bibliography). Notably, at Lamanai (Belize) there seems to be a decline in maize consumption over time, from the Preclassic through the Terminal Classic (White, Wright, & Pendergast, 1994), but the opposite is true at Pacbitun and Altar de Sacrificios (White, Healy, & Schwarcz, 1993; Wright, 2006). At Pusilha, Belize, Somerville, Schoeninger, and Braswell (2016) report that individuals living at the site had higher maize consumption relative to other sites in the Maya lowlands during the Late and Terminal Classic period, and that higher-status men had preferential access to maize. At Copan, sex- and age-based differences are also reported (Reed, 1999). Overall, there were varying patterns of reliance on maize, probably more as a result of geographical location and thus environmental conditions (see Gerry, 1993; Wright, 2006).
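These studies typically infer maize consumption from the carbon isotope ratios of bone collagen, because maize is a C4 plant while most other Mesoamerican staples are C3. A common back-of-the-envelope approach is a linear two end-member mixing model; the version below is only an illustrative sketch with rounded, commonly used values (a C3 end-member near −26.5‰, maize near −12.5‰, and a diet-to-collagen offset of about +5‰), not the calibrations used in the studies cited above:

\[
\%\mathrm{C}_4 \approx \frac{\left(\delta^{13}\mathrm{C}_{\mathrm{collagen}} - \Delta_{\mathrm{diet\text{-}collagen}}\right) - \delta^{13}\mathrm{C}_{\mathrm{C}_3}}{\delta^{13}\mathrm{C}_{\mathrm{C}_4} - \delta^{13}\mathrm{C}_{\mathrm{C}_3}} \times 100
\]

Under these illustrative values, a hypothetical collagen value of −14‰ would give roughly ((−19) − (−26.5)) / ((−12.5) − (−26.5)) × 100 ≈ 54% of dietary carbon from C4 sources. In practice, marine foods and other C4 or CAM plants can also raise these values, which is why, as noted above for the Soconusco, elevated signatures are not automatically equated with maize.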
Debates continue over how intensive Maya agricultural practices were and whether they led to environmental degradation and overexploitation of resources by the Terminal Classic period. Although the Maya collapse is now seen as the result of a combination of factors (Aimers, 2007), understanding the nature and degree of environmental impact remains an important issue. The Maya were not perfect managers of their environment (Fedick, 2010; Lentz, 2000), nor was their environment the pristine and unused landscape imagined by the first Europeans colonizing the New World, a notion applied especially to Amazonia (Meggers, 1971; Roosevelt, 1989). Some scholars believe that population pressure led to environmental degradation in the form of erosion, driven by increased deforestation and the modification of slopes through terracing, as populations in the Maya region expanded farming and settlement across the landscape (Beach, Dunning, Luzzadder-Beach, Cook, & Lohse, 2006; Rosenmeier, Hodell, Brenner, Curtis, & Guilderson, 2002). However, research at the micro-scale indicates that environmental use and modification were variable across the Maya region, and overexploitation of natural resources was not universal (Beach et al., 2006; Fedick, 2010; McNeil, Burney, & Burney, 2010). Some scholars even argue that the Classic period was “a time of renewed cultural and ecological stability” (Ford & Nigh, 2009, p. 227).
Traditional Swidden Agriculture
This section describes in more detail the various stages of the milpa because it is one of the strategies best documented today among Maya and other indigenous communities of Mexico and Central America. For brevity, it focuses mainly on traditional Maya practices.22 It should be noted that the milpa, although crucial to the Maya, “must be seen merely as a pivotal element in a highly complex natural resource multiple-use strategy, displayed over the long run” (Barrera-Bassols & Toledo, 2005, p. 22). Thus the milpa is, and probably was, only one piece in a “multi-strategy land-use continuum” (Barrera-Bassols & Toledo, 2005; see also Ford & Nigh, 2015; Wilk, 1985).
Swidden agriculture is a means of producing food in which forest vegetation is cut and burned to provide nutrients to the soil, which is considered poor in tropical areas (Sanchez & Logan, 1992; but see Fedick, 1996a, p. 339; Johnston, 2003). The nutrients required for agriculture are held in the vegetation, and thus slashing and burning moves them into the soil (Johnston, 2003). In Petén, this type of agriculture is currently the most common subsistence practice (Fernández, Johnson, Terry, Nelson, & Webster, 2005; Nigh, 2008). Contemporary conservationists do not view swidden favorably, as they consider it a destructive form of farming (see Fox et al., 2000; Nigh & Diemont, 2013).
Recently, scholars have pushed for a more nuanced view of these agricultural and management strategies. While swidden is typically blamed for deforestation, it is better understood as an “undetermined number of agricultural systems” (Conklin, 1954, p. 1). When swidden is carefully carried out, it does not have to equate with the destruction of an environmental system but rather can provide many benefits, including soil enrichment, wildlife support, and the production of useful resources (Peters, 2000). Thus the availability of labor and skill can in certain cases lead to the creation of more fertile areas during the milpa cycle (Nigh, 2008; Nigh & Diemont, 2013). As noted by Fox et al. (2000, p. 521), shifting cultivation should not be equated with simply using fire to burn down large tracts of land for cow pasture or development. The literature is rife with evidence from around the world that supports the use of swidden (e.g., Balée, 1994; Conklin, 1954).
Milpa derives from the Nahuatl word millipan, a combination of milli (to cultivate) and pan (place) (Ford, Jaqua, & Nigh, 2012). The Maya milpa “entails a rotation of annual crops with a series of managed and enriched intermediate stages of short-term perennial shrubs and trees, culminating in the re-establishment of mature closed forest on the once-cultivated parcels of land” (Nigh & Diemont, 2013, p. 46). Scholars have proposed that the ancient Maya, like some traditional Maya today, practiced and managed the “high-performance milpa,” a term used by Wilken (1971), also known as the “traditional milpa” (e.g., Nigh, 2008) or the “intensive milpa”; the latter can be considered as “intensive forms of cultivation, not to be confused with long-fallow slash-and-burn systems” (Wiseman, 1978, p. 84). These cultivation systems involve short or few fallows and high labor investment and are highly productive (Sheets & Woodward, 2002). Short-fallow systems are also reported from sites such as Tikal and Ceren (Lentz et al., 2014; Sheets & Woodward, 2002). To make these systems more sustainable, intercropping was likely practiced, as it is today (Flores & Kantun Balam, 1997; Ford & Nigh, 2015) and as is also reported in ethnohistorical records (Caso Barrera & Aliphat, 2006; Pohl, 1985, p. 39).
This complex subsistence strategy, carried out by some contemporary Maya groups, requires more skill and labor but also enriches the soil and the vegetation. This is achieved through careful use of fire (Nigh & Diemont, 2013). Studies on swidden agriculture have noted that burning the forest has positive effects; namely it releases nutrients into the soil, reduces pests, and counters problems associated with leaching (Nigh & Diemont, 2013; but see Johnston, 2003, p. 143). Ethnohistorical accounts describe how the Maya cultivated milpas, with the firing of a field being a “delicate operation” (Reina & Hill, 1980, p. 76). During the slashing and burning of the forest, select trees considered useful are spared (Alcorn, 1984; Cowgill, 1961; Emerson, 1953; Lundell, 1937). Milpas today consist of polyagricultural systems with diverse species grown, often including species common to various indigenous groups in Mexico and Central America (Cowgill, 1961; Ford & Nigh, 2015; Nations & Nigh, 1980; Toledo, Ortiz-Espejel, Cortés, Moguel, & Ordoñez, 2003).
Cycles of swidden as observed in current societies vary in length and detail (see Nations & Nigh [1980] for Maya population analogues). The conventional model that many proponents of tropical ecology have supported is that farmers could not cultivate a field for more than two years in a row, as the soil loses fertility and crop yields decrease. This is supported by studies of some Maya farmers that have noted a decrease in crop yield within two to three years of cultivation (e.g., Alcorn, 1984; Atran et al., 1993; Cowgill, 1961; Emerson, 1953; Lucero, 1999; Lundell, 1937; Redfield & Villa Rojas, 1934, p. 24; Wilk, 1985, p. 50). Nor is this restricted to the contemporary Maya, as indicated by 16th-century Relaciones, which relate that the land could be used for only two years to successfully grow maize (Reina & Hill, 1980). Other sources, however, suggest that historic period milpa farmers could use a field for more than 20 years (Pohl, 1985, p. 40). Although this time frame is relatively long, it is closer to what scholars have documented for other Maya farmers such as the Lacandon, Yucatec, Tsotsil, Tzeltal, and Kekchi, who cultivate their fields for between two and five years (Casagrande, 2004; Diemont, Bohn, Rayome, Kelsen, & Cheng, 2011; Ford & Nigh, 2009; Lentz, 2000; Nations & Nigh, 1980; Wilk, 1985). Moreover, weeds become a greater problem the longer a field is cultivated, as weeds thrive in “nutrient-poor environments” (Johnston, 2003, p. 132; see also Emerson, 1953).
Johnston (2003) argues strongly that the decrease in soil fertility is due not to the leaching of nutrients but to the increase in weeds, which absorb the majority of the nutrients in the soil after the second year of cultivation. As noted by Emerson (1953), “One might reasonably question how such weeds as amaranths in a second-year milpa could grow to a height of 2–3 meters with a spread of branches nearly equal to their height if the soil were nearing exhaustion” (p. 58). Thus intensive weeding and mulching would be necessary to keep a field fertile beyond two years, as decomposing vegetation on the surface is more important in providing nutrients to the soil than ash (Johnston, 2003). The effects of weeding on crop output were studied in detail by Steggerda (1941), who demonstrated that longer use of a field could be achieved through weeding.
Weeding techniques among Maya populations are well documented (Cowgill, 1961, pp. 22–24; Nations & Nigh, 1980; see also Johnston, 2003; Wisdom, 1940). The contemporary or nontraditional method involves weeding the field only a few times in a season, allowing the weeds to go to seed (Emerson, 1953). The more traditional method consists of removing the entire plant, including the roots, as soon as it sprouts, thus preventing the weeds from going to seed and lessening the chances of regrowth (Wisdom, 1940). The latter allows for longer use of the field, a practice noted among the Lacandon Maya (Nations & Nigh, 1980). Of course, this traditional method is also more labor-intensive (Johnston, 2003). Bringing soil and ash from domestic fires to fields has also been noted as a way to fertilize terraces and other cultivation areas (Hansen, Bozarth, Jacob, Wahl, & Schreiner, 2002; Wilken, 1987). Archaeologically, the presence of wood ash on terraces, as reported by Wyatt (2008), supports the idea that past farmers used similar strategies to increase soil fertility.
During periods of fallow, the land is not “useless,” nor is it abandoned (Ford & Nigh, 2009; Kintz, 1990; Nations & Nigh, 1980), as the plants that grow during this time include weedy herbaceous species and pioneer shrubs, which make these places “excellent sites for gathering wild foods and medicinal plants and hunting game” (Lentz, 2000, p. 96), a theme that resonates with other parts of the world (Balée, 1994). Maya groups have different terms for every stage in the milpa cycle. One such group, the Lacandon, is among the few that continue to practice traditional farming and hunting (Nations & Nigh, 1980), hence the value of observing their techniques and methods. The Lacandon have first and second bush fallow stages (robir and jurup), followed by two secondary forest stages (Diemont et al., 2011). The Yucatec Maya rest the land, after using it for milpa, for about three years, a stage known as yerba. Two years later, useful plants are planted to help the forest regenerate (Diemont et al., 2011), leading to the creation of a secondary forest (10–25 years). Diemont and colleagues also studied a community of Yucatec and Tsotsil Maya and found that they had three fallow stages, namely the arbusto (5–10 years), the acahual (10–25 years), and the mature forest, or selva (25 years). Different groups have varying fallow times; for example, the Yucatec Maya interviewed by Kintz (1990) considered 50 years to be ideal, but population pressure combined with limited land did not allow this.
Scholars have noted that useful plants may also be transplanted into fallow fields (Diemont et al., 2011). Moreover, trees are noted as being planted purposefully in these fallow areas in order to restore the natural ecology. For example, the Lacandon Maya actively manage certain plants that have been quantifiably shown to increase soil fertility, namely by planting balsa (Ochroma pyramidale [Cav. ex Lam.] Urb.; Diemont et al., 2006). The Yucatec Maya mention using allspice to restore their fallow fields (Diemont et al., 2011). Other groups were also found to use diverse native species to enhance the restoration of their forest (Diemont et al., 2011). Among these Maya groups, mature or primary forest was attained between 20 and 30 years after the forest was first cut down and cultivated (Diemont et al., 2011; Nigh, 2008). The Spaniards who encountered the Itza Maya in Central Petén in the 16th and 17th centuries also documented this type of system; the Itza had small plots of milpa and planted or encouraged species in their abandoned milpas (acahuales) (Caso Barrera & Aliphat, 2006).
Discussion and Conclusion
Decades of research across the lowlands have revealed important clues to the nuanced ways in which food was produced by the ancient populations of diverse parts of Mesoamerica. It is the increased attention to collecting a range of data (macro- and microbotanical remains, organic residues, isotopes) from archaeological projects, however, that is challenging, and sometimes supporting, earlier interpretations. It is also encouraging to see plants that were reported missing from the archaeobotanical record in the 1990s (see Lentz, 1991) now appearing in various forms. In particular, it was proposed long ago that root crops may have been an important part of the diet (Bronson, 1966; Lowe, 1975); however, there was no evidence at the time to support such claims (Flannery, 1973). While the role of root crops along the Pacific Coast remains unclear, research in other lowland regions leaves little doubt that root crops were important food resources. Yet there is still much work to be done across the Mesoamerican lowlands to determine their diversity, where they were grown (gardens, raised fields, etc.), and when they were adopted: could they have been staples before the adoption of maize? Rigorous starch grain analysis and palynology may help address these issues.
It is key to consider the importance of other plants, both before maize was a staple but also long afterwards, and to move away from what Iriarte (2007) calls “traditional maize-centric formulations” (p. 176). An increase in the recovery of plant materials not only shows the richness of past plant use and the use of various management systems, but it also allows for quantitative analyses (convincingly done by VanDerwarker, 2006) to be made regarding the importance of certain plants, notably maize. While maize eventually became a staple for Mesoamerican societies, its role during the transition from mobile to sedentary lifestyles, and toward the development of complex societies, seems to be less key than previously argued (Killion, 2013; Rosenswig et al., 2015; VanDerwarker & Kruger, 2012).
To reconstruct ancient agriculture effectively, it will be important to continue implementing techniques that allow the recovery of even the smallest seed or microbotanical element. It is only through the systematic collection and analysis of multiple data sets that it will be possible to refine our understanding of ancient subsistence systems and how these changed through time and across space. In effect, considering the geographical extent of lowland Mesoamerica, it is impossible to talk about a single agricultural system that can be applied to ancient Mesoamerican populations as a whole. There is much that scholars have yet to learn regarding intrasite, intersite, and regional differences, and about the individual pathways taken by different groups toward food production, from the collection of wild plants to the management of the landscape. To do this, archaeologists will need to collect samples from a wider range of contexts (e.g., non-elite contexts and smaller sites) in order to better represent past populations (see also Arnold, 2000). In addition, directly dating (and in some cases re-dating; see Long, Benz, Donahue, Jull, & Toolin, 1989; Smith, 2005) archaeobotanical materials and assessing how secure the archaeological contexts are (Fritz, 1994) will be essential. The use of innovative technologies (e.g., LIDAR) has greatly challenged the way in which the landscape is viewed and has laid to rest any debate regarding the ability of Maya farmers to intensively manage their natural environment. The use of this technology elsewhere in lowland Mesoamerica may provide additional data (see, for example, Canuto et al., 2018).
In conclusion, it is evident that the ancient Mesoamericans who inhabited the various lowlands regions were engaged in modifying their natural environment from as early as the Archaic period and by the Middle Formative/Preclassic periods had started to invest more time and labor in the construction of canals and terraces. The intensified use of forest management strategies enabled people to gain access to a broad range of resources, from homegardens to the carefully managed forests. Unique ways to deal with local environments, such as sinkholes in the northern lowlands, were also devised. These various forms of management continue to be practiced by indigenous communities, thus perpetuating ancient subsistence systems.
Bellacero, C. M. (2010). Subsistence patterns, social identity and symbolism at the Early Formative period site of Cantón Corralito, Chiapas, Mexico (Doctoral dissertation). Florida State University, Tallahassee.
Blake, M., & Neff, H. (2011). Evidence for the diversity of Late Archaic and Early Formative plant use in the Soconusco region of Mexico and Guatemala. In R. G. Lesure (Ed.), Early Mesoamerican social transformations: Archaic and Formative in the Soconusco region (pp. 47–66). Berkeley: University of California Press.
Borstein, J. A. (2001). Tripping over colossal heads: Settlement patterns and population development in the upland Olmec heartland (Doctoral dissertation). Pennsylvania State University, State College.
Cagnato, C. (2018a). Shedding light on the nightshades (Solanaceae) used by the ancient Maya: A review of existing data, and new archaeobotanical (macro- and microbotanical) evidence from archeological sites in Guatemala. Economic Botany, 72, 180–195.
Clark, J. E., & Blake, M. (1994). The power of prestige: Competitive generosity and the emergence of rank societies in lowland Mesoamerica. In E. Brumfiel & J. Fox (Eds.), Factional competition and political development in the New World (pp. 17–30). Cambridge, U.K.: Cambridge University Press.
Emerson, R. A. (1953). A preliminary survey of the milpa system of maize culture as practiced by the Maya Indians of the northern part of the Yucatan Peninsula. Annals of the Missouri Botanical Garden, 40(1), 51–62.
Gasco, J. (2006). Soconusco cacao farmers past and present: Continuity and change in an ancient way of life. In C. L. McNeil (Ed.), Chocolate in Mesoamerica: A cultural history of cacao (pp. 322–337). Gainesville: University Press of Florida.
Gasco, J., & Voorhies, B. (1989). The ultimate tribute: The role of the Soconusco as an Aztec tributary. In B. Voorhies (Ed.), Ancient trade and tribute: Economies of the Soconosco region of Mesoamerica (pp. 48–94). Salt Lake City: University of Utah Press.
Inomata, T., MacLellan, J., Triadan, D., Munson, J., Burham, M., Aoyama, K., . . . Yonenobu, H. (2015). Development of sedentary communities in the Maya lowlands: Coexisting mobile groups and public ceremonies at Ceibal, Guatemala. Proceedings of the National Academy of Sciences of the United States of America, 112(14), 4268–4273.
Martin, S. (2006). Cacao in ancient Maya religion: First fruit from the maize tree and other tales from the underworld. In C. McNeil (Ed.), Chocolate in Mesoamerica: A cultural history of cacao (pp. 154–183). Gainesville: University Press of Florida.
Matsuoka, Y., Vigouroux, Y., Goodman, M. M., Sanchez, J., Buckler, E., & Doebley, J. (2002). A single domestication for maize shown by multilocus microsatellite genotyping. Proceedings of the National Academy of Sciences of the United States of America, 99(9), 6080–6084.
McNeil, C. L., Burney, D. A., & Burney, L. P. (2010). Evidence disputing deforestation as the cause for the collapse of the ancient Maya polity of Copan, Honduras. Proceedings of the National Academy of Sciences of the United States of America, 107(3), 1017–1022.
Piperno, D. R., & Flannery, K. V. (2001). The earliest archaeological maize (Zea mays L.) from highland Mexico: New accelerator mass spectrometry dates and their implications. Proceedings of the National Academy of Sciences of the United States of America, 98(4), 2101–2103.
Powis, T. G., Cyphers, A., Gaikwad, N. W., Grivetti, L., & Cheong, K. (2011). Cacao use and the San Lorenzo Olmec. Proceedings of the National Academy of Sciences of the United States of America, 108(21), 8595–8600.
Ranere, A. J., Piperno, D. R., Holst, I., Dickau, R., & Iriarte, J. (2009). The cultural and chronological context of early Holocene maize and squash domestication in the Central Balsas River Valley, Mexico. Proceedings of the National Academy of Sciences of the United States of America, 106(13), 5014–5018.
Scarborough, V. L., Dunning, N. P., Tankersley, K. B., Carr, C., Weaver, E., Grazioso, L., . . . Lentz, D. L. (2012). Water and sustainable land use at the ancient tropical city of Tikal, Guatemala. Proceedings of the National Academy of Sciences of the United States of America, 109(31), 12408–12413.
Smith, B. D. (2005). Reassessing Coxcatlan Cave and the early history of domesticated plants in Mesoamerica. Proceedings of the National Academy of Sciences of the United States of America, 102(27), 9438–9445.
Taube, K. (2000). Lighting celts and corn fetishes: The Formative Olmec and the development of maize symbolism in Mesoamerica and the American Southwest. In J. E. Clark & M. E. Pye (Eds.), Olmec art and archaeology in Mesoamerica (pp. 297–337). Washington, DC: National Gallery of Art.
VanDerwarker, A. M., & Kruger, R. P. (2012). Regional variation in the importance and uses of maize in the Early and Middle Formative Olmec heartland: New archaeobotanical data from the San Carlos Homestead, southern Veracruz. Latin American Antiquity, 23(4), 509–532.
Voorhies, B. (1989). Settlement patterns in the western Soconusco: Methods of site recovery and dating results: New frontiers in the archaeology of the Pacific Coast of Mesoamerica. Arizona Research Papers, 39, 329–369.
Notes
3. Note that the regions discussed in this article are not where the major crops (maize, squash, beans) are believed to have originated in Mesoamerica. For studies on where such crops originated, readers can refer to a range of publications, which include genetic as well as microbotanical studies (e.g., Kraft et al., 2014; Kwak, Kami, & Gepts, 2009; Matsuoka et al., 2002; Piperno, Ranere, Holst, Iriarte, & Dickau, 2009; Ranere, Piperno, Holst, Dickau, & Iriarte, 2009; van Heerwaarden et al., 2011).
|
By the Late Formative (400 cal bc) urban societies came to be established in the region, but during the Terminal Formative, many of the coastal sites were abandoned (Love, 2012).
Evidence of cultivation and horticulture has been found from early on, with maize reported as early as the Archaic period (Rosenswig et al., 2015). Two Archaic period individuals showed high C4 isotopic signatures, but it has been suggested that this may reflect a heavy reliance on marine resources rather than on maize (Blake, Chisholm, Clark, Voorhies, & Love, 1992). Thus, just as in the subsequent Early Formative period, maize did not play a major role in the local diet. The earliest Formative phase, known as the Barra phase (1900–1700 cal bc), saw the development of permanent villages and the adoption of ceramics, while by the Locona phase rank systems and craft specialization, among other features, had developed (Clark & Blake, 1994). Maize is not believed to have been an important crop in this early period, a view substantiated by the isotopic evidence (Blake et al., 1992); instead, it has been suggested that maize was likely a status food (beverage), potentially playing a role in competitive feasting (see Smalley et al., 2003; but also see Arnold, 2009).
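(A note on the isotopic shorthand, added for clarity and not drawn from the sources cited above: the “C4 signal” in human bone collagen is conventionally reported as a δ13C value measured against the VPDB standard,

\delta^{13}\mathrm{C}\ (\text{‰}) = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{VPDB}}} - 1 \right) \times 1000 .

C4 plants such as maize produce markedly less negative values (roughly −14‰ to −10‰) than C3 plants (roughly −30‰ to −22‰), but marine foods shift δ13C in the same direction, which is why a heavy reliance on marine resources can mimic a maize signal.)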
Similar to the Gulf Coast, it was only during the Middle Formative that the inhabitants of the Soconusco region began to intensify maize cultivation and consumption (Rosenswig et al., 2015). Other lines of evidence suggesting that the cultivation of maize and other crops was intensifying come from indirect data, such as manos, metates, and ceramic graters, all of which appear around 1000 cal bc.
|
no
|
Paleoethnobotany
|
Was maize a staple food in prehistoric North American civilizations?
|
yes_statement
|
"maize" was a "staple" "food" in "prehistoric" north american "civilizations".. "prehistoric" north american "civilizations" relied on "maize" as a "staple" "food".
|
https://www.cambridge.org/core/books/climate-clothing-and-agriculture-in-prehistory/agriculture-and-textiles-in-the-americas/6E2ADFE396B20D5284973736D5C3B3BD
|
Agriculture and Textiles in the Americas (Twelve) - Climate, Clothing ...
|
Moving now to the Americas, there were two early agricultural transitions – in Mexico and Peru – and both led to the rise of civilizations. Agriculture began at the end of the last ice age, although the shift to agriculture in the New World was more gradual than in Southwest Asia and China. As was the case elsewhere, much of the human diet was still supplied by hunting and gathering. Aside from domesticated dogs – which had followed the first human immigrants from Siberia – early domesticated animals were limited to the wool-bearing camelids, llamas and alpacas, in Peru and Bolivia. Mexico and Peru both witnessed the cultivation of fiber crops for textiles at an early stage, with two different varieties of cotton, domesticated independently in the two regions, and another fiber crop in Mexico: maguey.1
Maize and Maguey in Mexico
To begin with Mexico, archaeologists are mystified by the limited evidence for America’s main food crop: maize (corn); it now seems maize did not play a leading role in the Mesoamerican transition to agriculture. Given that it is now one of the world’s major food crops, the absence of maize in early Mexican agriculture is quite amazing. The first food crops in Mexico include a few varieties of edible gourds (squash and pumpkins) around 10,000 years ago. Maize begins to appear 9,000 years ago, and the common bean follows later, around 3,000 years ago. Even by 5,000 years ago, maize was only partially domesticated.2
Feeding People and Animals
Civilization in the region began with the Olmecs 3,000 years ago followed by the Mayans, then the Aztecs. These state societies developed after a relatively late transition to a sedentary lifestyle – late compared to Southwest Asia and China. Pottery and the loom weaving of textiles begin to appear between 4,000 and 3,000 years ago and the settlements attracted wild fowl, with two species domesticated as commensals: the turkey and Muscovy duck. People consumed dogs and rabbits as well as fowl, and these animals were typically fed with maize. Wild animals (especially deer) continued as major meat resources, but dogs were more popular on the menu from 5,000 years ago. Dog skins were sometimes used in garments too, and one unusual dog breed developed around 2,000 years ago was the Mexican hairless dog (called the Xolo). Rabbits were also kept and fed on maize from around 2,000 years ago, probably for fur as well as meat. Some bird species – notably tropical parrots such as the brightly colored scarlet macaw – were bred for their feathers, fed on maize, and even exported to the North American southwest.3
Near modern Mexico City is the ancient city of Teotihuacan, built nearly 2,000 years ago. Some of the wild animals and birds that figured prominently as cultural icons were kept in captivity, although not domesticated. Isotope studies from the skeletal remains of jaguars, pumas, and eagles excavated by archaeologists inside the Sun and Moon Pyramids reveal these wild animals were fed with maize and probably (in the case of the felines) with maize-fed rabbits – and also with the hearts of maize-fed human captives.4
Maguey and Mesoamerican Clothing
Textiles in Mexico were woven mainly with fibers extracted from cactus-like plants called maguey, which belongs to the agave family. More than a hundred varieties were used in Mexico, and these served many purposes besides supplying fiber for clothes. Before the spread of maize, people in western Mexico relied heavily on agaves for food, and the plants were used also as fodder. Agaves are adapted to dry conditions, storing water in their succulent leaves as a sap that can be drunk fresh as a sweet liquid. The juice is fermented into alcoholic beverages known as pulque, which can be distilled as tequila and mescal; the psychedelic mescaline comes from a similar Mexican cactus.
Maguey fiber is sometimes called sisal hemp, but sisal actually refers to a tougher fiber from one variety used mainly for ropes and matting. The softer varieties are ideal for clothes and were probably woven throughout Mesoamerica from an early stage. Only some species were domesticated – including a few used for fiber and a couple for alcohol – but many more were cultivated, without changing much from their wild forms. Thanks to the dry climate, maguey fibers and cordage are preserved at archaeological sites from around 9,000 years ago. The oldest occurs at a site in Honduras where maguey is present throughout a long series of occupation phases spanning from 10,000 to 1,500 years ago; the earliest maguey fibers there are radiocarbon-dated to 9,000 years ago.5
78.Maguey farm, Mexico
From early in the Holocene, textile fibers from a variety of cactus-like agaves were used to make everyday clothes in Mexico. The main fiber plants were various varieties of maguey, which was cultivated extensively before the mid-Holocene. Other varieties were exploited mainly for food and also for their sap, a drink that can be processed for alcoholic beverages. Subsequently overshadowed by cotton as the main textile fiber in Mexico, domesticated varieties of agave are now grown mainly for tough sisal fibers and to make alcohol, especially tequila. Shown here is a plantation of blue agaves (Agave tequilana) in the hills of Oaxaca, Mexico.
Source: flowerphotos / Alamy Stock Photo.
Maguey was cultivated in the settlements from 4,000 years ago and, judging from the number of spindle whorls, textile production increased in the first civilizations. Maguey was often grown along with maize in terraced irrigation systems in the highlands, and cotton became popular too. The spindle whorls come in two sizes: larger ones for maguey and smaller ones for spinning the finer cotton fibers. Cloth played an important economic role in these complex societies and functioned as a kind of currency, with surplus cloth paid by villagers as tax to the state. Cotton was more valuable than maguey – like the situation in China where silk was more valuable than hemp.6
Compared to Peru, the ready availability of maguey might have delayed the widespread cultivation of cotton. The earliest cotton in Mesoamerica dates to 5,000 years ago, and by then the cotton plants were already domesticated.7
Double Trouble in South America
Of all the regions around the world that can serve as a test for comparing food and fiber as causes of agriculture, this is the best. In South America the transition from hunting and gathering involved both plants and animals. What makes South America pivotal is that in both instances, the transition to agriculture involved textile fibers.
Cotton was one of the first South American crops along with peanuts, beans and squash – but not corn. In northern Peru, cotton was cultivated by 8,000 years ago and cotton yarn has been preserved in archaeological deposits from around 7,000 years ago; cotton textiles dyed with indigo blue are dated to around 6,000 years ago. We saw earlier that the world’s oldest textile fabrics – woven with wild plant fibers 11,000 years ago – are found in Peru. Not only was cotton one of the founding crops in Peru; it was often a focus of early agriculture, and its cultivation was a driving force in the emergence of civilization. With cotton we can hardly doubt that its fiber was the main motive – cottonseed and the edible oil were not likely the products in question.8
Cotton in Peru
Early cultivation of crops in South America happened mainly along the coast in the northwest, in the region that stretches from Ecuador to Peru. Food crops were involved from 10,000 years ago and chili peppers were domesticated by 6,000 years ago. Maize entered the picture around 5,000 years ago – when it was consumed by dogs as well as people in the settlements. The first maize occurs 8,000 years ago at some sites, but it was not a significant aspect of the early agriculture; nor was maize a staple food in the early civilizations. Like the situation with rice in China, archaeologists are perplexed to see how this great cereal crop played such a minor role in the Peruvian transition to agriculture.9
But cotton is the big shock. Cotton often dominated the agriculture, while the main industry in the settlements was weaving textiles – to make fishing nets as well as fabrics for clothes. At sites that span the period leading up to the first civilizations, cotton was often the main crop. One well-studied site is Huaca Prieta on the northern coast of Peru, where cotton yarn is dated to around 7,000 years ago.10
Most stunning is the site of Caral, located 200 km (125 miles) north of the modern capital Lima. Caral is a ceremonial complex built in the desert not far from the coast, and archaeologists have unearthed an impressive irrigation system to support the crops. With a date of nearly 5,000 years ago, Caral is now claimed as the first city in the Americas – indeed it is one of the first cities in the world. Not only was cotton the main crop, but the whole city was sustained economically by cotton textiles. In fact, the agricultural base for this first American city was not food but textiles. The residents’ food supply came mainly from marine resources, which were traded for textiles (including nets) with fishing communities on the coast.11
The discovery of Caral and how its economy revolved around cotton has created a furor among archaeologists. Caral seems to raise questions about agriculture arising as a way to feed people, and it raises doubts about the role of the food economy in the emergence of civilization. Some remarkable textiles have been preserved in the dry climate, including an almost complete dress woven with cotton. We see a similar picture at another early city in Peru, El Paraíso (built around 4,000 years ago). Again, cotton was the dominant crop.12
Professor Michael Moseley of the University of Florida, an expert on the Incas and their forebears, highlights this astonishing evidence of how agriculture was based on textiles rather than food. Caral challenges conventional assumptions and, as he says, it is “demanding of explanation.”13
79.City of Caral, Peru
In the Supe Valley near the coast of Peru is the city of Caral – at nearly 5,000 years old, one of the oldest cities in the world. Agriculture in the vicinity was dominated by cotton, with virtually no food crops. The people of Caral relied for their food on hunting and gathering and, especially, trading cotton for fish with nearby coastal populations – where cotton was used in fishing nets as well as clothes.
Source: age fotostock / Alamy Stock Photo.
Llamas and Wool in the Andes
The South American evidence is telling not just with plants but also with animals. While domestic animals did not feature prominently in American agriculture, there is one exception in the Andean highlands. In this high mountain region where the climate is cooled by altitude even in tropical zones, we find an independent transition to farming in the form of herding animals. The transition was unusual in a few ways: it was not connected with a sedentary lifestyle, nor did it involve cultivating any crops. Likely reasons relate to local ecology: cultivating permanent pasture to feed animals – or people – was not feasible at the elevations inhabited by native camelids (above 3,000 m). Instead the animals were kept on a mobile basis, with people moving along with the herds as the animals grazed on natural pastures. Two species – llama and alpaca – were domesticated from their wild parents, the guanaco and vicuña. These wild camelids were hunted (and possibly herded) from early after the ice age, when they replaced wild deer as the dominant animal remains found at archaeological sites. Skeletal changes that identify the domesticated species are visible by 6,000 years ago.14
Unlike the surreal situation with wild sheep where the presence of wool is often disputed (and discounted as a reason for domestication), there is no disputing the presence – and the value – of wool in the wild camelids, both guanaco and vicuña.15
Wool from the guanaco is like cashmere, and vicuña wool is the world’s most valuable natural fiber. So we can safely assume that the first American animal domesticates could provide people with wool for textiles – because both of the wild progenitors obviously produce wool. During historical times, llamas have served more as multipurpose assets – as beasts of burden and to supply meat as well as wool – whereas alpacas were always kept mainly for fiber. Wool fibers – probably from the fleece of guanaco – have now been found at archaeological sites between 10,000 and 9,000 years ago. So even at that very early time when people were hunting wild camelids for meat, they were collecting the wool.16
80.Llama, Peru
Llama at Machu Picchu, Peru. The two wild species of South American camelids – guanaco and vicuña – grow excellent wool, and both species were domesticated by humans in the Andean highlands before the mid-Holocene. Along with cultivation of cotton in coastal areas of Peru and Ecuador, the two domesticated animal species – llama and alpaca, respectively – represent an independent fiber-based transition to agriculture.
Source: Efrain Padro / Alamy Stock Photo.
The quality of woollen fiber has actually deteriorated since the Spanish invasion in the sixteenth century. At that catastrophic time the camelid population – along with the indigenous human population – was decimated by up to 90 percent. Modern-day wool varieties in Peru result from poorly controlled breeding and hybridization since then. Archaeologist Jane Wheeler has examined the fibers on llama and alpaca mummies in southern Peru that were buried around 1,000 years ago. Her findings show that the wool in preconquest herds was superior to the wool of present-day domesticates (including alpacas). Not only was alpaca wool better at that time but so too was the llama wool, and she suggests that, along with alpacas, llamas were bred mainly for their wool.17
As in Mesoamerica to the north, textiles played a big role in the rise of South American civilizations. Textiles sustained the first cities on the coast and the same is true in the Andes. Early highland settlements and regional states – culminating with the Incas – developed with the integration of coastal cotton and highland wool economies via expanding trade networks. Textile goods were the top commodity, sometimes woven with blends of cotton and woollen fibers. Long caravans of llamas carried their heavy loads through steep mountain passes, stopping at way stations where the textiles and other goods were traded. Archaeologists have found skeletal signs of pathology reflecting years of physical stress in the foot bones of the llamas. These complex Andean economies also produced a range of new food crops such as potato, and maize was adopted (eventually). Maize later became more important in the human diet and was fed to domestic animals as well, not only llamas but dogs and then guinea pigs. The latter is a little rodent that started out as a commensal species, attracted by all of the litter in the settlements; domesticated guinea pigs were fattened on maize and consumed for meat.18
Domesticating the Amazon
Trade across the Andes stretched not just to the western coast but to the east, where agriculture developed in the Amazon basin from around 7,000 years ago. We like to think that before Europeans arrived and spoiled things, the Amazon was a pristine wilderness inhabited by hunter-gatherers (naked or nearly naked). But recent archaeological finds have revealed that many indigenous populations were cultivating plants of various kinds from at least midway through the Holocene. They still relied heavily on wild plants and animals, and we now realize that many of these early agriculturalists (not just in the Amazon but elsewhere) engaged in low-level food production as a stable economic strategy. Some were semisedentary and horticultural, growing mixed domestic crops and wild plants in garden plots which they would abandon on a periodic basis. One of the main food crops first domesticated in the Amazon was manioc (also known as cassava), of which there are a number of varieties.19
Cotton had spread into the Amazon basin from Peru and Ecuador, possibly via Colombia, and cotton is still grown by some remote tribal groups in Brazil. Clothing generally was minimal in these hot and humid climates – often nothing more than a woven waistband for women and a penis string for men.
Most of these South American societies were egalitarian, but some developed more hierarchical social structures, which American archaeologists refer to as chiefdoms. By 3,000 years ago, there were quite complex societies in parts of the Amazon – so-called mound builders. Most of the mounds started out as natural formations in the landscape, with people adding to the mounds as they built houses – and as refuse accumulated in the settlements. Excavations have unearthed elaborate pottery vessels and other ceramic artifacts including spindle whorls, testifying to a textile industry. Fabrics and items such as strings and ropes were woven from wild plants as well as cultivated cotton (and bark cloth as well); a few textile fragments have been recovered by archaeologists on the large island that lies in the mouth of the Amazon, Marajó Island.20.
Thanksgiving in North America
Before leaving the Americas, another independent center of agriculture has been discovered by archaeologists in recent decades, in the eastern woodlands region of North America. Beginning around 5,000 years ago, it was a relatively late starter, and at least three new plant species were domesticated – marshelder, chenopod, and sunflower. Seeds of other plants have been found at some of the earliest agricultural sites but without any physical signs of domestication, although these plants nonetheless may have been cultivated: these likely crops include erect knotweed, little barley, and maygrass. Squash was also among these domesticates, and it may have been the first plant to be domesticated in eastern North America. The presence of squash among the first North American domesticates might suggest that agriculture had spread northwards from Mexico, where squash was domesticated earlier, beginning around 10,000 years ago. However, evidence from genetic studies and archaeology points to an independent domestication of squash in eastern North America. All these crops were grown to feed people living in sedentary or semisedentary village societies – some were mound-building chiefdoms similar to those in the Amazon. As was the case in much of North America, the communities were Mesolithic rather than Neolithic or Paleolithic: neither settled farmers nor mobile hunter-gatherers, they were hunter-gatherers who settled down in villages and who sometimes started to cultivate crops. Neither were they naked: they wore clothes, including textile garments. Even when they had crops, they often relied mainly on wild foods, and their local environments were well-stocked with natural resources – for clothing as well as food. In those areas where agriculture was practiced more intensively, the local food crops were largely replaced by maize that had spread from Mexico by around 2,000 years ago.21
81.Map of USA showing sites and the eastern agricultural center
Major prehistoric and ethnographic clothing trends in the USA in relation to climate and early agriculture, with sites mentioned in the text.
82.10,000-year-old woven sandals from Oregon
Early textiles in North America include cordage dated from 12,000 years ago. Sandals made from twined sagebrush are dated from 10,400 years ago at Fort Rock Cave in Oregon. Shown here are sandals from (a) Catlow Cave, (b) Fort Rock Cave, and (c) Elephant Mountain Cave; (d) type drawing (slightly reduced scale).
As for textiles and clothes, weaving technologies were well-developed throughout North America. Archaeological remains of baskets, nets, ropes, and perhaps fabrics for garments are reported from sites dating back to around 12,000 years ago. At one of the earliest sites, Meadowcroft near Pittsburgh, a piece of cut bark that is probably from a basket fragment is dated to at least 15,000 years ago. In recent historical times, clothes were made from animal hides as well as textiles, and tools such as awls, scrapers, and needles are found at agricultural sites. Traditional fabrics included blankets woven from plant and animal fibers, and feathers too. Common plant fibers were yucca, nettles, milkweed, the local variety of hemp (dogbane, often called Indian hemp) and fibers from various tree barks and roots, while animal fibers came from wild mountain goats, moose, rabbits, and even woolly dogs. Among the Mesolithic Salish peoples who lived in the northwest around Seattle and Vancouver, a local variety of dog was bred with a dense woolly coat. The wool was woven into elaborately decorated blankets that functioned also as cloaks in winter. In some of the surviving examples at the Smithsonian Institution, the woolen fabric from dogs is held together by cedar bark cordage, sometimes supplemented with down and feathers. Along the Atlantic and Gulf coasts, people made fabrics from the fibers of Spanish Moss, a flowering plant that grows over trees in humid climates. As mentioned earlier, one of the earliest North American textile finds is an 11,000-year-old sandal made from twined plant fibers, found at a cave in Oregon. Also mentioned earlier, a remarkable site for textiles is the Windover peat pond in Florida, which dates to around 7,500 years ago. On some of the skeletons in the Florida pond, archaeologists found fragments of finely woven fabrics that they think were probably worn as everyday clothes; the fibers in that case were extracted from the fronds of palm trees.22
83.7,500-year-old textile at Windover bog, Florida
7,500-year-old textiles are preserved at the Windover Bog site in Florida, including artifacts such as bags, blankets, and garments woven from various wild plant resources, including palm fronds. Some items comprised loom-woven cloths, and the assemblage demonstrates a range of complex weaving patterns.
Source: Adovasio et al., 2001:24. Reproduced with permission of SAGE Publications, and courtesy of James Adovasio.
So the transition to agriculture in eastern North America occurred in the context of sedentary, or semisedentary, peoples who were wearing technologically sophisticated clothes, including textiles woven from a host of wild fiber resources.23
Agriculture without Fibers?
Taken at face value, this transition to agriculture appears to refute the notion that textiles led to agriculture. It would seem that no fiber crop was domesticated, and neither were there any wool-producing domestic animals. But the evidence from North America presents another problem: it seems to refute all of the other theories as well. According to leading researcher Bruce D. Smith at the Smithsonian, the evidence seems to refute both social complexity (Hayden’s feasting model) and population pressure (Cohen’s argument). The North American evidence also fails to implicate major environmental stresses such as global climate change as possible causes for people to start cultivating these plants for food. With regard to the environment, the most that can be argued is that there was a mid-Holocene stabilization of the riverine floodplain systems that favored some of the plants involved and that created attractive resource zones for human settlements. These environmental changes might provide some kind of local explanation – Smith is skeptical of general theories about the origin of agriculture, and he prefers instead to construct local models.24
Yet this can lead to a limited and rather retrospective approach. Local environmental situations probably served more as a precondition rather than as a cause. After all, humans would have encountered favorable conditions and similar opportunities at many times and places during the past 300,000 years, without developing agriculture. Even in North America, regions such as California offered some excellent opportunities, but the people there did not start any agriculture.25
Smith mentions the presence of textiles (mainly basketry and cordage), but as he says, there is a big bias because these perishable materials are less likely than hard seeds to survive from so long ago, and this applies particularly with fragile clothing fabrics. Smith also points out that we tend to forget about another plant because it was not used for food, namely the bottle gourd (a variety of squash): this “utilitarian” domesticate reached eastern North America by 7,000 years ago where it served mainly as a container, and it is “frequently” present at the early agricultural sites. And Smith also points out that in the early days of agriculture in eastern North America, food from cultivated plants seems to have made only a small contribution to the human diet.26
Cultivated if Not Domesticated
Although we have only four or five plant species that were domesticated, others may have been cultivated but kept their wild forms. These wild plants could have been involved in the agricultural process, cultivated along with domesticated plants. Also, the sunflower was used for other purposes besides food. Sunflowers provided medicines and dyes (purple and yellow, for body paints and textiles), and sunflower seed oil was used ceremonially to “anoint” their heads; the stalks of sunflowers were used as a construction material and possibly for textiles.27
As we have seen, textiles and woven fabrics have survived at quite a few early sites in North America. One important site is the Newt Kash shelter in Kentucky, where a vast array of textiles has been recovered including fabrics, mats, strings, and ropes, along with domesticated plants. This site dates from a little over 4,000 years ago, which places it at a fairly early stage of agriculture in eastern North America. Crops cultivated at Newt Kash include sunflower, sumpweed, chenopod (goosefoot), maygrass, giant ragweed, bottle gourd (inedible squash), fleshy (edible) squash, maize, and tobacco.
Among the wild plants used for the textiles at Newt Kash were Indian hemp and milkweed, traditionally woven into cordage and cloth. Milkweed is a dual fiber source: the fine fibers from its seed are similar to cotton and can be spun into yarn to make cloth for garments, while tougher fibers from the stem can be used for making strings, ropes, and baskets. In eastern North America, fibers such as milkweed and Indian hemp were often thigh-spun without spindle whorls, which may account for a paucity of spindle whorls at archaeological sites.28
So we may have one or two fiber plants in the early agriculture that were not formally domesticated. At least one – milkweed – was collected as a wild plant around some of the settlements, and may have been cultivated. The same is probably true for Indian hemp and maguey-like yucca fibers and various nettles. Textile fibers were used not so much for clothing as for cordage and string, and as thread for sewing. In these climates where the winters get cold, people favored warm animal skins, which were worn loosely – and often minimally – in summer. Scraper tools are often found at the early agricultural sites, along with bone awls (made from deer and turkey bones), suggesting that preparation of animal skins was commonplace.
84.Native American woman weaving milkweed fibers
Milkweed fibers were used widely by Native Americans to weave cloth and other textiles. While not domesticated, wild milkweed may have been cultivated along with food crops in the agricultural center of Eastern North America (ENA), which developed nearly 5,000 years ago. Shown here is a Native American woman weaving milkweed at Plimoth Plantation, Plymouth, MA.
A couple of things to keep in mind are that this agricultural transition in eastern North America did not happen among mobile hunter-gatherers who were naked: it happened among people who were wearing clothes and who were settling down in villages. As we saw earlier, all the indigenous American populations are descended from ancestors who had complex clothing. Without such warm garments, they could not have reached the New World from northeast Asia at the end of the last ice age. And even as hunter-gatherers, in many places they were moving toward a sedentary lifestyle.
We saw earlier how a sedentary existence can favor agriculture by attracting certain animal species (like chicken and pigs), animals that then become domesticated of their own accord in the villages. A similar thing can happen with plants: the ones involved in eastern North America probably started out as weeds that grew around settlements. With the exception of the sunflower, nowadays most of the indigenous crop plants are regarded more as weeds than as crops (and they were all superseded by maize as a food crop). Even the sunflower may have started out as a humble weed in clearings around settlements. At some point, after collecting the seeds from wild plants in their surroundings, people probably began to actively plant the seeds and make more use of the sunflower as a food resource, leading to the plant’s domestication. The question is whether this would have happened had the plants not already become established around the settlements as weeds, through a passive, commensal process akin to how many animal species were attracted to human settlements.
Turkeys to the Rescue
As well as milkweed as a possible fiber crop, there is one other species to mention – not plant but animal. During historical times, the sunflower has been used mainly as food not for humans but for farm animals. The small seeds of the sunflower are a favorite feed for domestic fowl – sunflower is almost the quintessential chicken feed. As it happens, one of the main meat sources for the early farmers in North America was a native fowl – the wild turkey. These wild fowl were attracted to all the grasses that were cultivated in the clearings around the settlements – as happened with chicken and millet (and wild rice) in China. The big birds were valued not just as food but also for their feathers: turkey feathers are a feature of traditional American costume. And bird feathers offer good insulation in the cold – Native North Americans sometimes made elegant winter cloaks from turkey feathers.29
Which brings us to one last surprise. We saw how turkeys were domesticated in Mexico, where they were fed with maize. Yet archaeologists recently discovered that the wild turkey was domesticated in North America as well. Turkeys were domesticated by 2,000 years ago, and this was a separate event from the Mexican domestication. These North American domestic turkeys are found in the southwest, but genetic analyses suggest they were domesticated earlier and maybe they spread into the southwest from eastern North America.
A key finding comes from a site in Utah where it turns out that turkey was not a major item on the menu. Archaeologists were surprised to find that the domestic fowl was not a food staple. Instead they suspect it was kept and fed – with maize – for its feathers. Traditionally, turkey feathers in the region were woven into brightly colored feather blankets and into garments as well, with the fanciest feathers reserved for ceremonial dress. And in the Eastern Woodlands, women wove warm cloaks with turkey feathers.30
So with the turkeys there is good reason for thanksgiving: the only indigenous animal domesticate in North American agriculture was probably domesticated for its feathers, not for food.
|
Moving now to the Americas, there were two early agricultural transitions – in Mexico and Peru – and both led to the rise of civilizations. Agriculture began at the end of the last ice age, although the shift to agriculture in the New World was more gradual than in Southwest Asia and China. As was the case elsewhere, much of the human diet was still supplied by hunting and gathering. Aside from domesticated dogs – which had followed the first human immigrants from Siberia – early domesticated animals were limited to the wool-bearing camelids, llamas and alpacas, in Peru and Bolivia. Mexico and Peru both witnessed the cultivation of fiber crops for textiles at an early stage, with two different varieties of cotton, domesticated independently in the two regions, and another fiber crop in Mexico: maguey.1
Maize and Maguey in Mexico
To begin with Mexico, archaeologists are mystified by the limited evidence for America’s main food crop: maize (corn); it now seems maize did not play a leading role in the Mesoamerican transition to agriculture. Given that it is now one of the world’s major food crops, the absence of maize in early Mexican agriculture is quite amazing. The first food crops in Mexico include a few varieties of edible gourds (squash and pumpkins) around 10,000 years ago. Maize begins to appear 9,000 years ago, and the common bean follows later, around 3,000 years ago. Even by 5,000 years ago, maize was only partially domesticated.2
Feeding People and Animals
Civilization in the region began with the Olmecs 3,000 years ago followed by the Mayans, then the Aztecs. These state societies developed after a relatively late transition to a sedentary lifestyle – late compared to Southwest Asia and China. Pottery and the loom weaving of textiles begin to appear between 4,000 and 3,000 years ago and the settlements attracted wild fowl, with two species domesticated as commensals: the turkey and Muscovy duck.
|
no
|
Paleoethnobotany
|
Was maize a staple food in prehistoric North American civilizations?
|
yes_statement
|
"maize" was a "staple" "food" in "prehistoric" north american "civilizations".. "prehistoric" north american "civilizations" relied on "maize" as a "staple" "food".
|
https://www.americanhistorycentral.com/entries/apush-native-american-societies-before-european-contact/
|
Native American Societies Before European Contact, APUSH 1.2
|
Native American Societies Before European Contact — APUSH Terms and Notes
Prehistory–1491
APUSH Unit 1, Topic 1.2 covers topics related to the regions of North America, Central America, and South America prior to the arrival of Europeans.
The Pyramid of the Moon is the second-largest pyramid in Mesoamerica. It is located in present-day Teotihuacan, Mexico.
An Overview of Native American Cultures and Systems in the New World Prior to Contact with Europeans
APUSH Unit 1, Topic 1.2 focuses on Native American Societies Before European Contact. This period spans thousands of years and covers the diverse cultures, civilizations, and empires that existed in North America, Central America, and South America, prior to the arrival of Europeans.
Migration from Asia to the Americas
Native American Societies in the Americas developed over thousands of years. The development started with the migration of people from Asia to the Americas between 36,000 and 14,000 years ago. They accomplished this by crossing the Bering Strait Land Bridge, which formed during an ice age and connected present-day Siberia to present-day Alaska.
As they spread throughout North, Central, and South America, they developed different languages, created distinct cultures, and adapted to a wide range of environments across the Western Hemisphere. From regions with abundant rainfall to those with arid conditions and frozen soils, these people demonstrated incredible bravery and resilience.
Their resolve to survive eventually led to the establishment of civilizations that ranged from interconnected tribal communities to vast empires.
Native American Civilizations in Central America and South America
In Central America and South America, three major civilizations emerged — the Aztecs and the Maya in Mesoamerica, and the Incas in the Andes of South America. Each had an advanced society, which featured large urban centers, complex political systems, and well-formed religious beliefs.
The Aztecs, who referred to themselves as “Mexica,” lived in central Mexico, with their capital city, Tenochtitlan, serving as the home of an estimated 300,000 people. They had a written language, designed and implemented sophisticated irrigation systems, and practiced human sacrifice, which they believed ensured fertility for not only their people but also their crops.
Further south, the Maya established themselves on the Yucatan Peninsula. They built large cities, utilized advanced irrigation and water storage techniques, and constructed massive structures for their rulers, who were considered to be divine.
The Inca Civilization flourished in the Andes Mountains along the Pacific Coast, ruling over a vast empire that encompassed 16 million people and covered 350,000 square miles. Their success was attributed to the cultivation of fertile mountain valleys and the development of elaborate irrigation systems.
Corn Helps Native Americans Spread to North America
The cultivation of maize played a crucial role in the economic development and social diversification of Native American societies.
As maize spread northwards, it influenced the establishment of settlements and advanced irrigation techniques, especially in the present-day American Southwest.
Native American Civilizations in North America
In the American Southwest, known for its harsh and dry climate, the Pueblo People and others established themselves. They built Adobe Structures and developed sophisticated irrigation techniques that allowed them to cultivate the “Three Sisters” — maize, squash, and beans.
On the Great Plains, where natural resources and fertile soil were scarce, tribes like the Sioux and Ute lived a nomadic hunter-gatherer lifestyle. They relied on the vast herds of bison for their sustenance, which required mobility and hunting skills.
Indians Hunting Bison by Karl Bodmer. Image Source: Wikipedia.
In the Pacific Northwest, abundant rivers, access to the ocean, and forested areas provided food. Coastal communities such as the Chinook also built intricate plank houses using cedar trees.
Further south in present-day California, the Chumash People lived as hunters and gatherers. They lived in permanent settlements strategically located to their sources of food and water.
Tribes along the Atlantic seaboard in the eastern part of North America showcased mixed agricultural and hunter-gatherer economies. They developed permanent villages and engaged in Trade Networks and political alliances with neighboring tribes.
In the Mississippi River Valley, the Hopewell people established towns engaged in extensive Trade Networks across different regions. They interacted with diverse communities, reaching as far as Florida and the Rocky Mountains. Farming practices supported the formation of large settlements, such as Cahokia, whose population reached an estimated 20,000 around 1150 CE, surpassing that of London during the same period.
Cahokia stood out with its large population, estimated between 10,000 and 30,000 individuals, and centralized government. Its influence extended from the Great Lakes to the Gulf of Mexico.
In the Northeastern Region, the Iroquois resided in villages composed of several hundred individuals. They practiced agriculture, cultivating crops such as maize, squash, and beans, and they lived in longhouses alongside their extended families.
Before the arrival of Christopher Columbus, it is believed the Western Hemisphere was home to approximately 50 million people, with around 5 million Native Americans residing in what is now North America. Today, that is roughly the population of Florida and Texas combined and the United States is home to more than 330 million people.
Geography – Regions
The following APUSH Terms and Definitions fall under the theme of Geography. These Terms are not listed in alphabetical order, but in directional order — North to South.
Arctic
The Arctic Region spanned present-day Alaska, Canada, Greenland, and the Arctic Coast. In the Arctic, Inuit People and Aleut People adapted to extreme cold using seal oil lamps, igloos/sod houses, and bone/ivory tools. They were nomadic hunter-gatherers hunting whales, seals, caribou, and fish. Shamanism focused on animism and spirits. Kinship ties provided community cohesion. The harsh climate shaped small, migratory bands with resourcefulness, resilience, and intimate knowledge of their environment and surroundings.
Subarctic
The Subarctic Region stretched across inland Alaska, Canada, and the Hudson Bay area. Long winters and short growing seasons led its mixed hunter-gatherer/forager societies like the Cree and Ojibwe to be highly mobile. Key resources in the region included fish, caribou, moose, and small game. Conical wood-frame lodges offered portable shelter. Canoes enabled transportation. Exquisite wood carvings became a signature art form. Sharing networks redistributed resources efficiently.
Northwest Coast
The Northwest Coast Region refers to the coastal and inland region of North America that includes parts of present-day Oregon, Washington, British Columbia, and Alaska. Native American Tribes, such as the Chinook, Haida, and Tlingit, developed complex societies based on fishing, hunting, gathering, and elaborate ceremonial traditions.
Plateau
The Plateau Region lies between the Cascades and the Rocky Mountains. People living in the region relied on salmon fishing, root gathering, and trade. Semi-sedentary villages with cedar plank houses appeared along major rivers. Societies like the Yakama and Nez Perce mastered fish drying, basketry, and travois sleds pulled by dogs. Vision quests, oral narratives, and symbolic art conveyed cultural values. The region served as a trading nexus between the tribes living on the Northwest Coast and in the Great Plains.
Great Plains
The Great Plains Region is a vast area of flat or rolling grasslands located in the central portion of North America. It extends from the Canadian provinces of Alberta, Saskatchewan, and Manitoba in the north, south through Montana, Wyoming, Colorado, Nebraska, Kansas, and Oklahoma, to Texas. Native American Tribes that historically inhabited the Great Plains region include the Sioux, Cheyenne, Arapaho, Comanche, and Pawnee. These tribes and others relied on a combination of hunting, particularly buffalo, and agriculture, mainly cultivating maize, beans, and squash. The buffalo were vital to their way of life, providing food, clothing, shelter, and tools. Native American Societies in the Great Plains developed distinct cultural practices, including Communal Living in Tipis, a Nomadic Lifestyle following the buffalo herds, and elaborate spiritual ceremonies.
Northeast
The Northeastern Region of North America before European contact encompassed present-day New England, the Great Lakes, and the Eastern Woodlands. Native societies were semi-sedentary, transitioning between fixed villages and seasonal hunting/fishing camps. They depended on abundant aquatic resources like fish and shellfish, supplementing with deer hunting and wild plants. Agriculture based on the Three Sisters became increasingly important. Sociopolitical organizations ranged from bands and tribes to confederacies like the Iroquois and chiefdoms on the Atlantic coast. Extensive Trade Networks connected the diverse tribes of this heavily forested region. Characteristic art forms included wood carvings, birch bark containers, and wampum belts.
Great Basin
The Great Basin is a large, arid region in the western United States, encompassing parts of Nevada, Utah, Oregon, Idaho, and California. Native American Groups such as the Shoshone and Paiute developed unique adaptations to the desert environment, relying on hunting, gathering, and seasonal migrations.
This photograph of the Great Basin Desert shows the drastic elevation changes found in the region. Image Source: National Park Service.
California
The California Region runs along the West Coast of North America, from the base of the Northwest Coast Region to the tip of present-day Baja California Sur. Diverse tribes lived in the varied microclimates of the California Region, including Yurok fishermen, Chumash maritime traders, and Paiute agriculturists. The California groups were expert basket weavers. They also knotted strings as a way of record-keeping. Dance regalia, rock art, and shell jewelry reflected the sophisticated artistry of these people. Tribes managed resources sustainably through controlled burning, seed caching, and seasonal migrations.
Southeast
The Southeast Region refers to the geographical area encompassing present-day states such as Alabama, Mississippi, Georgia, Florida, Tennessee, and parts of North and South Carolina. Before European contact, this region was home to diverse Native American societies characterized by their unique cultural, social, and political systems. Native American tribes such as the Cherokee, Creek (Muscogee), Choctaw, and Seminole inhabited this region. The Southeast Region was known for its fertile lands, abundant water resources, and varied ecosystems, which allowed Native American communities to thrive through agriculture, hunting, fishing, and gathering. These societies exhibited remarkable cultural achievements, including distinct languages, art forms, Pottery, and Trade Networks.
Southwest
The Southwest Region refers to the geographic region encompassing present-day Arizona, New Mexico, southern Colorado, and southern Utah. Native American Societies such as the Anasazi, Hopi, and Navajo inhabited this area before European contact, developing unique cultural practices and adapting to the desert environment.
Four Corners
The Four Corners Region is a geographical point in the Southwest Region where the present-day states of Arizona, Colorado, New Mexico, and Utah meet. This area is culturally significant as it is home to numerous Native American Tribes, including the Navajo, Ute, Hopi, and Zuni. Today, these tribes maintain their traditional practices and sovereignty.
Caribbean
The Caribbean, also known as the West Indies, is a region in the Caribbean Sea consisting of numerous islands and coastal areas. Before European contact, the Caribbean was home to various indigenous peoples, including the Taíno, Arawak, and Carib tribes. These indigenous societies thrived in the region, relying on agriculture, fishing, and Trade Networks.
Mesoamerica
Mesoamerica refers to a cultural region spanning present-day Mexico and Central America prior to the 16th century. Distinctive characteristics included pyramid architecture, writing systems, calendars, mathematics, metallurgy, agriculture, and urban centers like Tenochtitlan, Palenque, and Tikal. Maize served as the economic base alongside Trade Networks and Tribute Systems. Political structures ranged from chiefdoms, as among the Olmec, and city-states, as among the Maya, to expansive empires such as the Aztec. Polytheistic religious practices shared common elements like human sacrifice and worship of feathered serpent deities. Although Mesoamerica lacked wheel technology or pack animals, its cultural sophistication rivaled that of ancient Europe, Asia, and Africa.
Central America
Central America is a geographic region located between North and South America, comprising countries such as Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. Native American societies in Central America, such as the Maya, developed advanced civilizations characterized by sophisticated agricultural practices, monumental architecture, and complex political systems. The Maya, in particular, constructed impressive cities with pyramids, temples, and observatories, while also making significant advancements in mathematics, astronomy, and writing systems.
Mexico
Mexico, located in the southern part of North America, is a country with a rich and diverse history. Before European contact, Mexico was home to advanced indigenous civilizations, most notably the Aztecs and the Mayans. These civilizations flourished in different regions of Mexico, leaving behind impressive architectural wonders, intricate artistic creations, and complex societal structures. The Aztecs built their capital city, Tenochtitlan, on an island in Lake Texcoco and developed a sophisticated empire that encompassed vast territories through conquest and trade. The Mayans, on the other hand, thrived in the Yucatan Peninsula and southern Mexico, constructing impressive temple-pyramids and advancing in mathematics, astronomy, and writing. Native American societies in Mexico were known for their agricultural expertise, with innovative cultivation techniques such as Terrace Farming and Chinampas.
Yucatan Peninsula
The Yucatan Peninsula is a geographical region in southeastern Mexico, known for its tropical climate and lush vegetation. The peninsula was home to ancient Mesoamerican Civilizations, including the Maya, who developed impressive cities, hieroglyphic writing, and complex astronomical knowledge.
South America
South America is a continent located in the Southern Hemisphere, primarily composed of countries such as Brazil, Argentina, Peru, and Colombia. Native American civilizations, including the Inca, Moche, and Mapuche, thrived in this region, exhibiting advanced agricultural techniques, monumental architecture, and complex societies.
Peru
Peru is a country located on the western coast of South America, known for its rich indigenous history and archaeological sites. The ancient civilizations of Peru, such as the Inca, developed advanced agricultural techniques, intricate road systems, and impressive architectural structures like Machu Picchu.
Geography – Mountain Ranges and Land Formations
The following APUSH Terms and Definitions fall under the theme of Geography. These Terms are listed in alphabetical order.
Andes Mountains
The Andes Mountains are a major mountain range in South America, extending along the western coast from present-day Venezuela to Chile. Native American Civilizations like the Inca thrived in this region, utilizing advanced agricultural techniques, terrace farming, and Trade Networks to sustain their societies.
Appalachian Mountains
The Appalachian Mountains are an ancient mountain range in eastern North America. Native American tribes, such as the Cherokee and Shawnee, inhabited this region before European contact. The mountains influenced migration, trade, and interactions among tribes and played a crucial role in the history and development of the area.
Bering Land Bridge
The Bering Land Bridge is a term used to refer to the landmass that connected North America and Asia during the last ice age, when sea levels were much lower than they are today. The Land Bridge was a major route for the movement of people and animals between the two continents, and it played a significant role in the settlement and development of the Americas.
Canadian Shield
The Canadian Shield is a large geological formation covering much of eastern and central Canada. It is characterized by ancient rocks, forests, and thousands of lakes. Native American cultures, like the Inuit and Algonquin, have inhabited this rugged and resource-rich region for thousands of years, adapting to its unique environment.
Mississippi River Valley
The Mississippi River Valley is a vast region in the central United States, encompassing the drainage basin of the Mississippi River. Native American societies, including the Cahokia and Natchez, flourished in this fertile region, utilizing the river’s resources for agriculture, trade, and transportation.
Ohio Valley Region
The Ohio Valley Region is a region in the eastern United States that encompasses the drainage basin of the Ohio River and its tributaries. Before European contact, it was home to various Native American tribes, including the Shawnee, Miami, and Delaware. The Ohio Valley was a fertile and resource-rich area, offering abundant game, fish, and fertile soils for agriculture.
Rocky Mountains
The Rocky Mountains are a major mountain range stretching from Alaska to New Mexico in North America. Native American groups, such as the Shoshone and Ute, inhabited the region before European contact, adapting to the rugged terrain and utilizing the mountains’ resources for hunting, gathering, and spiritual practices.
Sierra Nevada Mountains
The Sierra Nevada Mountains are a mountain range located in the western United States, primarily in California. They are known for their towering peaks, deep valleys, and diverse ecosystems. Native American tribes, including the Miwok and Paiute, have a long history of inhabiting and adapting to the environment of the mountains.
Tidewater Region
The Tidewater Region refers to a coastal plain area along the eastern coast of the United States, particularly in the states of Virginia and Maryland. Before European contact, the region was inhabited by Native American tribes such as the Powhatan Confederacy in Virginia. The Tidewater Region is characterized by its low-lying, marshy landscape and proximity to the Chesapeake Bay and its tributaries. It was a significant area for trade, fishing, and agriculture, with fertile soil supporting the cultivation of crops like corn.
Geography — Waterways
The following APUSH Terms and Definitions fall under the theme of Geography. These Terms are listed in alphabetical order.
Great Lakes
The Great Lakes are a group of large freshwater lakes located in northeastern North America, shared by the United States and Canada. Native American Tribes, such as the Iroquois Confederation and Ojibwe, resided in the region, relying on fishing, hunting, and agriculture for sustenance.
Great Salt Lake
The Great Salt Lake is a large saltwater lake located in the northern part of present-day Utah. It is the largest saltwater lake in the Western Hemisphere. Before European contact, Native American tribes, such as the Shoshone and Ute, inhabited the surrounding areas and had a deep cultural and ecological connection to the lake. They relied on the lake’s resources for sustenance and utilized its salt deposits for trade and food preservation.
Mississippi River
The Mississippi River runs through the central part of the United States, stretching from northern Minnesota to the Gulf of Mexico. Before European contact, the Mississippi River and its tributaries played a significant role in the lives of Native American Societies. Tribes such as the Cahokia, Natchez, and Sioux inhabited the Mississippi River Valley and relied on its fertile floodplains for agriculture, hunting, and fishing. The Mississippi River served as a vital transportation route, facilitating trade and communication between Native American Communities across the region.
Missouri River
The Missouri River is in the central part of the United States. It runs from Montana through the Great Plains and joins the Mississippi River in Missouri. Before European contact, the Missouri River was important to various Native American tribes, including the Lakota, Mandan, and Osage. Native American Societies relied on the river’s resources for sustenance, transportation, and trade. The Missouri River Valley provided fertile lands for agriculture, and the river itself offered an abundant supply of fish and other wildlife.
Ohio River
The Ohio River is a significant waterway in the eastern part of the United States, forming part of the border between several states and serving as a tributary to the Mississippi River. Before European contact, the Ohio River was vital to Native American Societies such as the Hopewell and Shawnee. The fertile lands along the Ohio River supported agriculture and allowed diverse Native American Communities to develop. The river served as a cultural and economic hub and encouraged the exchange of goods, ideas, and alliances among Native American tribes in the region. The Ohio River Valley witnessed the rise and fall of complex societies and the development of unique cultural traditions among Native American societies before the arrival of Europeans.
Politics and Power
The following APUSH Terms and Definitions fall under the theme of Politics and Power. These Terms are listed in alphabetical order.
Chiefdom Organization
Chiefdoms were a form of socio-political organization in many Native American Tribes prior to European contact. They were led by hereditary chiefs who held power and authority over a collection of villages. Chiefdoms demonstrated social stratification, with commoners required to provide tribute and labor to the chief and nobility. They engaged in the redistribution of resources and organized trade. Chiefdoms were not as centralized or hierarchical as states or empires.
City-Based Empires
City-Based Empires were Native American Societies characterized by the presence of large urban centers with centralized political, economic, and religious institutions. Examples include the Maya City-States of Mesoamerica and the Inca Empire in the Andes. Complex social hierarchies and monumental architecture were found in City-Based Empires.
This mural by Diego Rivera depicts the Aztec city of Tenochtitlan. Image Source: Wikipedia.
Fertility Cults
Fertility Cults were religious practices or belief systems centered around fertility and the abundance of the land. Native American Societies, including the Mississippian Culture and some Mesoamerican Civilizations, practiced fertility rituals in hopes of ensuring successful agriculture and reproduction.
Long-Distance Exchange Networks
Extensive trade routes and long-distance exchange networks developed among Native American civilizations prior to European contact. Goods like precious stones, seashells, obsidian, turquoise, copper, and decorative feathers traveled thousands of miles across North America and Mesoamerica along trade routes. In the Eastern Woodlands, tribes traded corn and furs for marine shells from the Atlantic and Gulf coasts. In the Southwest, an interregional network spanned 2,000 miles linking major Puebloan centers. Mesoamerican Cultures exchanged jade, cacao, and cotton textiles within empires and with other polities. Trade fostered cultural diffusion while supporting economic prosperity, specialization, and complex societies before 1492.
Matrilineal Kinship Systems
Many Native American tribes practiced Matrilineal Kinship Systems before European arrival, meaning they traced descent through the mother’s lineage. This meant children were considered born into their mother’s clan, not the father’s. Property and inheritance rights passed through the maternal line. Matrilineal kinship produced relatively equal status for women and men. Clan mothers and elder women held authority over selecting chiefs in some matrilineal tribes.
Pottery
Pottery refers to the art and craft of creating ceramic vessels, containers, and other objects through the shaping and firing of clay. Native American cultures throughout the Americas developed diverse pottery traditions, using different techniques, styles, and designs. Pottery served functional and ceremonial purposes and provided insight into cultural practices and artistic expressions.
Trade Networks
Trade Networks were essential in pre-Columbian Native American societies, facilitating the exchange of goods, ideas, and cultural practices. These networks connected various regions, such as Mesoamerica, the Andes, and the Great Lakes, encouraging economic interdependence and the spread of knowledge.
Tribute Systems
In Mesoamerican and some North American societies before European contact, Tribute Systems obliged conquered peoples to pay regular taxes or gifts to dominant regional powers. Goods given as tribute included maize, cotton, cacao, jewels, cloth, wood, rubber, and live animals. Failure to pay tribute could prompt military raids. The Aztecs, Maya, and Inca Civilizations relied heavily on tribute to support their cities and armies as they expanded. Chiefs also demanded tribute from subordinated tribes. Tribute energized trade, promoted the circulation of luxury goods, and allowed political centralization and specialization of labor. Tribute Systems also reinforced social hierarchies, fueling tensions between elite rulers and commoners that erupted after the arrival of the Spanish.
Wampum
Wampum consisted of beads fashioned from quahog clam shells by Native Americans of the Northeastern Woodlands and Iroquoian Tribes. White and purple wampum beads were strung together into belts and sashes that served vital economic, diplomatic, and symbolic functions. Wampum belts were made to record treaties, convey messages, and preserve oral histories. The custom of gift exchange using wampum belts fostered trade and political alliances between tribes. Strings of wampum also served as a form of currency for commercial transactions. The use and significance of wampum belts endured even after European contact.
Culture and Society — Lifestyle
The following APUSH Terms and Definitions fall under the theme of Culture and Society. These Terms are listed in alphabetical order.
Communal Living
Communal Living refers to a social structure in which individuals live and work together in shared spaces and resources. Many Native American Societies, including the Pueblo People and certain Plains Indian tribes, practiced communal living, emphasizing cooperation, collective decision-making, and the sharing of food and shelter.
Desert Culture
Desert Culture is a term that refers to the unique adaptations and cultural practices developed by Native American Societies inhabiting arid regions, such as the American Southwest. These cultures relied on innovative techniques for water management, agriculture, and the utilization of local plant and animal resources.
Hunters and Gatherers
Hunters and Gatherers were Native American Societies that relied on hunting game, fishing, and gathering wild plants as their primary means of subsistence. Before the development of agriculture, many Native American Groups, such as the Paleo-Indians and Archaic cultures, followed this nomadic lifestyle across various ecological zones.
Nomadic Lifestyle
A nomadic lifestyle is characterized by a lack of permanent settlements, with individuals or groups constantly moving in search of resources and following migratory patterns. Some Native American Tribes, such as the Plains Indians and certain Great Basin Tribes, led Nomadic Lifestyles, relying on hunting, gathering, and seasonal migrations for sustenance. Portable housing like Tipis allowed for mobility.
Work, Exchange, and Technology — Homes and Dwellings
The following APUSH Terms and Definitions fall under the theme of Work, Exchange, and Technology. These Terms are listed in alphabetical order.
Adobe and Masonry Homes
Adobe and Masonry homes were prevalent in Native American Societies before European contact, particularly in the American Southwest. These structures were made of clay, sand, and straw (adobe) or stone (masonry) and provided durable shelter against the region’s extreme weather conditions.
Cliff Dwellings
Cliff Dwellings were intricate structures built into the sides of cliffs or caves, particularly found in the American Southwest. Native American Societies, like the Anasazi, constructed these dwellings for shelter and defense, utilizing the natural landscape to their advantage.
Longhouses
Longhouses are large, communal dwellings that were constructed by some Native American Cultures in the Northeastern Region of North America. Typically built of wood, with walls made of bark or other materials, they served both residential and ceremonial purposes. Longhouses were usually located in areas with access to water and other resources, near important cultural and economic centers. They were rectangular in shape, could reach several hundred feet in length, and were divided into separate compartments or rooms, each used for a specific purpose, such as sleeping, cooking, or storage.
Pueblos
Pueblos are large communal dwellings that were constructed by the Anasazi, Hohokam, and other Native American cultures in the southwestern part of North America. Pueblos were typically built of stone or adobe and were used for both residential and ceremonial purposes. They were often located in areas with access to water and other resources and near other important cultural and economic centers.
Tipis
Tipis were a form of portable shelter used by various Native American Nomadic Tribes on the Great Plains prior to European arrival. They consisted of animal hide or bark sheets wrapped around wooden poles to create a cone-shaped, free-standing dwelling. Tipis could be quickly dismantled and carried to new camps. Their versatile structure allowed interior fires for heating and cooking. Plains Tribes like the Sioux and Cheyenne used Tipis.
This photo of an Arapaho Camp shows Tipis in the background. Image Source: National Archives.
Work, Exchange, and Technology — Agriculture and Food Production
The following APUSH Terms and Definitions fall under the theme of Work, Exchange, and Technology. These Terms are listed in alphabetical order.
Chinampas
Chinampas were artificial islands used for agriculture by Mesoamerican civilizations like the Aztecs. They consisted of plots of land built in shallow lake waters, separated by canals. Chinampas were highly productive, enabling the cultivation of crops and supporting urban centers.
Maize Cultivation
Maize Cultivation refers to the practice of growing maize, or corn, a staple food in many parts of the world. Maize was a key crop in the agriculture of many Native American cultures, and it played a significant role in the cultural and economic development of the Americas. Maize was also an important export crop for European colonists in the Americas, and it helped to establish trade links between the New World and the Old World.
Sedentary Farming
Sedentary Farming refers to the agricultural practice of cultivating crops in a fixed location over an extended period. Native American societies, such as the Mississippian Culture and the Hohokam in the American Southwest, practiced Sedentary Farming, relying on maize, beans, and squash as Staple Crops.
Staple Crops
Staple Crops are the main agricultural crops cultivated by Native Americans prior to European contact including maize (corn), beans, squash, and sunflowers. These plants were grown together in a sustainable cropping system that provided balanced nutrition. Maize was the primary staple crop, providing carbohydrates, while beans added protein and squash provided vitamins. Together, corn, beans, and squash are known as the “Three Sisters.”
Terrace Farming
Terrace Farming is a method of agriculture that involves constructing stepped, horizontal platforms on hillsides or mountainsides. Native American civilizations like the Inca in the Andes Mountains used Terrace Farming to maximize arable land, prevent soil erosion, and cultivate a variety of crops in difficult terrain.
Three Sisters
The Three Sisters refers to three crops that were traditionally grown together by Native American Cultures in the Americas. The Three Sisters were maize (corn), beans, and squash, and they were often grown together in a system of companion planting, which is called the Three Sisters Cropping System.
Three Sisters Cropping System
The Three Sisters Cropping System refers to the interplanting of maize (corn), beans, and squash by Native American Civilizations. Corn provided carbohydrates and structure for bean vines to climb. Beans added protein through nitrogen fixation in the soil. Squash covered the ground to retain moisture and deter weeds/pests. Planting the three crops together produced greater yields in a sustainable agricultural system.
Culture and Society — Cities
The following APUSH Terms and Definitions fall under the theme of Culture and Society. These Terms are listed in alphabetical order.
Cahokia
Cahokia was a Native American City located in present-day Illinois. It was the largest and most influential city in the Mississippian Culture, which flourished in the Southeastern Region and Midwestern Region of the United States from about 1000 AD to the arrival of Europeans in the 16th century. Cahokia was located near the confluence of the Mississippi, Missouri, and Illinois Rivers, and was home to a complex society with advanced architecture, a sophisticated system of governance, and thriving Trade Networks.
Machu Picchu
Machu Picchu is an ancient Inca city located in the Andes Mountains of Peru. Built around the 15th century, it is renowned for its remarkable architecture, engineering, and stunning natural surroundings. This “Lost City of the Incas” was constructed with intricately cut stone blocks, showcasing the Inca’s advanced construction techniques. Situated at a high altitude, Machu Picchu served as a sacred and ceremonial site, possibly for Inca rulers. Machu Picchu was rediscovered in 1911 and has since become a renowned archaeological site, offering invaluable insights into the Inca civilization and their mastery of architecture and urban planning.
Tenochtitlan
Tenochtitlan was the capital city of the Aztec Empire and one of the largest cities in the world during the 15th century. Located on an island in Lake Texcoco in present-day Mexico City, Tenochtitlan was a magnificent urban center characterized by advanced infrastructure, impressive architecture, and a thriving population. The city was carefully planned and constructed with a network of canals, causeways, and stone buildings. It served as the political, economic, and cultural hub of the Aztec civilization, with grand temples, palaces, markets, and public spaces.
Tula
Tula was a major urban center and archaeological site that flourished in central Mexico during the Early Postclassic Period between 900–1150 CE. It was the capital of the Toltecs, who created an empire after the decline of the Mayans and Teotihuacan. Tula was a hub for politics, economics, and culture in the region. The city featured monumental architectural structures like twin temple-pyramids, large platforms, courts for ceremonial ballgames, and columned halls. Intricately carved stone warriors and chacmool figures have been excavated. Tula held extensive influence over Trade Networks and the exchange of ideas across Mesoamerica before its mysterious abandonment in the 12th century CE. Its art and architecture influenced later civilizations like the Aztecs.
Culture and Society — Peoples, Tribes, Societies, Civilizations
The following APUSH Terms and Definitions fall under the theme of Culture and Society. These Terms are listed in alphabetical order.
Adena-Hopewell Culture
The Adena-Hopewell Culture was a culture that emerged in the eastern part of North America in the Woodland Period (1000 BC–1000 AD). The Adena-Hopewell Culture was a Native American Indian Culture known for its elaborate burial mounds, which were used to bury tribal leaders and other important individuals. The Adena-Hopewell Culture is also known for its advanced agriculture, trade, and metalworking skills.
Algonquin People
The Algonquin People are Native American People who traditionally lived in the northeastern part of North America. The Algonquins were a diverse group of tribes who spoke a variety of related Algonquian languages and were known for their extensive Trade Networks and their sophisticated system of governance. The Algonquin People were also known for their skilled use of the bow and arrow, as well as their mastery of canoe-building and other technologies.
Anasazi Culture
The Anasazi Culture — also known as the Ancestral Puebloans — refers to a Native American Civilization that lived in the southwestern United States, particularly in present-day Arizona, Colorado, New Mexico, and Utah. The Anasazi Culture existed from around 200 BCE to approximately 1300 CE. Known for their masonry architecture, including Cliff Dwellings and multi-story Pueblos, the Anasazi Culture cultivated maize, beans, and squash, utilizing advanced agricultural techniques. They also developed intricate pottery, crafted textiles, and engaged in Trade Networks across the region.
Apache People
The Apache People were a group of Native American Tribes who inhabited the Southwestern Region of the United States. Known for their skilled warriors and adaptability to diverse environments, the Apache had a semi-nomadic lifestyle, relying on hunting, gathering, and trading within the region. Apache People still exist today and live on reservations in Arizona and New Mexico.
Archaic Culture
The Archaic Culture emerged in North America from around 8000–1000 BCE. As megafauna died off, Archaic Peoples transitioned to foraging plant foods and hunting smaller game. They invented new tool types, like grinding stones to process seeds and nuts. Increased population density led to regional diversity in subsistence patterns, art, and trade items. Archaic Peoples pioneered Long-Distance Exchange Networks. They became more sedentary with base campsites, developing early Pottery and wooden tools. Their environmental knowledge allowed them to maximize resources. The Archaic Period set the stage for the rise of agriculture and complex societies in later time periods.
Aztecs
The Aztecs were a Native American Civilization that emerged in the Valley of Mexico in the Late Postclassic Period (1200–1500 AD). The Aztecs are known for their highly developed system of agriculture, trade, and architecture, including the construction of large ceremonial centers, such as Tenochtitlan. The Aztecs are also known for their sophisticated system of governance, which was based on a system of provinces and administrative districts, and for their advanced system of roads and communication.
Blackfoot People
The Blackfoot People are a Native American Tribe comprising several closely related groups, including the Siksika, Kainai, and Piegan. Located in the Great Plains Region, primarily in present-day Montana and Alberta, Canada, the Blackfoot followed a Nomadic Lifestyle, tracking the seasonal migrations of buffalo herds. They were known for their horsemanship, warrior traditions, and Communal Living in Tipis. The Blackfoot had a strong spiritual connection to the land and engaged in spiritual ceremonies, such as the Sun Dance. They also maintained Trade Networks with neighboring tribes and European settlers.
Cherokee People
The Cherokee People are a Native American Tribe traditionally inhabiting the southeastern United States, primarily in present-day Georgia, North Carolina, Tennessee, and Alabama. The Cherokee developed an agricultural society, cultivating crops such as maize, beans, and squash. They established towns, engaged in trade, and developed a complex political system with a central government led by a principal chief. The Cherokee had a rich cultural heritage, including storytelling, traditional dances, and a syllabic writing system known as the Cherokee syllabary. Despite facing forced removal and the Trail of Tears in the 1830s, the Cherokee people persevered and continue to maintain their cultural identity and sovereign status.
John Ross was a Cherokee leader in the 1800s. Image Source: Wikipedia.
Chinook People
The Chinook People were Native American Tribes inhabiting the Pacific Northwest Region, specifically the area around the Columbia River. They were skilled fishermen, traders, and expert canoe builders, relying on the abundant natural resources of the region for sustenance and economic activities. The Chinook People spoke Chinookan languages, and a related trade language, often called Chinook Jargon, was widely used by Indigenous peoples in the region for trade and communication. Chinook is now considered a critically endangered language, with only a small number of speakers remaining.
This painting by Charles Marion Russel depicts the Lewis and Clark Expedition meeting Chinooks on the Lower Columbia, in October 1805. Image Source: Wikipedia.
Choctaw People
The Choctaw People are a Native American Tribe originally from the Southeastern United States, primarily in present-day Mississippi, Alabama, Louisiana, and Florida. The Choctaw had a Sedentary Agricultural Society, cultivating maize, beans, and squash, as well as engaging in hunting and gathering. They developed complex social and political structures, with Matrilineal Kinship Systems and a Chiefdom Organization. The Choctaw had a rich cultural heritage, including traditional stickball games, the Green Corn Ceremony, and a distinct language.
Chumash People
The Chumash People were indigenous to the central and southern coastal regions of present-day California. They established thriving communities based on fishing, hunting, and gathering, utilizing advanced maritime technology and engaging in extensive Trade Networks with neighboring tribes.
Creek People
The Creek People — also known as the Muscogee — are a Native American Tribe originating from the Southeastern United States, primarily in present-day Alabama, Georgia, and Florida. The Creek developed an agricultural society, cultivating crops such as maize, beans, and pumpkins. They established towns with communal buildings and developed a complex political system, including a confederacy composed of various Creek tribes. The Creek engaged in trade with neighboring tribes and European settlers, and their society featured Matrilineal Kinship Systems. The Creek people had a rich cultural heritage, including traditional dances, storytelling, and the Creek language.
Delaware People
The Delaware People — also known as the Lenape — are a Native American Tribe originally from the Northeastern United States, primarily in present-day Delaware, New Jersey, Pennsylvania, and New York. The Delaware Society featured a Matrilineal Kinship System, Communal Living, and a Decentralized Political Structure. The Delaware were skilled hunters, fishermen, and farmers, cultivating crops such as maize, beans, and squash. The Delaware People had a rich spiritual and cultural life, including ceremonies, storytelling, and traditional crafts. They maintained Trade Networks with other Native American Tribes and European settlers, playing significant roles in early colonial history, including interactions with William Penn and the founding of Pennsylvania.
Eastern Indians
The Eastern Indians are the Native American Tribes that lived in the eastern part of North America, including the Southeast, Northeast, and Great Lakes regions. These tribes had a diverse range of cultures and languages and were often organized into complex societies with advanced systems of governance and trade. Many Eastern Indian Tribes had long-standing trade and diplomatic relationships with European powers, and their interactions with Europeans played a significant role in the early history of the region. Prominent Eastern Indian tribes include the Cherokee, Creek, Choctaw, and Iroquois.
Great League of Peace
The Great League of Peace was a political and military alliance formed in the early 16th century by the Haudenosaunee (Iroquois) Confederacy, a group of Native American Tribes located in what is now upstate New York. The league was formed to promote peace and cooperation among the tribes and to establish a system for resolving conflicts. It also provided a way for the tribes to unite against common enemies, such as the European colonists who later began to settle in the region.
Hohokam Culture
Hohokam Culture emerged in the Southwestern Region of North America in the Prehistoric Period (1000 BC–1500 AD). The Hohokam Culture is known for its complex system of irrigation, which allowed them to grow crops in an otherwise dry and arid region. The Hohokam culture is also known for its highly developed system of trade, and for the construction of large communal dwellings, known as pueblos.
Hopewell People
The Hopewell people were a Native American Culture that flourished in the Midwest and Eastern Woodlands Regions of North America from 200 BCE to 500 CE. Known for their elaborate burial mounds and extensive Trade Networks, the Hopewell created a complex society characterized by ceremonial centers and artistic expression.
Incas
The Incas were a Native American Civilization that emerged in the Andes Mountains Region of South America in the Late Intermediate Period (1000–1400 AD). The Incas are known for their systems of agriculture, trade, and architecture, including the construction of large ceremonial centers, such as Machu Picchu and Cusco. The Incas are also known for their system of government, which was based on provinces and administrative districts, and for their advanced system of roads and communication.
Iroquois Confederation
The Iroquois Confederation was a political and military alliance of Native American Tribes that formed in the northeastern part of North America in the 15th and 16th centuries. The Iroquois Confederation — also called the Iroquois League or the Five Nations — was composed of five tribes: the Mohawk, Oneida, Onondaga, Cayuga, and Seneca. The Tuscaroras joined in 1720. Afterward, the Confederation was also referred to as the Six Nations. The Iroquois Confederation was known for its sophisticated system of governance, which was based on the concept of a Great Law of Peace.
The Iroquois Confederation played a significant role in the Fur Trade during the Colonial Era, including the Beaver Wars.
The Iroquois Confederation also formed an alliance with the British and participated in the Albany Congress in 1754.
One of the Iroquois Confederation’s important leaders was Theyanoguin, also called King Hendrick.
This rough engraving depicts King Hendrick. Image Source: Wikipedia.
Mayas
The Mayas were a Native American Civilization that emerged in the tropical rainforests of Central America in the Preclassic Period (2000 BC–250 AD). The Mayas are known for their systems of agriculture, trade, and architecture, including the construction of large ceremonial centers, such as Tikal and Copan. The Mayas are also known for their sophisticated system of writing and their advanced knowledge of mathematics and astronomy.
Mississippian Culture
The Mississippian Culture was a Native American civilization that emerged in the Mississippi River Valley and the southeastern United States from approximately 800 CE to 1600 CE. Known for their ceremonial centers, mound-building, and extensive Trade Networks, the Mississippian People exhibited social complexity and urban development.
Paiute People
The Paiute People are Native American Tribes that historically inhabited the Great Basin Region of the western United States. These semi-nomadic tribes relied on hunting, gathering, and seasonal migrations to adapt to the arid environment of the region, developing a rich cultural heritage.
Paleo-Indian Culture
The Paleo-Indian Culture refers to the earliest human inhabitants of the Americas, from around 40,000–10,000 years ago. They are characterized by a Nomadic Hunting and Gathering Lifestyle, following migrating wild game and plant foods. Signature artifacts include fluted Clovis projectile points used for big-game hunting. Paleo-Indians traveled in small bands, with low population density across North America. They hunted large animals like mammoths, mastodons, and ancient bison, species that eventually went extinct. Over time, the climate warmed as the Ice Age ended, causing environmental adaptations by these first Native Americans.
Plains Indians
Plains Indians were Native American Tribes who inhabited the Great Plains Region of North America, stretching from present-day Canada to Texas. Tribes like the Lakota, Cheyenne, and Comanche lived a nomadic lifestyle, relying on buffalo hunting, horsemanship, and communal living.
Powhatan Confederacy
The Powhatan Confederacy was a Native American political alliance that existed in the 17th century in the region now known as Virginia. Led by Chief Powhatan, the confederacy encompassed numerous Algonquian-speaking tribes, including the Pamunkey, Mattaponi, and Chickahominy. The confederacy’s territory covered the coastal plains and rivers of present-day Virginia, with its political and cultural center located near the Jamestown Settlement.
Following the arrival of the English in the Tidewater Region and the establishment of Jamestown, three wars, known as the Anglo-Powhatan Wars, took place between the colonists and the confederacy.
Pueblo People
The Pueblo People are Native American Tribes inhabiting the Southwestern United States, including present-day New Mexico and Arizona. Known for their distinctive Adobe Dwellings and intricate Pottery, the Pueblo Tribes, such as the Hopi and Zuni, cultivated Maize and developed complex social and religious systems.
Shoshone
The Shoshone are Native American Tribes that traditionally resided in the Great Basin Region of the western United States. Skilled horse riders and hunters, the Shoshone adapted to the arid environment, relying on seasonal migrations, gathering, and trading to sustain their communities.
Sioux
The Sioux — whose divisions include the Lakota, Dakota, and Nakota — are a Native American people who traditionally lived in the Great Plains Region of the United States. The Sioux were a highly organized and powerful group, known for their elaborate social, political, and religious systems. They also had a highly developed system of trade and commerce and were known for their skilled horsemanship, using horses for transportation, hunting, and warfare. The Sioux were a deeply spiritual people, and their traditional belief system was centered around the concept of the Great Spirit and the interconnectedness of all things. Today, the Sioux continue to play a significant role in the cultural and political life of the Great Plains Region, and they are recognized as a sovereign nation by the United States government.
Toltecs
The Toltecs were a civilization that dominated central Mexico and influenced much of Mesoamerica from the 10th to 12th centuries CE. Based at their capital of Tula, the Toltecs rose to prominence in the Early Postclassic Period after the fall of the great Classic Period civilizations. Skilled artisans and architects, the Toltecs made advances in urban planning, agriculture, and crafts. Their mythic founder, Quetzalcoatl, spread a cult seeking peace and knowledge. Toltec arts like pottery, obsidian tools, and carved stone boxes and columns incorporated motifs like feathered serpents, jaguars, and warriors.
Ute People
The Ute People are Native American Tribes that historically inhabited the Great Basin Region and Rocky Mountain Region of the Western United States. The Ute People developed a unique culture, adapting to diverse environments through hunting, gathering, and trading, and playing a significant role in the Fur Trade Era.
Wampanoag People
The Wampanoag People are a Native American Tribe residing in the Northeastern Region of the United States, particularly in present-day Massachusetts and Rhode Island. Prior to the arrival of Europeans, the Wampanoag People had a rich cultural heritage, engaging in agriculture, hunting, and fishing. They later played a crucial role in the early interactions between Native Americans and European settlers, most notably with the Plymouth Colony.
Woodland Mound Builders
Woodland Mound Builders were a group of Native American Cultures that emerged in the eastern part of North America in the Woodland Period (1000 BC–1000 AD). The Woodland Mound Builders are known for their elaborate burial mounds, which were used to bury leaders and other important people. The Woodland Mound Builders are also known for their advanced agriculture, trade, and metalworking skills.
Early agriculture in the Americas (Chapter 20) - The Cambridge World History. Source: https://www.cambridge.org/core/books/cambridge-world-history/early-agriculture-in-the-americas/805F8932E0809046D9721932A893804A
Summary
Seasonal environments, especially forests and forest fringes, were key habitats for domestication in the Americas. Based on available data, plant domestication in the Americas was characterized by multiple, independent domestications of species in useful genera in North, Central, and South America. This chapter considers this pattern for pseudocereals, legumes, chiles, squashes, tobacco, cotton and a number of fruit trees. Plants needed for nutritionally balanced meals were domesticated multiple times in diverse settings. The early history of plant domestication begins in lower Central America and northwestern South America and is known in large part from microfossil evidence. Agriculture led to landscape transformations in the Americas, the scale of which varied across time and place. Fire was an important, early management tool. Other practices that changed landscapes include management of water and soil. Native agriculturalists in the Americas also practise crop rotation, sequential planting and fallowing.
Crops of the Americas and the geography of domestication
At the time of European contact in the fifteenth century ce, millions of people throughout the Americas lived in agriculturally based societies. Some practices and foods were millennia old; others coalesced late in prehistory. Domestication and agriculture did not arise everywhere, however, nor did all societies take an agricultural route. Distributions of wild ancestors, genetic studies of American crops, and archaeology give us snapshots of the geography of plant domestication (Figure 20.1) and of the diversity of crops grown (Table 20.1).
Figure 20.1 Likely areas of origin for selected crops of the Americas.
Today our understanding of the ancestry and areas of origin of a few crops is quite good, while for others little has changed since the 1970s, when one of the first comprehensive overviews, Crops and Man, was written by Jack Harlan.Footnote 1 Economically important crops are more likely to have been studied by agronomists and plant geneticists. For example, decades of debate and study of the only widespread American grain, maize, and its related wild species leave little doubt that wild Zea mays subsp. parviglumis, Balsas teosinte, gave rise to maize in a single domestication.Footnote 2 Balsas teosinte grows today in the deciduous tropical forests of southern and western Mexico, making this region the likely area of origin. How early Native farmers achieved the dramatic transformation of teosinte to maize is still being studied, but it was likely a process of a few thousand years that combined conscious and unconscious selection targeting seed dispersal, seed size, photoperiod, and starch production. Of the lowland root and tuber crops (arrowroot, cocoyam, llerén, manioc, sweet potato, yam), only productive and undemanding manioc is well studied. The primary staple crop for millions of people worldwide, mostly the poor of tropical countries, manioc was domesticated from Manihot esculenta subsp. flabellifolia on the southern border of the Amazon basin.Footnote 3 For many other crops, from fruit trees to roots and tubers to pseudocereals, we know only the broad geographic range of their likely area of origin. Seasonal environments, especially forests and forest fringes, were key habitats for domestication in the Americas.Footnote 4
More basic research, especially collection of wild related species and traditional crop varieties, is needed to understand the ancestry of many crops, and the pace of extinction, habitat loss, and loss of indigenous knowledge is accelerating. Based on available data, plant domestication in the Americas was characterized by multiple, independent domestications of species in useful genera in North, Central, and South America.Footnote 5 We see this pattern for pseudocereals, legumes, chiles, squashes, tobacco, cotton, and a number of fruit trees. Plants needed for nutritionally balanced meals were domesticated multiple times in diverse settings. For example, where land can be farmed in the Americas, there is a domesticated legume or pulse to thrive there, from peanuts, adapted to moist lowland environments, to the cold-tolerant lupine and the versatile common bean.
Tracing the domestication of root and tuber foods is especially challenging, since only manioc and potato are well studied. Each of those crops emerged in a single region/centre. Whether this pattern characterizes root and tuber domestication in general is unknown; archaeological data hint at multiple domestications (or the very early spread) of some root and tuber crops. Better understanding of the geography of plant domestication could provide valuable insights into the nature of early social networks in the Americas.
The early history of domestication
The early history of plant domestication begins in lower Central America and northwestern South America (Map 20.1), and is known in large part from microfossil evidence (phytoliths, starch grains, pollen). Human occupation of the neotropics began in the late Pleistocene, and by 10,900–9400 bce people occupied diverse environments and in some cases modified them by fire.Footnote 6 Burning of forests and small-scale land clearance is dated to 11050 bce* at Lake La Yeguada in Panama, for example. Arrowroot was the earliest domesticate there, dating to 7800 bce* at the Cueva de los Vampiros site and 5800 bce* at Aguadulce. By 5800 bce* maize and gourd were introduced to Panama and llerén and squash were present, and manioc was introduced shortly thereafter.
Map 20.1 Early agricultural sites and regions in the Americas.
Plant domestication began before 8500 bce in southwest coastal Ecuador. Squash phytoliths were recovered from terminal Pleistocene and early Holocene strata at Vegas sites.Footnote 7 Phytoliths recovered from the earliest levels are from wild squash, with domesticated-size squash phytoliths directly dated to 9840–8555 bce.Footnote 8 Other Vegas crops included gourd and llerén, and maize was introduced just before 5800 bce*. Maize continued to be grown at Real Alto and Loma Alta, two Valdivia tradition farming villages (4500–2250 bce), along with cotton, jack bean, achira, manioc, chile pepper, llerén, and arrowroot.Footnote 9 Agriculture in coastal Ecuador remained broad-based for many millennia, incorporating wild/managed tree fruits as well as annual crops.Footnote 10
Domesticated arrowroot dates back to 9250–8500 bce* at the San Isidro site in the Colombian Andes, where starch was identified on a pounding tool.Footnote 11 Palms and avocado were also present, but whether domesticated or wild/managed is unknown. Pollen records documented maize in association with forest clearance and disturbance beginning at 7250 bce* in one core, and in several sequences from 5500 bce* and after. Palm and domesticated squash, llerén, and gourd were directly dated to 8250–6500 bce* at the Peña Roja site in eastern Colombia.
At sites in the Nanchoc valley of northern Peru, initial direct dates on domesticates with primitive morphologies, including manioc and peanut, were modern, but new direct dates document squash at 8283 bce, peanut at 6538 bce, and cotton at 4113 bce, and confirm early occurrence of manioc.Footnote 12 Starch from bean and pacae seeds, squash flesh, and peanut was recovered from dental calculus of teeth dating from 7163–5744 bce.Footnote 13 Domestication may be equally ancient in the central Peruvian sierra, but dating ambiguities exist for important sites. Oca, chile pepper, lucuma, and common and lima beans were recovered from Guitarrero Cave in strata dated 9250–8500 bce*, but beans were directly dated as much younger.Footnote 14 Several root crops were recovered from Tres Ventanas Cave in equally ancient strata, but one was directly dated to 5800 bce*. Domestication of a diverse array of local Andean tubers, pulses, and quinoa was likely underway before 5800 bce*.Footnote 15
The best-known data on early domestication in Mesoamerica come from two caves, Coxcatlán and Guilá Naquitz, each located in the semi-arid highlands of central Mexico. The excellent preservation of crop remains in these dry sites, and in the case of Coxcatlán, its historically early excavation and thorough publication, have long influenced perceptions of the history of plant domestication. Maize, squash, and bottle gourd first appear at Coxcatlán during the Coxcatlán phase, 5800–4400 bce*; by the end of the phase, tree fruits were present whose dispersal and maintenance depended on humans.Footnote 16 Another squash, common bean, tepary bean, and chile pepper appeared over the next 2,000 years. The supposed antiquity of domesticates in the Coxcatlán sequence has largely not stood up to direct dating of crop remains, however. Coxcatlán-phase maize was directly dated to 3600 bce, common beans 300 bce, and tepary beans 440 bce.Footnote 17 Only bottle gourd was as ancient as expected from site stratigraphy. Guilá Naquitz Cave also documents early domesticated squash, maize, chile pepper, and bottle gourd.Footnote 18 Squash (Cucurbita pepo) was directly dated to 8000–6000 bce and maize to 4250 bce.Footnote 19
Recent research at Xihuatoxtla shelter in the central Balsas River valley, southwest Mexico, has now documented early maize in the dry tropical forest setting of its wild ancestor.Footnote 20 Maize phytoliths were recovered from site sediments, and maize starch and phytoliths from grinding stones, dating to 6700 bce. Domesticated squash was also present. From the Balsas region maize spread first through the lowlands; it is documented, for example, at 5100–5000 bce in a sediment core on the Gulf Coast, and maize pollen and/or phytoliths document the crop in southern Pacific coastal Mexico, Pacific coastal Guatemala, northern Belize, and Honduras by 3500 bce.Footnote 21 Maize was carried south through the tropical lowlands prior to this time, however, as it is documented earlier in Panama, Colombia, and Ecuador.
Early domestication in the Americas took place in the context of changing climatic conditions, namely increasing warmth and moisture.Footnote 22 The earliest crop records in the northern tropics fall within the northern thermal maximum (8500–3400 bce), a period wetter than present, some prior to a reversal to colder, drier conditions (6300–5800 bce: maize, arrowroot), others at the end or shortly after that reversal (squash, llerén, manioc). Domestication was earlier in the southern tropics, with arrowroot, llerén, squash, and gourd present before the southern thermal maximum (8000–5500 bce). The list of domesticates in the southern tropics expands greatly during the thermal maximum and the millennia during which ENSO (El Niño – Southern Oscillation) was weak (6800–3800 bce: maize, peanut, cotton, Phaseolus, jackbean, achira, manioc, chile, potato). Domesticates were moved during this warm interval: for example, maize from west Mexico into Central and South America, and manioc from the southern edge of the Amazon to Peru (with peanut), Ecuador, and Panama. Too little is known of the areas of origin of many crops to trace early movements; many early finds are starch or phytolith residues from artefacts, and artefacts of comparable ages have not been studied from possible areas of origin.
In temperate North America, the history of plant domestication begins between 3200 and 1785 bce, when native squash, chenopod, marshelder, and sunflower were domesticated in the Eastern Woodlands, and maygrass, erect knotweed, little barley, and giant ragweed were grown and moved outside their native ranges.Footnote 23 Variation exists in the relative importance of native crops and wild plants, with American Bottom populations (Mississippi floodplain near St Louis) producing the largest quantities of native crops over the longest time period. Acorn use was often higher in regions with less reliance on native crops.Footnote 24 Maize was incorporated into indigenous crop husbandry in the Eastern Woodlands around 300 bce.Footnote 25 For the better part of a millennium maize was one food in a broad diet, until its transformation into a staple crop between 800 and 1200 ce. Directly dated maize macro-remains and cooking residues place the crop in the Midwest and Northeast at about the same time.Footnote 26
Maize was introduced from Mexico into the Southwest by 1600 bce or somewhat earlier, just prior to the late Archaic or early Agricultural period (1500 bce to 0–500 ce), and by the end of the period had transformed foodways based on native plants.Footnote 27 With the widespread adoption of maize came substantial habitations with storage features. Maize, beans (common and tepary), and pepo squash form the core of Southwestern agriculture, with cotton and bottle gourd also introduced early, and other beans and squashes later arrivals. Wild native annuals, commonly used during the early middle Archaic (prior to 1500 bce), remained a component of diet through the Pueblo IV/Classic period.Footnote 28
In the Great Plains, during the Archaic (3500–500 bce) plant foods included native annuals, fleshy fruits, nuts, roots, and grasses.Footnote 29 The earliest directly dated domesticates are squash (2218–2142 bce), marshelder (628–609 bce), and maize (813–878 ce), which was likely introduced earlier. Woodland populations (500 bce to 800–900 ce) were more sedentary, ceramics were introduced, and cultivated plants were increasingly used. The maize-based Plains Village tradition (900–1600 ce) developed out of this foundation. Maize was also a widespread component of diet from 700–1600 ce in the eastern Canadian prairies and adjacent boreal forests.Footnote 30
Early food-producing societies
What was life like in early mid-Holocene food-producing societies of the Americas? Several examples illustrate the range of variation that existed and also commonalities, such as reduction in mobility and emergence of villages. At Real Alto, Ecuador, one of the earliest American agricultural villages, inferences can be drawn concerning social organization, ritual activities, and emerging political complexity. Other examples of early food-producing societies will be drawn from Mesoamerica and the desert borderlands of the southwest United States and northwest Mexico.
There are many ethnographically documented combinations of domesticated, managed, and wild plants used by non-agricultural societies, i.e. those that do not depend on domesticates for a substantial part of their diet.Footnote 31 The length of time between the appearance of domesticates and agriculture is variable, with cases of a long period of low-level food production, such as in Eastern North America (4,000-year separation between domestication of native crops and dependence on maize). But it can be difficult to gauge the contribution of domesticated plants to past diet.Footnote 32 Presence of domesticates is not the same as dependence on domesticates; different kinds of food may become incorporated into the archaeological record in different ways (e.g. foods with robust inedible parts survive as charred macro-remains, tubers as starch or phytolith residues on tools). There is a tendency to assume that roots and tubers, squashes, legumes, and tree fruits were not staples in early food-producing systems, and to equate agriculture with maize as a staple crop.Footnote 33 But many root and tuber crops equal or exceed maize in caloric production, and when such resources are available, agriculture may follow quickly after domestication.
In southwest coastal Ecuador, domesticated plants are first documented during the early mid-Holocene Vegas tradition.Footnote 34 All Vegas sites but one are very small – dense scatters of lithic debris, likely associated with ephemeral structures. Site 80 is distinctively different: covering an area of over 2,000 m², it served as a base camp for the seasonally mobile population, who buried their dead there.Footnote 35 Thus, prior to the appearance of villages in coastal Ecuador, a dispersed community began to link itself to place via ancestors. One burial is distinctive: a woman interred in a small structure, suggesting an early, central role for women in community and ceremonial life. Site locations – along seasonal streams – indicate that plant cultivation had already begun to shape the interactions of people and landscape.
Following a brief hiatus, life in southwest Ecuador was transformed during the Valdivia period (4400–1400 bce).Footnote 36 Valdivia is one of the earliest ceramic traditions of the Americas, and the earliest Valdivia sites are among its first villages; by the middle Valdivia, the Real Alto site had grown into a town, one of the earliest in the hemisphere.
Household and community structure at Real Alto provide insights into Valdivia society.Footnote 37 The earliest village was small (150 m across), and circular or U-shaped, with 12–15 houses. Houses were small (8.4 m²) single-family dwellings, giving a total population of 50–60 people. The village grew until by the end of the early Valdivia it had doubled in size, and was occupied by 150–250 people. The village plan continued to reflect a division of space into domestic (outer ring of houses) and public (interior plaza) domains; no structures for public ritual were present.
In the middle Valdivia, Real Alto grew into a town, 400 m across and U-shaped or rectangular. Houses became extended-family dwellings averaging 102 m², and the population grew to 1,800. Community structure also changed, with the construction of two ceremonial mounds (Fiesta and Charnel House mounds) facing each other across the central plaza. The mounds divided the plaza into two segments, creating several levels of potential segmentation/opposition within the community. While there is no direct evidence of who participated in and who led ceremonies, most researchers argue that ritual life at Real Alto included shamanism.Footnote 38 Shamanic practices of tropical forest agriculturalists include rituals focused on life-cycle issues of women (puberty, pregnancy), curing, and divination. Shamans also keep a community’s ceremonial calendar, and provide leadership in both the domestic and sacred realms.
Over time at Real Alto, distinctive rituals became formalized within structures built on platform mounds, two to four social groups existed in the town, and family structure changed to extended families, with house clusters suggesting increased emphasis on descent group.Footnote 39 Two sizes of extended-family house existed, indicating differences in relative social standing of households, but there was no evidence of differential access to resources. Neither were there differences in grave goods, but some individuals were treated differently after death.Footnote 40 Most individuals were buried next to or in wall trenches of domestic structures. The Charnel House mound had a concentration of burials in a very different context: an adult female was buried in a tomb under the threshold, with nearby male and juvenile burials within the structure. The inference is burial of a woman in the apical role for a corporate kin group. The Fiesta mound also represented a distinctive context: large pits within a series of paired structures contained evidence of feasting activities, such as broken drinking vessels and exotic seafood. The inference is social competition through feasting: ritual to attract and retain group members.
The transformation of Real Alto from village to town during the middle Valdivia period represented significant changes in social relationships, and ritual feasting, led perhaps by shamans, helped create and maintain the new social order. While there are hints of differences in social standing, access to more labour (acquired by attracting followers through feasting), rather than prestige goods, seems to be the key element of status differences: labour to put more fields into production, to water long-growing root crops during the dry season, or to grow an extra maize crop in an albarrada (water catchment feature).
The first farmers of western Mexico were small groups of cultivators who likely shifted settlements seasonally.Footnote 41 Xihuatoxtla shelter, where early maize was identified, was repeatedly visited by small groups who stayed for several weeks or more. They used unmodified river cobbles and stone slabs as grinding tools, and manufactured chipped stone tools. Two contemporary sites in the region lack grinding stones, suggesting that different sets of activities were carried out there. Palaeoenvironmental data from nearby lakes indicate that lacustrine environments were used by Archaic period inhabitants of the region, including, perhaps, for cultivating plants on the lake edge. Population mobility appears to decline over time with the emergence of food production in Mexico. In the highland Tehuacan valley, for example, researchers argue on the basis of site locations, numbers, and sizes that semi-sedentary camps (i.e. occupied for two or three seasons) appear in the Riego phase (7500–6000 bce*) and small sedentary sites by the Abejas phase (4500–2750 bce*).Footnote 42 By 2000 bce, sedentary villages appear widely in Mesoamerica, marking the beginning of the Formative period (2000 bce to 250 ce), which represents the time when agriculture, village life, and ceramic production came together.Footnote 43
The late Archaic through early Formative was a period of change in Mesoamerica, from sparse populations of low-level food producers to settled agriculturalists and growing populations.Footnote 44 The southwest Pacific coasts of Mexico and Guatemala provide contrasting views of life during this transition. Large shell mounds, dating to 5500–1800 bce, are highly visible Archaic sites in coastal Mexico. These sites have been interpreted as seasonal occupations of foragers who harvested estuarine resources, and perhaps used some domesticated plants. Pollen, phytolith, and charcoal data from environmental cores adjacent to sites now document that sustained slash-and-burn farming, incorporating maize, took place between 2700 and 1800 bce.Footnote 45 Farming settlements were likely located inland, away from saline and seasonally inundated soils, and now buried beneath stream alluvium. The inference is that populations of farmer-foragers with reduced mobility lived in base camps near the best agricultural land, with seasonal settlements near rich estuarine resources.
This rich coastal environment extends into Pacific coastal Guatemala, where there is a 6,000-year palaeoenvironmental record of human occupation.Footnote 46 Evidence for anthropogenic fire survives from the late Archaic, and microfossil evidence indicates that maize, squash, and cotton were cultivated and arboreal species managed by non-sedentary peoples before the appearance of the first permanent villages. The lack of late Archaic sites indicates that populations were more mobile than those of coastal Mexico; the overall scarcity of crop remains suggests a low level of food production, with fire used to encourage useful wild plants and to attract animals.
Recent research in the desert borderlands of northwest Mexico and the southwest United States indicates that Archaic populations who grew domesticated plants were less mobile than previously thought.Footnote 47 Early farming systems were very diverse in this region, incorporating flood, water-table, run-off, irrigated, dry, and rain-fed techniques, with nearly all early systems focused on alluvial lands with naturally replenished soils.Footnote 48 In southeast Arizona, for example, maize, bean, cotton, and amaranth were grown before evidence of canals, terraces, and larger and more permanent settlements appears.Footnote 49 Eventually there is large labour investment in canals and terraces, suggesting reduced mobility and increased territoriality.
There were differences in the rates at which foraging populations in the desert borderlands were transformed into farming ones. For example, the population of the Cerro Juanaqueña site in Chihuahua, Mexico, made significant investments in agriculture by 1200 bce, while the nearby Jornada Mogollon region did not undergo this transition until 1000 ce.Footnote 50 Cerro Juanaqueña is the earliest known cerros de trincheras site (complex of hilltop terraces, rock rings, and stone walls). The terraces served as living surfaces, while farming took place in the floodplain of the Rio Casas Grandes, below the site. Maize was found in 60 per cent of features, suggesting it was a dietary staple; there were also large numbers of worn grinding stones, possible domesticated amaranth, and wild chenopod and other seeds. This suggests a population that was relatively sedentary: the Rio Casas Grandes floodplain offered a lower risk and higher return rate for maize agriculture than was possible in the Jornada Mogollon region, where more mobile populations relied on productive wild resources (especially shrubs and mesquite).
Agricultural practices and domestication of landscapes
Agriculture led to landscape transformations in the Americas, the scale of which varied across time and place. Fire was an important, early management tool. Other practices that changed landscapes include management of water (through irrigation, water catchment features, construction of raised and ditched fields) and soil (through terracing, formation of black earths, fallow regimes).
By the time of European contact, anthropogenic landscapes existed throughout the Americas. For example, the Gulf Coast and piedmont of Mesoamerica, where Cortés came inland, formed a productive patchwork of cultivation interspersed with managed forests and scrublands.Footnote 51 Well-drained lands (hill slopes and constructed terraces) were cultivated in the rainy season, and in the dry season margins of wetlands were farmed as water receded or was drained away. Tree crops such as cacao were cultivated in special plots as well as being part of managed forests and house gardens. In the semi-arid basins of the central highlands, the upper slopes remained in forest, while rain-fed agriculture was practised on lower slopes and constructed terraces, floodwater and irrigation cultivation along watercourses and terraced basin floors, and wetland cultivation on poorly drained basin soils.
Landscapes were also significantly transformed in Southwest and Eastern North America, and intensive practices (i.e. those requiring high labour inputs) were used in both regions.Footnote 52 In the Southwest both stream floodplains and upland slopes were farmed. Irrigation systems that supplemented summer rainfall and sometimes permitted a second crop were found in many river valleys. Slopes were modified for agriculture by construction of terraces (to increase soil depth and water retention) and check dams (to slow and spread water run-off).
Historical accounts of farming in the Eastern Woodlands suggest that selective burning and clearance had created a productive mosaic of cultivated fields, successional growth, semi-permanent open areas, and open forests.Footnote 53 There are accounts of cropping for extended periods of time, with brief fallows and localized burns to control weeds (in-field burning).Footnote 54 Fields varied in size, including very large fields, and systems that approached annual cropping. Raised fields, ridged fields, and hilled fields were known and house gardens were common, but slope modification has not been identified. The first farmers of the Eastern Woodlands appear to have targeted floodplain environments. In the lower Little Tennessee River valley, for example, human impact on bottomland forests, as shown by increases in disturbance-favoured species, increased after the appearance of squash and gourd.Footnote 55 Over time, lower terraces as well as active floodplains were farmed. Minimal forest clearance occurred in upland forests until nearly the time of Euro-American settlement.
Prior to European contact, farming throughout the Americas was carried out exclusively with hand tools and human labour; draught animals and the traction plough are post-contact. There were two broad classes of farming tool: digging or planting sticks and spade-like implements, with a blade in the same plane as the handle; and hoes or mattocks, with a blade set at an angle to the handle.Footnote 56 The wooden digging or planting stick was wedge-shaped and used to make planting holes or to turn the soil. The tip was fire-hardened or sometimes the tool was tipped with stone. In the Andes the foot-plough or chaqui-taclla brought the foot of the cultivator into use to turn heavy sod (Figure 20.2). Hoes or mattocks were used for cultivating around crops; blades were made of wood, stone, or bone scapulas. In Mesoamerica and western South America tools were sometimes tipped with copper or bronze. The steel machete is used today for clearing brush and felling trees; prehistorically, wooden and stone tools were used for cutting and clearing.Footnote 57 Stone axes, made by hafting a shaped, sharpened stone to a wooden handle, smashed wood fibres, only rarely cutting through them. Cutting was supplemented by girdling and firing, with the largest trees often left standing. Clubs or ‘swords’ made of hard wood were used to remove undergrowth and for weeding. Other traditional approaches to weed control included mulching, shading out weeds with cover crops, and in-field burning.
While examples of the ‘hard technologies’ of agriculture (i.e. permanent field features, discussed below) are well preserved in the Americas, ‘soft technologies’, the essential practices for manipulating the field environment, leave little to no archaeological evidence.Footnote 58 Adding organic fertilizer to soil (i.e. bird guano, fish, animal dung, mucking, composting) was likely a prehistoric practice, as was planting on anthropogenic soils (‘black earths’, former settlement sites). Fire was an important agricultural technology.Footnote 59 Traditional farmers use fire in combination with forest clearing to create and maintain openings for sun-demanding crops; fire removes debris, kills pests, and returns nutrients to the soil via ash deposition. Cropping patterns are known from ethnographic accounts and some historical records.Footnote 60 The literature gives the impression that mixed cropping (polyculture) dominates traditional farming, but there are many variations, including companion planting (e.g. corn–beans–squash: each crop provides benefits to the others), agroforestry (combining annual and perennial crops with tree crops), zonation (different species in blocks or rings within fields), and planting that is nearly monocropping (fields dominated by one crop, with a few individuals of others). There are many examples of environmental zonation, where fields are dispersed across microhabitats, for example the verticality that characterizes traditional Andean agriculture, and planting the floodplains of major rivers, where crops are matched to microrelief, soil, and differential flooding. Native agriculturalists in the Americas also practise crop rotation (changing crops year to year in a field), sequential planting (one crop after another in a field), and fallowing (allowing land to rest, to restore fertility and combat weeds and pests).
Water management was essential to prehistoric agriculture in many parts of the Americas, and transformed landscapes. In desert coastal Peru, for example, early farmers cultivated self-watering alluvial lands along rivers and their outflows, and locations where short ditches or embankments could guide water.Footnote 61 Intensification and expansion of agriculture depended on irrigation. Canal irrigation was practised in South America from southern coastal Ecuador to central Chile, in intermontane Andean valleys, the Altiplano of southern Peru/northern Bolivia, and some valleys along the Caribbean. In Peru, canal irrigation is documented in twenty-five to thirty coastal valleys, with the largest system and area irrigated on the north coast, dating to 1000 ce.Footnote 62 Small-ditch irrigation began by 4500–3400 cal bce in the Nanchoc valley.Footnote 63
Irrigation was also critical to agriculture in the Southwest United States.Footnote 64 Early farmers planted well-watered alluvial lands; rain-fed farming was practised only rarely in the region, in higher elevations with sufficient rainfall. Historical accounts indicate that stone and brush weirs and earthen berms were built to slow and divert water from streams, springs, and flood run-off. Rock terraces built on hillsides slow run-off, trap sediment, and create planting surfaces. Canal irrigation dates back to 1250–400 bce* in the southern and central parts of the Southwest. Irrigated farming likely required shifting field locations/fallow cycles to replenish nutrients and to avoid salinization. The roughly contemporaneous dates from the American Southwest and Mesoamerica (see below) suggest independent development of water management systems.
Development of water management technology began during the Formative in Mesoamerica.Footnote 65 Practices included use of floodwater and run-off, springs, and upland and valley-bottom perennial stream systems. Floodwater and run-off systems were the most common, dating to 1200 bce and later in numerous locations. Features include dams, canals, ditches, drains, artificial ponds and reservoirs, raised fields, terraced fields, and ridged fields. Much less common were spring-fed systems (770 bce), upland perennial stream systems (300 bce), and valley-bottom systems (1050 bce). A deep-water well dated to 7900 bce, possibly used for hand irrigation, has been identified at a site in the Tehuacan valley. Most of the familiar kinds of water control system were developed along with the emergence of villages throughout Mesoamerica between 1200 and 1000 bce. There is considerable variability in the scale of early systems, but horizontal, kin-based organization is inferred.
The development of agriculture in the Andean highlands was linked to the creation of productive agricultural lands through landscape modification. The basic forms were irrigation, terracing, and raised fields.Footnote 66 Irrigation supplemented rainfall, and was practised in many inter-Andean valleys, with extensive systems in larger basins with expanses of land. Irrigation canals are common in the Lake Titicaca basin, where canals associated with raised fields carried water away from the lake. Irrigated bench terraces at Huarpa near Ayacucho date from 200 bce to 600 ce.
Terraces, flat planting surfaces created on slopes, are mostly found in arid and semi-arid highlands in the Americas, and in the driest areas are associated with irrigation.Footnote 67 The most northerly zone of terracing stretches from southwestern Colorado through to the Sierra Madre of western Mexico. The distribution is quite dispersed, and consists of cross-channel terraces across narrow drainages. There are discontinuous zones of terracing in Mesoamerica, including the basins of central and southern Mexico and western Guatemala, with few terraces south of Guatemala until the Andes. Forms include cross-channel terraces, contour terraces, and valley-floor terraces. In higher elevations frost hazard is alleviated in part by terracing, since crops can be grown above frost-prone valley bottoms.
Terracing extended discontinuously in South America from Venezuela to Chile and northwest Argentina, with heavy concentrations of irrigated terraces in southern Peru, including around the Inca capital Cuzco, and in northern Bolivia.Footnote 68 Expanses of rain-fed terraces occur in the eastern Peruvian Andes and southern Ecuador. Sloping-field terraces, in which retaining walls running across a slope accumulated soil and controlled run-off, were the most common type. Bench or staircase terraces were long, narrow expanses of level, deep soil held by high stone retaining walls. Terraces altered field microclimates, optimizing production, reducing risk, and permitting cropping in unfavourable settings. Both unirrigated and irrigated terraces have been dated as early as 2400 bce in Peru, with large terracing systems dating to 600 ce and later.
Raised fields are artificially elevated earthworks that improve drainage and provide planting surfaces in wetlands.Footnote 69 Such fields, called chinampas, were an important component of agriculture on the fringes of lakes in the Basin of Mexico, for example. Formed of lake mud, aquatic vegetation, and domestic refuse, chinampas did not float, but some seed beds were in the form of movable rafts. Chinampa fields were usually narrow, but could be quite long, and were often planted along the edges with trees.Footnote 70 Some 12,000 ha of chinampas helped feed the population of the Aztec capital. Earlier, buried chinampa systems have been documented by remote sensing in the northern Basin of Mexico. Raised-field systems occur elsewhere in highland Mexico, as well as in the lowlands of the Mexican Gulf Coast, northern Belize, and Guatemala. Swampy land can also be cultivated by digging ditches to drain away water, rather than building up soil.
The largest expanses of raised fields in the Andean highlands are in the Sabana de Bogotá (Colombia), northern Ecuador, and the Lake Titicaca basin.Footnote 71 In the Lake Titicaca region, from 600 to 1200 ce the Tiwanaku state supported dense populations in a region marginal for agriculture through raised-field technology and selection of nutritious local crops like potato, quinoa, and lupine.Footnote 72 Approximately 25,000 ha of raised fields were built on flat or gently sloping land. Fields functioned for thermal protection, provided higher fertility through mucking, and retained water in droughts and drained it in floods. Earlier, smaller field systems date to 1500–200 bce in the region.Footnote 73
In the South American lowlands, large expanses of raised fields are located in northern Colombia (earliest 800 bce), the coast of French Guiana (1000 ce), and the Guayas basin (southwest Ecuador).Footnote 74 In Ecuador, research at the Peñon del Rio complex discovered buried fields dating from 500 bce to 500 ce beneath larger, visible fields constructed after 500 ce. Maize phytoliths were identified from both early and late fields.Footnote 75 Raised fields supported intensive agriculture in this region of large-scale flooding and tidal influx.
Areas of the Amazon basin preserve evidence of intentional and non-intentional farming practices that transformed environments into productive, domesticated landscapes.Footnote 76 In addition to anthropogenic burning, already discussed, other elements of transformation included human settlements and their associated gardens; creation of mounds (domestic, ceremonial, burial), forest islands in savannas and wetlands, ring ditch sites, and raised fields; creation of black earths (resulting from domestic debris and large quantities of charcoal that may have been deliberately added); creation of paths, trails, and roads, including extensive systems of raised causeways; fisheries management; and agroforestry (culling non-economic species and replacing them with useful ones). Many of these practices were ancient and persistent in the Amazon.
American agriculture in worldwide perspective
New data, especially plant microfossils (phytoliths, starch grains, pollen), demonstrate that agriculture is as old in the American tropics as in the early Old World primary centres.Footnote 77 Plant domestication began in the early Holocene, and the longer-term environmental changes that accompanied the Pleistocene–Holocene transition can be considered the ultimate causal factors behind the development of food production. The identification of proximate causation in specific cases is much more conjectural, as cultural and environmental factors are difficult to disentangle, especially given the limitations of the archaeological record.
In the American tropics, early food producers were semi-sedentary to sedentary, occupying alluvial or wetland-edge habitats. Groups appear to have been organized at the level of family or hamlet, with no evidence for social complexity. Expansion of forests in the neotropics during the early Holocene changed plant distributions, closing formerly open woodlands and altering edge habitats favoured by many starch-rich root and tuber species. Among the human responses indicated by the record of early agriculture are creating and maintaining open habitats for favoured plants, altering mobility patterns as resources expanded or contracted, changing diet in response to changing availabilities of foods, and increasing densities of desirable plants and animals by cultivation/management. The record indicates that in the early Holocene there were frequent and dispersed plant domestications: some were advantageous and early domesticates spread, sometimes widely, through social interactions among foragers and horticulturalists. Cultivation was small-scale, in well-watered settings.
Increasingly productive crops fuelled population growth, which led to the spread of societies dependent on agriculture into new habitats, and creation of built environments for farming. This last is the most visible threshold of the process, having left its mark throughout the Americas on the landscape, in sediment cores, and in numbers of sites. Agriculture eventually spread into all suitable environments in the Americas, with landscape modification and crop improvements opening up or increasing the potential of previously unsuitable or geographically limited environments. With clear evidence that the roots of plant domestication lay in the early Holocene, the challenge now facing us is to expand the palaeoenvironmental and archaeological records of this process, and to better understand people–plant interrelationships during the late Glacial period.
20 D.R. Piperno et al., ‘Starch grain and phytolith evidence for early ninth millennium bp maize from the central Balsas River valley, Mexico’, Proceedings of the National Academy of Sciences, 106 (2009), 5020–4.
23 B.D. Smith and C.W. Cowan, ‘Domesticated crop plants and the evolution of food production economies in Eastern North America’, in P.E. Minnis (ed.), People and Plants in Ancient Eastern North America (Washington, DC: Smithsonian Books, 2003), 105–25.
24 C.M. Scarry, ‘Patterns of wild plant utilization in the prehistoric Eastern Woodlands’, in Minnis (ed.), People and Plants, 50–104.
27 L.W. Huckell, ‘Ancient maize in the American Southwest: what does it look like and what can it tell us?’, in J.E. Staller et al. (eds.), Histories of Maize: Multidisciplinary Approaches to the Prehistory, Biogeography, Domestication, and Evolution of Maize (Amsterdam and London: Elsevier Academic Press, 2006), 97–107.
28 L.W. Huckell and M.S. Toll, ‘Wild plant use in the North American Southwest’, in Minnis (ed.), People and Plants, 37–114.
33 J. Iriarte, ‘New perspectives on plant domestication and the development of agriculture in the New World’, in T.P. Denham et al. (eds.), Rethinking Agriculture: Archaeological and Ethnoarchaeological Perspectives (Walnut Creek, CA: Left Coast Press, 2007), 167–88.
35 K.E. Stothert, ‘Expression of ideology in the formative period of Ecuador’, in J.S. Raymond and R.L. Burger (eds.), Archaeology of Formative Ecuador (Washington, DC: Dumbarton Oaks Research Library and Collection, 2003), 337–420.
41 A.J. Ranere et al., ‘The cultural and chronological context of early Holocene maize and squash domestication in the central Balsas River valley, Mexico’, Proceedings of the National Academy of Sciences, 106 (2009), 5014–18.
44 R.G. Lesure, ‘Early social transformations in the Soconusco’, in R.G. Lesure (ed.), Early Mesoamerican Social Transformations: Archaic and Formative Lifeways in the Soconusco Region (Berkeley: University of California Press, 2011), 1–24.
46 M. Blake and H. Neff, ‘Evidence for the diversity of late Archaic and early Formative plant use in the Soconusco region of Mexico and Guatemala’, in Lesure (ed.), Early Mesoamerican Social Transformations, 47–66.
47 G.J. Fritz, ‘The transition to agriculture in the desert borderlands: an introduction’, in L.D. Webster et al. (eds.), Archaeology Without Borders: Contact, Commerce, and Change in the US Southwest and Northwestern Mexico (Boulder: University Press of Colorado, 2008), 25–33.
49 J.B. Mabry, ‘Changing knowledge and ideas about the first farmers in southeastern Arizona’, in B.J. Vierra (ed.), The Late Archaic across the Borderlands: From Foraging to Farming (Austin: University of Texas Press, 2005), 41–83.
50 R.J. Hard and J.R. Roney, ‘The transition to farming on the Río Casas Grandes and in the southern Jornada Mogollon region’, in B.J. Vierra (ed.), Late Archaic across the Borderlands, 141–86.
51 T.M. Whitmore and B.L. Turner II, ‘Landscapes of cultivation in Mesoamerica on the eve of the conquest’, Annals of the Association of American Geographers, 82 (1992), 402–25.
52 W.E. Doolittle, ‘Agriculture in North America on the eve of contact: a reassessment’, Annals of the Association of American Geographers, 82 (1992), 386–401.
53 W.M. Denevan, ‘The pristine myth: the landscape of the Americas in 1492’, Annals of the Association of American Geographers, 82 (1992), 369–85.
76 C.L. Erickson, ‘Amazonia: the historical ecology of a domesticated landscape’, in Silverman and Isbell (eds.), Handbook of South American Archaeology, 157–83.
77 Piperno and Pearsall, Origins of Agriculture; Pearsall and Stahl, ‘Origins and spread of early agriculture’.
References
Further reading
Bermejo, J.E.H. and León, J. (eds.). Neglected Crops: 1492 from a Different Perspective. Rome: Food and Agriculture Organization of the United Nations, 1994.
Blake, M. and Neff, H. ‘Evidence for the diversity of late Archaic and early Formative plant use in the Soconusco region of Mexico and Guatemala.’ In Lesure, R.G. (ed.), Early Mesoamerican Social Transformations: Archaic and Formative Lifeways in the Soconusco Region. Berkeley: University of California Press, 2011. 47–66.
Hard, R.J. and Roney, J.R. ‘The transition to farming on the Rio Casas Grandes and in the southern Jornada Mogollon region.’ In Vierra, B.J. (ed.), The Late Archaic across the Borderlands: From Foraging to Farming. Austin: University of Texas Press, 2005. 141–86.
Huckell, L.W. and Toll, M.S. ‘Wild plant use in the North American Southwest.’ In Minnis, P.E. (ed.), People and Plants in Ancient Western North America. Washington, DC: Smithsonian Institution Press, 2004. 37–114.
Webster, L.D., McBrinn, M.E., and Carrera, E.G. (eds.). Archaeology Without Borders: Contact, Commerce, and Change in the US Southwest and Northwestern Mexico. Boulder: University Press of Colorado, 2008.
|
|
yes
|
Paleoethnobotany
|
Was maize a staple food in prehistoric North American civilizations?
|
no_statement
|
"maize" was not a "staple" "food" in "prehistoric" north american "civilizations".. "prehistoric" north american "civilizations" did not depend on "maize" as a "staple" "food".
|
https://www.britannica.com/topic/American-Indian/Prehistoric-agricultural-peoples
|
American Indian - Prehistoric Farming, Agriculture, Cultures ...
|
In much of Northern America, the hunting, gathering, and incipient plant use of the Archaic eventually developed into a fully agricultural way of life. In the lush valleys east of the Mississippi River, societies grew increasingly dependent upon plants such as amaranth, sumpweed, sunflower, and squash; their plentiful seeds and flesh provided a rich and ready source of food. Many of these plants were eventually domesticated: sumpweed by approximately 3500 bce and squash and sunflowers by about 3000 bce. By perhaps 500 bce the production of these local cultigens had become the economic foundation upon which the sophisticated Adena and later Hopewell cultures of the Illinois and Ohio river valleys were developed. These village-based peoples created fine sculptures, pottery, basketry, and copperwork; the surplus food they produced also supported a privileged elite and elaborate burial rituals.
By perhaps 100 bce corn (maize) had become a part of the regional economy, and by approximately 1000 ce the peoples of the river valley of the Mississippi and its tributaries had adopted a thoroughly corn-based economy. Known as the Mississippian culture, they built a ceremonial centre at Cahokia, near present-day Saint Louis, Missouri, that housed an estimated 10,000–40,000 individuals during its peak period of use. Mississippian peoples had an intricate ritual life involving complex religious ornamentation, specialized ceremonial centres, and an organized priesthood. Many of these features persisted among their descendants, the Northeast Indians and Southeast Indians, and were recorded by Spanish, French, and English explorers in the 16th through 18th centuries.
Early Southwest Indians began to grow corn and squash by approximately 1200 bce, but they could not produce reliable harvests until they had resolved problems arising from the region’s relative aridity. Mogollon innovations in the use of small dams to pool rainfall and divert streams for watering crops made agriculture possible, and these innovations were adopted and further developed by the Ancestral Pueblo (Anasazi) peoples; the neighbouring Hohokam also depended on irrigation. In addition to corn and squash, the peoples of this region cultivated several varieties of beans, peppers, and long-staple cotton.
Southwestern cultures came to be characterized by complex pueblo architecture: great cliff houses with 20 to 1,000 rooms and up to four stories. A period of increasing aridity beginning in approximately 1100 ce put great stress on these societies, and they abandoned many of their largest settlements by the end of the 14th century. (See also Native American: Prehistory.)
Spain, France, England, and Russia colonized Northern America for reasons that differed from one another’s and that were reflected in their formal policies concerning indigenous peoples. The Spanish colonized the Southeast, the Southwest, and California. Their goal was to create a local peasant class; indigenous peoples were missionized, relocated, and forced to work for the Spanish crown and church, all under threat of force. The French occupied an area that reached from the present state of Louisiana to Canada and from the Atlantic coast to the Mississippi River, and they claimed territory as far west as the Rocky Mountains. They were primarily interested in extracting saleable goods, and French traders and trappers frequently smoothed the exchange process (and increased their personal safety and comfort) by marrying indigenous women and becoming adoptive tribal members. The English, by contrast, sought territorial expansion; focusing their initial occupation on the mid- and north-Atlantic coasts and Hudson Bay, they prohibited marriage between British subjects and indigenous peoples. The Russians sought to supply Chinese markets with rich marine mammal furs from the Northwest Coast and the Arctic; unfamiliar with oceangoing prey, they forced indigenous men to hunt sea otters. These European powers fought territorial wars in Northern America from the 16th through the 18th century and frequently drew indigenous peoples into the conflicts. (See Native American: History.)
During the 19th century, and often only after heated resistance, the governments of the United States and Canada disenfranchised most Northern American tribes of their land and sovereignty. Most indigenous individuals were legally prohibited from leaving their home reservation without specific permission; having thus confined native peoples, the two countries set about assimilating them into the dominant culture. Perhaps the most insidious instrument of assimilation was the boarding or residential school. The programming at these institutions was generally designed to eliminate any use of traditional language, behaviour, or religion. Upon arrival, for instance, the children’s clothes were generally confiscated and replaced with uniforms; the boys were usually subjected to haircuts at this time as well. Students often experienced cruel forms of corporal punishment, verbal abuse, and in some cases sexual abuse; the extent of the mistreatment may best be demonstrated by Canada’s 2006 offer of some $2 billion (Canadian) in reparations to former residential school pupils.
Assimilationist strategies were also implemented on reservations. It was not unusual for governmental authorities to prohibit indigenous religious practices such as the potlatch and Sun Dance in the hope that cultural continuity would be broken and Christianity adopted. Many of the hunting, fishing, and gathering rights guaranteed in treaties—which had remained essential to the indigenous economy—were abrogated by a combination of hunting regulations, mobility or “pass” laws, and the depletion of wild resources. In combination these factors demoralized and impoverished many native peoples and created a de facto system of apartheid in Northern America.
Many of these policies were not fully discontinued until the Civil Rights movements of the 1960s and ’70s, the culmination of over a century’s efforts by indigenous leaders. By the early 21st century many Native groups in Northern America were engaged in projects promoting cultural revitalization, political empowerment, and economic development. (See also Native American: Developments in the late 20th and early 21st centuries.)
|
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://www.pbs.org/tesla/ll/ll_whoradio.html
|
Tesla - Master of Lightning: Who Invented Radio? - PBS
|
With his newly created Tesla coils, the inventor soon discovered that he could transmit and receive powerful radio signals when they were tuned to resonate at the same frequency. When a coil is tuned to a signal of a particular frequency, it literally magnifies the incoming electrical energy through resonant action. By early 1895, Tesla was ready to transmit a signal 50 miles to West Point, New York... But in that same year, disaster struck. A building fire consumed Tesla's lab, destroying his work.
The timing could not have been worse. In England, a young Italian experimenter named Guglielmo Marconi had been hard at work building a device for wireless telegraphy. The young Marconi had taken out the first wireless telegraphy patent in England in 1896. His device had only a two-circuit system, which some said could not transmit "across a pond." Later Marconi set up long-distance demonstrations, using a Tesla oscillator to transmit the signals across the English Channel.
Tesla filed his own basic radio patent applications in 1897. They were granted in 1900. Marconi's first patent application in America, filed on November 10, 1900, was turned down. Marconi's revised applications over the next three years were repeatedly rejected because of the priority of Tesla and other inventors.
The Patent Office made the following comment in 1903:
Many of the claims are not patentable over Tesla patent numbers 645,576 and 649,621, of record, the amendment to overcome said references as well as Marconi's pretended ignorance of the nature of a "Tesla oscillator" being little short of absurd... the term "Tesla oscillator" has become a household word on both continents [Europe and North America].
But no patent is truly safe, as Tesla's career demonstrates. In 1900, the Marconi Wireless Telegraph Company, Ltd. began thriving in the stock markets, due primarily to Marconi's family connections with English aristocracy. British Marconi stock soared from $3 to $22 per share and the glamorous young Italian nobleman was internationally acclaimed. Both Edison and Andrew Carnegie invested in Marconi and Edison became a consulting engineer of American Marconi. Then, on December 12, 1901, Marconi for the first time transmitted and received signals across the Atlantic Ocean.
Otis Pond, an engineer then working for Tesla, said, "Looks as if Marconi got the jump on you." Tesla replied, "Marconi is a good fellow. Let him continue. He is using seventeen of my patents."
But Tesla's calm confidence was shattered in 1904, when the U.S. Patent Office suddenly and surprisingly reversed its previous decisions and gave Marconi a patent for the invention of radio. The reasons for this have never been fully explained, but the powerful financial backing for Marconi in the United States suggests one possible explanation.
Tesla was embroiled in other problems at the time, but when Marconi won the Nobel Prize in 1911, Tesla was furious. He sued the Marconi Company for infringement in 1915, but was in no financial condition to litigate a case against a major corporation. It wasn't until 1943, a few months after Tesla's death, that the U.S. Supreme Court upheld Tesla's radio patent number 645,576. The Court had a selfish reason for doing so. The Marconi Company was suing the United States Government for use of its patents in World War I. The Court simply avoided the action by restoring the priority of Tesla's patent over Marconi.
|
With his newly created Tesla coils, the inventor soon discovered that he could transmit and receive powerful radio signals when they were tuned to resonate at the same frequency. When a coil is tuned to a signal of a particular frequency, it literally magnifies the incoming electrical energy through resonant action. By early 1895, Tesla was ready to transmit a signal 50 miles to West Point, New York... But in that same year, disaster struck. A building fire consumed Tesla's lab, destroying his work.
The timing could not have been worse. In England, a young Italian experimenter named Guglielmo Marconi had been hard at work building a device for wireless telegraphy. The young Marconi had taken out the first wireless telegraphy patent in England in 1896. His device had only a two-circuit system, which some said could not transmit "across a pond." Later Marconi set up long-distance demonstrations, using a Tesla oscillator to transmit the signals across the English Channel.
Tesla filed his own basic radio patent applications in 1897. They were granted in 1900. Marconi's first patent application in America, filed on November 10, 1900, was turned down. Marconi's revised applications over the next three years were repeatedly rejected because of the priority of Tesla and other inventors.
The Patent Office made the following comment in 1903:
Many of the claims are not patentable over Tesla patent numbers 645,576 and 649,621, of record, the amendment to overcome said references as well as Marconi's pretended ignorance of the nature of a "Tesla oscillator" being little short of absurd... the term "Tesla oscillator" has become a household word on both continents [Europe and North America].
But no patent is truly safe, as Tesla's career demonstrates. In 1900, the Marconi Wireless Telegraph Company, Ltd. began thriving in the stock markets, due primarily to Marconi's family connections with English aristocracy.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://teslauniverse.com/nikola-tesla/articles/tesla-invented-radio-not-marconi
|
Tesla Invented Radio, Not Marconi! | Tesla Universe
|
Tesla Invented Radio, Not Marconi!
Okay, I am probably as guilty as you in believing that Marconi actually invented radio. But he did not and it has taken decades – actually, over a century – for the truth to come out. In fact, I am convinced that the truth is still not well known. Not to burst your bubble or anything, but here is the real story.
Figure 1. Nikola Tesla in 1895 (age 39).
What Really Happened?
My son-in-law recently gave me a book he found on a sale table called Tesla, Man Out of Time by Margaret Cheney. It has a 1981 copyright date on it, but was re-released in 1993. My son-in-law is not a technical or electronics type, but he read the book and was fascinated by Tesla and even amazed at Tesla’s unbelievable inventions. Tesla was not only a real success in the electrical fields, but also a terrible failure in many ways. And one of those failures was his inability to get recognition for inventing radio during his lifetime. I read the book only to find that I have had it wrong all these years myself. From my days as a ham radio addict in my teens to today, where I write books and articles on radio for a living, I firmly believed I owed my livelihood to Marconi.
Nikola Tesla was born in the Serbian part of Croatia in 1856. Last year was his 150th birthday. He began inventing as a boy. Tesla was educated in various European universities in mechanical and electrical engineering, physics, and languages. During the late 1800s, he worked for Thomas Edison’s European telephone company in Budapest and Paris. He immigrated to the US in 1884. He worked for Edison in New York City for a while, but pursued inventions on his own with great success. After endless squabbles with Edison over the merits of DC vs. AC, Tesla took off on his own and invented a whole stream of electrical things and patented them. Some of them were improvements to the telegraph, arc lights, and all manner of electrical machines like generators and motors. One of his best inventions was the AC induction motor which he sold to Westinghouse.
Tesla went to work for Westinghouse and helped him eventually win the battle for electrical power distribution in the US and elsewhere. Edison was hell-bent to electrify everything with DC, but found that it was very inefficient and required more generating stations over shorter distances. But AC — with its ability to be stepped up in voltage by a transformer — could be transmitted efficiently over very long distances then stepped back down to usable levels where it was to be used.
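As a rough illustration of why stepping voltage up and down matters, an ideal transformer scales an AC voltage by its turns ratio, so the same power can travel at high voltage and low current (which keeps resistive losses small) and then be reduced again at the destination. The Python sketch below uses hypothetical turns counts and voltages chosen only for demonstration; it is not tied to any historical installation.

def transformer_output_voltage(v_in: float, primary_turns: int, secondary_turns: int) -> float:
    # Ideal transformer relation: V_out = V_in * (N_secondary / N_primary)
    return v_in * (secondary_turns / primary_turns)

# Step up for the long haul, then step back down for local use (hypothetical numbers).
line_v = transformer_output_voltage(2400.0, primary_turns=100, secondary_turns=1000)
home_v = transformer_output_voltage(line_v, primary_turns=1000, secondary_turns=5)
print(f"Stepped up to {line_v:.0f} V for transmission, back down to {home_v:.0f} V at the load")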
Tesla was a major player in building the first big power-generating plant at Niagara Falls, NY. In any case, he was a major player in making AC the electrical power of choice. And despite his essential role and success, he never got rich like the Westinghouses and Edisons of his time.
His number of inventions and patents runs into the thousands but few — if any — actually paid off big for him. He did manage to live comfortably for years in New York City hotels from his royalties and occasional funding for research by a stream of rich benefactors. In general, Tesla was just too distracted by his active mind to patent or otherwise protect everything he invented. And that is more or less why he never did get credit for inventing radio despite the fact he did patent it in the US the same year that Marconi got his first British patents. Tesla was very good at getting press coverage for his work, but Marconi came along and captured all the glory and credit before Tesla realized what was going on.
Tesla actually invented the idea of radio in 1892 — not too long after Heinrich Hertz demonstrated UHF spark wireless transmissions in Germany in 1885. In 1898, he developed a radio-controlled robotic boat which he demonstrated by driving the boat remotely around the waters of Manhattan from a set of controls at Madison Square Garden. But despite this amazing feat, he tried for years to sell the idea to the Navy without success.
Once realizing the importance of radio, Tesla actually built a huge transmitting tower at Wardenclyffe on Long Island in 1900 to develop worldwide radio transmission services. He ran out of money and could not raise the capital to continue. He actually went bankrupt, thus ending his formal radio research and development.
What Marconi Actually Did
Guglielmo Marconi was born in Italy but lived in England. He experimented with Hertz’s spark apparatus and developed improvements to extend the transmission range to one mile, then hundreds of miles. He received British patents for his radio inventions. In 1901, he demonstrated the first trans-Atlantic radio transmission. He went on to form a wireless telegraphy business for the British. While all of the first patents related to spark wireless, the really important patents were for continuous wave (CW) transmission on one frequency. Spark gap transmitters radiated a very broadband signal on no particular frequency. CW signals used the resonance of tuned circuits and antennas.
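To make the spark-versus-CW contrast concrete: a tuned circuit responds strongly only near its resonant frequency, and its quality factor Q sets how narrow that band is. The Python sketch below models an idealized series RLC circuit with hypothetical component values; it is a rough illustration, not a model of any particular transmitter or receiver.

import math

R = 10.0     # ohms of loss resistance (hypothetical)
L = 200e-6   # henries (hypothetical)
C = 500e-12  # farads (hypothetical)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # resonant frequency
Q = math.sqrt(L / C) / R                       # quality factor: higher Q means a narrower band

def relative_response(f_hz: float) -> float:
    # Current relative to its peak at resonance for an ideal series RLC circuit.
    w = 2.0 * math.pi * f_hz
    reactance = w * L - 1.0 / (w * C)
    return R / math.sqrt(R * R + reactance * reactance)

print(f"f0 ~ {f0 / 1e3:.0f} kHz, Q ~ {Q:.0f}, 3 dB bandwidth ~ {f0 / Q / 1e3:.1f} kHz")
for f in (0.9 * f0, f0, 1.1 * f0):
    print(f"{f / 1e3:6.0f} kHz -> {relative_response(f):.3f} of peak")

A spark transmitter, by contrast, spread its energy across a wide band, so nearby stations interfered with one another; tuned circuits like the one sketched above are what made selective, single-frequency signaling practical.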
Marconi’s real contributions are more engineering and commercial than theoretical. He took the basic ideas and inventions of others, improved upon them, and made them practical business successes. Tesla was almost the opposite. He created original ideas and proved them mathematically and physically, patenting some and not others. Some of his best ideas, like the AC induction motor, were commercial successes that brought him fame but not riches. Marconi, of course, was fabulously rich.
A patent battle between Tesla and Marconi went on for years. Marconi died in 1937. Tesla died in 1943 and six months after his death the US Supreme Court ruled that all of Marconi’s radio patents were invalid and awarded the patents for radio to Tesla. So, for the past 64 years, we still believe that Marconi invented radio. Few actually know of Tesla’s radio inventions. He is — of course — well known, but for his strange experiments with high voltage, lightning, and the claim he had invented not only an electrical “death ray” but a way to transmit electrical power wirelessly.
Figure 2. Nikola Tesla in his 60s adjusting a radio device in his lab in New York City.
The Invention of Radio
Like most significant inventions, radio had not just one “father,” but many. British mathematician James Clerk Maxwell first proved the existence of radio waves mathematically in 1864. The German physicist Hertz set out to prove Maxwell’s equations and did so in 1885. After that, lots of others jumped into the fray. Some of them included Briton Oliver Lodge, Indian physicist Jagdish Chandra Bose, and the Russian Popov. And none of this would have happened had Edouard Branly not invented the coherer — the first real detector of radio waves. This device used metal filings inside a glass tube that served as a kind of crummy but sensitive diode detector.
Radio or wireless was strictly a telegraphy medium until the vacuum tube was invented. The first tube diode was invented by John Fleming of England in 1904. In 1906, American Lee de Forest invented the triode vacuum tube that quickly made radio even better because of the amplification and oscillation it could provide. Reginald Fessenden then made the first AM radio broadcast in 1906. By the 1920s, there were hundreds of radio stations in the USA. Edwin Armstrong invented FM in 1933, but after years of patent battles with RCA he committed suicide in 1954. Then in 1947, Shockley, Bardeen, and Brattain at Bell Labs invented the transistor, which Shockley later refined into the form we know today. In 1958 and 1959, Jack Kilby (Texas Instruments) and Robert Noyce (Fairchild, later Intel) independently invented the integrated circuit. And the rest, as they say, is history.
|
Tesla Invented Radio, Not Marconi!
Okay, I am probably as guilty as you in believing that Marconi actually invented radio. But he did not and it has taken decades – actually, over a century – for the truth to come out. In fact, I am convinced that the truth is still not well known. Not to burst your bubble or anything, but here is the real story.
Figure 1. Nikola Tesla in 1895 (age 39).
What Really Happened?
My son-in-law recently gave me a book he found on a sale table called Tesla, Man Out of Time by Margaret Cheney. It has a 1981 copyright date on it, but was re-released in 1993. My son-in-law is not a technical or electronics type, but he read the book and was fascinated by Tesla and even amazed at Tesla’s unbelievable inventions. Tesla was not only a real success in the electrical fields, but also a terrible failure in many ways. And one of those failures was his inability to get recognition for inventing radio during his lifetime. I read the book only to find that I have had it wrong all these years myself. From my days as a ham radio addict in my teens to today, where I write books and articles on radio for a living, I firmly believed I owed my livelihood to Marconi.
Nikola Tesla was born in the Serbian part of Croatia in 1856. Last year was his 150th birthday. He began inventing as a boy. Tesla was educated in various European universities in mechanical and electrical engineering, physics, and languages. During the late 1800s, he worked for Thomas Edison’s European telephone company in Budapest and Paris. He immigrated to the US in 1884. He worked for Edison in New York City for a while, but pursued inventions on his own with great success. After endless squabbles with Edison over the merits of DC vs. AC, Tesla took off on his own and invented a whole stream of electrical things and patented them.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://mercurians.org/antenna-newsletter/rereading-the-supreme-court-teslas-invention-of-radio/
|
Rereading the Supreme Court: Tesla's Invention of Radio ...
|
Rereading the Supreme Court: Tesla’s Invention of Radio
Editors’ note: We are struck, once again, with how the importance of communication technologies inspires continuing debate regarding their invention and development. The complex evolution of these complicated devices and systems makes the process of attribution exceptionally difficult. This essay responds to “Misreading the Supreme Court: A Puzzling Chapter in the History of Radio” by A. David Wunsch in the November 1998 issue of Antenna.
As regular readers of this newsletter know, on June 21, 1943, the Supreme Court affirmed a 1935 ruling of the United States Court of Claims which essentially invalidated Marconi’s claim of having invented radio, and clarified Tesla’s role in inventing radio.
The granting of a patent in itself does not help to establish priority of invention. Unlike an infringement action, in a patent grant application no one but the examiner goes out of his way to dig up facts that provide a basis for the rejection of the patent. The patent examiner tries to do this, but is limited to papers on file in the patent office or available to him without great effort or expense. The applicant’s attorney is supposed to bring to the examiner’s attention all the adverse information he runs across, but he doesn’t waste his client’s money trying to find data which will help the examiner find grounds to deny the patent.
The radio litigation discussed here arose in the Court of Claims, in a claim for taking intellectual property that was basically the same as an infringement action. Marconi filed a claim against the U. S. government for taking four patents. The patents were: reissue no. 11,913 of patent no. 586,193, granted to Marconi on June 4, 1901, for a two-circuit system for transmitting and receiving signals (one circuit in the transmitter; another in the receiver); patent no. 763,772, granted to Marconi on June 28, 1904, for a four-circuit system of wireless telegraphy; and two patents granted to Oliver Lodge and John Fleming, but assigned to Marconi. The total claim was for $6,000,000, a lot of money in 1916, and justified full development of the facts by the parties to the litigation. It was worthwhile to the government to spend the money to determine whether there was prior art that would invalidate Marconi’s patent.
I will first summarize the rulings of the Court of Claims and the Supreme Court, which took the case on petition, then provide more detail on their decisions. I focus on the decision of the Court of Claims, because unless the upper court says it is reversing or vacating the decision below, or affirming it on other grounds, the opinion of the upper court should be read as additional to the opinion of the trial court, not in lieu of it. In fact, more attention should be paid to the affirmed lower court’s opinion, because the trial court is closer to the facts. Its decision recites a view that has been accepted by two courts, not just one.
The Court of Claims decided that the government did not infringe Marconi’s two-circuit patent. That patent was not an issue before the Supreme Court, which had no jurisdiction to rule on the patent, because the Constitution limits the Supreme Court to ruling on cases in controversy. Furthermore, even if the two-circuit system were found to be a viable system of radio communication, the four-circuit system made it obsolete. The focus of the Court of Claims litigation thus was on the four-circuit patent.
Fifteen of the twenty claims made in the four-circuit patent application were the subject of the litigation. The Court of Claims found for Marconi on only one of them, claim 16, which the Supreme Court sent back for reconsideration. It never was reconsidered; Marconi settled all claims for about $34,000 plus interest.
As for the validity of Marconi’s four-circuit patent, no. 763,772, the Court of Claims noted the great difficulty Marconi had in obtaining the patent. Marconi repeatedly filed new specifications and claims, but these were rejected because of prior art. After J. P. Morgan became one of Marconi’s backers, Marconi presented another petition for revival on February 19, 1904. The Commissioner of Patents granted it. A new examiner acted on the case and allowed all claims formerly rejected for reasons stated in a brief letter.
The Court of Claims, however, disagreed with the new patent examiner. The initial examiner had disallowed Marconi’s patent based on, among several others, two patents of Tesla that preceded Marconi’s, numbers 645,576 and 649,621, in which he used four tuned circuits. Although Tesla had not specified how to tune the circuits, one of the patent examiners stated that it was fair to assume Tesla intended to use either of the two available methods. Furthermore, Tesla’s earlier patent no. 645,576 of March 20, 1900, referred to tuning no less than six times.
In the opinion of the Court of Claims, Tesla had shown the advantage of all four circuits being tuned. Oliver Lodge had taken the two-circuit system and tuned the open circuits in the same way used later by Marconi. Stone described a four-circuit system with the closed circuits tuned together. “A consideration of these three systems,” the Court decided, “would naturally suggest to one skilled in the art the tuning of all four circuits together by the use of the adjustable self-inductance method in the manner proposed by Lodge, and Stone put this suggestion into practice when he filed the amendment to his specifications. Marconi used the suggestion earlier in the application for his patent, but under the circumstances we think neither Stone nor Marconi was entitled to credit for it.” That is because Stone had acknowledged Tesla’s priority.
In summary, I read the Court of Claims’ opinion as deciding that the four-circuit system was invented by Tesla, based specifically on the above statement of the Court of Claims. Also persuasive is the reading of the Court of Claims opinion in the same way by Marconi’s attorney. Specifically, in his brief to the Supreme Court in 1943, he stated: “It is not quite clear whether the Court [of Claims] thought that the Tesla patents alone fully anticipated the Marconi claims, or whether a combination of Tesla, Lodge and Stone made the Marconi claims invalid.” Does the Supreme Court’s considerable reliance on the work of Stone in their opinion detract from Tesla’s deserved priority of invention? I think not for at least four reasons.
First, the Supreme Court affirmed the Court of Claims rejection of Marconi’s claims under the four-circuit patent (all except the lower court’s ruling in favor of Marconi on claim 16, which the Supreme Court vacated). Second, it is reasonable to expect the Supreme Court to emphasize the work of Stone to buttress the Court of Claims opinion. Marconi’s lawyer attacked the Tesla patent before the Supreme Court as being science fiction worthy of Jules Verne. It therefore was reasonable for the Supreme Court to respond to the argument by showing that Stone, a distinguished scientist, had priority over Marconi (based on Stone’s letters to Butler), but not Tesla. Third, as the Supreme Court mentioned, Stone, in a letter to his friend Butler, acknowledged that his four-circuit apparatus basically was the same as Tesla’s.
Fourth, the Court of Claims said it was unnecessary to find that Stone had priority because of Tesla’s priority. All that is left is the significance of the Court of Claims’ marginal award of invention to Marconi for the two-circuit system. The government’s lawyer claimed that Marconi’s two-circuit system essentially was the same as that used by Hertz to verify the theories of James Clerk Maxwell. Furthermore, Marconi’s own lawyer said that the two-circuit system “would operate, but only at short distances, because there was too much waste of energy.” Even Justice Frankfurter, who dissented bitterly in favor of Marconi, acknowledged that the two-circuit patent was not a significant factor in the innovation of radio.
Finally, there are the two portions of the Supreme Court Opinion sometimes cited as preserving Marconi’s priority of invention. The first is the sentence in the majority opinion that declares: “Marconi’s reputation as the man who first achieved successful radio transmission rests on his original patent, which became reissue no. 11,913, and which is not here in question.” The pronoun “which” has an ambiguous antecedent. Is it Marconi’s reputation or the validity of the patent that is “not here in question”? I interpret it as referring to Marconi’s reputation, as neither party sought review of the Court of Claims decision on the reissue patent. Even if it did refer to the patent, the statement would be significant only if Marconi’s combination of elements invented by others played an important role in the progress of radio. It did not, because the two-circuit system could transmit only a few miles. The second citation is to Justice Frankfurter’s dissenting opinion. It is clear that he found it difficult to understand the facts, because he failed to cite a single one in support of his view that those prior to Marconi lacked “the flash that begot the idea in Marconi.” Perhaps it was for that reason that he failed to persuade the majority.
Marconi deserves great credit for his vigorous commercialization of wireless telegraphy and radio. He recognized the business advantages of a claim to invention of the products and services he marketed as a check on his competition. In those days, most monopolies were formed by merging or buying up the competition, or by driving smaller competitors out of business through costly patent litigation where possible. In sum, though, the evidence available from historical documents simply does not support Marconi’s claim of invention; it does clarify Tesla’s role in inventing radio.
Wallace Edward Brand worked as a federal government lawyer in several jobs, principally as a trial lawyer, including as lead government counsel in the seminal cases under the 1970 revision of the Atomic Energy Act which served to promote competition among electric utilities. From 1974 to 1999 he was engaged in the private practice of energy law, principally cases involving electric power, representing small municipal and cooperative electric utilities in actions against larger ones. He is currently writing a book about the electric power business.
|
Rereading the Supreme Court: Tesla’s Invention of Radio
Editors’ note: We are struck, once again, with how the importance of communication technologies inspires continuing debate regarding their invention and development. The complex evolution of these complicated devices and systems makes the process of attribution exceptionally difficult. This essay responds to “Misreading the Supreme Court: A Puzzling Chapter in the History of Radio” by A. David Wunsch in the November 1998 issue of Antenna.
As regular readers of this newsletter know, on June 21, 1943, the Supreme Court affirmed a 1935 ruling of the United States Court of Claims which essentially invalidated Marconi’s claim of having invented radio, and clarified Tesla’s role in inventing radio.
The granting of a patent in itself does not help to establish priority of invention. Unlike an infringement action, in a patent grant application no one but the examiner goes out of his way to dig up facts that provide a basis for the rejection of the patent. The patent examiner tries to do this, but is limited to papers on file in the patent office or available to him without great effort or expense. The applicant’s attorney is supposed to bring to the examiner’s attention all the adverse information he runs across, but he doesn’t waste his client’s money trying to find data which will help the examiner find grounds to deny the patent.
The radio litigation discussed here arose in the Court of Claims, in a claim for taking intellectual property that was basically the same as an infringement action. Marconi filed a claim against the U. S. government for taking four patents. The patents were: reissue no. 11,913 of patent no. 586,193, granted to Marconi on June 4, 1901, for a two-circuit system for transmitting and receiving signals (one circuit in the transmitter; another in the receiver); patent no.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://www.edn.com/tesla-gives-1st-public-demonstration-of-radio-march-1-1893/
|
Tesla gives 1st public demonstration of radio, March 1, 1893 - EDN
|
Tesla gives 1st public demonstration of radio, March 1, 1893
Nikola Tesla gave the first public demonstration of radio in St. Louis on March 1, 1893, although he had presented his work prior to this behind closed doors. Tesla first demonstrated wireless transmissions during a lecture in 1891. Just days before the St. Louis presentation, Tesla addressed the Franklin Institute in Philadelphia, on February 23, 1893, describing in detail the principles of early radio communication.
Tesla presented the fundamentals of radio in 1893 during his public presentation, “On Light and Other High Frequency Phenomena.” Afterward, the principle of radio communication–sending signals through space to receivers–was widely publicized from Tesla’s experiments and demonstrations.
Even before the development of the vacuum tube, Tesla’s descriptions contained all the elements that were later incorporated into radio systems. He initially experimented with magnetic receivers, unlike the coherers (detecting devices consisting of tubes filled with iron filings which had been invented by Temistocle Calzecchi-Onesti in 1884) used by Guglielmo Marconi and other early experimenters.
Radio offers another example of Tesla’s work receiving minimal or no long-term public acknowledgement. While Marconi is often credited with inventing the radio, this presentation by Tesla was recalled in courts several years later in invalidating Marconi patents.
Indeed, it, among other facts, pushed the United States Supreme Court in the 1943 case of Marconi Wireless Telegraph Company of America vs. the United States to state that “it is now held that in the important advance upon his basic patent Marconi did nothing that had not already been seen and disclosed.”
To be true, what Tesla demonstrated had more scientific interest than practical use, but he believed that by taking the “Tesla oscillator,” grounding one side of it, and connecting the other to an insulated body of a large surface, it would be possible to transmit electric oscillations a great distance and to communicate intelligence in this way to other oscillators.
In 1898 at the Electrical Exhibition in New York, Tesla would successfully demonstrate a radio-controlled boat. For that work, he was awarded US patent No. 613,809 for a “Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles.” Between 1895 and 1897, Tesla received wireless signals transmitted over short distances during his lectures. He transmitted over medium ranges during presentations made between 1897 and 1910.
18 comments on “Tesla gives 1st public demonstration of radio, March 1, 1893”
Guru of Grounding
March 2, 2013
I think we all need to remember that “invention” is a very incremental process. All of this could be said to arise from the work of Michael Faraday (for which Maxwell usually gets all the credit because he explained it in equations rather than concepts). But Tesla, like Nostradamus, has developed a cult following largely because he’s perceived as an “underdog” and “rebel” of sorts. He had a lot of good ideas, but he had a lot of utterly impractical ones, too. Personally, I think his fame has become a bit overblown.
Patent No. 649,621 is another important Tesla patent related to this issue. It's worth reading too.
I would just like to comment that a lot of Tesla's work is covered very accurately in his patents, and that's where the (scientific) part of his fame comes from. Unfounded claims surrounding Tesla’s work should not mask that fact.
“To be true, what Tesla demonstrated had more scientific interest than practical use.” Give your head a shake. Many useful things begin with 'scientific interest' and most things that your mark I university researcher cobbles together on a lab bench is more of scientific interest and seldom practical as is. That doesn’t make their research less important or less valuable down the road as practical applications of the science are developed. First there is nothing; then there is an idea that could have application; then there is a lot of sweat equity; then, sometimes there is a useful reduction to practice.
“Invention is a very incremental process”: based on patents, 77% of the time it is. 45% of the time it is purely incremental improvement. 32% of the time it is the application of existing technology to a new application. 18% of the time, it involves subst
It’s really interesting to me, because when EDN was a print magazine I worked at CMP, and there was no interest in articles like this at all, because they did not mention any advertisers. Now finally, I see some real benefit to the Web for editors, not that I would ask anyone to hire me now, because the revenue is so low.
Too many Tesla worshipers out there. What about his “Death Ray” and communication with ET? He published articles claiming that Quantum Mechanics and Relativity were nonsense. His obsession with resonance led to nonfunctional inventions such as devices to produce earthquakes and destroy structures. In some cases he was the right man at the right time. Hertz demonstrated, though not publicly, the properties of radio waves in his experiments before this time. Hertz also discovered the photoelectric effect, which he gets little credit for.
“His obsession with resonance led to nonfunctional inventions such as devices to produce earthquakes and destroy structures.” Indeed! Said obsession casts serious doubt on whether the great genius had even a nodding acquaintance with the basic concept of “Q.”
Ms. Deffree, a question, please. What did Tesla's so-called “public demonstration” consist of, exactly? I've looked online, and find only assertions like yours that Tesla “demonstrated radio,” but nowhere is there anything like an actual description of what said “demonstration” comprised. Have you a reference? Thanks in advance.
Tesla’s genius was the polyphase induction motor, but that is too dull a subject for most writers. But it took a hunchback political refugee named Charles Steinmetz to make AC work, but nobody writes about him.
|
Tesla gives 1st public demonstration of radio, March 1, 1893
Nikola Tesla gave the first public demonstration of radio in St. Louis on March 1, 1893, although he had presented his work prior to this behind closed doors. Tesla first demonstrated wireless transmissions during a lecture in 1891. Just days before the St. Louis presentation, Tesla addressed the Franklin Institute in Philadelphia, on February 23, 1893, describing in detail the principles of early radio communication.
Tesla presented the fundamentals of radio in 1893 during his public presentation, “On Light and Other High Frequency Phenomena.” Afterward, the principle of radio communication–sending signals through space to receivers–was widely publicized from Tesla’s experiments and demonstrations.
Even before the development of the vacuum tube, Tesla’s descriptions contained all the elements that were later incorporated into radio systems. He initially experimented with magnetic receivers, unlike the coherers (detecting devices consisting of tubes filled with iron filings which had been invented by Temistocle Calzecchi-Onesti in 1884) used by Guglielmo Marconi and other early experimenters.
Radio offers another example of Tesla’s work receiving minimal or no long-term public acknowledgement. While Marconi is often credited with inventing the radio, this presentation by Tesla was recalled in courts several years later in invalidating Marconi patents.
Indeed, it, among other facts, pushed the United States Supreme Court in the 1943 case of Marconi Wireless Telegraph Company of America vs. the United States to state that “it is now held that in the important advance upon his basic patent Marconi did nothing that had not already been seen and disclosed.”
To be true, what Tesla demonstrated had more scientific interest than practical use, but he believed that by taking the “Tesla oscillator,” grounding one side of it, and connecting the other to an insulated body of a large surface, it would be possible to transmit electric oscillations a great distance and to communicate intelligence in this way to other oscillators.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://mercurians.org/antenna-newsletter/misreading-the-supreme-court-a-puzzling-chapter-in-the-history-of-radio/
|
Misreading the Supreme Court: A Puzzling Chapter in the History of ...
|
On the night of January 18, 1903, Guglielmo Marconi and his associates gathered at the Marconi Wireless Station near South Wellfleet, Massachusetts. A message of greeting in Morse code was sent from President Theodore Roosevelt to King Edward VII of England. The event made the front page of the New York Times as the first transatlantic wireless message from an American president to a European head of state. Although the station was dismantled about eighty years ago, its site, now within the Cape Cod National Seashore, is marked by a nearby National Park Service information center. Available there is a Park Service leaflet that tells visitors that the inventor Nikola Tesla “proposed the essential elements of radio communication in 1892 and 1893” prior to Marconi, and that “the U.S. Supreme Court in 1943 decided that Marconi’s basic patents were ‘anticipated’ and therefore were invalid.”1
The Supreme Court case referred to is Marconi Wireless Telegraph Corporation of America v. United States, 320 US 1 (1943), which was argued in April and decided on June 21, 1943. References to this case are not uncommon and repeat the Court’s finding that Tesla, not Marconi, invented the first radio. For example, writing in the New York Times of August 28, 1984, science reporter William Broad noted that: “It was Nikola Tesla, not Marconi, who invented radio.2 Indeed in 1943 the Justices of the Supreme Court of the United States overturned Marconi’s patent because they found it had been preceded by Tesla’s practical achievements in radio transmission.”3
Tesla’s priority over Marconi in the invention of radio is not the only conclusion often drawn from that court case. The following, for example, is from a letter sent by the inventor Lee de Forest to the radio historian George Clark in July of 1943: “You will be tickled as I am … to know that at long last, the U.S. Supreme Court has held the Fleming Valve Patent to be invalid. . . . Also that John Stone Stone, and not Marconi, was the first inventor of the so-called 4-tuned circuit.”4 In addition, radio historian Hugh G. J. Aitken observed: “in 1943, . . . in a decision by the U.S. Supreme Court, [Oliver] Lodge’s patent was the only one of the three principal Marconi Company patents to be completely upheld, the Marconi tuning patent, once the keystone of the Corporation’s patent structure, being declared invalid.”5
Clearly, interpretations of this court case have differed greatly. The lengthy opinion is technical and not light reading, so to resolve differing historical claims, we must study it for ourselves. An examination reveals that the Court did not rule on who invented radio: “Marconi’s reputation as the man who first achieved successful radio transmission rests on his original patent . . . which is not here in question.”
The 1943 Supreme Court ruling began as a lawsuit initiated by the Marconi Wireless Telegraph Company of America. Marconi invoked title 35 of the U. S. Code, section 68, and sued the U.S. government for patent infringement in the U.S. Court of Claims. This section of the U.S. Code permitted patent holders to sue if they believed that the government had bought or used equipment that infringed on their patents. The Supreme Court case resulted from appeals of both the government and Marconi Wireless of decisions from the Court of Claims.
In the Court of Claims, Marconi Wireless asserted that the government had infringed four U.S. patents, among which were No. 763,772 and reissue patent No. 11,913. Both had been issued to Guglielmo Marconi himself. Additional Marconi company patents alleged to be infringed were one issued to Oliver Lodge, No. 609,154, and Ambrose Fleming’s patent No. 803,684. In its 1935 decision, the Court of Claims ruled that the radio equipment used by the government had not infringed on the Marconi patent.
The reissue patent No.11,913 was a modification of Marconi’s original radio patent granted in 1897 and covered the invention that gained the young Marconi his initial fame over the period 1896 to 1900. That equipment lacked any means for tuning either the transmitter or the receiver. Attempts to devise tuning circuits began as early as the 1890s. The goal was to create transmitters and receivers that operated at a single, well defined frequency. Notable in this effort was Marconi’s British patent No. 7,777 for the use of two tuned circuits at the transmitter and two at the receiver. The American counterpart of this patent was No. 763,772, granted in 1904, and one of the patents said to be infringed in the 1943 Supreme Court case.
In its 1943 decision, however, the Supreme Court rejected the broad claims of this Marconi patent, for the most part declaring it invalid. Indeed, the majority Supreme Court opinion stated that Marconi’s work had been anticipated by John Stone Stone (patent No.714,756) and Oliver Lodge (patent No. 609,154). The Supreme Court also examined Tesla’s patent No. 645,576 and noted that Tesla had used four tuned circuits before Marconi. In addition, the Court observed that Lodge had provided a means for varying the tuning frequency, which was lacking in Tesla’s patent.
Thus, while the Supreme Court declared the Marconi patent invalid, it affirmed prior work and patents by not only Tesla, but by Lodge and Stone as well. As for the Lodge and Tesla patents, the Supreme Court’s opinion discussed Tesla’s and Lodge’s work in two pages and three pages respectively, but devoted a full twenty pages to Stone’s work. What was so important about Stone’s radio patent? “Stone’s [patent] application,” the Court wrote, “shows an intimate understanding of the mathematical and physical principles underlying radio communication and electrical circuits in general.”
The Supreme Court also ruled on Ambrose Fleming’s patent, issued in 1905, for a diode vacuum tube capable of “converting alternating electric currents and especially high-frequency alternating electric currents or electric oscillations, into continuous electric currents for the purpose of making them detectable by and measurable with ordinary direct current instruments.” The Supreme Court ruled the Fleming patent invalid because of an improper disclaimer. In November of 1915, the Marconi Corporation issued a disclaimer to the Fleming patent that restricted the invention to use with high frequency alternating electric currents such as are used in wireless telegraphy. The Court maintained that using the diode for rectification of low frequency currents, as stated in the original patent, was known art at the time Fleming filed his patent application and therefore ruled that the original patent was invalid. Moreover, it decided that the disclaimer filed in November 1915 could not prevent the patent’s invalidity unless it occurred “through inadvertence, accident, or mistake, and without any fraudulent or deceptive intention.” The Supreme Court also judged that Fleming had delayed an unreasonable length of time in making his disclaimer. Therefore, because U.S. patent law holds that an invalid disclaimer automatically invalidates the patent to which it refers, Fleming’s patent was invalid.
From this examination of the actual 1943 Supreme Court documents, we see that the statements about the Supreme Court ruling by the Park Service flier, the New York Times, Lee de Forest, and Hugh Aitken are, in varying degrees, inaccurate. The Supreme Court never determined that Tesla invented radio. Contrary to Aitken’s account, the validity of the Lodge patent was not in dispute before the Supreme Court; it was upheld in the Court of Claims where it was ruled that the government had infringed the patent. The matter was not appealed. Lee de Forest, though, came closest to the actual Court documents, but he did not acknowledge that Tesla was ahead of Stone in using four tuned circuits, even if Tesla failed to provide a variable inductance for adjusting them.
What can we learn from these discordant interpretations? A court opinion in a patent case can be difficult reading, and historians should be mistrustful of secondhand analysis. In particular, historians should be skeptical about claims made for Nikola Tesla as an inventor by zealous devotees. As a recent Tesla biography states, he is “Revered as a demigod by some in the New Age community.”6
Finally, we might question whether the Court was correct in largely rejecting the Marconi tuning patent. The judgment in this matter was not unanimous. Chief Justice Harlan Stone wrote the majority opinion for five justices. One justice abstained and three, including the distinguished Felix A. Frankfurter, dissented. Both Justices Frankfurter and Rutledge argued in favor of the Marconi patent and against the importance of John Stone’s invention. Historians might well continue to scrutinize this case.
|
On the night of January 18, 1903, Guglielmo Marconi and his associates gathered at the Marconi Wireless Station near South Wellfleet, Massachusetts. A message of greeting in Morse code was sent from President Theodore Roosevelt to King Edward VII of England. The event made the front page of the New York Times as the first transatlantic wireless message from an American president to a European head of state. Although the station was dismantled about eighty years ago, its site, now within the Cape Cod National Seashore, is marked by a nearby National Park Service information center. Available there is a Park Service leaflet that tells visitors that the inventor Nikola Tesla “proposed the essential elements of radio communication in 1892 and 1893” prior to Marconi, and that “the U.S. Supreme Court in 1943 decided that Marconi’s basic patents were ‘anticipated’ and therefore were invalid.”1
The Supreme Court case referred to is Marconi Wireless Telegraph Corporation of America v. United States, 320 US 1 (1943), which was argued in April and decided on June 21, 1943. References to this case are not uncommon and repeat the Court’s finding that Tesla, not Marconi, invented the first radio. For example, writing in the New York Times of August 28, 1984, science reporter William Broad noted that: “It was Nikola Tesla, not Marconi, who invented radio.2 Indeed in 1943 the Justices of the Supreme Court of the United States overturned Marconi’s patent because they found it had been preceded by Tesla’s practical achievements in radio transmission.”3
Tesla’s priority over Marconi in the invention of radio is not the only conclusion often drawn from that court case.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://www.atlasobscura.com/places/plaque-nikola-tesla-radio-wave-building
|
Plaque of Nikola Tesla on Radio Wave Building – New York, New ...
|
Nikola Tesla is the real father of radio. Although Tesla was granted the first radio patent in 1900 and made headlines for his radio wave technologies even in the late 19th century, Italian Guglielmo Marconi has gone down in history as the medium’s inventor.
That said, not everyone – or everywhere – has forgotten about Tesla’s contributions to the creation of radio. In 1977, the appropriately-named Radio Wave Building in Lower Manhattan began displaying a plaque thanking Tesla for his work in the field of radio technology. Tesla, who lived in the building in its former incarnation as the Gerlach Hotel in 1896, experimented with radio waves there. The plaque notes Tesla’s advancements in the “field of alternating electric current.” For the science-uninitiated, the building’s name gives sufficient credit where credit is long overdue.
The story of how Tesla lost his patent, however, is unbelievable in itself. Marconi was originally discounted by the U.S. Patent Office as a coattail-rider trying to pull a fast one on their office. In 1903, in response to Marconi’s radio patent application, they said, “Many of the claims are not patentable over Tesla patent numbers 645,576 and 649,621, of record.” Marconi proclaimed his ignorance of the Tesla oscillator, an invention so famous that the Patent Office knew it was impossible that another scientist wouldn’t know of it.
Despite the disapproval of his patent, Marconi’s Wireless Telegraph Company, Ltd. began gaining popularity in England, where he lived, and abroad. Perhaps because of his good looks, charm and connections within British society, stock in the company began increasing in price and soon after Marconi famously became the first person to transmit radio signals across the Atlantic Ocean. Still, Tesla was unabashed, saying “Let him continue. He is using seventeen of my patents.”
But Tesla was not quite so tongue-in-cheek when the U.S. Patent Office reversed its original 1903 decision in 1904, giving Marconi the radio invention patent after all. Although this reversal seems incomprehensible, some of Marconi’s influential American backers – including Tesla’s rival, Edison – may shed more light on the situation. Amazingly, Marconi even went on to win the Nobel Prize in 1909, further infuriating Tesla and belittling his accomplishment.
|
Nikola Tesla is the real father of radio. Although Tesla was granted the first radio patent in 1900 and made headlines for his radio wave technologies even in the late 19th century, Italian Guglielmo Marconi has gone down in history as the medium’s inventor.
That said, not everyone – or everywhere – has forgotten about Tesla’s contributions to the creation of radio. In 1977, the appropriately-named Radio Wave Building in Lower Manhattan began displaying a plaque thanking Tesla for his work in the field of radio technology. Tesla, who lived in the building in its former incarnation as the Gerlach Hotel in 1896, experimented with radio waves there. The plaque notes Tesla’s advancements in the “field of alternating electric current.” For the science-uninitiated, the building’s name gives sufficient credit where credit is long overdue.
The story of how Tesla lost his patent, however, is unbelievable in itself. Marconi was originally discounted by the U.S. Patent Office as a coattail-rider trying to pull a fast one on their office. In 1903, in response to Marconi’s radio patent application, they said, “Many of the claims are not patentable over Tesla patent numbers 645,576 and 649,621, of record.” Marconi proclaimed his ignorance of the Tesla oscillator, an invention so famous that the Patent Office knew it was impossible that another scientist wouldn’t know of it.
Despite the disapproval of his patent, Marconi’s Wireless Telegraph Company, Ltd. began gaining popularity in England, where he lived, and abroad. Perhaps because of his good looks, charm and connections within British society, stock in the company began increasing in price and soon after Marconi famously became the first person to transmit radio signals across the Atlantic Ocean. Still, Tesla was unabashed, saying “Let him continue. He is using seventeen of my patents.”
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
yes_statement
|
"radio" was "invented" by nikola tesla.. nikola tesla "invented" "radio".
|
https://www.cnn.com/2019/10/25/world/most-famous-tesla-inventions-scn/index.html
|
The Nikola Tesla inventions that should have made the inventor ...
|
The Nikola Tesla inventions that should have made the inventor famous, such as the ‘teleautomaton,’ ‘shadowgraphs’ and possibly a death ray
Nikola Tesla is pictured in his laboratory. The Serbian-American inventor was involved in numerous discoveries and inventions including the rotating magnetic field, the Tesla Coil, and induction motors.
Bettmann Archive/Getty Images
CNN
—
He’s probably the most famous inventor you’ve never heard of.
You might recognize his name because of the car brand named after him or because he’s one of the main characters in the new film “The Current War: Director’s Cut,” starring Benedict Cumberbatch as Thomas Edison. But it’s the inventor Nikola Tesla who should have been more famous for his inventions than history has awarded.
Tesla was a scientist and visionary who developed the basis for AC electric power that most of the planet uses today and pioneered numerous technologies that improve our everyday lives. A Serbian-American who emigrated to New York City in 1884, Tesla held approximately 300 patents.
“There’s not a lot of modern conveniences that we currently enjoy that weren’t touched by Nikola Tesla in some way,” said Marc Alessi, executive director of the Tesla Science Center at Wardenclyffe in New York, where teams are refurbishing Tesla’s lab into a museum and innovation hub.
Nikola Tesla's lab at Wardenclyffe in New York, which is being transformed into a science, education, and technology center.
Shutterstock
“If Tesla didn’t accelerate the AC current system, we would be 50 years behind technologically than where we are today.”
But it’s not just AC power that Tesla worked on. Motors, radios, X-rays, neon signs, and other technologies were advanced by his extraordinary mind. We take a look at the most famous and important inventions that Nikola Tesla contributed to.
Alternating current
This is the Tesla technology that sparked a war with Edison, the developer of direct current, and it’s the subject of the new film.
Back in 1884, Tesla left Europe to work for Edison, who supposedly promised him $50,000 to fix the problems with DC power. Meanwhile, Tesla’s alternating current had fewer issues. With AC power, the current is reversed numerous times per second, making it easy to convert to higher and lower voltages.
“He was working 20 hour days, and the whole time he was saying ‘let’s switch to AC current, it will work better,’ ” Alessi told CNN. But Edison never paid him the money, and claimed the promise was a joke. “Tesla quit and he ended up in a battle with Edison,” Alessi said.
According to the US Department of Energy, Edison did not want to lose royalties he was earning from his DC patents, so he attempted to discredit Tesla’s AC power through a misinformation campaign that touted alternating current as dangerous. Edison even publicly electrocuted stray animals using AC power as a scare tactic. Tesla countered by publicly shocking himself with 250,000 volts using alternating current to showcase its safety.
But Tesla had the last laugh, because today, alternating current is predominantly used to power most of the world’s electricity.
AC motors are also used in refrigerators, power tools and fans. Meanwhile, DC motors are still used for some industrial machines and conveyors, but often require more maintenance.
Radio
History often touts Italian entrepreneur Guglielmo Marconi as the inventor of radio. But sometimes history is wrong.
Marconi is credited with sending the first transatlantic radio transmission, using technology from 17 of Tesla’s patents.
The two inventors became embroiled in a patent war. In fact, the United States Supreme Court revoked Marconi’s radio patents in 1943 in favor of Tesla and two other scientists, Oliver Lodge and John Stone. Unfortunately, Tesla and Marconi had already passed away by the time the court handed down their decision.
Italian electrical engineer and Nobel laureate Guglielmo Marconi with one of his wireless radio apparatuses. (Photo by Hulton Archive/Getty Images)
For this and his other work, Zoric said, “Tesla is often referred as ‘The man who invented 20th century.”
Remote control
You can also thank Tesla for the ability to change the channel without having to get off the couch.
Tesla invented one of the world’s earliest remote controls, which he called a “teleautomaton.” He patented his device in 1898 as a “Method of and Apparatus for Controlling the Mechanism of Moving Vessels or Vehicles,” which he used to control a miniature boat from afar during a demonstration at Madison Square Garden.
According to the Tesla Museum in Belgrade, Tesla knew how important the invention could be, so he patented it in 11 countries as well.
X-ray technology
Tesla was also a pioneer of X-ray technology. He experimented with radiation and managed to take some of the first X-ray images of the human body, which he called “shadowgraphs.” Tesla was also one of the first scientists to hypothesize that X-rays could be harmful.
But this is another area of research where he rarely gets credit.
According to one 2008 academic article published in RadioGraphics, “Every radiologist is aware of Nikola Tesla’s research in the field of electromagnetism … but if the discovery of X-rays is mentioned, only a few radiologists associate it with Tesla’s name.”
Nikola Tesla demonstrates an experiment in his New York City lab in 1895.
Tesla Science Center
Tesla contributed to medicine in other ways as well. The tesla, the unit of magnetic flux density named after him, is used to measure the strength of the magnets in MRI systems.
Hydroelectric power
Tesla was also a pioneer of renewable energy. Nine out of the 12 patents used to build one of the world’s first hydroelectric stations, erected at Niagara Falls, New York, belonged to Tesla.
“As a child, when his uncle read him a book about Niagara Falls, the first thing he thought was about energy. ‘That water falling is energy,’” explained Alessi. “At the dawn of our using fossil fuels for the industrial revolution, Tesla was already saying ‘That’s not the way we should go. That’s dirty and finite.’”
Part of the American portion of Niagara Falls, New York, where Tesla harnessed the power of the water to create one of the world’s first hydroelectric power stations, the Adams Plant.
DON EMMERT/AFP/Getty Images
According to the Tesla Science Center, Tesla helped pave the way toward clean energy because he understood the physics behind energy and what might be possible in the future.
Zoric added, “Even back then, he proposed the use of renewable energy sources: water, wind and sun.”
Tesla’s legacy
Tesla was involved in many more discoveries and creations, including the rotating magnetic field, the speedometer, and the Tesla Coil, which is a transformer that produces sparks by creating high voltage at a low current.
“His work on Tesla coils, which use inductance to generate large voltages (e.g. lightning in the air) are the basis of the circuits used for the first radios … cathode ray tubes, and more,” explained Larry Pileggi, department head of electrical and computer engineering and professor at Carnegie Mellon University. “But the transmission of those large voltages over long distances that could be captured to provide power remotely was never successful.”
Tesla even reportedly invented a “death ray” that could be used as a weapon of war.
When he died in 1943, there was so much interest in what he was working on that the FBI raided his hotel room within hours of his death, Alessi said.
Mostly, Tesla envisioned his inventions, especially AC power, improving people’s lives. Experts say he wanted to bring safe electric power to the masses – to make factory workers’ lives easier at work, and to light up workers’ homes so they could study in the evenings to improve themselves.
“He loved the technology and what it could do,” said Alessi. “Tesla would be willing to lose money if it would help people.”
|
But Tesla had the last laugh, because today, alternating current is predominantly used to power most of the world’s electricity.
AC motors are also used in refrigerators, power tools and fans. Meanwhile, DC motors are still used for some industrial machines and conveyors, but often require more maintenance.
Radio
History often touts Italian entrepreneur Guglielmo Marconi as the inventor of radio. But sometimes history is wrong.
Marconi is credited with sending the first transatlantic radio transmission, using technology from 17 of Tesla’s patents.
The two inventors became embroiled in a patent war. In fact, the United States Supreme Court revoked Marconi’s radio patents in 1943 in favor of Tesla and two other scientists, Oliver Lodge and John Stone. Unfortunately, Tesla and Marconi had already passed away by the time the court handed down their decision.
Italian electrical engineer and Nobel laureate Guglielmo Marconi with one of his wireless radio apparatuses. (Photo by Hulton Archive/Getty Images)
For this and his other work, Zoric said, “Tesla is often referred to as ‘the man who invented the 20th century.’”
Remote control
You can also thank Tesla for the ability to change the channel without having to get off the couch.
Tesla invented one of the world’s earliest remote controls, which he called a “teleautomaton.” He patented his device in 1898 as a “Method of and Apparatus for Controlling the Mechanism of Moving Vessels or Vehicles,” which he used to control a miniature boat from afar during a demonstration at Madison Square Garden.
According to the Tesla Museum in Belgrade, Tesla knew how important the invention could be, so he patented it in 11 countries as well.
X-ray technology
Tesla was also a pioneer of X-ray technology. He experimented with radiation and managed to take some of the first X-ray images of the human body, which he called “shadowgraphs.”
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
no_statement
|
"radio" was not "invented" by nikola tesla.. nikola tesla did not "invent" "radio".
|
https://www.electronicdesign.com/technologies/communications/article/21759673/marconi-did-not-invent-radio
|
Marconi Did Not Invent Radio | Electronic Design
|
Marconi Did Not Invent Radio
The history of the development of radio technology doesn't rest in the hands of one man. As with any technology, countless inventors, designers, and developers contribute to what becomes today's status quo.
To some of you it may be blasphemy to say that Marconi was not the inventor of radio. But if you think that, you may as well start getting over it, because he didn't. All these years you probably heard about how Marconi single-handedly created wireless. Well, so did I. Yet as I found out recently, Marconi did not originate the idea. On the other hand, Marconi did indeed contribute considerably to the technology at the time. What he did was to take the basic concepts of others and make them into a practical, workable system. That's called engineering. So if Marconi didn't invent radio, who did?
The History of Wireless
The basic concepts of radio were actually predicted and proved mathematically by British physicist James Clerk Maxwell in 1864. Then German physicist Heinrich Hertz took Maxwell's ideas and demonstrated them in practice in the 1885-1886 time frame. He used UHF waves to do this in his lab using a spark gap type apparatus. At that point lots of others, encouraged by Hertz's work, experimented with various systems of wireless telegraphy. Some of those include Russian Alexander Popov, Brit Oliver Lodge and Indian Jagadish Chandra Bose. And, of course, Marconi. Marconi actually received the famous British patent 7777 for inventing radio in 1897.
Most of this early work was spark gap technology that generated a signal, like ultra wideband (UWB), that covered a huge bandwidth. It worked well with telegraphy. But perhaps the unsung hero in all of this development, in my opinion, is Edouard Branly, who invented the coherer. For those of you who have not followed the development of radio, a coherer is the early version of what we would call a diode today. The coherer was a glass tube filled with metallic filings. It actually performed like a rectifier, albeit a lousy one, but it did work. The key was to make it sensitive enough to respond to very low-level signals of early radio. Without a coherer on the receiving end, radio would have never gotten off the ground.
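As a rough illustration of the author's point that the coherer "performed like a rectifier," the sketch below (mine, not from the article, and not a model of a real coherer) shows what an ideal rectifier does to an alternating signal: only one polarity gets through, which is what lets a receiver register that a wave was transmitted at all.

```python
import math

def half_wave_rectify(samples):
    """Ideal half-wave rectifier: pass positive half-cycles, block the rest."""
    return [max(0.0, s) for s in samples]

# One cycle of a sine wave, sampled 8 times.
ac = [math.sin(2 * math.pi * n / 8) for n in range(8)]
print([round(v, 2) for v in half_wave_rectify(ac)])
# -> [0.0, 0.71, 1.0, 0.71, 0.0, 0.0, 0.0, 0.0]
```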
Marconi was very successful in assembling a system for wireless telegraphy. Although he was born in Italy, Marconi spent a great deal of his life in England. It was there that he got the patent and formed the British wireless service. He later formed the Marconi Wireless Telegraph Company. Then in 1901, Marconi demonstrated trans-Atlantic wireless by sending the letter S (dot-dot-dot) from his station in Poldhu, Cornwall to Signal Hill in St. John's, Newfoundland. Marconi went on to become very successful (and rich) selling early wireless services and products and gathering royalties from his patents.
The Weird Genius Thought Of It First
The real inventor of radio is now considered to be Nikola Tesla. Tesla was born in Croatia, educated in Europe, and eventually immigrated to the US in 1884. Tesla was a genius. He invented so much stuff that it is hard to catalog all of it (much less understand it). This process is still going on today, as Tesla's claims of "death rays" and the transmission of electrical power wirelessly are still being examined. One of his earliest successes was the invention of the AC induction motor. He also helped George Westinghouse defeat Thomas Edison in the battle of AC vs. DC in the electrical power distribution war of the late 1800's. AC eventually won simply because you could step AC up with a transformer and transmit it over long distances then step it back down. This was a more efficient and economic way to transmit power than Edison's DC system, which required many more generation stations close to the customers. Tesla also worked on the first big AC generating plant at Niagara Falls.
In any case, Tesla got the idea for radio back in 1892 and demonstrated a remotely controlled boat in 1898. He did get basic US patents in 1897; these were for single-frequency radio, not spark gap. Yet somehow, he never got recognition for this work. While Marconi took the basic idea and ran with it, Tesla was always on to something new. Once he invented something, his overly active mind had him creating some other fabulous new invention. So Marconi and others got all of the credit...and the money. Tesla's big shot at wireless glory was the worldwide radio transmission system he invented and built at Wardenclyffe on Long Island. A huge tower was built, along with most of the apparatus, to make it work. But he ran out of money and went bankrupt. Everyone else got the glory.
Tesla was a certified genius, but he was a terrible businessman. While he lived comfortably for most of his life, he never got rich. Yet he did make many others very rich. He died penniless in 1943, eight months before the U.S. Supreme Court threw out all other radio patents and granted them to Tesla. Reading the whole history of wireless today, it is easy to see that Tesla was really the father of radio.
The Rest Of The Story
As with any technology, lots of people are involved in creating and developing it. That is certainly true of wireless. So while Tesla should really get the credit for the concept, we have many others to thank for their work, especially Marconi. After that early work, Edison invents the light bulb (along with Swan of England), Fleming creates the first vacuum tube diode in 1904, and Lee DeForest then develops the first triode tube in 1907. Once we got the tube, amplification made radio even better. Fessenden creates amplitude modulation in 1906 and by the 1920's there are hundreds of radio stations on the air in the U.S. alone. Armstrong invents FM in 1933 and commits suicide after RCA steals his patents. Then comes the transistor in 1947 and the integrated circuit in 1957-1958, thanks to Jack Kilby of Texas Instruments and Robert Noyce of Fairchild and later Intel. And here we are today. I wonder if Tesla or Marconi would even recognize our current versions of wireless, advanced as they are.
|
The key was to make it sensitive enough to respond to very low-level signals of early radio. Without a coherer on the receiving end, radio would have never gotten off the ground.
Marconi was very successful in assembling a system for wireless telegraphy. Although he was born in Italy, Marconi spent a great deal of his life in England. It was there that he got the patent and formed the British wireless service. He later formed the Marconi Wireless Telegraph Company. Then in 1901, Marconi demonstrated trans-Atlantic wireless by sending the letter S (dot-dot-dot) from his station in Poldhu, Cornwall to Signal Hill in St. John's, Newfoundland. Marconi went on to become very successful (and rich) selling early wireless services and products and gathering royalties from his patents.
The Weird Genius Thought Of It First
The real inventor of radio is now considered to be Nikola Tesla. Tesla was born in Croatia, educated in Europe, and eventually immigrated to the US in 1884. Tesla was a genius. He invented so much stuff that it is hard to catalog all of it (much less understand it). This process is still going on today, as Tesla's claims of "death rays" and the transmission of electrical power wirelessly are still being examined. One of his earliest successes was the invention of the AC induction motor. He also helped George Westinghouse defeat Thomas Edison in the battle of AC vs. DC in the electrical power distribution war of the late 1800's. AC eventually won simply because you could step AC up with a transformer and transmit it over long distances then step it back down. This was a more efficient and economic way to transmit power than Edison's DC system, which required many more generation stations close to the customers. Tesla also worked on the first big AC generating plant at Niagara Falls.
In any case, Tesla got the idea for radio back in 1892 and demonstrated a remotely controlled boat in 1898.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
no_statement
|
"radio" was not "invented" by nikola tesla.. nikola tesla did not "invent" "radio".
|
https://artsandculture.google.com/story/who-really-invented-the-light-bulb-and-other-myths-debunked/mwWRW_mbN0mdJQ?hl=en
|
Who Really Invented The Light Bulb? And Other Myths Debunked ...
|
History is not fixed. Here you’ll find a selection of very well known inventions and their origin stories. Somewhere along the way these stories have been warped, changed, or completely rewritten. Luckily, some historians and researchers have worked hard to uncover the real narrative and made sure the right people are credited and celebrated.
Sir Humphry Davy, 1778-1829 (LIFE Photo Collection)
An Englishman – not Thomas Edison – created the light bulb
Thomas Edison is credited with inventing a whole host of valuable inventions but the real story behind them often reveals a different pattern of events. It was actually British inventor Sir Humphry Davy who was the first to invent the electric light in 1809.
Sir Humphry Davy, 1778-1829 (LIFE Photo Collection)
He made the discovery by connecting two wires to a battery and attaching a charcoal strip between the other end of the wires. The charged carbon glowed making the first arc lamp.
Edison’s version wasn’t released until 1877, but his bulbs were better equipped leading to them being better known, pushing Davy’s previous work to the sidelines.
Edison with early motion picture film and projector (1912), by General Electric Company (Museum of Innovation & Science)
Edison also didn’t invent motion picture
Once again, Edison manages to be credited with another huge invention but doesn’t deserve the praise. It was actually Louis Le Prince, a French artist, who was the inventor of the early motion picture camera. In Leeds, England in 1888 Prince used a single lens camera to shoot 16 pictures a second without blurring the exposure.
While we know some of Prince’s work now, there is a troubling conspiracy surrounding the whole invention and Edison’s claim to fame. In 1890, two years after his achievement, Prince boarded a train bound for Dijon, but disappeared and was never seen again. Years later, during a patent trial for Edison’s motion picture “invention”, Prince’s son was found shot dead in New York. American courts would later dismiss all of Prince’s work.
Monopoly (Darrow Edition) (1933), by Charles Darrow (The Strong National Museum of Play)
Elizabeth Magie invented Monopoly 30 years before the classic Parker Brothers’ version
The accepted history of the beloved board game Monopoly has come to symbolize something like The American Dream in a microcosm. The story goes that Charles Darrow, an unemployed designer, invented the game, pitched it to the Parker Brothers company, and thereby became a millionaire himself.
The Landlord's Game (1910), by Elizabeth Magie (The Strong National Museum of Play)
In reality, an almost exactly similar game, ‘The Landlord’s Game’, was patented by Elizabeth Magie in 1903. The game was designed to promote progressive economics and act as a warning against the evils of monopolies. Magie never made more than $500 from her invention, and her right to the patents for Monopoly wasn’t uncovered until the 1970s.
Chien-Shiung Wu (National Women's Hall of Fame)
Chien-Shiung Wu’s scientific contributions to the atomic bomb were ignored
During World War II nuclear physicist Chien-Shiung Wu was recruited to work on the Manhattan Project in the development of the atomic bomb. Fast forward to the 1950s and Wu began working with theoretical physicists, Tsung-Dao Lee and Chen Ning Yang, who wanted her help in disproving the law of parity. The law said that there was “a fundamental symmetry in the behavior of everything in nature, including atomic particles.”
Although Wu’s colleagues had developed the theory to disprove the law, it was actually Wu who created and conducted the experiments that served as proof. In 1957, Lee and Yang both received the Nobel Prize for their work, but Wu’s contribution was completely ignored. Despite outrage by Wu’s peers, the decision to exclude Wu from the prize was never changed.
Lise Meitner at the Lindau Meeting (Lindau Nobel Laureate Meetings)
Lise Meitner’s work on nuclear fission was forgotten due to being in exile
Austrian physicist Lise Meitner was integral to the discovery of nuclear fission. In the early 20th century, after moving to Germany, she began a long partnership with chemist Otto Hahn. When Nazi Germany annexed Austria in 1938, Meitner was forced to flee because she was of Jewish descent. She eventually settled in Sweden and continued to collaborate with Hahn from afar. In Berlin, Hahn’s team conducted experiments that would prove to be the evidence for nuclear fission, but it was Meitner and her nephew (Otto Frisch) who ultimately described the theory and coined the term, “nuclear fission”.
When Hahn published the discovery, he left Meitner out of it. It’s thought this might be due to rising tensions caused by Nazi Germany as she was of Jewish heritage – yet the real reason remains unknown. Regardless, Hahn was awarded the Nobel Prize in Chemistry in 1944 for the discovery of nuclear fission and Meitner’s contribution was not acknowledged. After scientists realised that nuclear fission could be used as a weapon Meitner was invited to work on the same Manhattan Project as Wu to develop the atomic bomb. She refused, stating: “I will have nothing to do with a bomb!”
Tesla on Arrival to America (Nikola Tesla Museum)
Nikola Tesla was the real inventor of the radio
In the 1890s, both Guglielmo Marconi and Nikola Tesla were fighting to develop the radio, but it is Marconi’s efforts that are remembered. Tesla actually received more of the early patents for the technology; in 1897 he filed for and was granted the first radio patent, which became the basis for much of his future work, including radio-controlled boats, torpedoes, and radio frequency feedback.
Tesla's Laboratory in Long Island (Nikola Tesla Museum)
His developments in radio date back beyond Marconi's announcement of radio technology as his "invention", but Marconi is more commonly credited with inventing the radio because he was able to take all these technologies and turn them into a commercial product.
Galileo Galilei did not invent the telescope
Italian astronomer, physicist, and engineer Galileo Galilei is credited with many inventions and discoveries including the telescope. Yet most historians agree that it was actually Dutch spectacle maker Hans Lippershay who had been making magnification devices using the improved quality of glassmaking of the time.
Supposedly, Galileo had heard about these and decided to build his own, making some improvements in the process. One of the reasons why Galileo was credited with inventing the telescope is because he was the first person to use these new optics as a scientific instrument, which is where the real value was added.
There’s a lot of controversy and intrigue surrounding the invention of the telephone. Alexander Graham Bell is often credited as the inventor of the telephone since he was awarded the first successful patent. However, Antonio Meucci also developed a talking telegraph, called the ‘teletrofono’. Around 11 years later (still five years before Bell’s phone came out), Meucci filed a temporary patent on his invention in 1871. But in 1874, he failed to send in the $10 necessary to renew his patent.
Two years after that, Bell registered his telephone patent. Meucci attempted to sue him by retrieving the original sketches and plans he sent to a lab at Western Union, but the record had conveniently disappeared. Controversially, Bell was working at the very same Western Union lab where Meucci swore he sent his original sketches. The Italian inventor died, never profiting from his invention and faded away into obscurity, while Bell claimed full credit.
|
She refused, stating: “I will have nothing to do with a bomb!”
Tesla on Arrival to America (Nikola Tesla Museum)
Nikola Tesla was the real inventor of the radio
In the 1890s, both Guglielmo Marconi and Nikola Tesla were fighting to develop the radio, but it is Marconi’s efforts that are remembered. Tesla actually received more of the early patents for the technology; in 1897 he filed for and was granted the first radio patent, which became the basis for much of his future work, including radio-controlled boats, torpedoes, and radio frequency feedback.
Tesla's Laboratory in Long Island (Nikola Tesla Museum)
His developments in radio date back beyond Marconi's announcement of radio technology as his "invention", but Marconi is more commonly credited with inventing the radio because he was able to take all these technologies and turn them into a commercial product.
Galileo Galilei did not invent the telescope
Italian astronomer, physicist, and engineer Galileo Galilei is credited with many inventions and discoveries including the telescope. Yet most historians agree that it was actually Dutch spectacle maker Hans Lippershay who had been making magnification devices using the improved quality of glassmaking of the time.
Supposedly, Galileo had heard about these and decided to build his own, making some improvements in the process. One of the reasons why Galileo was credited with inventing the telescope is because he was the first person to use these new optics as a scientific instrument, which is where the real value was added.
There’s a lot of controversy and intrigue surrounding the invention of the telephone. Alexander Graham Bell is often credited as the inventor of the telephone since he was awarded the first successful patent. However, Antonio Meucci also developed a talking telegraph, called the ‘teletrofono’.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
no_statement
|
"radio" was not "invented" by nikola tesla.. nikola tesla did not "invent" "radio".
|
https://interestingengineering.com/culture/7-myths-about-nikola-tesla-you-need-to-stop-believing
|
7 Myths About Nikola Tesla You Need to Stop Believing
|
Like most of history, the development of a technology tends to be the end product of a series of steps. All current scientists and engineers are, to coin a phrase, standing on the shoulders of giants.
Tesla, if he were still alive today, would most certainly agree. After all, as he wrote about it in 1900:
"The scientific man does not aim at an immediate result; he does not expect that his advanced ideas will be readily taken up. His work is like that of the planter – for the future. His duty is to lay the foundation for those who are to come, and point the way."
2. Tesla didn't actually invent the induction coil either
Here is another myth about Tesla that seems to do the rounds. Whilst Tesla did create his own devices based in part on the principles of induction, aptly named the Tesla coil and the induction motor, the underlying idea wasn't originally his.
In fact, induction was the work of none other than the great and prolific Michael Faraday. As for the induction coil itself, this was the work of the very talented Mr. Nicholas Callan in 1836.
Both Faraday's and Callan's work predate Tesla's birth by several decades. Early induction coils were the first type of transformer and had applications in x-ray machines, spark-gap radio transmitters and other devices between the 1880s and 1920s.
3. But, didn't Tesla invent the transformer?
Nope, sorry. This is another myth about Tesla that doesn’t seem to die very easily.
The first transformer was actually developed by the Ganz company in Budapest in the late 1870s. At this time, Tesla was still in school and hadn't even begun his first job at a telephony business.
It is likely that whilst working in Budapest in 1880, he first laid eyes on the technology which would later inspire some of his work in transformers. The first modern transformer, as we know it, was invented in 1885 by William Stanley, and his idea was based, in turn, on the ideas of Gaulard and Gibbs.
Gaulard had used his transformer in the 1884 Lanzo to Turin AC power demonstration.
It wouldn't be until around 1885 that Tesla would join the ranks of the few who were working on AC at the time. But it should be noted that Tesla did often make mention that he had his own design in mind for a full AC system in 1882.
4. Tesla's Niagara Falls hydropower plant was a world's first
As believable as this one is, it is actually quite false. AC power plants were first developed in Europe between 1878 and 1885.
Westinghouse himself would hire William Stanley, Oliver Shallenberger, Benjamin Lamme, and others to build AC power systems in North America, starting in the U.S. in 1885.
Tesla wouldn't join Westinghouse until three years later, in 1888.
In 1878, the first hydroelectric power scheme appeared at Cragside in Northumberland. This site was developed by William Armstrong and was used to power a single arc lamp in his art gallery - as you do.
The first 3-phase AC power plant emerged, for commercial purposes, in 1893 at the Redlands Power Plant. One of the first hydroelectric power stations was built by Edison in Appleton, Wisconsin in 1882.
As for hydroelectric three-phase AC power plants, the first was developed at Frankfurt in 1891 by Dobrovolsky.
As you can see, Tesla, whilst undoubtedly a master of the technology, was either directly influenced by existing solutions or improved on them.
5. Tesla was a shrinking violet
Another common myth about Tesla, and one the author once believed himself, was that Tesla was something of a 'shrinking violet'.
In fact, this myth could not be further from the truth. As you have seen, electricity was a hot topic at the time of Tesla's life, and he must have been a man of great charisma for us to even remember his name today.
No doubt, he took some lessons from Edison, who was both a shrewd businessman and showman.
Tesla also lived in the heart of New York and would have been abundantly aware that he must relentlessly promote himself to become successful.
This would have become especially the case after his famous split from Edison whilst trying to forge his own company. Throughout his career, Tesla would put on spectacular demonstrations of his inventions.
6. Tesla didn't invent the radio either, sorry
Another common myth is that Tesla invented Radio. In fact, independently of Guglielmo Marconi, Tesla did develop a device that enabled wireless communication in 1896 which he patented in 1897.
This discovery would eventually win Marconi the Nobel Prize. Tesla's own patents were later revoked by the U.S. Patent Office, which sparked a legal battle between the two until well into the 1940s.
But, both of their work was predated by a Russian physicist, Alexander Popov. He successfully demonstrated a working radio receiver a year before Marconi and Tesla, in 1895.
But, all of their work, including Popov's, would not have been possible without the work of many scientists before them. It should be noted that Tesla can rightfully be called the inventor of Radio Control (RC) with his 1898 demonstration in Madison Square Garden.
7. Some claim that Tesla invented Radar
But the truth is not so clear cut - in fact you might say it's a 'can of worms.'
Radar, in and of itself, would not exist without the groundbreaking work of German physicist Heinrich Hertz. He demonstrated the existence of electromagnetic waves (including radio) in the late 1880s, thus validating the theories of James Clerk Maxwell from the 1860s.
Christian Hulsmeyer (a German inventor), in the early 1900s, provided public demonstrations in Germany and the Netherlands that radio waves could be used to detect ships.
He envisioned it being used to avoid ship-to-ship collisions.
Other pioneers included Lee De Forest, Edwin Armstrong, Ernst Alexanderson, Marconi, Albert Hull, Edward Victor Appleton, and Russian developers who developed a Radar system in 1934.
Sir Robert Watson-Watt famously demonstrated the first HF radar system in 1935. This operated at 6MHz and had a range of 8 miles (just under 13 km).
|
Tesla also lived in the heart of New York and would have been abundantly aware that he must relentlessly promote himself to become successful.
This would have become especially the case after his famous split from Edison whilst trying to forge his own company. Throughout his career, Tesla would put on spectacular demonstrations of his inventions.
6. Tesla didn't invent the radio either, sorry
Another common myth is that Tesla invented Radio. In fact, independently of Guglielmo Marconi, Tesla did develop a device that enabled wireless communication in 1896 which he patented in 1897.
This discovery would eventually win Marconi the Nobel Prize. Tesla's own patents were later revoked by the U.S. Patent Office, which sparked a legal battle between the two until well into the 1940s.
But, both of their work was predated by a Russian physicist, Alexander Popov. He successfully demonstrated a working radio receiver a year before Marconi and Tesla, in 1895.
But, all of their work, including Popov's, would not have been possible without the work of many scientists before them. It should be noted that Tesla can rightfully be called the inventor of Radio Control (RC) with his 1898 demonstration in Madison Square Garden.
7. Some claim that Tesla invented Radar
But the truth is not so clear cut - in fact you might say it's a 'can of worms.'
Radar, in and of itself, would not exist without the groundbreaking work of German physicist Heinrich Hertz. He demonstrated the existence of electromagnetic waves (including radio) in the late 1880s, thus validating the theories of James Clerk Maxwell from the 1860s.
Christian Hulsmeyer (a German inventor), in the early 1900s, provided public demonstrations in Germany and the Netherlands that radio waves could be used to detect ships.
He envisioned it being used to avoid ship-to-ship collisions.
|
no
|
Radio
|
Was radio invented by Nikola Tesla?
|
no_statement
|
"radio" was not "invented" by nikola tesla.. nikola tesla did not "invent" "radio".
|
https://meroli.web.cern.ch/lecture_nikola_tesla.html
|
Nikola Tesla: discover how his 10 predictions changed the world.
|
Nikola Tesla: how his predictions changed the world
Are you also one of those fascinated by the Tesla Coil and by the eternal fight between Tesla and the American Edison? Then you are in the right place. Enjoy the reading and let me know your opinion in the comments.
Few characters in the history of science are surrounded by such an aura of legend as the inventor Nikola Tesla (1856-1943).
Those who know the story of Nikola Tesla know that any term used to describe his profession might seem reductive. Scientist, physicist, engineer, inventor: none of these words sufficiently describe the life of Nikola Tesla. Maybe only genius can minimally describe the life of this man.
A main actor of the electricity revolution, Tesla was the prototype of the mad genius. He invented the Tesla coil, whose impressive electrical discharges became the symbol of the mad scientist's laboratory. At the same time, he had superhuman mental abilities which allowed him to elaborate complex electrical machines in his mind and build them without taking any notes.
Nikola Tesla was a visionary who changed the history of humanity with his inventions. No wonder, then, that there is a large community that considers the brilliant scientist a kind of divinity. Tesla is, according to them, the true father of an impressive number of fundamental inventions like the transistor, the radio, the radar and the X-ray, as well as alternating current.
Tesla's vision was always ahead of his contemporaries; more than a century ago he already imagined a world where people could communicate with radio waves and illuminate cities without using wires. He was not afraid of innovation: in this respect, Tesla will always be an indisputable example.
Below you will find 10 of the thousands of inventions of Nikola Tesla, even if many of these are not yet attributed to him. Write in the comments at the bottom which one is your favorite.
1. AC alternating current
One of Tesla's greatest inventions was his alternating current (AC) power system. Prior to Tesla, most electricity was generated and transported using direct current (DC), a technology created by Thomas Edison. AC profoundly changed the way electricity could be transmitted over long distances, as it allows for greater efficiencies by reducing power loss and enabling power transmission at higher voltages. This revolutionized the way we use electricity today, propelling us into the modern age of convenience and comfort.
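The efficiency claim in the paragraph above comes down to simple arithmetic: for a fixed delivered power P = V * I, raising the transmission voltage lowers the current, and resistive loss in the line scales as I^2 * R. A minimal sketch with made-up numbers (not taken from this page):

```python
def line_loss_watts(power_w: float, voltage_v: float, line_resistance_ohm: float) -> float:
    """Resistive loss I^2 * R for a line delivering power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a ** 2 * line_resistance_ohm

# 100 kW delivered over a line with 5 ohms of resistance (illustrative values).
for v in (1_000, 10_000, 100_000):
    print(f"{v:>7} V -> {line_loss_watts(100_000, v, 5):>9,.1f} W lost")
# 1,000 V -> 50,000 W; 10,000 V -> 500 W; 100,000 V -> 5 W
```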
2. Light
Of course, he did not invent light, but he discovered how to channel it and made possible, for example, the creation of neon light. Tesla's invention of the alternating current (AC) system of electricity, along with his patents for high voltage transformers, enabled neon light to become widely available for commercial and industrial use. This was because AC electricity allowed for a much higher voltage than the direct current (DC) system that was used before.
This made it possible to create a much brighter, more efficient light than before. Additionally, Tesla's high voltage transformers made it possible to power the lights over much longer distances.
3. X Rays
Tesla described the discovery of X-rays by a German physicist, Wilhelm Roentgen in 1895. Intrigued and excited by this discovery, Tesla began to experiment with Roentgen's work. He soon developed an electrical lamp that gave off an intense "radiant energy." His invention allowed for the production of more powerful X-rays, which made it possible for doctors and scientists to get detailed images of internal organs and anatomy. As a result, physicians could diagnose and treat diseases much better than ever before.
4. Radio
In spite of Guglielmo Marconi's popular claim of inventing the radio, it was, in fact, Nikola Tesla who had first demonstrated a functioning version in 1893. Tesla patented this invention in 1897 and presented it to the public during a demonstration at his New York laboratory. This invention revolutionized communication, paving the way for television, mobile phones and today's wireless internet. Tesla went on to dedicate himself to developing remote control - a device working on similar principles as his radio inventions. The first two vessels that were guided by signals sent by remote control were launched simultaneously across two bodies of water - one boat at Madison Square Garden using terrestrial rays, and another boat in Uxbridge using invisible rays from wireless technology.
5. Remote control
Although it was developed further during the two world wars, this is one of the many revolutionary inventions made by Nikola Tesla. He demonstrated a radio-controlled boat that could move in any direction. Even if today it might seem obvious, 100 years ago the idea of controlling the movements of any object by remote control was considered magic.
6. Electric motor
His research and experiments led to the development of the AC motor, which later revolutionized the power industry. His work also laid the foundation for modern electrical engineering and the use of alternating current. Tesla's inventions and research also helped to develop induction motors, polyphase systems, and single-phase alternating current motors. All of these technologies have become integral parts of the modern electrical grid.
7. Robotics
With the idea that all living things are driven by electrical impulses, Nikola Tesla laid the basis for the creation of robotics.
8. Laser
Used in thousands of ways, the laser has revolutionized many fields, including surgery. Tesla also made significant contributions to the field of optics, including introducing the concept of optical resonance, which is an important concept in the development of lasers. He also developed an early form of holography, which was later used in the development of laser technology. Finally, Tesla was the first to suggest the use of lasers for medical applications, which later became a reality.
9. Wireless
Nikola Tesla is widely credited with being the first person to discover and develop the concept of wireless communication. While building his laboratory in Colorado Springs in 1899, Tesla conducted experiments in which he sent wireless electrical signals over a distance of 25 miles. He also constructed the first radio transmitter and receiver, which he used to transmit Morse code messages.
Tesla also built a large tower on Long Island, New York, that transmitted wireless energy. This tower, known as the Wardenclyffe Tower, was designed to provide free wireless energy to anyone who wanted it, an idea that was ahead of its time. Tesla's work laid the groundwork for the advent of radio, television, cellular phones, and wireless internet.
10. Free energy
Nikola Tesla suggested the use of the earth's energy to feed human activities, leading to a world less dependent on fossil fuels and with fewer wars. A world where energy might be free for everyone. This idea of free energy and freedom was presumably the cause of the marginalization that Nikola Tesla suffered.
With this page we are trying to remember the genius of geniuses, who unfortunately died in solitude.
|
Additionally, Tesla's high voltage transformers made it possible to power the lights over much longer distances.
3. X Rays
Tesla described the discovery of X-rays by a German physicist, Wilhelm Roentgen in 1895. Intrigued and excited by this discovery, Tesla began to experiment with Roentgen's work. He soon developed an electrical lamp that gave off an intense "radiant energy." His invention allowed for the production of more powerful X-rays, which made it possible for doctors and scientists to get detailed images of internal organs and anatomy. As a result, physicians could diagnose and treat diseases much better than ever before.
4. Radio
In spite of Guglielmo Marconi's popular claim of inventing the radio, it was, in fact, Nikola Tesla who had first demonstrated a functioning version in 1893. Tesla patented this invention in 1897 and presented it to the public during a demonstration at his New York laboratory. This invention revolutionized communication, paving the way for television, mobile phones and today's wireless internet. Tesla went on to dedicate himself to developing remote control - a device working on similar principles as his radio inventions. The first two vessels that were guided by signals sent by remote control were launched simultaneously across two bodies of water - one boat at Madison Square Garden using terrestrial rays, and another boat in Uxbridge using invisible rays from wireless technology.
5. Remote control
Although it was developed further during the two world wars, this is one of the many revolutionary inventions made by Nikola Tesla. He demonstrated a radio-controlled boat that could move in any direction. Even if today it might seem obvious, 100 years ago the idea of controlling the movements of any object by remote control was considered magic.
6. Electric motor
His research and experiments led to the development of the AC motor, which later revolutionized the power industry. His work also laid the foundation for modern electrical engineering and the use of alternating current.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
no_statement
|
"radio" was not "invented" by nikola tesla.. nikola tesla did not "invent" "radio".
|
https://www.businessinsider.com/10-items-not-invented-by-who-you-think-2011-8
|
The Wright Brothers Didn't Invent the Airplane...and 9 Other ...
|
The Wright brothers did not invent the airplane
The story: In March of 1902, the New Zealand farmer took flight for roughly 350 yards (by most eyewitness accounts) in a monoplane aircraft before crashing into a hedge. This little-known experiment took place months before the Wright brothers more sustained flight.
Microsoft did not create the computer desktop and GUI
Alleged Inventor: Microsoft
Actual Inventor: Xerox PARC
The story: The computer desktop and its graphical user interface -- which lets people interact with their computer as they would the physical world, with buttons and windows -- is typically attributed to Microsoft, but the distinction should go to the Xerox company.
The Xerox Alto was released and sold to the public in 1973, though the GUI wouldn't become economically viable until Apple's Lisa and Macintosh machines of the 1980s.
Henry Ford did not invent the first automobile
The story: When most people think of the automobile, they think of Henry Ford and his assembly lines. In fact, it was a German contemporary, Karl Benz, who was awarded the first patent for an automobile fueled by gas in 1886.
Benz's "motorwagon", first designed in 1885, went on sale to the general public a few years later. He built 25 models in the first 5 years, and his wife is credited with making the first road trip.
Edison did not invent X-ray photography
Alleged Inventor: Thomas Edison
Actual Inventor: Wilhelm Röntgen
The story: Thomas Edison is credited with many discoveries, but this should not be one of them. Instead, German scientist Wilhelm Röntgen was first, and to this day X-rays are called "Röntgen rays" in his native language.
The effects of X-radiation had been observed before, but Wilhelm Röntgen was the first to systematically study them, writing the paper "On a New Kind of Rays" in 1895. He called them "X" rays because at the time it was an unknown type of radiation.
Thomas Edison did not invent moving pictures
The story: Once again, Edison grabs credit where none is due. But Frenchman Louis Le Prince was the first to shoot moving pictures on paper film. In Leeds, England in 1888, he used a single lens camera to shoot 16 pictures a second, without blurring the exposure.
Conspiracy factor: Prince boarded a train bound for Dijon in 1890 and disappeared, never to be seen again. Years later, during a patent trial for Edison's "invention" of the moving picture, Prince's son was found shot dead in New York. American courts would later dismiss all of Prince's work.
The telescope was not invented by Galileo
The story: There are many claims for creator of the telescope, including famed astronomer Galileo, but most can agree that German-Dutch lens maker Hans Lippershey was the first to record its design.
In 1608, Lippershey -- inspired by his children -- put together a convex objective lens and a concave eyepiece, which Galileo would use to great effect the following year.
Edison was not the first to record audio
Alleged inventor: Thomas Edison
Actual Inventor: Édouard-Léon Scott de Martinville
The story: While Edison may have invented the phonograph, with its ability to play back recordings, it was the invention of the phonautograph in 1857 that gave us the first machine to record audio.
Scott de Martinville's recording was recently unearthed and modern technology was used to play it back, revealing the first audio recording to be: a singer, most likely the inventor himself, performing a snippet of the song "Au clair de la lune."
Edison also did not invent the light bulb
Alleged inventor: Thomas Edison
Actual Inventor: Sir Humphry Davy
The story: Yet again, Edison receives the nod here over the real inventor, Sir Humphry Davy. In the early 1800s, well before Edison's time, Davy invented the first electric light by hooking up a powerful battery to a piece of carbon, which lit up and became the earliest incarnation of what we use today.
Edison's bulbs were better equipped and became more well known, but his version wasn't released until 1877.
Marconi never invented the radio
The story: There's plenty of controversy when it comes to the invention of radio -- no doubt because there were many advances and changes in wireless technology over the years -- but Tesla should be recognized for patenting his original version of the radio.
In 1897, Tesla filed for and was granted the first radio patent, which became the basis for much of his future work, including radio-controlled boats, torpedoes, and radio frequency feedback. His work with radio dated back beyond Marconi's announcement of radio technology as his "invention."
Al Gore did not invent the internet
The story: Though the former vice president was one of the wallets behind the emergence of the world wide web, it was Vinton Cerf and his team who should be given the credit.
Cerf, known as "the father of the internet," helped create the ARPANET system, a 1970s precursor of the internet. TCP/IP was also co-designed by Cerf. Without Cerf, it's hard to imagine that the complex series of systems that we know today as the internet would have been brought together.
|
Edison was not the first to record audio
Alleged inventor: Thomas Edison
Actual Inventor: Édouard-Léon Scott de Martinville
The story: While Edison may have invented the phonograph, with its ability to play back recordings, it was the invention of the phonautograph in 1857 that gave us the first machine to record audio.
Scott de Martinville's recording was recently unearthed and modern technology was used to play it back, revealing the first audio recording to be: a singer, most likely the inventor himself, performing a snippet of the song "Au clair de la lune. "
Edison also did not invent the light bulb
Alleged inventor: Thomas Edison
Actual Inventor: Sir Humphry Davy
The story: Yet again, Edison receives the nod here over the real inventor, Sir Humphry Davy. In the early 1800s, well before Edison's time, Davy invented the first electric light by hooking up a powerful battery to a piece of carbon, which lit up and became the earliest incarnation of what we use today.
Edison's bulbs were better equipped and became more well known, but his version wasn't released until 1877.
Marconi never invented the radio
The story: There's plenty of controversy when it comes to the invention of radio -- no doubt because there were many advances and changes in wireless technology over the years -- but Tesla should be recognized for patenting his original version of the radio.
In 1897, Tesla filed for and was granted the first radio patent, which became the basis for much of his future work, including radio-controlled boats, torpedoes, and radio frequency feedback. His work with radio dated back beyond Marconi's announcement of radio technology as his "invention. "
Al Gore did not invent the internet
The story: Though the former vice president was one of the wallets behind the emergence of the world wide web, it was Vinton Cerf and his team who should be given the credit.
|
yes
|
Radio
|
Was radio invented by Nikola Tesla?
|
no_statement
|
"radio" was not "invented" by nikola tesla.. nikola tesla did not "invent" "radio".
|
https://rhslegend.com/3153/ae/underrated-history-nikola-tesla/
|
Underrated History: Nikola Tesla – The Legend
|
Underrated History: Nikola Tesla
I think we can all agree that humanity is pretty amazing. It might be hard to remember that in the dark times of today with social media, riots, and the pandemic currently still going on, that we truly are a unique species that has done many wonderful things. Maybe you wouldn’t believe it when you look back on history and only see wars, evil people, bad philosophies, and oppression. However, in all honesty, history is not just the doom and gloom. While it’s true a lot of things worth remembering are the dark sides of history, there are many elements of history that are often overlooked and are wholesome, interesting, admirable, and sometimes downright hysterical. They might not deepen your understanding of the world, but they will remind you that humanity is good, sometimes petty, and overall hilarious. Even in the Shakespearean tragedies, there is something good or funny to be discovered.
Last year I wrote an article on the First Viral Meme, a symbol that American soldiers would draw wherever they went during World War Two. Today it’s referred to as ‘Kilroy was Here’, and since then I have wanted to write other articles informing people about random historical topics that no one talks about. So join me in my final year of high school as I write about random people, things, and events that you probably didn’t know about. I hope that along the way, I’ll either make you appreciate humanity or bust your gut with laughter. Either one works.
From the overlooked to the hilarious to the stupid to the mislead: this is Underrated History!
Nikola Tesla
Last year, while drowning neck-deep in the void we call Pinterest, I stumbled across an image of Nikola Tesla and a generalized list of his accomplishments. Some were true, most were false, but that’s beside the point. Either way, for some reason, his name stuck with me and I quickly became fascinated with this man I had never heard of who had done so much for our modern world and technology. Within a few months, I knew I wanted to write an article about him, but the pandemic happened and I never got the chance to tell his story. Now, though, I can not think of a better way to properly introduce this series and what I wish to accomplish with it other than by talking about Nikola Tesla. He is quite possibly the most underrated inventor of the modern era, and one of the kindest people who ever lived. Without him, our world would quite possibly look very different.
So why don’t we get started?
Early Life
Nikola Tesla was born on July 10th, 1856, during a raging thunderstorm in the Austrian Empire, in what is now modern-day Croatia. At the time, children born during storms and natural disasters were seen as bad luck, and the midwife for Tesla’s birth predicted that he would be a ‘child of darkness.’ In response, Tesla’s mother said, ‘No, he shall be a child of light.’ This statement will become quite ironic later in Tesla’s life, so keep it in mind.
Throughout Tesla’s early childhood, his mother, who herself came from a long line of inventors, nurtured him to be this supposed ‘Child of Light’ that she predicted. She specifically taught him excellent memorization skills and helped develop his already strong imagination. By high school, he was able to perform integral calculus in his head, causing many of his teachers to accuse him of cheating. Then, later in life, he would be able to speak eight different languages and imagine inventions perfectly as if they were right in front of him, all of which he attributed to his mother’s teachings throughout his childhood.
To the surprise of no one, Tesla graduated early from high school top of his class. Everything seemed to point in the direction of going to a great University, however, his father was rather adamant about him following his own footsteps and becoming a priest. It might have happened too, if not for Tesla becoming extremely sick at the age of 17 from cholera. He was bedridden for days and many assumed that he would not live. Then, one day, Tesla told his father that he felt that he might live if his father allowed him to go to University. Reluctantly, his father agreed, as long as his son lived. Miraculously, Tesla got better, and his father kept his word, allowing his son to go to University to study engineering.
While at University, at least for the first year, Tesla received the highest grades possible and was a star student on the Dean’s list. He was an incredibly hard worker, or an incredibly stupid one. As several of his comrades claimed, Tesla would work from 3 AM to 11 PM, and got only three hours of sleep per night. It was so concerning to his teachers that they wrote a letter to his father saying that if he didn’t take better care of himself, he might die from overwork and burnout. Tesla pushed through, however, and the lack of sleep and long work hours would be a habit he would keep for the rest of his life, for better or worse.
You’d think Tesla would also graduate top of his class, but it didn’t pan out that way. Tesla became heavily addicted to gambling in his 2nd year and gambled away his entire tuition and allowance within the span of a year, forcing him to drop out of school. Not long after, he ran away from his home to Budapest. So abruptly, in fact, that many of his friends thought he had drowned in a nearby river. While in Budapest, he worked as the chief electrician at a telephone exchange where he reportedly invented, and eventually perfected, a telephone amplifier.
After a few years in Budapest, Tesla moved to Paris in 1882 to work for the Continental Edison Company. His job was to install indoor lighting around the city, but management realized that his talents were wasted doing such a simple job, and rehired him to construct and fix dynamos and electrical motors. Soon, the company had him traveling around Europe making repairs at other Edison branches.
In 1884, Tesla was invited to work for Edison Machine Works in New York City, which he accepted. He moved across the pond with only four cents in his pocket. Eventually, Tesla would meet Thomas Edison, who he at the time greatly admired.
Side Note: While many consider Thomas Edison to be the inventor of the lightbulb, this is not true. There are many contributors to the invention of the lightbulb, but the person who should be credited with its creation is a man named Joseph Swan, who owned the patent to what was essentially a beta version of the modern lightbulb. (Patent: a government authority or license conferring a right or title for a set period, especially the sole right to exclude others from making, using, or selling an invention…. basically copyright for inventions) Edison then bought the patent from Swan and hired 50 assistants to work and perfect the lightbulb, which he then sold, and eventually, through (dumb) historians, became known as its inventor.
Tesla and Edison got along swimmingly in the beginning of their relationship. Then it all went into the garbage when, in 1885, Tesla offered to redesign Edison’s DC (direct current: electricity that flows in a single, direct direction) motors and generators. The motors and generators at the time were extremely inefficient as they tended to break down, and spark, causing many safety hazards. Edison agreed to the deal and offered Tesla 1 Million dollars in today’s currency for the job. Obviously, Tesla took the deal, and he fixed everything wrong with the motors, as promised. They were reliable, no longer sparked, and were simpler in design. In fact, the design was so good that it’s still used today in common household appliances, power tools, smartphones, pumps, and the Tesla electric car.
Yes, in case you were wondering, the Tesla Company was named after Nikola Tesla. No, he was not the founder.
With the job complete, Tesla went to Edison to collect his payment for a job well done. However, all Edison did was laugh and went, “Mr. Tesla, you do not understand our American humor.” Tesla never saw a dime of the money he was promised.
Upset and angry, Tesla left Edison’s company to start his own electric company, as the Electrical Revolution was taking place. While getting his company started, Tesla dug ditches for two dollars a day, or roughly 50 dollars in today’s money, in order to make ends meet. It wasn’t long, however, until he managed to strike a deal with George Westinghouse, the founder and owner of Westinghouse Electric and Manufacturing Company, where Tesla sold his patents to Westinghouse to manufacture for regular compensation. (To be clear, Tesla did not invent AC electricity, he simply owned the patent to AC devices.) As expected, this partnership put Tesla in direct competition with Edison’s company, as, once again, Edison owned the patent for DC electricity. Thus began the War of the Currents.
The Current Wars
Edison’s DC electrical system was inferior to Tesla’s/Westinghouse’s AC electrical system in almost every possible way. It caused dangerous sparks, and could only transmit electricity one mile. Therefore, it required a power plant every square mile and cables as thick as a person’s arm. Meanwhile, AC (Alternating current: an electric current which periodically reverses direction and changes its magnitude) was able to use thinner wires, could transmit electricity over several miles, and didn’t cause dangerous sparks.
It should have been an obvious choice, but through shady business deals and patent suppression, Edison was able to put up a fight, despite knowing his system was inferior in every way. He did this by attempting to make the public despise AC. How, you ask? Edison paid schoolboys 25 cents to kidnap pets and other small animals. Then he would take the stolen pets and electrocute them until they died, in front of crowds of people, using AC electricity. He wanted to prove that it was too dangerous to use in homes. He even produced a movie that was released in many theaters where an elephant was electrocuted and killed.
By the way, if you wish to know something that Thomas Edison did in fact invent, then look no further than the Electric Chair, which he made sure used AC electricity. (I’ll be fair to the guy, he also invented the phonograph and motion picture camera.)
In response, Tesla put on a light show at the 1893 World Fair by holding two lightbulbs up and shooting AC electricity through his body to produce light. Westinghouse also produced the lights at the fair, further proving to the people there that Westinghouse electricity was perfectly safe. From that point on, it was game over for Edison. After the World Fair, AC electricity became the standard for all households due to its advantages. Today, AC is the form of electricity that powers everyone’s homes. (Told you his mother’s prediction of Tesla being a ‘child of light’ was ironic.) That said, DC is also used in everyone’s homes, but in electrical chargers and plugs.
With AC becoming the standard, Nikola Tesla quickly became renowned and popular with common citizens and the rich alike. Meanwhile, Edison lost control of his company, General Electric, after some merges/partnerships that didn’t pan out how he expected.
Patents and Inventions
Nikola Tesla was a brilliant man who deserved every piece of acclaim he received after the War of the Currents. He was a hard worker and persistent. However, many of you might be wondering what else Tesla invented. Sure, he helped develop the popularity of AC electricity, but what else? What made me claim he is one of the most important inventors of the modern era? Before we continue with the rest of his life and death, I will tell you a fraction of what Tesla invented, as he created over a dozen inventions and held close to 300 patents across 26 countries by the end of his life.
Of course, you know about Tesla’s involvement with AC electricity: he designed AC motors, generators, transformers, and power transmission devices, which together relied on 25 of the most valuable patents since the telephone. Tesla created all 25, and they are still used today. Tesla also designed the first hydroelectric plant powerful enough to provide light for an entire city. That plant was built at Niagara Falls, and a few statues of the man have since been placed there in his honor.
Another invention of his was radio, or rather wireless transmission. And by radio, I mean he discovered ways to transmit radio waves, not… the radio, as in the device we use in cars to listen to music. While many assume Guglielmo Marconi was the inventor of radio, the truth is that Marconi’s work was heavily based on Tesla’s. In fact, Tesla demonstrated a radio-controlled boat in 1898 (basically a remote-control toy boat), roughly three years before Marconi would take the credit for radio’s invention. To be clear, Marconi was the first person to transmit what was essentially a wireless message across the Atlantic, which is how the invention came to be credited to him, but he did not invent radio.
Credit for this next invention is still hotly debated, even today, but I will mention it because Tesla did contribute to it, even if just a little: X-rays. As I said, it is unclear who got there first, Wilhelm Roentgen or Nikola Tesla. Some say Tesla’s X-ray photos came first, others say Roentgen’s did. Either way, Tesla was among the earliest, and he was undoubtedly the first to warn that X-rays might be extremely dangerous, at a time when people believed X-rays could cure blindness.
Tesla also worked out the resonant frequency of the Earth more than 50 years before the technology existed to confirm his discovery.
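For context, here’s a back-of-the-envelope check I’ve added (not something from Tesla or this essay): treating the Earth-ionosphere cavity as an ideal spherical shell, its fundamental resonance is roughly the speed of light divided by the planet’s circumference, which lands near the measured value of about 7.83 Hz.

```python
# Back-of-the-envelope sketch (my addition): lowest resonance of an ideal
# Earth-sized spherical cavity, compared with the measured Schumann
# resonance fundamental of roughly 7.83 Hz.
import math

SPEED_OF_LIGHT = 299_792_458   # m/s
EARTH_RADIUS = 6_371_000       # m, mean radius

ideal_fundamental_hz = SPEED_OF_LIGHT / (2 * math.pi * EARTH_RADIUS)
print(f"Ideal-cavity estimate: {ideal_fundamental_hz:.1f} Hz "
      "(observed fundamental is about 7.83 Hz)")
```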
I think I have proven my point. Tesla truly was a brilliant man, but he was also human, and humans are flawed. For example, he did not believe in electrons and thought that all of Einstein’s work was nonsense. (If you ever see someone say that Einstein called Tesla a genius, please know this probably never happened, as they did not like each other whatsoever. And if it did happen, Einstein meant it sarcastically.) But out of everything, Tesla’s biggest flaw and failure was probably what would have been his biggest, most ambitious invention had it succeeded… the Tesla Tower.
Tesla Tower
The basic idea for the Tesla Tower was that it would deliver energy and electricity to every home in the world for free, as long as people had something called a Tesla coil, which was another of Tesla’s inventions. It was a very interesting project with very kind intentions, but after J.P. Morgan, the main financial backer, pulled out of the project, it quickly collapsed.
Today the Tesla Tower is heavily romanticized as something that would have worked if J.P. Morgan hadn’t been so greedy, but this is not true. Even today, Tesla’s idea is beyond our technology. In addition, modern scientists say that the Tesla Tower rested on several of Tesla’s misunderstandings of radio-wave physics. There were many other complications with the idea as well, but that was the main problem.
Late Life and Death
After the failure of the Tesla Tower, Nikola Tesla retreated from polite society and entered a long stretch of depression, loneliness, and denial. He continued to invent and file patents, claiming that inventing was the only time he ever felt truly happy, but he was slowly losing his mind. This led to more and more failures, which made him more and more secluded, until he reportedly suffered a mental breakdown.
By the late part of his life, Tesla was widely considered insane. To give you a general idea of just how far gone he was, here’s where a fun bit of pop culture comes from: Nikola Tesla claimed in one of his final interviews that he was head over heels in love with a pigeon. In fact, he spent close to 2,000 dollars nursing a white pigeon with grey wingtips back to health. Like I said, completely insane.
In the final years of his life, Tesla lived inside the New Yorker Hotel on a diet of milk and biscuits, as he was completely penniless.
Undoubtedly, you must be shocked and confused. After all, how in the world did Tesla die penniless? He invented so many things, even if not all of them were credited to him! Well, sometime between the War of the Currents and the beginning of the Tesla Tower, Nikola Tesla was called to a meeting with George Westinghouse, where he learned that the Westinghouse Company was going under. In a last-ditch attempt to save the company, George Westinghouse begged Tesla to lower the price of his patents. (Remember, Tesla had sold his patents to Westinghouse in exchange for regular compensation.) Tesla told George Westinghouse that he was grateful to the man for believing in him when no one else did. Then he tore up the compensation contract. This meant that George Westinghouse and his company were no longer obligated to pay Tesla a dime for using his patents or inventions. Had Tesla not done this, he might have died the world’s first billionaire.
From then on, Tesla stopped collecting compensation for his patents and inventions, as he no longer cared whether he was properly credited or paid. He claimed throughout his entire life that money meant nothing to him; he was, in fact, a humanitarian (humanitarian: concerned with or seeking to promote human welfare).
For the remaining years of his life, what money Tesla did earn went back into his research, future patents, and inventions.
Then, on January 7th, 1943, Tesla passed away, penniless and alone.
End
When I first read about Nikola Tesla, I was deeply upset that he died penniless and alone, much as I hope many of you feel now. A man who had done so much for our world died underappreciated, and then almost forgotten in America. I’ll be honest, I was angry. But now, after learning more about his life and the things he did, I can’t help but think that his death was very fitting. Almost ideal, even. After all, if he had never torn up that contract, if he had died living on his royalties in an expensive mansion with a high-class laboratory, would you believe me if I told you that Tesla was an honest man who invented for the good of humanity rather than out of greed?
That’s the thing about Tesla. He did not make the world a better place to make money, he made money to make the world a better place.
Tesla was not a god, as many in his ‘fan club’ claim he was. Many of his ideas and beliefs were wrong or destined never to work. He was undoubtedly crazy, and at times too idealistic, but he pioneered radio and X-rays, patented the key AC devices, and created so much more.
Without a doubt, Nikola Tesla is one of the most important, and underrated, inventors of our modern era.
|
yes
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://www.openculture.com/2019/07/jikji.html
|
The Oldest Book Printed with Movable Type is Not The Gutenberg ...
|
The history of the printed word is full of bibliographic twists and turns, major historical moments, and the significant printing of books now so obscure no one has read them since their publication. Most of us have only the sketchiest notion of how mass-produced printed books came into being—a few scattered dates and names. But every schoolchild can tell you the first book ever printed, and everyone knows the first words of that book: “In the beginning….”
The first Gutenberg Bible, printed in 1454 by Johannes Gutenberg, introduced the world to movable type, history tells us. It is “universally acknowledged as the most important of all printed books,” writes Margaret Leslie Davis, author of the recently published The Lost Gutenberg: The Astounding Story of One Book’s Five-Hundred-Year Odyssey. In 1900, Mark Twain expressed the sentiment in a letter “commenting on the opening of the Gutenberg Museum,” writes M. Sophia Newman at Lithub. “What the world is to-day,” he declared, “good and bad, it owes to Gutenberg. Everything can be traced to this source.”
There is kind of an oversimplified truth in the statement. The printed word (and the printed Bible, at that) did, in large part, determine the course of European history, which, through empire, determined the course of global events after the “Gutenberg revolution.” But there is another story of print entirely independent of book history in Europe, one that also determined world history with the preservation of Buddhist, Chinese dynastic, and Islamic texts. And one that begins “before Johannes Gutenberg was even born,” Newman points out.
The oldest extant text ever printed with movable type predates Gutenberg himself (born in 1400) by 23 years, and predates the printing of his Bible by 78 years. It is the Jikji, printed in Korea, a collection of Buddhist teachings by Seon master Baegun and printed in movable type by his students Seok-chan and Daijam in 1377. (Seon is a Korean form of Chan or Zen Buddhism.) Only the second volume of the printing has survived, and you can see several images from it here.
Impressive as this may be, the Jikji does not have the honor of being the first book printed with movable type, only the oldest surviving example. The technology could go back two centuries earlier. Margaret Davis nods to this history, Newman concedes, writing that “movable type was an 11th century Chinese invention, refined in Korea in 1230, before meeting conditions in Europe that would allow it to flourish.” This is more than most popular accounts of the printed word say on the matter, but it’s still an inaccurate and highly cursory summary of the evidence.
Newman herself says quite a lot more. In essays at Lithub and Tricycle, she describes how printing techniques developed in Asia and were taken up in Korea in the 1200s by the Goryeo dynasty, who commissioned a printer named Choe Yun-ui to reconstruct a woodblock print of the massive collection of ancient Buddhists texts called the Tipitaka after the Mongols burned the only Korean copy. By casting “individual characters in metal” and arranging them in a frame—the same process Gutenberg used—he was able to complete the project by 1250, 200 years before Gutenberg’s press.
This text, however, did not survive, nor did the countless number of others printed when the technology spread across the Mongol empire on the Silk Road and took root with the Muslim Uyghurs. It is possible, though “no clear historical evidence” yet supports the contention, that movable type spread to Europe from Asia along trade routes. “If there was any connection,” wrote Joseph Needham in Science and Civilization in China, “in the spread of printing between Asia and the West, the Uyghurs, who used both block printing and movable type, had good opportunities to play an important role in this introduction.”
Without surviving documentation, this early history of printing in Asia relies on secondary sources. But “the entire history of the printing press in Europe” is likewise “riddled with gaps,” Newman writes. What we do know is that Jikji, a collection of Korean Zen Buddhist teachings, is the world’s oldest extant book printed with movable type. The myth of Johannes Gutenberg as “a lone genius who transformed human culture,” as Davis writes, “endures because the sweep of what followed is so vast that it feels almost mythic and needs an origin story to match.” But this is one inventive individual in the history of printing, not the original, godlike source of movable type.
Gutenberg makes sense as a convenient starting point for the growth and worldwide spread of capitalism and European Christianity. His innovation worked much faster than earlier systems, and others that developed around the same time, in which frames were pressed by hand against the paper. Flows of new capital enabled the rapid spread of his machine across Europe. The achievement of the Gutenberg Bible is not diminished by a fuller history. But “what gets left out” of the usual story, as Newman tells us in great detail, “is startlingly rich.”
“Only very recently, mostly in the last decade” has the long history of printing in Asia been “acknowledged at all” in popular culture, though scholars in both the East and West have long known it. Korea has regarded Jikji “and other ancient volumes as national points of pride that rank among the most important of books.” Yet UNESCO only certified Jikji as the “oldest movable metal type printing evidence” in 2001. The recognition may be late in coming, but it matters a great deal, nonetheless. Learn much more about the history, content, and provenance of Jikji at this site created by “cyber diplomats” in Korea after UNESCO bestowed World Heritage status on the book. And see a fully digitized copy of the book here.
Comments (7)
It is worth noting, I think, that Gutenberg must then have been a great innovator. An invention becomes an innovation when it is successfully introduced to the market. And that surely is no small achievement for such a disruptive technology!
The point is that the typecasting system PREDATES Gutenberg, and was widely used in Asia. There are just no surviving books from that period.
Gutenberg, on the other hand, holds full credit for being the first to use typecast printing in Europe, and he started printing books when the time was ripe for their proliferation. I think that is an example of the right technology applied under the right cultural circumstances.
Don’t be afraid to admit that somebody on the other side of the world already knew about stuff Europe found out about much later… Who took credit for their invention is a different can of worms…
It’s so telling that you prominently featured a link to a propaganda web site for North Korea, where they proclaim in a video that Gutenberg was a “mere metalworker” who could not possibly have developed a printing press by himself. When your sources are not credible, that reflects very poorly on yours.
Good for you for reading critically. There are no such things as unbiased sources, as the marginalization of printing history in Asia shows. The information in this article comes from the many other sources quoted and linked in the text. Consult those too, and make up your own mind.
Now, this is a very well-known historical fact. There is a book that exists; why do you still not believe it? Look at the Tripitaka Koreana, which was designated a UNESCO World Heritage item. Asia was not a savage place in the past as most people think. It had a very high standard of living, made rocket-propelled arrows, and had other great innovations. This is not North Korean propaganda. It is slowly being accepted by international society.
|
no
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://en.wikipedia.org/wiki/Movable_type
|
Movable type - Wikipedia
|
The world's first movable type printing technology for paper books was made of porcelain materials and was invented around AD 1040 in China during the Northern Song dynasty by the inventor Bi Sheng (990–1051).[1] The earliest printed paper money with movable metal type to print the identifying code of the money was made in 1161 during the Song dynasty.[2] In 1193, a book in the Song dynasty documented how to use the copper movable type.[3] The oldest extant book printed with movable metal type, Jikji, was printed in Korea in 1377 during the Goryeo dynasty.
The spread of both movable-type systems was largely limited to East Asia. The development of the printing press in Europe may have been influenced by various sporadic reports of movable type technology brought back to Europe by returning business people and missionaries to China.[4][5][6] Some of these medieval European accounts are still preserved in the library archives of the Vatican and Oxford University among many others.[7]
Around 1450, German goldsmith Johannes Gutenberg introduced the metal movable-type printing press in Europe, along with innovations in casting the type based on a matrix and hand mould. The small number of alphabetic characters needed for European languages was an important factor.[8] Gutenberg was the first to create his type pieces from an alloy of lead, tin, and antimony—and these materials remained standard for 550 years.[9]
The technique of imprinting multiple copies of symbols or glyphs with a master type punch made of hard metal first developed around 3000 BC in ancient Sumer. These metal punch types can be seen as precursors of the letter punches adapted in later millennia to printing with movable metal type. Cylinder seals were used in Mesopotamia to create an impression on a surface by rolling the seal on wet clay.[11]
Seals and stamps may have been precursors to movable type. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the 2nd millennium BC, has been conjectured by some archaeologists as evidence that the stamps were made using movable type.[12] The enigmatic Minoan Phaistos Disc of c. 1800–1600 BC has been considered by one scholar as an early example of a body of text being reproduced with reusable characters: it may have been produced by pressing pre-formed hieroglyphic "seals" into the soft clay. A few authors even view the disc as technically meeting all definitional criteria to represent an early incidence of movable-type printing.[13][14]
Following the invention of paper during the Chinese Han dynasty, writing materials became more portable and economical than the bones, shells, bamboo slips, metal or stone tablets, silk, etc. previously used. Yet copying books by hand was still labour-consuming. Not until the Xiping Era (172–178 AD), towards the end of the Eastern Han dynasty, did sealing print and monotype appear. It was soon used for printing designs on fabrics, and later for printing texts.
Woodblock printing, invented by about the 8th century during the Tang dynasty, worked as follows. First, the neat hand-copied script was stuck on a relatively thick and smooth board, with the front of the paper, which was so thin that it was nearly transparent, sticking to the board, and characters showing in reverse, but distinctly, so that every stroke could be easily recognized. Then carvers cut away the parts of the board that were not part of the character, so that the characters were cut in relief, completely differently from those cut intaglio. When printing, the bulging characters would have some ink spread on them and be covered by paper. With workers' hands moving on the back of paper gently, characters would be printed on the paper. By the Song dynasty, woodblock printing came to its heyday. Although woodblock printing played an influential role in spreading culture, there were some significant drawbacks. Firstly, carving the printing plate required considerable time, labour and materials; secondly, it was not convenient to store these plates; and finally, it was difficult to correct mistakes.
Bi Sheng (畢昇) (990–1051) developed the first known movable-type system for printing in China around 1040 AD during the Northern Song dynasty, using ceramic materials.[16][17] As described by the Chinese scholar Shen Kuo (沈括) (1031–1095):
When he wished to print, he took an iron frame and set it on the iron plate. In this he placed the types, set close together. When the frame was full, the whole made one solid block of type. He then placed it near the fire to warm it. When the paste [at the back] was slightly melted, he took a smooth board and pressed it over the surface, so that the block of type became as even as a whetstone.
For each character there were several types, and for certain common characters there were twenty or more types each, in order to be prepared for the repetition of characters on the same page. When the characters were not in use he had them arranged with paper labels, one label for each rhyme-group, and kept them in wooden cases.
If one were to print only two or three copies, this method would be neither simple nor easy. But for printing hundreds or thousands of copies, it was marvelously quick. As a rule he kept two forms going. While the impression was being made from the one form, the type was being put in place on the other. When the printing of the one form was finished, the other was then ready. In this way the two forms alternated and the printing was done with great rapidity.[16]
In 1193, Zhou Bida, an officer of the Southern Song dynasty, made a set of clay movable-type method according to the method described by Shen Kuo in his Dream Pool Essays, and printed his book Notes of The Jade Hall (《玉堂雜記》).[18] The ceramic movable type was also mentioned by Kublai Khan's counsellor Yao Shu, who convinced his pupil Yang Gu to print language primers using this method.[3]
The claim that Bi Sheng's clay types were "fragile" and "not practical for large-scale printing" and "short lived"[19] was refuted by later experiments. Bao Shicheng (1775–1855) wrote that baked clay moveable type was "as hard and tough as horn"; experiments show that clay type, after being baked in an oven, becomes hard and difficult to break, such that it remains intact after being dropped from a height of two metres onto a marble floor. Clay movable types in China were 1 to 2 centimetres long, not 2 mm, and thus as hard as horn. But similar to metal type, ceramic type did not hold the water-based Chinese calligraphic ink well, and had the added disadvantage of uneven matching of the type, which could sometimes result from uneven changes in the size of the type during the baking process.[20][21]
There has been an ongoing debate regarding the success of ceramic printing technology as there have been no printed materials found with ceramic movable types. However, it is historically recorded to have been used as late as 1844 in China from the Song dynasty through the Qing dynasty.[18][22]: 22
Movable type was invented during the Northern Song dynasty around the year 1041 by the commoner Bi Sheng. Bi Sheng's movable type was fired in porcelain. After his death, the ceramic movable type passed on to his descendants. The next mention of movable type occurred in 1193 when a Southern Song chief counsellor, Zhou Bida (周必大), attributed the movable-type method of printing to Shen Kuo. However, Shen Kuo did not invent movable type; he credited it to Bi Sheng in his Dream Pool Essays.[3]
A revolving typecase for wooden type in China, from Wang Zhen's book published in 1313
Bi Sheng (990–1051) of the Song dynasty also pioneered the use of wooden movable type around 1040 AD, as described by the Chinese scholar Shen Kuo (1031–1095). However, this technology was abandoned in favour of clay movable types due to the presence of wood grains and the unevenness of the wooden type after being soaked in ink.[16][23]
In 1298, Wang Zhen (王禎), a Yuan dynasty governmental official of Jingde County, Anhui Province, China, re-invented a method of making movable wooden types. He made more than 30,000 wooden movable types and printed 100 copies of Records of Jingde County (《旌德縣志》), a book of more than 60,000 Chinese characters. Soon afterwards, he summarized his invention in his book A method of making moveable wooden types for printing books. Although the wooden type was more durable under the mechanical rigors of handling, repeated printing wore down the character faces, and the types could only be replaced by carving new pieces. This system was later enhanced by pressing wooden blocks into sand and casting metal types from the depression in copper, bronze, iron or tin. This new method overcame many of the shortcomings of woodblock printing. Rather than manually carving an individual block to print a single page, movable type printing allowed for the quick assembly of a page of text. Furthermore, these new, more compact type fonts could be reused and stored.[16][17] Wang Zhen used two rotating circular tables as trays for laying out his type. The first table was separated into 24 trays in which each movable type was categorized based on a number corresponding with a rhyming pattern. The second table contained miscellaneous characters.[3]
The set of wafer-like metal stamp types could be assembled to form pages, inked, and page impressions taken from rubbings on cloth or paper.[17] In 1322, a Fenghua county officer Ma Chengde (馬称德) in Zhejiang, made 100,000 wooden movable types and printed the 43-volume Daxue Yanyi (《大學衍義》). Wooden movable types were used continually in China. Even as late as 1733, a 2300-volume Wuying Palace Collected Gems Edition (《武英殿聚珍版叢書》) was printed with 253,500 wooden movable types on order of the Qianlong Emperor, and completed in one year.[3]
At least 13 material finds in China indicate the invention of bronze movable type printing in China no later than the 12th century,[26] with the country producing large-scale bronze-plate-printed paper money and formal official documents issued by the Jin (1115–1234) and Southern Song (1127–1279) dynasties with embedded bronze metal types for anti-counterfeit markers. Such paper-money printing might date back to the 11th-century jiaozi of Northern Song (960–1127).[22]: 41–54
The typical example of this kind of bronze movable type embedded copper-block printing is a printed "check" of the Jin dynasty with two square holes for embedding two bronze movable-type characters, each selected from 1,000 different characters, such that each printed paper note has a different combination of markers. A copper-block printed note dated between 1215 and 1216 in the collection of Luo Zhenyu's Pictorial Paper Money of the Four Dynasties, 1914, shows two special characters – one called Ziliao, the other called Zihao – for the purpose of preventing counterfeiting; over the Ziliao there is a small character (輶) printed with movable copper type, while over the Zihao there is an empty square hole – apparently the associated copper metal type was lost. Another sample of Song dynasty money of the same period in the collection of the Shanghai Museum has two empty square holes above both Ziliao and Zihao, due to the loss of the two copper movable types. Song dynasty paper money printed from bronze blocks with embedded bronze movable type was issued on a large scale and remained in circulation for a long time.[27]
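An illustrative calculation (added here; the figures are those quoted above, and treating the two slots as independent is an assumption): two embedded slots, each drawn from a pool of 1,000 characters, already allow a million distinct marker combinations.

```python
# Simple combinatorics sketch of the anti-counterfeit scheme described above:
# two embedded type slots (Ziliao and Zihao), each filled from a pool of
# 1,000 characters, give 1,000 x 1,000 possible marker combinations.
POOL = 1_000          # characters available for each slot
SLOTS = 2             # the two marker positions

combinations = POOL ** SLOTS
print(f"{combinations:,} distinct anti-counterfeit marker combinations")  # 1,000,000
```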
The 1298 book Zao Huozi Yinshufa (《造活字印書法》) by the Yuan dynasty (1271–1368) official Wang Zhen mentions tin movable type, used probably since the Southern Song dynasty (1127–1279), but this was largely experimental.[28] It was unsatisfactory due to its incompatibility with the inking process.[16]: 217 But by the late 15th century these concerns were resolved and bronze type was widely used in Chinese printing.[29]
During the Mongol Empire (1206–1405), printing using movable type spread from China to Central Asia. The Uyghurs of Central Asia used movable type, their script type adopted from the Mongol language, some with Chinese words printed between the pages – strong evidence that the books were printed in China.[30]
In 1725 the Qing dynasty government made 250,000 bronze movable-type characters and printed 64 sets of the encyclopedic Gujin Tushu Jicheng (《古今圖書集成》, Complete Collection of Illustrations and Writings from the Earliest to Current Times). Each set consisted of 5,040 volumes, making a total of 322,560 volumes printed using movable type.[30]
Korean movable type from 1377 used for the Jikji
Printed pages of the Jikji
In 1234 the first books known to have been printed with a metallic type set were published in Goryeo-dynasty Korea. They form a set of ritual books, Sangjeong Gogeum Yemun, compiled by Choe Yun-ui.[31][32]
While these books have not survived, the oldest existing book in the world printed with metallic movable type is the Jikji, printed in Korea in 1377.[33]
The Asian Reading Room of the Library of Congress in Washington, D.C. displays examples of this metal type.[34] Commenting on the invention of metallic types by Koreans, French scholar Henri-Jean Martin described this as "[extremely similar] to Gutenberg's".[35] However, Korean movable metal type printing differed from European printing in the materials used for the type, punch, matrix, and mould, and in the method of making an impression.[36]
The techniques for bronze casting, used at the time for making coins (as well as bells and statues) were adapted to making metal type. The Joseon dynasty scholar Seong Hyeon (성현, 成俔, 1439–1504) records the following description of the Korean font-casting process:
At first, one cuts letters in beech wood. One fills a trough level with fine sandy [clay] of the reed-growing seashore. Wood-cut letters are pressed into the sand, then the impressions become negative and form letters [moulds]. At this step, placing one trough together with another, one pours the molten bronze down into an opening. The fluid flows in, filling these negative moulds, one by one becoming type. Lastly, one scrapes and files off the irregularities, and piles them up to be arranged.[31]
A potential solution to the linguistic and cultural bottleneck that held back movable type in Korea for 200 years appeared in the early 15th century—a generation before Gutenberg would begin working on his own movable-type invention in Europe—when Sejong the Great devised a simplified alphabet of 24 characters (hangul) for use by the common people, which could have made the typecasting and compositing process more feasible. But Korea's cultural elite, "appalled at the idea of losing hanja, the badge of their elitism", stifled the adoption of the new alphabet.[17]
A "Confucian prohibition on the commercialization of printing" also obstructed the proliferation of movable type, restricting the distribution of books produced using the new method to the government.[37] The technique was restricted to use by the royal foundry for official state publications only, where the focus was on reprinting Chinese classics lost in 1126 when Korea's libraries and palaces had perished in a conflict between dynasties.[37]
Scholarly debate and speculation has occurred as to whether Eastern movable type spread to Europe between the late 14th century and early 15th centuries.[31][6]: 58–69 [38][5][39] For example, authoritative historians Frances Gies and Joseph Gies claimed that "The Asian priority of invention movable type is now firmly established, and that Chinese-Korean technique, or a report of it traveled westward is almost certain."[4] However, Joseph P. McDermott claimed that "No text indicates the presence or knowledge of any kind of Asian movable type or movable type imprint in Europe before 1450. The material evidence is even more conclusive."[39]
The Printing Revolution in the 15th century: within several decades around 270 European towns took up movable-type printing.[40]
European output of movable-type printing from Gutenberg to 1800[41]
Johannes Gutenberg of Mainz, Germany, is acknowledged as the first to invent a metal movable-type printing system in Europe: the printing press, 78 years after Jikji (the oldest preserved book printed with movable metal type) had been printed in Korea. Gutenberg, as a goldsmith, knew techniques of cutting punches for making coins from moulds. Between 1436 and 1450 he developed hardware and techniques for casting letters from matrices using a device called the hand mould.[6] Gutenberg's key invention and contribution to movable-type printing in Europe, the hand mould, was the first practical means of making cheap copies of letterpunches in the vast quantities needed to print complete books, making the movable-type printing process a viable enterprise.
Before Gutenberg, scribes copied books by hand on scrolls and paper, or print-makers printed texts from hand-carved wooden blocks. Either process took a long time; even a small book could take months to complete. Because carved letters or blocks were flimsy and the wood susceptible to ink, the blocks had a limited lifespan.
Gutenberg and his associates developed oil-based inks ideally suited to printing with a press on paper, and the first Latin typefaces. His method of casting type may have differed from the hand-mould used in subsequent decades. Detailed analysis of the type used in his 42-line Bible has revealed irregularities in some of the characters that cannot be attributed to ink spread or type wear under the pressure of the press. Scholars conjecture that the type pieces may have been cast from a series of matrices made with a series of individual stroke punches, producing many different versions of the same glyph.[42]
Editing with movable metal – cca. 1920
It has also been suggested that the method used by Gutenberg involved using a single punch to make a mould, but the mould was such that the process of taking the type out disturbed the casting, causing variants and anomalies, and that the punch-matrix system came into use possibly around the 1470s.[43]
This raises the possibility that the development of movable type in the West may have been progressive rather than a single innovation.[44]
Gutenberg's movable-type printing system spread rapidly across Europe, from the single Mainz printing press in 1457 to 110 presses by 1480, with 50 of them in Italy. Venice quickly became the centre of typographic and printing activity. Significant contributions came from Nicolas Jenson, Francesco Griffo, Aldus Manutius, and other printers of late 15th-century Europe. Gutenberg's movable type printing system offered a number of advantages over previous movable type techniques. The lead-antimony-tin alloy used by Gutenberg had half the melting temperature of bronze,[45][46] making it easier to cast the type and aiding the use of reusable metal matrix moulds instead of expendable sand and clay moulds. The use of an antimony alloy increased the hardness of the type compared to lead and tin,[47] improving its durability. The reusable metal matrix allowed a single experienced worker to produce 4,000 to 5,000 individual types a day,[48][49] while Wang Zhen had artisans working for two years to make 60,000 wooden types.[50]
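An illustrative comparison (added here, using the figures quoted above; treating the two years as continuous daily work by the whole workshop is an assumption):

```python
# Rough productivity comparison using the figures quoted above:
# one hand-mould caster producing 4,000-5,000 sorts a day versus a workshop
# carving 60,000 wooden types over roughly two years.
WOODEN_TYPES = 60_000
CARVING_DAYS = 2 * 365                      # ~2 years, assumed continuous work
wooden_per_day = WOODEN_TYPES / CARVING_DAYS

hand_mould_per_day = 4_500                  # midpoint of the 4,000-5,000 range

print(f"Carved wooden type:  ~{wooden_per_day:.0f} pieces/day (whole workshop)")
print(f"Hand-mould casting:  ~{hand_mould_per_day} pieces/day (one worker)")
print(f"Speed-up: roughly {hand_mould_per_day / wooden_per_day:.0f}x")
```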
If the glyph design includes enclosed spaces (counters) then a counterpunch is made. The counter shapes are transferred in relief (cameo) onto the end of a rectangular bar of carbon steel using a specialized engraving tool called a graver. The finished counterpunch is hardened by heating and quenching (tempering), or exposure to a hot cyanide compound (case hardening). The counterpunch is then struck against the end of a similar rectangular steel bar—the letterpunch—to impress the counter shapes as recessed spaces (intaglio). The outer profile of the glyph is completed by scraping away with a graver the material outside the counter spaces, leaving only the stroke or lines of the glyph. Progress toward the finished design is checked by successive smoke proofs; temporary prints made from a thin coating of carbon deposited on the punch surface by a candle flame. The finished letter punch is finally hardened to withstand the rigours of reproduction by striking. One counterpunch and one letterpunch are produced for every letter or glyph making up a complete font.
The letterpunch is used to strike a blank die of soft metal to make a negative letter mould, called a matrix.
Casting
The matrix is inserted into the bottom of a device called a hand mould. The mould is clamped shut and molten type metal alloy (consisting mostly of lead and tin, with a small amount of antimony for hardening) is poured into a cavity from the top. Antimony has the rare property of expanding as it cools, giving the casting sharp edges.[51] When the type metal has sufficiently cooled, the mould is unlocked and a rectangular block approximately 4 cm (1.6 in) long, called a sort, is extracted. Excess casting on the end of the sort, called the tang, is later removed to make the sort the precise height required for printing, known as "type height".
At the end of the 19th century there were only two typefoundries left in the Netherlands: Johan Enschedé & Zonen, at Haarlem, and Lettergieterij Amsterdam, voorheen (formerly) Tetterode. Each had its own type-height: Enschedé 65 23/24 points Didot and Amsterdam 66 1/24 points Didot – enough difference to prevent combined use of fonts from the two typefoundries: the Enschedé type would print too light, or else the Amsterdam font would print rather bold. This was a way of keeping clients.[53]
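An illustrative check of how small that height difference actually is (added here; the Didot point of roughly 0.376 mm is the standard value):

```python
# Quick check of the gap between the two Dutch type-heights quoted above.
DIDOT_POINT_MM = 0.3759                 # standard Didot point in millimetres

enschede_height = 65 + 23 / 24          # points Didot
amsterdam_height = 66 + 1 / 24          # points Didot
delta_points = amsterdam_height - enschede_height

print(f"Difference: {delta_points:.4f} points Didot "
      f"= {delta_points * DIDOT_POINT_MM:.3f} mm")   # ~0.031 mm, yet enough to matter
```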
In 1905 the Dutch governmental Algemeene Landsdrukkerij, later the "State printery" (Staatsdrukkerij), decided during a reorganisation to use a standard type-height of 63 points Didot, the so-called Staatsdrukkerij-hoogte – actually the Belgian height, though this fact was not widely known.
Modern, factory-produced movable type was available in the late 19th century. It was held in the printing shop in a job case, a drawer about 2 inches high, a yard wide, and about two feet deep, with many small compartments for the various letters and ligatures. The most popular and accepted of the job case designs in America was the California Job Case, which took its name from the Pacific coast location of the foundries that made the case popular.[54]
Traditionally, the capital letters were stored in a separate drawer or case that was located above the case that held the other letters; this is why capital letters are called "upper case" characters while the non-capitals are "lower case".[55]
Compartments also held spacers, which are blocks of blank type used to separate words and fill out a line of type, such as em and en quads (quadrats, or spaces; a quadrat is a block of type whose face is lower than the printing letters so that it does not itself print). An em space was the width of a capital letter "M" – as wide as the type was high – while an en space was half that width (roughly the width of a capital "N").
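An illustrative note (added here): the em quad tracks the point size of the type and the en quad is half of it. The snippet below shows the widths for an assumed 12-point font.

```python
# Minimal illustration (12-point font is an assumed example, not a value
# from the article): an em quad is as wide as the type size, an en quad half.
POINT_SIZE = 12                      # nominal font size in points (assumption)
em_quad_width = POINT_SIZE           # 12 pt
en_quad_width = POINT_SIZE / 2       # 6 pt
print(f"em quad: {em_quad_width} pt, en quad: {en_quad_width} pt")
```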
Individual letters are assembled into words and lines of text with the aid of a composing stick, and the whole assembly is tightly bound together to make up a page image called a forme, where all letter faces are exactly the same height to form a flat surface of type. The forme is mounted on a printing press, a thin coating of viscous ink is applied, and impressions are made on paper under great pressure in the press. "Sorts" is the term given to special characters not freely available in the typical type case, such as the "@" mark.
Sometimes it is erroneously stated that printing with metal type replaced the earlier methods. In the industrial era, printing methods were chosen to suit the purpose. For example, when printing large-scale letters for posters, metal type would have proved too heavy and economically unviable, so large-scale type was made as carved wood blocks as well as ceramic plates.[56] Also, in many cases where large-scale text was required, it was simpler to hand the job to a sign painter than to a printer. Images could be printed together with movable type if they were made as woodcuts or wood engravings, as long as the blocks were made to the same type height. If intaglio methods, such as copper plates, were used for the images, then the images and the text required separate print runs on different machines.
^ Sass, Benjamin; Marzahn, Joachim (2010). Aramaic and Figural Stamp Impressions on Bricks of the Sixth Century B.C. from Babylon. Harrassowitz Verlag. pp. 11, 20, 160. ISBN 978-3-447-06184-1. "The latter has cuneiform signs that look as if made with a movable type, and impressions from Assur display the same phenomenon."
|
no
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://www.britannica.com/topic/Gutenberg-Bible
|
Gutenberg Bible | Description, History, & Facts | Britannica
|
Gutenberg Bible, also called 42-line Bible or Mazarin Bible, the first complete book extant in the West and one of the earliest printed from movable type, so called after its printer, Johannes Gutenberg, who completed it about 1455 working at Mainz, Germany. The three-volume work, in Latin text, was printed in 42-line columns and, in its later stages of production, was worked on by six compositors simultaneously. It is sometimes referred to as the Mazarin Bible because the first copy described by bibliographers was located in the Paris library of Cardinal Mazarin. The Anthology of Great Buddhist Priests’ Zen Teachings (1377), also known as Jikji, was printed in Korea 78 years before the Gutenberg Bible and is recognized as the world’s oldest extant movable metal type book.
Like other contemporary works, the Gutenberg Bible had no title page, no page numbers, and no innovations to distinguish it from the work of a manuscript copyist. This was presumably the desire of both Gutenberg and his customers. Experts are generally agreed that the Bible, though uneconomic in its use of space, displays a technical efficiency not substantially improved upon before the 19th century. The Gothic type is majestic in appearance, medieval in feeling, and slightly less compressed and less pointed than other examples that appeared shortly thereafter.
|
no
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://news.yale.edu/2023/01/24/worlds-oldest-printed-objects-join-gutenberg-bible-beinecke-display
|
World's oldest printed objects join Gutenberg Bible in Beinecke display
|
A woodblock print of a Buddhist incantation that Empress Regnant Shōtoku had mass produced in Japan between 764 and 770 C.E. (Common Era) has joined Yale’s copy of the Gutenberg Bible on display at the Beinecke Rare Book and Manuscript Library. The new display includes the wooden pagoda that housed the printed incantation and a photograph of the maker’s mark on the miniature pagoda’s bottom. (Photos by Andrew Hurley)
Yale’s copy of the Gutenberg Bible, on view since 1963 in a bronze case on the mezzanine of the Beinecke Rare Book & Manuscript Library, is a landmark in the history of the printed word. Today, another landmark of the same history, a 1,250-year-old print of Buddhist prayers — the earliest known printed text that can be reliably dated — joins it on regular display.
The updated display presents a broader and more complete story of humanity’s development of printing over centuries, Beinecke curators say.
The Gutenberg Bible, composed of two volumes, is the first significant book manufactured in the West using metal moveable type. Johannes Gutenberg’s masterpiece represents a revolution in printing in 15th-century Europe that streamlined book manufacturing and accelerated the dissemination of knowledge worldwide.
Even older are the Hyakumantō darani, woodblock prints of a Buddhist Sutra that Empress Regnant Shōtoku had mass produced in Japan between 764 and 770 C.E. (Common Era) after the suppression of the Nakamaro rebellion — nearly 700 years before Gutenberg began churning out two-volume copies of the Latin Vulgate on his novel printing press in Mainz, a town in present-day Germany.
Yale University Library’s East Asia Collection includes several examples of the Buddhist Sutras, prayer scrolls that were kept in miniature wooden pagodas and distributed to 10 prominent Buddhist temples near Japan’s then-capital, Nara. To provide visitors to the Beinecke Library a broader view of the history of printing, one of the scrolls and its pagoda container will replace one of the two volumes of Yale’s Gutenberg Bible in the display case. The replaced volume will be kept in storage and the two volumes will rotate in and out of the display to promote preservation of the Bible.
“For years, visitors to the Beinecke have marveled at the Gutenberg Bible and rightly so,” said Michelle Light, associate university librarian for special collections and director of the Beinecke Library. “By putting a volume of the Gutenberg in conversation with the oldest surviving printed material we’re offering the public the chance to contemplate two momentous historical objects that combined will tell a fuller story about humanity’s use and development of printing.”
Earliest surviving prints
The scrolls are copies of darani, a type of Buddhist incantation. Each of the 10 temples purportedly received 100,000 scrolls, each enclosed in a miniature pagoda. Many of the pagodas have maker’s marks carved into their bottoms. The scrolls likely were printed using woodblocks, although it is possible that at least some were printed with metal plates, according to recent scholarship. Regardless of the method used, the printing of 1 million prayers and the carving of the same number of pagodas must have required a small army of craftsmen, said Ray Clemens, curator of early books and manuscripts at the Beinecke.
Paula Zyats, assistant chief conservator for Yale University Library, prepares to place the prayer scroll and its pagoda container in the display case.
Only scrolls from one of the temples, the Hōryūji temple outside Nara in the Kansai region of Honshū, survived into the modern age. Many were given away as gifts and many of those found their way into the antiquities market, which dispersed them into collections throughout the world, Clemens said.
“They’re not rare in that they exist in many library and museum collections, but most people don’t realize that they are earliest surviving prints,” he said. “They’re also just fascinating objects in their own right.”
In 1934, Asakawa Kan’ichi, professor of history at Yale and founding curator of the university’s Chinese and Japanese collections, acquired four of the prayer scrolls and their pagodas in collaboration with the Yale Association of Japan.
“With their addition to the regular display, the Hyakumantō darani, an acquisition we have long recognized as invaluable to our academic community, will finally reach the broad audience Asakawa hoped these objects would one day reach,” said Haruko Nakamura, librarian for Japanese studies at Yale.
The scrolls and pagodas have already been utilized in numerous exhibitions as well as scholarly publications, including research by Mimi Yiengpruksawan, professor of History of Art in Yale’s Faculty of Arts and Sciences. Additionally, students in courses across various disciplines — including those taught by Edward Kamens in the Department of East Asian Languages & Literatures and Daniel Botsman and Valerie Hansen in the Department of History — have had the opportunity to view them during class.
“This change is not just about acknowledging the long history of printing in East Asia,” said Botsman, a professor of history in Yale’s Faculty of Arts and Sciences. “It also signals a willingness to think expansively about the achievements and contributions of people in different places and times throughout human history.
“Back in the 1930s, Professor Asakawa might not have used the language of diversity and inclusion,” Botsman added. “But as a pioneering scholar of comparative history, I think this is precisely what he was hoping for when he first brought the Hyakumantō darani to Yale. It is wonderful that it will now be so easily accessible to everyone who visits the Beinecke.”
A broader history of print
Gutenberg printed about 180 copies of the Bible, which were first available in about 1455. Hoping to capitalize on the market for luxury goods, he made copies on both paper and vellum. Yale’s copy, printed on paper, is one of only 21 complete copies known to exist. Another 28 partial copies survive. (The four surviving vellum copies are housed at the Library of Congress, the British Museum in London, and the Bibliothèque Nationale in Paris.) It is a 42-line Bible, meaning most pages feature two columns of 42 lines each.
The Bibles were often donated to monasteries by wealthy laypeople. For many years, the Yale copy belonged to the library of the Benedictine abbey at Melk in Austria. During the economic depression after World War II, the monks sold their copy to pay for the abbey’s restoration. Philanthropist Mary Stillman Harkness later acquired the Bible and presented it to Yale in memory of her mother-in-law, Anna M. Harkness, who donated the money to build Harkness Memorial Quadrangle.
Even within Europe, Gutenberg’s Bible was not the first book printed using moveable type. In fact, it isn’t the first book Gutenberg printed on his press. That distinction belongs to Donatus, a small Latin grammar book for schoolchildren named after its author Aelius Donatus, a mid-fourth century teacher of rhetoric and grammar. Only portions of the earlier and less impressive book survive. A fragment made with the same type used to print the Gutenberg Bible resides at the Princeton University Library.
Printed in about 1455 C.E. by Johannes Gutenberg, the Gutenberg Bible is the first significant book manufactured in the West using metal moveable type. Yale’s copy is one of only 21 complete copies known to exist. The volume on display is currently opened to the Book of Exodus when Moses receives the Ten Commandments.
The use of moveable type predates Gutenberg by centuries. Chinese printers began using porcelain moveable type as early as the 1040s. In the 12th century, printers in East Asia began using metal moveable type to produce money and eventually books, Clemens noted. The oldest surviving book printed with metal moveable type is the Korean Buddhist text Jikji, which was made in 1377. The sole surviving copy is housed at the Bibliothèque Nationale.
The new display at the Beinecke is not the first time that visitors could consider the Gutenberg Bible together with an example of the Hyakumantō darani. In 2013, both were showcased in an exhibition on printing that was part of the Beinecke Library’s 50th anniversary celebration. The two texts were among those most celebrated when the Beinecke’s iconic building first opened.
On Oct. 11, 1963, to mark the occasion, a Yale University News Bureau release touted the collection’s highlights: “Here in the Beinecke Library are the Gutenberg Bible, about 1455, the first book printed from movable type, and the Bay Psalm Book of 1640, the first book printed in the American colonies. Far older than both are Japanese prayer scrolls of the 8th century, believed to be the oldest example of type-printing in the world.”
The new display continues to honor Gutenberg’s work, Clemens said, while inspiring a fuller and more dynamic understanding of the history of print.
“Nobody denies the hugely important role Gutenberg played in the making of the modern world, but he did not create printing out of whole cloth,” he said. “His genius is inventing a new and more efficient method of printing, but the concept had existed for a very long time. We hope that pairing the Gutenberg Bible with an important artifact from another culture, and representing another major religious tradition, will inspire people to want to learn more about both objects and the fascinating history they represent.”
The Beinecke Library’s exhibition hall is free and open to the public seven days a week. The collections are accessible to all who register to do research in the reading room on weekdays and Yale Library’s digital collections are accessible to all online.
“More than 150,000 people come through the revolving doors of the exhibition hall every year,” said Michael Morand, director of community engagement. “The Gutenberg Bible is always a big draw — it’s probably one of the most visited items in any of Yale’s collections. On our popular Saturday tours, we always like to offer context and so note other items in the collections, such as the magnificent 8th century scrolls. Not surprisingly, many people ask if they can see them. Now, happily, they can.
“We’re really excited that this updated exhibition will draw more visitors in the years ahead and encourage them to explore the collections more fully.”
Editor's note: This story has been updated to correct the known number of complete vellum copies of the Gutenberg bibles still in existence. There are four, not three. The fourth is located at the Göttingen State and University Library in Germany.
|
E. by Johannes Gutenberg, the Gutenberg Bible is the first significant book manufactured in the West using metal moveable type. Yale’s copy is one of only 21 complete copies known to exist. The volume on display is currently opened to the Book of Exodus when Moses receives the Ten Commandments.
The use of moveable type predates Gutenberg by centuries. Chinese printers began using porcelain moveable type as early as the 1040s. In the 12th century, printers in East Asia began using metal moveable type to produce money and eventually books, Clemens noted. The oldest surviving book printed with metal moveable type is the Korean Buddhist text Jikji, which was made in 1377. The sole surviving copy is housed at the Bibliothèque Nationale.
The new display at the Beinecke is not the first time that visitors could consider the Gutenberg Bible together with an example of the Hyakumantō darani. In 2013, both were showcased in an exhibition on printing that was part of the Beinecke Library’s 50th anniversary celebration. The two texts were among those most celebrated when the Beinecke’s iconic building first opened.
On Oct. 11, 1963, to mark the occasion, a Yale University News Bureau release touted the collection’s highlights: “Here in the Beinecke Library are the Gutenberg Bible, about 1455, the first book printed from movable type, and the Bay Psalm Book of 1640, the first book printed in the American colonies. Far older than both are Japanese prayer scrolls of the 8th century, believed to be the oldest example of type-printing in the world.”
The new display continues to honor Gutenberg’s work, Clemens said, while inspiring a fuller and more dynamic understanding of the history of print.
“Nobody denies the hugely important role Gutenberg played in the making of the modern world, but he did not create printing out of whole cloth,” he said. “His genius is inventing a new and more efficient method of printing, but the concept had existed for a very long time.
|
no
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://www.sltrib.com/artsliving/2022/05/22/utah-expert-is-studying/
|
A University of Utah expert is studying the world's oldest movable ...
|
A Utah expert is studying the world’s oldest movable-type book — and it’s not the Gutenberg Bible
A University of Utah researcher is leading a team to study the 14th century Korean book called Jikji.
(University of Utah) Scans of pages of the Gutenberg Bible, top, and Jikji, a Korean Buddhist book that was printed decades before Gutenberg's work. A University of Utah researcher is among those leading a study of the Korean book, believed to be the oldest surviving book printed with movable metal type.
Ask most people what the oldest book made with movable metal type is, and they likely will say the Gutenberg Bible, printed around 1455 in Mainz, Germany.
That isn’t the case, and a University of Utah librarian is part of a research project to give proper due to what scholars say is the first such book, known as Jikji.
“People should learn the bigger picture,” said Randy Silverman, who’s head of preservation at the University of Utah’s Marriott Library, and one of two principal investigators on the collaborative project “From Jikji to Gutenberg.”
Jikji is a Korean Buddhist book — the title translates to “pointing at it directly” — that was printed in 1377. It tells “a compression of the history of the Buddhists,” Silverman said. “It tells how the Buddhists attained enlightenment.”
The project — which involves 40 scholars, working in 14 different time zones worldwide — aims to bring Jikji’s existence to the forefront of printing history around the globe.
In a video explaining the endeavor, Silverman says they are “changing perspectives on the history of printing.”
To give context to the project’s mission, Silverman tells a story about a neighbor’s child on his street, who was drawing a sidewalk-chalk picture of the planet, marked with important cultural milestones. Gutenberg was on there, but Jikji was not.
It’s Silverman’s hope, he said, that the next time that child draws a globe, Jikji will be included among the great milestones.
Silverman said he learned about Jikji a few years ago, when he was invited by UNESCO to give talks in South Korea, in Seoul and Cheongju, about 85 miles south — a place Silverman had not heard of before.
Cheongju is home to the Cheongju Early Printing Museum, which opened in 1992 next to the site of the original temple (Heungdeok) where Jikji was first printed. The museum was designed, Silverman said, to answer the question “How will you take care of Jikji?”
Silverman said he left Cheongju with “an obligation to come back to America and talk about it, because it just seemed like I got cheated. People ought to know that this is the story.”
There’s a story behind why Jikji isn’t as recognized as the Gutenberg Bible. Much of it involves the lack of accessibility of the one original copy of Jikji.
The story goes that the book was taken from Korea by a French diplomat, Victor Collin de Plancy, in the early 1900s, was bought by a collector in 1911, and was donated in 1950 to the Bibliothèque nationale de France — the National Library of France. The library hasn’t displayed its copy of Jikji since the early 1970s.
The French and South Korean governments have contested the proper place for the book for years. In 1989, French President François Mitterrand offered to send Jikji to Korea, if the Koreans agreed to import French high-speed rail technology; the deal reportedly broke down when the library’s staff objected. Last November, a French cultural minister said her government would consider lending Jikji to South Korea — on the condition that the Korean government not attempt to seize the book and keep it there forever.
Silverman is adamant that he wants both France and South Korea involved in the project he’s leading — but he also noted, “there isn’t a Western intellectual position that doesn’t think there’s a superiority.”
What Silverman said he and the other scholars are most particularly interested in are the printing methods used to create Jikji — notably, metal type.
“The curious thing for me [is] it’s printed the next year as a woodblock printed book,” Silverman said. “Woodblocks and type are used for different purposes, and making metal type is really complicated.”
The book was printed using Chinese characters that are quite intricate. (The precursor of today’s Korean alphabet wasn’t created until about 75 years later.) Woodblock type is easier to duplicate, he said, much like a modern-day photocopier, because the user can reference it and use it again and again.
Over the next five years, the collaborative project will hold a scholarly symposium at the Library of Congress, and publish a 400-page catalog that will delve into type casting, ink, paper and bookbinding, along with patterns of book distribution in Asia. The hope is that in 2027 — the 650th anniversary of Jikji’s printing — the project will bring an international exhibit to nine research libraries in the United States, and 34 more in 14 other countries.
Part of the project is to compare Jikji and Gutenberg, to see how the Korean and European printers of the 14th and 15th centuries differed in binding, ink, and other aspects of printing.
“The cultural advance of humanity is tied up in this investigation,” Silverman explains. “We want to save ideas as a species because we love the idea of advancement. We want to make it better for the next generation.”
|
A Utah expert is studying the world’s oldest movable-type book — and it’s not the Gutenberg Bible
A University of Utah researcher is leading a team to study the 14th century Korean book called Jikji.
(University of Utah) Scans of pages of the Gutenberg Bible, top, and Jikji, a Korean Buddhist book that was printed decades before Gutenberg's work. A University of Utah researcher is among those leading a study of the Korean book, believed to be the oldest surviving book printed with movable metal type.
Ask most people what the oldest book made with movable metal type is, and they likely will say the Gutenberg Bible, printed around 1455 in Mainz, Germany.
That isn’t the case, and a University of Utah librarian is part of a research project to give proper due to what scholars say is the first such book, known as Jikji.
“People should learn the bigger picture,” said Randy Silverman, who’s head of preservation at the University of Utah’s Marriott Library, and one of two principal investigators on the collaborative project “From Jikji to Gutenberg.”
Jikji is a Korean Buddhist book — the title translates to “pointing at it directly” — that was printed in 1377. It tells “a compression of the history of the Buddhists,” Silverman said. “It tells how the Buddhists attained enlightenment.”
The project — which involves 40 scholars, working in 14 different time zones worldwide — aims to bring Jikji’s existence to the forefront of printing history around the globe.
In a video explaining the endeavor, Silverman says they are “changing perspectives on the history of printing.”
To give context to the project’s mission, Silverman tells a story about a neighbor’s child on his street, who was drawing a sidewalk-chalk picture of the planet, marked with important cultural milestones. Gutenberg was on there, but Jikji was not.
|
no
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://www.loc.gov/rr/rarebook/coll/255.html
|
Vollbehr (Selected Special Collections: Rare Book and Special ...
|
Selected Special Collections
Otto Vollbehr Collection
Incunabula.
[Dr. Otto Vollbehr, half-length portrait, facing front, standing next to a rare volume of printers' bookmarks which he donated to the Library of Congress]. Prints and Photographs Division, Library of Congress.
The Vollbehr Collection, stated George Parker Winship of the Harvard Library shortly before its purchase by act of Congress in 1930, "is representative, to an amazing degree, of every sort of publication which came from the fifteenth century presses." The collection contains incunabula produced at 635 different printing establishments and a rich selection of books in vernacular languages. This acquisition quadrupled the number of fifteenth century books held by the Library of Congress and established the Library as the leading center for the study of early printing.
Otto Vollbehr was a German industrialist whose family had made a fortune in the dyestuff industry; he took up book collecting when his physician recommended that he adopt a hobby following a railway accident which left him with a serious nervous condition. In addition to collecting books he acquired “ready-made” collections of 15-18th century book illustrations and of printers’ marks.
The treasure of the Vollbehr Collection is the copy of the Bible produced by Johann Gutenberg at Mainz about 1456, the first book printed with movable type in the western world. The Library's Gutenberg Bible is one of the three surviving perfect copies on vellum. The work had been in the possession of the Benedictine Order for nearly five centuries before it was acquired by Dr. Otto Vollbehr from the Abbey of Saint Paul in eastern Carinthia, Austria. Bound as three volumes, the Bible retains the bookplate of the monastery of Saint Blasius (the owner of the work until the late eighteenth century) as well as its late sixteenth century white pigskin binding. There are 3,114 volumes in the Vollbehr Collection.
Select Digitized Material from the Otto Vollbehr Collection
The first great book printed in Western Europe from movable metal type. The Bible was completed in Mainz, Germany, probably in late 1455. Johann Gutenberg, who lived from about 1397 to 1468, is generally credited with inventing the process of making uniform and interchangeable metal type and developing the materials and methods to make printing possible.
|
Selected Special Collections
Otto Vollbehr Collection
Incunabula.
[Dr. Otto Vollbehr, half-length portrait, facing front, standing next to a rare volume of printers' bookmarks which he donated to the Library of Congress]. Prints and Photographs Division, Library of Congress.
The Vollbehr Collection, stated George Parker Winship of the Harvard Library shortly before its purchase by act of Congress in 1930, "is representative, to an amazing degree, of every sort of publication which came from the fifteenth century presses." The collection contains incunabula produced at 635 different printing establishments and a rich selection of books in vernacular languages. This acquisition quadrupled the number of fifteenth century books held by the Library of Congress and established the Library as the leading center for the study of early printing.
Otto Vollbehr was a German industrialist whose family had made a fortune in the dyestuff industry; he took up book collecting when his physician recommended that he adopt a hobby following a railway accident which left him with a serious nervous condition. In addition to collecting books he acquired “ready-made” collections of 15-18th century book illustrations and of printers’ marks.
The treasure of the Vollbehr Collection is the copy of the Bible produced by Johann Gutenberg at Mainz about 1456, the first book printed with movable type in the western world. The Library's Gutenberg Bible is one of the three surviving perfect copies on vellum. The work had been in the possession of the Benedictine Order for nearly five centuries before it was acquired by Dr. Otto Vollbehr from the Abbey of Saint Paul in eastern Carinthia, Austria. Bound as three volumes, the Bible retains the bookplate of the monastery of Saint Blasius (the owner of the work until the late eighteenth century) as well as its late sixteenth century white pigskin binding. There are 3,114 volumes in the Vollbehr Collection.
Select Digitized Material from the Otto Vollbehr Collection
The first great book printed in Western Europe from movable metal type. The Bible was completed in Mainz, Germany, probably in late 1455. Johann Gutenberg,
|
yes
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
yes_statement
|
the 'gutenberg bible' was the first "book" "printed" with "movable" "type".. the 'gutenberg bible' holds the distinction of being the first "book" "printed" with "movable" "type".
|
https://lithub.com/so-gutenberg-didnt-actually-invent-the-printing-press/
|
So, Gutenberg Didn't Actually Invent Printing As We Know It ...
|
So, Gutenberg Didn’t Actually Invent Printing As We Know It
On the Unsung Chinese and Korean History of Movable Type
If you heard one book called “universally acknowledged as the most important of all printed books,” which would you expect it to be?
If you were Margaret Leslie Davis, the answer would be obvious. Davis’s The Lost Gutenberg: The Astounding Story of One Book’s Five-Hundred-Year Odyssey, released this March, begins with just that descriptor. It recounts the saga of a single copy of the Gutenberg Bible—one of the several surviving copies of the more than 550-year-old Bible printed by Johannes Gutenberg, the putative inventor of the printing press, in one of his earliest projects—through a 20th-century journey from auction house to collector to laboratory to archive.
Davis quotes Mark Twain, who wrote, in 1900, a letter celebrating the opening of the Gutenberg Museum. For Davis, Twain’s words were “particularly apt.” “What the world is to-day,” Twain wrote, “good and bad, it owes to Gutenberg. Everything can be traced to this source. . . .” Indeed, Gutenberg’s innovation has long been regarded an inflection point in human history—an innovation that opened the door to the Protestant Reformation, Renaissance, the scientific revolution, the advent of widespread education, and a thousand more changes that touch nearly everything we now know.
The only problem?
The universal acclaim is, in fact, not so universal—and Gutenberg himself is a, but not the, source of printing. Rather, key innovations in what would become revolutionary printing technology began in east Asia, with work done by Chinese nobles, Korean Buddhists, and the descendants of Genghis Khan—and, in a truth Davis acknowledges briefly, their work began several centuries before Johannes Gutenberg was even born.
*
In a traditional printing press, small metal pieces with raised backwards letters, known as movable type, are arranged in a frame, coated with ink, and applied to a piece of paper. Take the paper away, and it’s a printed page. Do this with however many pages make up a book, and there’s a printed copy. Do this many times, and swiftly printed, mass-produced books appear.
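To make the reuse principle concrete, here is a minimal, purely illustrative Python sketch of composing a page from a case of reusable type and printing it repeatedly. The class and method names are invented for this example; they are not a model of any historical workshop's practice.

```python
# Toy model of movable type: letter pieces are checked out of a case,
# locked into a forme, inked and pressed many times, then reused.

from collections import Counter

class TypeCase:
    """A tray of reusable letter pieces (sorts)."""
    def __init__(self, pieces):
        self.pieces = Counter(pieces)   # e.g. Counter({'e': 120, 't': 90, ...})

    def compose(self, text):
        """'Lock up' a page: verify the case holds enough of each letter."""
        needed = Counter(c for c in text.lower() if c.isalpha())
        missing = needed - self.pieces
        if missing:
            raise ValueError(f"not enough type for: {dict(missing)}")
        return text   # the composed forme, ready for inking

    def print_copies(self, forme, copies):
        """Press the same forme repeatedly; each extra copy reuses the same pieces."""
        return [forme for _ in range(copies)]

case = TypeCase("the quick brown fox jumps over the lazy dog" * 10)
forme = case.compose("the lazy dog")
pages = case.print_copies(forme, copies=180)   # roughly a Gutenberg-sized print run
print(len(pages), "identical pages from a single setting of type")
```

The sketch isolates the economic point of the paragraph above: once a page is set, every additional copy reuses the same pieces, which is what separated movable type from carving a fresh woodblock for each page.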
The printing press is often said to have been created by Gutenberg in Mainz, Germany, around 1440 AD, and it began taking root in Europe in the 1450s with the printing of the aforementioned Bible. Books themselves had been present in Europe long before then, of course, but only in hand-copied volumes that were accessible mainly to members of the clergy. Access to mass-produced books revolutionized Europe in the late 1400s, with advancing literacy altering religion, politics, and lifestyles worldwide.
“What the world is to-day,” Twain wrote, “good and bad, it owes to Gutenberg. Everything can be traced to this source.”
At least, this is how the story is rendered in most books, including, for the most part, The Lost Gutenberg. But a single sentence late in the book nods to a much longer story before that: “Movable type was an 11th-century Chinese invention, refined in Korea in 1230, before meeting conditions in Europe that would allow it to flourish—in Europe, in Gutenberg’s time.”
That sentence downplays and misstates what occurred.
The first overtures toward printing began in roughly 800 AD in China, where early printing techniques involved chiseling an entire page of text into a wood block backwards, applying ink, and printing pages by pressing them against the block. Around 971 AD, printers in Zhejiang, China, produced a print of a vast Buddhist canon called the Tripitaka with these carved woodblocks, using 130,000 blocks (one for each page). Later efforts would create early movable type—including the successful but inefficient use of ideograms chiseled in wood and a brief, abortive effort to create ceramic characters.
Meanwhile, imperial imports from China brought these innovations to Korean rulers called the Goryeo (the people for whom Korea is now named), who were crucial to the next steps in printing history. Their part of the story is heavy with innovation in the face of invasion.
First, in 1087 AD, a group of nomads called the Khitans attempted to invade the Korean peninsula. This prompted the Goryeo government to create its own Tripitaka with woodblock printing, perhaps with the aim of preserving Korean Buddhist identity against invaders. The attempt would be prescient; it preserved the concept and technique for later years, when more invaders eventually arrived. In the 12th and 13th centuries, the Mongol ruler Genghis Khan had created the largest empire in human history, which stretched from the Pacific coast of Asia west to Persia. After he died in 1227, his successor, Ögedei Khan, continued conquering, including gaining ground that Genghis Khan had never held. In 1231, Ögedei ordered the invasion of Korea, and in 1232, invading Mongol troops reached the capital. As part of their conquering, they burned the Korean copy of the Tripitaka to ash.
The Goryeo dynasty immediately recreated the book. This is thought to have been “as prayers to the power of Buddhas for the protection of the nation from the invading Mongols,” per a text by Thomas Christensen, but it was also done with the intention of preserving the dynasty’s culture. This was important; attacks by Mongols would continue for the next 28 years.
The Tripitaka reboot was scheduled to take Korean monks until 1251 AD to complete, and, meanwhile, the rulers began expanding into printing other books. In 1234 AD, they asked a civil minister named Choe Yun-ui to print a Buddhist text called The Prescribed Ritual Text of the Past and Present (Sangjeong Gogeum Yemun). But the lengthy book would have required an impossibly large number of woodblocks, so Choe came up with an alternative. Building on earlier Chinese attempts to create movable type, he adapted a method that had been used for minting bronze coins to cast 3-dimensional characters in metal. Then he arranged these pieces in a frame, coated them with ink, and used them to press sheets of paper. When he was done, he could reorganize the metal characters, eliminating the need to persistently chisel blocks. It was faster—to a certain extent. He completed the project in 1250 AD.
Perhaps it should be Choe Yun-ui whose name we remember, not Gutenberg’s.
It is important to recognize what this means. The innovation that Johannes Gutenberg is said to have created was small metal pieces with raised backwards letters, arranged in a frame, coated with ink, and pressed to a piece of paper, which allowed books to be printed more quickly. But Choe Yun-ui did that—and he did it 150 years before Gutenberg was even born.
Perhaps it should be Choe Yun-ui whose name we remember, not Gutenberg’s.
However, Korea’s printed books did not spread at a rapid pace, as Gutenberg’s books would 200 years later. Notably, Korea was under invasion, which hampered its ability to disseminate the innovation. In addition, Korean writing, then based closely on Chinese, used a large number of different characters, which made creating the metal pieces and assembling them into pages a slow process. Most importantly, Goryeo rulers intended most of their printing projects for the use of the nobility alone.
Nonetheless, it is possible that printing technology spread from East to West. Ögedei Khan, the Mongol leader, had a son named Kublai who had situated himself as a ruler in Beijing. Kublai Khan had access to Korean and Chinese printing technology, and he may have shared this knowledge with another grandson of Genghis Khan, Hulegu, who was then ruling the Persian part of the Mongol empire. This could have moved printing technologies from East Asia westward by thousands of miles. “Mongols just tended to take their technologies everywhere they go, and they become a part of local culture, sometimes acknowledged, sometimes not,” Colgate University Asian history professor David Robinson explains.
To get from East Asia to Persia at that time, one traveled the Silk Road. In the middle of that route lay the homeland of the Uyghur people, a Turkic ethnic group that had been recruited into the Mongol army long before. “If there was any connection in the spread of printing between Asia and the West,” the scholar Tsien Tsuen-Hsien wrote in Science and Civilization in China in 1985, “the Uyghurs who used both blocking printing and movable type had good opportunities to play an important role in this introduction.”
This is because, in the 13th century, Uyghurs were considered distinguished, learned people—the sort for whom printing might be a welcome innovation. They also had something no one else in printing had had up till then: an alphabet, a simple group of relatively few letters for writing every word one wished to say.
There was no explosion of printing in the Western Mongol empire. “There was no market, no need for the leaders to reach out to their subjects, no need to raise or invest in capital in a new industry,” the historian John Man points out in his book, The Gutenberg Revolution. Nonetheless, movable-type Uyghur-language prints have been discovered in the area, indicating the technology was used there.
Furthermore, the Mongols may have carried the technology not only through Uyghur and Persian territory, but into Europe, including Germany. The Mongol empire repeatedly invaded Europe from roughly 1000 to 1500 AD; that period saw the entry of enough Western Asian recruits and captives to bring the loanword horde from their Turkic languages into European ones. “Generally, if something is going from East Asia [to the west], it would be hard to imagine without the Mongols,” Christopher Atwood, a Central Eurasian Studies professor at Indiana University, said in an interview.
The fantastical idea that Gutenberg alone invented the printing press ignores an entire continent and several centuries of relevant efforts.
Eventually, early capitalists in Europe invested in Johannes Gutenberg’s business venture—the one that combined technology quite like the movable type innovated by Choe Yun-ui with a screw-threaded spiral mechanism from a wine or olive press to ratchet up printing to commercial speeds. That business took decades of his life to bring to fruition, forced him into bankruptcy, and led to court filings by investors who repeatedly sued him to get their money back. As Davis notes in The Lost Gutenberg, these records are the means by which we know Gutenberg and his Bible: “This most famous of books has origins that we know little about. The stories we tell about the man, and how the Bibles came to be, have been cobbled together from a fistful of legal and financial records, and centuries of dogged scholarly fill-in-the-blank.”
*
Indeed, the entire history of the printing press is riddled with gaps. Gutenberg did not tell his own story in documents created on the printing presses he built; to the best of modern knowledge, he did not leave any notes on his work at all. And if Gutenberg was reticent, the Mongols, their Uyghur compatriots, and Eastern Asia government heads were even more so.
But if doubts are natural, then the result we’ve made of them is not. The fantastical idea that Gutenberg alone invented the printing press ignores an entire continent and several centuries of relevant efforts and makes no effort to understand how or why technology might have spread. During a study of Gutenberg’s lettering techniques, computer programmer Blaise Agüera y Arcas pointed out how strange this is: “The idea that a technology emerges fully formed at the beginning is nuts. Anyone who does technology knows that’s not how it works.”
To her credit, Davis notes the same, explaining it this way: “Perhaps the image of Johannes Gutenberg as a lone genius who transformed human culture endures because the sweep of what followed is so vast that it feels almost mythic and needs an origin story to match.”
But Davis, who was unavailable for an interview for this article, does little to correct the record in The Lost Gutenberg. She mentions China just a few times and Korea only once—and the Mongols, Uyghurs, and non-Christian aspects of printing history not at all.
Indeed, she never explains that the Gutenberg Bible is not universally acclaimed as the most important book in history. Nor are copies of the Bible the oldest books created with movable type that still exist today—although a reader could be forgiven for gathering that impression from The Lost Gutenberg.
Rather, the earliest extant movable-type-printed book is the Korean Baegun Hwasang Chorok Buljo Jikji Simche Yojeol (“The Anthology of Great Buddhist Priests’ Zen Teachings”). It dates to 1377 and has served as a starting point for scholarship on the origin of movable type.
Korea regards it and other ancient volumes as national points of pride that rank among the most important of books. But it is only very recently, mostly in the last decade, that their viewpoint and the Asian people who created printing technologies have begun to be acknowledged at all. Most people—including Davis, who declined an interview with the remark, “I’m afraid I can’t really add much further on the topic of ancient printing”—still don’t know the full story.
M. Sophia Newman is a writer and medical editor from Chicago. As a health journalist, she reported from Ghana, Kenya, South Africa, Bangladesh, India, Nepal, and France, and has received grants from the International Thomas Merton Society, Collegeville Institute, and the Pulitzer Center for Crisis Reporting. In addition, Sophia has researched mental health in Bangladesh under a Fulbright fellowship and earned a certification in global mental health from the Harvard Program on Refugee Trauma.
|
Anyone who does technology knows that’s not how it works.”
To her credit, Davis notes the same, explaining it this way: “Perhaps the image of Johannes Gutenberg as a lone genius who transformed human culture endures because the sweep of what followed is so vast that it feels almost mythic and needs an origin story to match.”
But Davis, who was unavailable for an interview for this article, does little to correct the record in The Lost Gutenberg. She mentions China just a few times and Korea only once—and the Mongols, Uyghurs, and non-Christian aspects of printing history not at all.
Indeed, she never explains that the Gutenberg Bible is not universally acclaimed as the most important book in history. Nor are copies of the Bible the oldest books created with movable type that still exist today—although a reader could be forgiven for gathering that impression from The Lost Gutenberg.
Rather, the earliest extant movable-type-printed book is the Korean Baegun Hwasang Chorok Buljo Jikji Simche Yojeol (“The Anthology of Great Buddhist Priests’ Zen Teachings”). It dates to 1377 and has served as a starting point for scholarship on the origin of movable type.
Korea regards it and other ancient volumes as national points of pride that rank among the most important of books. But it is only very recently, mostly in the last decade, that their viewpoint and the Asian people who created printing technologies have begun to be acknowledged at all. Most people—including Davis, who declined an interview with the remark, “I’m afraid I can’t really add much further on the topic of ancient printing”—still don’t know the full story.
M. Sophia Newman is a writer and medical editor from Chicago. As a health journalist, she reported from Ghana, Kenya, South Africa, Bangladesh, India, Nepal, and France, and has received grants from the International Thomas Merton Society, Collegeville Institute, and the Pulitzer Center for Crisis Reporting.
|
no
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
no_statement
|
the 'gutenberg bible' was not the first "book" "printed" with "movable" "type".. the 'gutenberg bible' does not have the distinction of being the first "book" "printed" with "movable" "type".
|
https://www.europeana.eu/en/blog/europes-first-printed-book
|
Europe's First Printed Book | Europeana
|
How do we know what Europe’s first printed book was? Until the 18th century this question was open to speculation. 15th-century printed books usually have no title page and do not always give the printer’s name.
A reliable historical source from 1499, the Cologne ‘Cronica’, had told of the Gutenberg Bibles; however, their locations were either unknown or the copies were undated and therefore not credible. Around 25 copies of the first Bible printed in two columns with 42 lines per page were identified during the course of the 18th century. The name of the printer, Johannes Gutenberg, does not appear in any of them. Today we know of 49 of the 180 originally printed 42-line Bibles. Of these, 21 are complete.
(Re)discovery of the Gutenberg Bible in Berlin
What happened? Christoph Hendreich (1630-1702), head librarian of Berlin’s Electoral Library, discovered a two-volume Latin Bible in folio format in the library’s collections. It was printed on vellum in a gothic font known as black letter. Hendreich linked his find to the 1499 Cologne ‘Cronica’.
However, his discovery remained a hidden gem until 1760 when the scholar Karl Conrad Oelrich published a facsimile of an extract from the Bible. This dramatically changed the situation: scholars used Oelrich’s facsimile to identify other copies of the Gutenberg Bible from the same edition. This copy is still preserved in the Berlin State Library.
Moveable Type
The Gutenberg Bible was produced in Mainz in 1455. It is the first book in Europe to be printed using moveable type: a system of printing that uses individual units of letters and punctuation marks. A mixture of lamp soot, varnish and egg white was used for ink. The text was printed either on vellum, i.e. parchment, or on paper. Vellum was more durable and thicker but also more expensive.
Unique Copies through Illuminations
Following the manuscript tradition, copies of the Gutenberg Bible were normally decorated at the instruction of their purchasers, mostly monastic houses.
The vellum copy at the Bibliothèque nationale de France has particularly impressive illuminations: spectacular marginal decorations on two pages and a huge variety of illuminated first letters or initials. The illuminations correlate with those in a model book (Musterbuch) for illuminators that was in use at the time. The borders resemble others that were in circulation in the same region and around the same time as the Gutenberg Bible.
The Göttingen Model Book (Staats- und Universitätsbibliothek Göttingen, f. 11v) on the left shows examples of acanthus leaf borders in different colour combinations that are very similar in style to the marginal decoration in the BnF vellum copy of the Gutenberg Bible (vol. 3 f. 1, right). Other motifs found in the Musterbuch are also repeated in various illuminated initials throughout this copy. Bibliothèque nationale de France, No Copyright – Other Known Legal Restrictions
The paper copy at the National Library of Scotland has fewer illuminations; they are in gold and colour and originate in Germany, possibly at Erfurt. The Bible has marginal notes in a continental hand. It probably remained on the Continent until it came into the possession of David Steuart (1747-1824), the former Lord Provost of Edinburgh, in 1796. He sold it to the Advocates Library, the National Library’s predecessor, for 150 guineas.
The only other book Johannes Gutenberg seems to have printed was a schoolbook: the Latin grammar by Donatus. The printing process with movable type pioneered by him was soon taken up by others. By the end of the 15th century, printing presses had been established in more than 250 towns and cities across Europe.
The blog post is a part of the Rise of Literacy project, where we take you on an exploration of literacy in Europe thanks to the digital preservation of precious textual works from collections across the continent.
|
How do we know what Europe’s first printed book was? Until the 18th century this question was open to speculation. 15th-century printed books usually have no title page and do not always give the printer’s name.
A reliable historical source from 1499, the Cologne ‘Cronica’, had told of the Gutenberg Bibles; however, their locations were either unknown or the copies were undated and therefore not credible. Around 25 copies of the first Bible printed in two columns with 42 lines per page were identified during the course of the 18th century. The name of the printer, Johannes Gutenberg, does not appear in any of them. Today we know of 49 of the 180 originally printed 42-line Bibles. Of these, 21 are complete.
(Re)discovery of the Gutenberg Bible in Berlin
What happened? Christoph Hendreich (1630-1702), head librarian of Berlin’s Electoral Library, discovered a two-volume Latin Bible in folio format in the library’s collections. It was printed on vellum in a gothic font known as black letter. Hendreich linked his find to the 1499 Cologne ‘Cronica’.
However, his discovery remained a hidden gem until 1760 when the scholar Karl Conrad Oelrich published a facsimile of an extract from the Bible. This dramatically changed the situation: scholars used Oelrich’s facsimile to identify other copies of the Gutenberg Bible from the same edition. This copy is still preserved in the Berlin State Library.
Moveable Type
The Gutenberg Bible was produced in Mainz in 1455. It is the first book in Europe to be printed using moveable type: a system of printing that uses individual units of letters and punctuation marks. A mixture of lamp soot, varnish and egg white was used for ink. The text was printed either on vellum, i.e. parchment, or on paper. Vellum was more durable and thicker but also more expensive.
|
yes
|
Manuscripts
|
Was the 'Gutenberg Bible' the first book printed with movable type?
|
no_statement
|
the 'gutenberg bible' was not the first "book" "printed" with "movable" "type".. the 'gutenberg bible' does not have the distinction of being the first "book" "printed" with "movable" "type".
|
https://www.nytimes.com/2003/07/23/college/texas-puts-gutenberg-bible-on-internet.html
|
Texas Puts Gutenberg Bible on Internet - The New York Times
|
Texas Puts Gutenberg Bible on Internet
AUSTIN, Texas (AP) -- The University of Texas has put its entire two-volume Gutenberg Bible on the Internet, making it easier for scholars and the public to browse one of the world's most valuable books.
“Just as Johann Gutenberg made knowledge more accessible with the invention of the printing process, this digitization project continues that legacy,” said Richard Oram, head librarian at the university's Harry Ransom Center, one of the world's top cultural archives.
The Ransom Center edition is not the first to go digital. Gutenberg Bibles in England and Japan already have been posted on the Internet and the Library of Congress has one available on CD-ROM, Oram said.
However, Ransom Center officials think their copy is the best of the lot, calling it the most-used version still in existence.
Gutenberg's Bible revolutionized printing in Western civilization. Printed in Mainz, Germany, in the 1450s, it was the first major Western book printed from movable type.
According to the Ransom Center, only about 200 were produced and only 48 copies exist today, each one of them unique since local artisans were hired to illuminate the letters opening each book.
The Ransom Center acquired its two-volume copy, which includes some illuminations in gold leaf, in 1978. Oram estimated the copy, which is 1,268 pages in two volumes, is worth up to $20 million.
The Texas Gutenberg was used in monasteries in southern Germany as late as the 1760s. It was marked up by monks who scratched out some passages and corrected others. Other markings indicate which sections were to be read aloud or reserved for church services.
“Our copy is the most interesting in the world,” Oram said.
One top scholar agreed.
“This is probably the most extensively annotated and corrected copy surviving,” said Paul Needham of Princeton University's Scheide Library. “This is a very great treasure.”
Needham said the online access, and the soon-to-be-developed high resolution CD-ROM, will be a boon to scholars who want to look at the Bible without traveling to Austin where it is enclosed in temperature-controlled glass and under the watch of 24-hour security.
Ransom Center staff began digitally scanning the Bible's linen pages in June 2002. The finished project gives Web viewers 7,000 images and special software was used to allow for full visibility of the text and illuminations.
|
Texas Puts Gutenberg Bible on Internet
AUSTIN, Texas (AP) -- The University of Texas has put its entire two-volume Gutenberg Bible on the Internet, making it easier for scholars and the public to browse one of the world's most valuable books.
“Just as Johann Gutenberg made knowledge more accessible with the invention of the printing process, this digitization project continues that legacy,” said Richard Oram, head librarian at the university's Harry Ransom Center, one of the world's top cultural archives.
The Ransom Center edition is not the first to go digital. Gutenberg Bibles in England and Japan already have been posted on the Internet and the Library of Congress has one available on CD-ROM, Oram said.
However, Ransom Center officials think their copy is the best of the lot, calling it the most-used version still in existence.
Gutenberg's Bible revolutionized printing in Western civilization. Printed in Mainz, Germany, in the 1450s, it was the first major Western book printed from movable type.
According to the Ransom Center, only about 200 were produced and only 48 copies exist today, each one of them unique since local artisans were hired to illuminate the letters opening each book.
The Ransom Center acquired its two-volume copy, which includes some illuminations in gold leaf, in 1978. Oram estimated the copy, which is 1,268 pages in two volumes, is worth up to $20 million.
The Texas Gutenberg was used in monasteries in southern Germany as late as the 1760s. It was marked up by monks who scratched out some passages and corrected others. Other markings indicate which sections were to be read aloud or reserved for church services.
“Our copy is the most interesting in the world,” Oram said.
One top scholar agreed.
“This is probably the most extensively annotated and corrected copy surviving,” said Paul Needham of Princeton University's Scheide Library. “This is a very great treasure.”
|
yes
|
Volcanology
|
Was the 1815 Tambora eruption the deadliest in recorded history?
|
yes_statement
|
the "1815" tambora "eruption" was the "deadliest" in "recorded" "history".. the "1815" tambora "eruption" holds the "record" for being the "deadliest" in "recorded" "history".
|
https://www.titlemax.com/discovery-center/lifestyle/deadliest-natural-disasters-by-type/
|
The Deadliest Known Natural Disasters by Type | TitleMax
|
The Deadliest Known Natural Disasters
What have been the worst natural disasters in history? We’ve scanned through records to find the world’s worst disasters of every type, from floods to volcanic eruptions. When Earth attacks man, how bad can it be? Scan this list of deadliest natural disasters to see the often-unpredictable toll that world disasters have had on civilizations.
What is the worst natural disaster ever?
Excluding viral and bacterial pandemics, the deadliest natural disaster in history was the Great Chinese Famine of 1959 to 1961, which caused a massive loss of life estimated somewhere between 30 and 45 million people. By far, this drought and subsequent famine was the deadliest natural disaster in the world, leading to starvation on a massive scale. Some may argue that this isn’t quite the deadliest natural disaster due to the government of the People’s Republic of China having a hand in bad food distribution, agricultural policies, and regulations that aggravated the problem. One thing’s clear, though: It was the worst famine in the world following what was likely the worst drought in history.
What was the worst flood in history?
As of now, the 1931 Chinese flood of the Yangzi and Huai rivers was the worst flood, killing somewhere between a million and 4 million people. Following a drought, the floodwaters took over an area about the size of England. Dangerous floods affected other waterways throughout the country, too. There was an intense outbreak of illnesses, overcrowding, and a lack of food following the flood. Though the numbers are disputed, with the contemporary Chinese government saying that the death toll was more likely around 2 million, it’s still considered the worst flood in history.
What was the deadliest earthquake in history?
The worst earthquake ever is yet another Chinese catastrophe, but this one happened way back in 1556 in Shaanxi province. If you look at the earthquake death toll, that's by far the worst, with a disputed toll traditionally put at around 830,000 deaths. If you're not looking at the most deadly earthquakes but just the worst in order of magnitude, the worst happened in 1960 in Chile, with a magnitude of 9.5. Shaanxi's earthquake was likely an 8.0, but it's hard to tell because it happened before modern instruments were available.
What was the biggest cyclone in history?
The deadliest tropical cyclone is the mighty Bhola Cyclone in 1970, which caused 500,000 deaths and $86 million in damage. That's the worst cyclone ever. In case you were wondering, on the other side of the globe, the worst hurricane ever was the Great Hurricane of 1780. In more recent history, the worst was Hurricane Mitch in 1998. It's the deadliest among a string of recent category 5 hurricanes.
What is the biggest tsunami in the world that’s ever been recorded?
The deadliest tsunami in history was the Indian Ocean tsunami in 2004, which smashed over areas such as the Nicobar Islands, Burma, Indonesia, and parts of Sri Lanka. The total tsunami death toll was 280,000 people. The total energy of the tsunami was equivalent to about five megatons of TNT. The wave was about 15 to 30 meters (50 to 100 ft) high and traveling at speeds of 800km/h (500mph) when it hit Indonesia. In terms of casualties, it’s the worst tsunami ever, but the biggest tsunami in the world happened in Alaska, where a landslide in 1958 resulted in 100-foot-high waves.
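The figures above lend themselves to a quick arithmetic check. Below is a minimal Python sketch that verifies only the unit conversions quoted in the paragraph; the height, speed, and casualty estimates themselves are the article's own.

```python
# Sanity-check the unit conversions quoted above (conversion factors are standard).
M_TO_FT = 3.28084        # metres to feet
KMH_TO_MPH = 0.621371    # kilometres per hour to miles per hour

wave_height_m = (15, 30)  # quoted wave height range in metres
speed_kmh = 800           # quoted wave speed in km/h

print([round(h * M_TO_FT) for h in wave_height_m])  # [49, 98] -> roughly "50 to 100 ft"
print(round(speed_kmh * KMH_TO_MPH))                 # 497      -> roughly "500 mph"
```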
What was the biggest volcanic eruption ever?
The deadliest volcano in the world was the Mount Tambora eruption in 1815, which resulted in 71,000 deaths. Of course, that’s the deadliest volcano eruption in human history; the volcano at Yellowstone erupted about 2.2 million years ago, and it probably packed a bit more of a punch, making it the worst volcano ever.
What was the deadliest tornado in the world?
Luckily, tornadoes are usually not the most dangerous natural disasters around the world, but there has been one twister that managed to kill more than a thousand people: the Daulatpur-Saturia tornado in 1989, the deadliest tornado of all time. The runner-up is considered to be the biggest tornado ever, in terms of its duration and the size of its damage path: the Tri-State Tornado in the U.S., which happened in 1925.
Check our list of disasters to see some of the other categories, like the biggest avalanche in the world, the worst blizzard in history, and the deadliest wildfire ever. We’ve listed the biggest natural disasters in the world by how much they’ve had an impact on human life, but the Earth is far older and stronger than that: Remember that there may have been plenty of volcanoes, earthquakes, and tsunamis that we don’t know about! But you can check out our world’s worst natural disasters list to see how quickly natural events have disrupted human life.
|
The total tsunami death toll was 280,000 people. The total energy of the tsunami was equivalent to about five megatons of TNT. The wave was about 15 to 30 meters (50 to 100 ft) high and traveling at speeds of 800km/h (500mph) when it hit Indonesia. In terms of casualties, it’s the worst tsunami ever, but the biggest tsunami in the world happened in Alaska, where a landslide in 1958 resulted in 100-foot-high waves.
What was the biggest volcanic eruption ever?
The deadliest volcano in the world was the Mount Tambora eruption in 1815, which resulted in 71,000 deaths. Of course, that’s the deadliest volcano eruption in human history; the volcano at Yellowstone erupted about 2.2 million years ago, and it probably packed a bit more of a punch, making it the worst volcano ever.
What was the deadliest tornado in the world?
Luckily, tornadoes are usually not the most dangerous natural disasters around the world, but there has been one twister that managed to kill more than a thousand people: the Daulatpur-Saturia tornado in 1989, the deadliest tornado of all time. The runner-up is considered to be the biggest tornado ever, in terms of its duration and the size of its damage path: the Tri-State Tornado in the U.S., which happened in 1925.
Check our list of disasters to see some of the other categories, like the biggest avalanche in the world, the worst blizzard in history, and the deadliest wildfire ever. We’ve listed the biggest natural disasters in the world by how much they’ve had an impact on human life, but the Earth is far older and stronger than that: Remember that there may have been plenty of volcanoes, earthquakes, and tsunamis that we don’t know about!
|
yes
|
Volcanology
|
Was the 1815 Tambora eruption the deadliest in recorded history?
|
yes_statement
|
the "1815" tambora "eruption" was the "deadliest" in "recorded" "history".. the "1815" tambora "eruption" holds the "record" for being the "deadliest" in "recorded" "history".
|
https://sevenpie.com/5-deadliest-volcanic-eruptions-to-occur-in-history/
|
5 Deadliest Volcanic Eruptions to Occur In History – SevenPie.com ...
|
5 Deadliest Volcanic Eruptions to Occur In History
1. Mt. Tambora’s 1815 Eruption
It’s hard to imagine that Mt Tambora, now about 2,850 m tall, was a far taller and larger volcano just 200 years ago. The eruption in 1815 was so large that its force destroyed the mountain’s top, leaving a very large crater where the summit once was. The eruption caused devastation across the entire island; literally all vegetation was destroyed. The volcanic column reached a height of 43 km, and given the scale of the eruption, it affected weather patterns all over the world. 1816 was known as the “Year Without a Summer,” as the 120 million tons of sulfur blocked the sun’s rays from reaching the surface. It was reported that in June 1816, it snowed in several countries. Approximately 71,000 people died on the island and many more from disease and famine.
2. Mt Krakatoa
The 1883 eruption of Krakatoa was so big that it forever changed the landscape of the volcano. It destroyed the island the volcano was sitting on, breaking it into three. The sound generated by the eruption was the loudest recorded sound in history – people as far as 3,000 km away heard it. The eruption was equivalent to 200 megatons of TNT and generated a landslide that created a tsunami that hit Java and Sumatra. 36,000 people were killed as a result, and there were reports of people on the African coast finding skeletons on rafts months after the eruption. Like Mt Tambora’s eruption, Krakatoa’s eruption changed the climate. The gas circled the world, blocking out the sun and plunging world temperatures by 2.1 F; they only returned to normal after 5 years.
3. Mt Pelee’s 1902 Eruption
Located on the island of Martinique in the Caribbean, Mt Pelee erupted with force on 8 May 1902. And it didn’t take long to show its deadly force. A toxic cloud reaching up to 1,075 degrees C descended upon the town of St Pierre, killing animals and residents and burning literally everything it touched. The town of St Pierre was completely decimated by the eruption, reduced to ashes. Around 28,000 people in the town were killed; only 3 survivors were found.
4. Mt Pinatubo
Mt Pinatubo’s 1991 eruption was one of a kind. On June 15, 1991, after weeks of intermittent eruptions, it finally unleashed the climax, the second-largest eruption of the 20th century. Its volcanic explosivity index was VEI 6, a “Colossal” event by scientific standards that only happens every 50-100 years. To make things worse, the eruption hit at the same time as Typhoon Yunya struck the Philippines, causing lahar flows and further exacerbating the damage. Around 800 people died, and more than 200,000 were left homeless. The eruption had world-reaching effects. The gas released by the volcano rose high into the atmosphere, blocking out sunlight and reducing the amount of sunlight reaching the surface by 10%. Temperatures in the Northern Hemisphere plunged by 0.5 C, and globally they fell by 0.4 C.
5. Mt St Helens
The 18 May 1980 eruption of Mt. St Helens completely changed the face of the volcano. It erupted in a unique way – sideways instead of vertically. The eruption reduced the mountain’s height from 2,950 m to 2,549 m and caused a massive landslide. 57 people died, but the environmental impact was colossal. It caused $1.1 billion in damage, and 2.4 million cubic yards of ash fell over 11 states. Infrastructure around the mountain was crippled. 4 billion board feet of timber were destroyed, and millions of animals and fish were killed. The eruption forever changed Mt. St Helens’ profile, removing its summit and leaving a gaping crater. Check out the differences below, taken from the same spot.
Did we miss out on any deadly volcanic eruptions? Let us know in the comment section below!
|
5 Deadliest Volcanic Eruptions to Occur In History
1. Mt. Tambora’s 1815 Eruption
It’s hard to imagine that Mt Tambora, which now stands at about 2,850 m, was a far taller volcano just 200 years ago. The eruption in 1815 was so large that its force blew off the mountain’s top, leaving a very large crater where the summit once was. The eruption caused devastation across the entire island; literally all vegetation was destroyed. The volcanic column reached a height of 43 km, and given the scale of the eruption, it affected weather patterns all over the world. 1816 became known as the “Year Without Summer” as the 120 million tons of sulfur it released blocked the sun’s rays from reaching the surface. It was reported that in June 1816, it snowed in several countries. Approximately 71,000 people died on the island, and many more died from disease and famine.
2. Mt Krakatoa
The 1883 eruption of Krakatoa was so big that it forever changed the landscape of the volcano. It destroyed the island the volcano was sitting on, breaking it into three. The sound generated by the eruption was the loudest recorded sound in history – people as far as 3,000 km away heard it. The eruption was equivalent to 200 megatons of TNT and generated a landslide that created a tsunami that hit Java and Sumatra. 36,000 people were killed as a result, and there were reports of people on the African coast finding skeletons on pumice rafts months after the eruption. Like Mt Tambora’s eruption, Krakatoa’s eruption changed the climate.
|
yes
|
Volcanology
|
Was the 1815 Tambora eruption the deadliest in recorded history?
|
yes_statement
|
the "1815" tambora "eruption" was the "deadliest" in "recorded" "history".. the "1815" tambora "eruption" holds the "record" for being the "deadliest" in "recorded" "history".
|
https://www.wkyc.com/article/weather/200-years-ago-we-endured-a-year-without-a-summer/95-216260491
|
200 years ago, we endured a 'year without a summer' | wkyc.com
|
200 years ago, we endured a 'year without a summer'
Snow in June, freezing temperatures in July, a killer frost in August: "The most gloomy and extraordinary weather ever seen," according to one Vermont farmer.
On April 10, 1815, the Tambora Volcano produced the largest eruption in recorded history. (Photo: NASA)
Author: Doyle Rice (USA TODAY , WKYC)
Published: 5/26/2016 4:06:13 PM
Updated: 4:09 PM EDT May 26, 2016
Snow in June, freezing temperatures in July, a killer frost in August: "The most gloomy and extraordinary weather ever seen," according to one Vermont farmer.
Two centuries ago, 1816 became the year without a summer for millions of people in parts of North America and Europe, leading to failed crops and near-famine conditions.
While they didn't know the chill's cause at the time, scientists and historians now know that the biggest volcanic eruption in human history, on the other side of the world — Mount Tambora in Indonesia in April 1815 — spewed millions of tons of dust, ash and sulfur dioxide into the atmosphere, temporarily changing the world's climate and dropping global temperatures by as much as 3 degrees.
In addition to food shortages, the natural climate change caused disease outbreaks, widespread migration of people looking for a better home and religious revivals as people tried to make sense of it all.
The gloom spread to the literary world, too: that foul, frigid year inspired the plot of Mary Shelley's epic horror novel Frankenstein.
And it could happen again. Big volcanoes can erupt at any time and with little warning, potentially changing the climate and giving a temporary reprieve to man-made global warming.
"We cannot reliably predict exactly when a volcano will erupt, or how powerful it will be, until the eruption is nearly upon us," said Nicholas P. Klingaman, co-author of the book The Year without Summer.
A volcano erupts
The eruption of Tambora, on April 10, 1815, on the island of Sumbawa in what's now Indonesia, was 100 times more powerful than the 1980 Mount St. Helens blast, according to the U.S. Geological Survey, which ranked the eruption as a seven on its eight-level volcanic explosivity index.
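For readers unfamiliar with that scale, here is a minimal sketch (in Python) of how the volcanic explosivity index is commonly assigned from bulk tephra volume. The thresholds and example volumes below are approximate, order-of-magnitude values supplied for illustration only; they are not taken from this article or from the USGS ranking itself.

# Approximate sketch of the Volcanic Explosivity Index (VEI), graded 0-8
# mainly by bulk tephra volume. Thresholds are the commonly published
# order-of-magnitude cutoffs and should be treated as approximate.
VEI_THRESHOLDS_KM3 = [
    (8, 1000.0),
    (7, 100.0),
    (6, 10.0),
    (5, 1.0),
    (4, 0.1),
]

def approximate_vei(tephra_volume_km3: float) -> int:
    """Return an approximate VEI for a bulk tephra volume in cubic kilometres."""
    for vei, threshold in VEI_THRESHOLDS_KM3:
        if tephra_volume_km3 >= threshold:
            return vei
    return 3  # everything under ~0.1 km^3 falls somewhere in the VEI 0-3 range

# Illustrative (assumed) volumes, consistent with the rankings quoted here:
for name, volume_km3 in [("Mount St. Helens 1980", 1.0),
                         ("Pinatubo 1991", 10.0),
                         ("Tambora 1815", 150.0)]:
    print(f"{name}: ~{volume_km3} km^3 of tephra -> VEI {approximate_vei(volume_km3)}")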
The volcano spewed out enough ash and pumice to cover a square area 100 miles on each side of the mountain to a depth of almost 12 feet, according to the book, The Year without Summer, by Klingaman and his father, William K. Klingaman.
It was by far the deadliest volcanic eruption in human history, the Klingamans wrote, with a death toll of at least 71,000 people, up to 12,000 of whom were killed directly by the eruption, according to the journal Progress in Physical Geography.
When a volcano erupts, it does more than spew clouds of ash, which can cool a region for a few days and disrupt airline travel. It also spews sulfur dioxide, NASA reports.
Mount Tambora (Photo: USA Today)
If the eruption is strong enough, it shoots that sulfur dioxide high into the stratosphere, more than 10 miles above Earth's surface. Up there, sulfur dioxide reacts with water vapor to form sulfate aerosols.
Because these aerosols float above the altitude of rain, they don't get washed out. Instead they linger, reflecting sunlight and cooling the Earth's surface, which is what caused the weather and climate impacts of Tambora's eruption to occur more than a year later.
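As a rough sketch of the chemistry just described (the article does not spell out the reaction, so treat this as a textbook simplification rather than its claim), the net conversion of stratospheric sulfur dioxide into sulfuric-acid aerosol droplets can be written as:

\[ 2\,\mathrm{SO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2SO_4}\ \text{(sulfate aerosol droplets)} \]

The resulting tiny sulfuric acid–water droplets are what linger in the stratosphere and scatter incoming sunlight.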
Miserable summer
Heavy snow fell in northern New England on June 7-8, with 18- to 20-inch high drifts. In Philadelphia, the ice was so bad "every green herb was killed and vegetables of every description very much injured," according to the book American Weather Stories.
Frozen birds dropped dead in the streets of Montreal, and lambs died from exposure in Vermont, the New England Historical Society said.
On July 4, one observer wrote that "several men were pitching quoits (a game) in the middle of the day with heavy overcoats on." A frost in Maine that month killed beans, cucumbers and squash, according to meteorologist Keith Heidorn. Ice covered lakes and rivers as far south as Pennsylvania, according to the Weather Underground.
By the time August rolled around, more severe frosts further damaged or killed crops in New England. People reportedly ate raccoons and pigeons for food, the New England Historical Society said.
Europe also suffered mightily: the cold and wet summer led to famine, food riots, the transformation of stable communities into wandering beggars and one of the worst typhus epidemics in history, according to The Year without Summer.
Scientists’ best estimate is that the global-average temperature cooled by almost 2 degrees in 1816, said Nicholas Klingaman, who is also a meteorologist at the University of Reading in the United Kingdom. Land temperatures cooled by about 3 degrees, he added.
Sulphuric gases rise from the crater of Mt. Tambora on the island of Sumbawa, Indonesia. (Photo: Haraldur Sigurdsson, University of Rhode Island)
How soon could it happen again?
Eruptions on the scale of Tambora occur once every 1,000 years on average, but smaller events can still substantially impact the climate, Klingaman said. Krakatoa's 1883 eruption in Indonesia caused global cooling nearly five years later, even though it ejected less material into the atmosphere than Tambora.
Similarly, Pinatubo in 1991 in the Philippines caused global temperatures to cool by about 1 degree, Klingaman said. Eruptions of that magnitude — about one-sixth the size of the Tambora eruption — happen about once every 100 years.
"Any of those other volcanoes could erupt again, perhaps on a scale of Tambora or greater," said Mike Mills, an atmospheric chemist at the National Center for Atmospheric Research in Boulder, Colo.
The subject of how volcanoes affect climate is relatively new: Scientists didn't confirm the link between volcanic eruptions and global cooling until the 1960s and 1970s, Klingaman said.
A new area of research delves into how volcanic eruptions could interact with global warming, Mills said. The chemist says his own investigation found that the role of volcanic eruptions in global climate change could be underestimated. And it could help explain why global warming appeared to temporarily slow down early this century when volcanic activity increased.
With global temperatures at record highs, a massive eruption today could halt man-made climate change. But the effect would be only temporary: Warming would pick up where it left off once all the stratospheric dust settled out, a process that could take a few years or up to a decade, Weather Underground reported.
The fallout on the people affected by the event, though, could last far longer. The colorful, dusty skies from Tambora's epic blast inspired some of British artist J. M. W. Turner's most spectacular sunset paintings — some of which were painted decades after the eruption.
|
A volcano erupts
The eruption of Tambora, on April 10, 1815, on the island of Sumbawa in what's now Indonesia, was 100 times more powerful than the 1980 Mount St. Helens blast, according to the U.S. Geological Survey, which ranked the eruption as a seven on its eight-level volcanic explosivity index.
The volcano spewed out enough ash and pumice to cover a square area 100 miles on each side of the mountain to a depth of almost 12 feet, according to the book, The Year without Summer, by Klingaman and his father, William K. Klingaman.
It was by far the deadliest volcanic eruption in human history, the Klingamans wrote, with a death toll of at least 71,000 people, up to 12,000 of whom were killed directly by the eruption, according to the journal Progress in Physical Geography.
When a volcano erupts, it does more than spew clouds of ash, which can cool a region for a few days and disrupt airline travel. It also spews sulfur dioxide, NASA reports.
Mount Tambora (Photo: USA Today)
If the eruption is strong enough, it shoots that sulfur dioxide high into the stratosphere, more than 10 miles above Earth's surface. Up there, sulfur dioxide reacts with water vapor to form sulfate aerosols.
Because these aerosols float above the altitude of rain, they don't get washed out. Instead they linger, reflecting sunlight and cooling the Earth's surface, which is what caused the weather and climate impacts of Tambora's eruption to occur more than a year later.
Miserable summer
Heavy snow fell in northern New England on June 7-8, with 18- to 20-inch high drifts. In Philadelphia, the ice was so bad "every green herb was killed and vegetables of every description very much injured," according to the book American Weather Stories.
|
yes
|
Volcanology
|
Was the 1815 Tambora eruption the deadliest in recorded history?
|
yes_statement
|
the "1815" tambora "eruption" was the "deadliest" in "recorded" "history".. the "1815" tambora "eruption" holds the "record" for being the "deadliest" in "recorded" "history".
|
https://www.ksdk.com/article/weather/200-years-ago-we-endured-a-year-without-a-summer/63-216440561
|
200 years ago, we endured a 'year without a summer' | ksdk.com
|
200 years ago, we endured a 'year without a summer'
Snow in June, freezing temperatures in July, a killer frost in August: "The most gloomy and extraordinary weather ever seen," according to one Vermont farmer.
On April 10, 1815, the Tambora Volcano produced the largest eruption in recorded history.
Author: KSDK Staff
Published: 5/26/2016 4:59:50 PM
Updated: 4:59 PM CDT May 26, 2016
Snow in June, freezing temperatures in July, a killer frost in August: "The most gloomy and extraordinary weather ever seen," according to one Vermont farmer.
Two centuries ago, 1816 became the year without a summer for millions of people in parts of North America and Europe, leading to failed crops and near-famine conditions.
While they didn't know the chill's cause at the time, scientists and historians now know that the biggest volcanic eruption in human history, on the other side of the world — Mount Tambora in Indonesia in April 1815 — spewed millions of tons of dust, ash and sulfur dioxide into the atmosphere, temporarily changing the world's climate and dropping global temperatures by as much as 3 degrees.
In addition to food shortages, the natural climate change caused disease outbreaks, widespread migration of people looking for a better home and religious revivals as people tried to make sense of it all.
The gloom spread to the literary world, too: that foul, frigid year inspired the plot of Mary Shelley's epic horror novel Frankenstein.
And it could happen again. Big volcanoes can erupt at any time and with little warning, potentially changing the climate and giving a temporary reprieve to man-made global warming.
"We cannot reliably predict exactly when a volcano will erupt, or how powerful it will be, until the eruption is nearly upon us," said Nicholas P. Klingaman, co-author of the book The Year without Summer.
Map of Mount Tambora. Source ESRI
A volcano erupts
The eruption of Tambora, on April 10, 1815, on the island of Sumbawa in what's now Indonesia, was 100 times more powerful than the 1980 Mount St. Helens blast, according to the U.S. Geological Survey, which ranked the eruption as a seven on its eight-level volcanic explosivity index.
The volcano spewed out enough ash and pumice to cover a square area 100 miles on each side to a depth of almost 12 feet, according to the book, The Year without Summer, by Klingaman and his father, William K. Klingaman.
It was by far the deadliest volcanic eruption in human history, the Klingamans wrote, with a death toll of at least 71,000 people, up to 12,000 of whom were killed directly by the eruption, according to the journal Progress in Physical Geography.
When a volcano erupts, it does more than spew clouds of ash, which can cool a region for a few days and disrupt airline travel. It also spews sulfur dioxide, NASA reports.
Sulphuric gases rise from the crater of Mt. Tambora on the island of Sumbawa, Indonesia. The eruption of Tambora in 1815 was the largest volcanic eruption in human history and resulted in a period of global cooling known as the year without a summer.
If the eruption is strong enough, it shoots that sulfur dioxide high into the stratosphere, more than 10 miles above Earth's surface. Up there, sulfur dioxide reacts with water vapor to form sulfate aerosols.
Because these aerosols float above the altitude of rain, they don't get washed out. Instead they linger, reflecting sunlight and cooling the Earth's surface, which is what caused the weather and climate impacts of Tambora's eruption to occur more than a year later.
Miserable summer
Heavy snow fell in northern New England on June 7-8, with 18- to 20-inch high drifts. In Philadelphia, the ice was so bad "every green herb was killed and vegetables of every description very much injured," according to the book American Weather Stories.
Frozen birds dropped dead in the streets of Montreal, and lambs died from exposure in Vermont, the New England Historical Society said.
On July 4, one observer wrote that "several men were pitching quoits (a game) in the middle of the day with heavy overcoats on." A frost in Maine that month killed beans, cucumbers and squash, according to meteorologist Keith Heidorn. Ice covered lakes and rivers as far south as Pennsylvania, according to the Weather Underground.
By the time August rolled around, more severe frosts further damaged or killed crops in New England. People reportedly ate raccoons and pigeons for food, the New England Historical Society said.
Europe also suffered mightily: the cold and wet summer led to famine, food riots, the transformation of stable communities into wandering beggars and one of the worst typhus epidemics in history, according to The Year without Summer.
Scientists’ best estimate is that the global-average temperature cooled by almost 2 degrees in 1816, said Nicholas Klingaman, who is also a meteorologist at the University of Reading in the United Kingdom. Land temperatures cooled by about 3 degrees, he added.
Biggest eruptions in cubic miles of ejecta.
How soon could it happen again?
Eruptions on the scale of Tambora occur once every 1,000 years on average, but smaller events can still substantially impact the climate, Klingaman said. Krakatoa's 1883 eruption in Indonesia caused global cooling nearly five years later, even though it ejected less material into the atmosphere than Tambora.
Similarly, Pinatubo in 1991 in the Philippines caused global temperatures to cool by about 1 degree, Klingaman said. Eruptions of that magnitude — about one-sixth the size of the Tambora eruption — happen about once every 100 years.
"Any of those other volcanoes could erupt again, perhaps on a scale of Tambora or greater," said Mike Mills, an atmospheric chemist at the National Center for Atmospheric Research in Boulder, Colo. The subject of how volcanoes affect climate is relatively new: Scientists didn't confirm the link between volcanic eruptions and global cooling until the 1960s and 1970s, Klingaman said.
A new area of research delves into how volcanic eruptions could interact with global warming, Mills said. The chemist says his own investigation found that the role of volcanic eruptions in global climate change could be underestimated. And it could help explain why global warming appeared to temporarily slow down early this century when volcanic activity increased.
With global temperatures at record highs, a massive eruption today could halt man-made climate change. But the effect would be only temporary: Warming would pick up where it left off once all the stratospheric dust settled out, a process that could take a few years or up to a decade, Weather Underground reported.
The fallout on the people affected by the event, though, could last far longer. The colorful, dusty skies from Tambora's epic blast inspired some of British artist J. M. W. Turner's most spectacular sunset paintings — some of which were painted decades after the eruption.
|
The volcano spewed out enough ash and pumice to cover a square area 100 miles on each side to a depth of almost 12 feet, according to the book, The Year without Summer, by Klingaman and his father, William K. Klingaman.
It was by far the deadliest volcanic eruption in human history, the Klingamans wrote, with a death toll of at least 71,000 people, up to 12,000 of whom were killed directly by the eruption, according to the journal Progress in Physical Geography.
When a volcano erupts, it does more than spew clouds of ash, which can cool a region for a few days and disrupt airline travel. It also spews sulfur dioxide, NASA reports.
Sulphuric gases rise from the crater of Mt. Tambora on the island of Sumbawa, Indonesia. The eruption of Tambora in 1815 was the largest volcanic eruption in human history and resulted in a period of global cooling known as the year without a summer.
If the eruption is strong enough, it shoots that sulfur dioxide high into the stratosphere, more than 10 miles above Earth's surface. Up there, sulfur dioxide reacts with water vapor to form sulfate aerosols.
Because these aerosols float above the altitude of rain, they don't get washed out. Instead they linger, reflecting sunlight and cooling the Earth's surface, which is what caused the weather and climate impacts of Tambora's eruption to occur more than a year later.
|
yes
|
Volcanology
|
Was the 1815 Tambora eruption the deadliest in recorded history?
|
yes_statement
|
the "1815" tambora "eruption" was the "deadliest" in "recorded" "history".. the "1815" tambora "eruption" holds the "record" for being the "deadliest" in "recorded" "history".
|
https://digpodcast.org/2018/05/13/mount-tambora/
|
Mount Tambora and the Year Without a Summer - DIG
|
Mount Tambora and the Year Without a Summer
The 1815 volcanic eruption of Mount Tambora changed history. The year following the eruption, 1816 was known in England as the “Year without a Summer,” in New England as “18-hundred-and-froze-to-death”, and “L’annee de la misere” or “Das Hungerhjar” in Switzerland. Germans dubbed 1817 as “the year of the beggar.” The Chinese and Indians had no name for it but the years following the massive eruption were remembered as ones of intense and widespread suffering. Scientists are, only today, uncovering the historical impacts of this ecological disaster. Suddenly we have climatic data which have reshaped our understanding of the events of 1815 and the years that followed. Now it is historians’ job to explore the social, political, and cultural influence of this catastrophic event. All this and more today as we explore the eruption of Mount Tambora in April 1815.
Listen, download, watch on YouTube, or scroll down for the transcript.
Transcript of Mount Tambora and the Year Without a Summer
Marissa: In 2004, Icelandic volcanologist Haraldur Sigurdsson was visiting Sumbawa, a medium-sized island in the Indonesian archipelago. The Sumbawa savannahs are ideally suited to the breeding of horses and cows. Its population, around 1.4 million people, work primarily in agriculture and mining. One of Sigurdsson’s local guides informed him of a small gully where locals had found old pottery and other goods. They called it “museum gully” and knew nothing more of it. They might have been surprised when Sigurdsson enthusiastically asked them to take him there.
Averill: The guides took Sigurdsson to museum gully. Using ground-penetrating radar, he and his team from the University of Rhode Island, uncovered the remains of a 19th century village frozen in time. They excavated one structure, a home which contained the carbonized remains of two people. The woman brandished a utility knife as if she was in the course of preparing a meal or performing an ordinary task around the house. The couple was surrounded by their belongings: furniture, iron tools, bronze bowls and pottery. Sigurdsson knew that this site had been preserved by history’s deadliest volcano, Mount Tambora, whose 1815 eruption changed history.
Marissa: The locals on Sumbawa knew little of this event which had occurred only 200 years ago on the island they call home. This is not, of course, because they were uneducated or disinterested. They knew nothing of the eruption because few who lived on the island, and no one who lived on the mountain at the time survived to tell the story. The Tambora people and their Rajah lived closest to the volcano before the eruption. One April evening, their culture, their language, and their lifestyle, became extinct within a matter of hours. The rest of the world was oblivious to the eruption for months. Even after news of the event reached the rest of the globe, they had no idea that they were already weathering its impact.
The year following the eruption, 1816 was known in England as the “Year without a Summer,” in New England as 18-hundred-and-froze-to-death, and “L’annee de la misere” or “Das Hungerhjar” in Switzerland. Germans dubbed 1817 as “the year of the beggar.” The Chinese and Indians had no name for it but the years following the massive eruption were remembered as ones of intense and widespread suffering. Scientists are, only today, uncovering the historical impacts of this ecological disaster. Suddenly we have climatic data which have reshaped our understanding of the events of 1815 and the years that followed. Now it is historians’ job to explore the social, political, and cultural influence of this catastrophic event. All this and more today as we explore the eruption of Mount Tambora in April 1815.
I’m Marissa Rhodes
And I’m Averill Earls
And we are your historians for this episode of Dig.
Averill: In the beginning of the 19th century, Mount Tambora had been considered extinct. No one alive at the time knew of any Tambora eruptions since the start of recorded history. We know now that before 1815, Mount Tambora had not erupted for over 5,000 years. Starting sometime in 1812, the villagers living on the mountain’s terraces and at its foot reported hearing occasional rumbling and seeing small eruptions of steam. These developments were interesting to the inhabitants of the mountain who spoke a now-extinct language related to Khmer, the language of Cambodia. They probably discussed it amongst themselves and with the traders and guides from the British and Dutch East India Companies who occasionally docked in Sumbawa’s primary port, called Bima. No one seemed particularly worried about Tambora’s awakening.
Marissa: This was, until April 5, 1815. A loud explosion was heard up to 620 miles away (1,000 km). A 15 mile-high column of hot ash and smoke shot out of the massive volcano. Over 10,000 residents were killed immediately in this initial eruption. Two entire principalities, Tambora, and Pekat had been vaporized. Others close by choked on poisonous gases or were buried in ash and pumice, where they stayed until American students began excavating their resting places 200 years later. On April 10, the volcano erupted again, this time the column of ash and fire thrown from the volcano reached 25 miles high. This second explosion was heard over 1,500 miles away in Sumatra. The entire top of the mountain, totaling about a mile high, was blown off entirely, changing Mount Tambora’s appearance forever. Much of the archipelago and its adjacent seas were plunged into darkness for days. Volcanic ash reached as far as 620 miles away from the site.
Averill: The Rajah of Sangarr, a small principality on Sumbawa survived the disaster and described the site of the eruption for posterity:
“[T]hree distinct columns of flame burst forth, near the top of Tomboro [sic] Mountain, all of them apparently within the verge of the crater; and after ascending separately to a very great height, their tops united in the air in a troubled confused manner. In a short time the whole mountain next [to] Saugar [sic] appeared like a body of liquid fire extending itself in every direction… Between nine and ten p.m. ashes began to fall, and soon after a violent whirlwind ensued, which blew down nearly every house in the village of Saugar, carrying the tops and light parts along with it… In the part of Saugar adjoining [Mount Tambora] its effects were much more violent, tearing up by the roots the largest trees and carrying them into the air together with men, houses, cattle, and whatever else came within its influence. This will account for the immense number of floating trees seen at sea… The sea rose nearly twelve feet higher than it had ever been known to be before, and completely spoiled the only small spots of rice lands in Saugar, sweeping away houses and every thing within its reach.”
Marissa: The Sumbawa people who did not die in either eruption suffered in the following weeks. They endured transplantation or homelessness as hot rivers of lava flowed over their villages. Thousands drowned in resultant tsunamis or suffered fatal injuries in volcanic wind gales. Tens of thousands died from thirst, hunger, disease or malnutrition over the following months because their rice crops and infrastructure were destroyed. Their water supply was also poisoned by ash, pyroclastic flows and the aerosolized gases it absorbed. Two weeks after the eruption, Lieutenant Owen Phillips was charged with delivering rice and drinking water to the island from stores on Java. He encountered a horrible scene. Few recognizable built structures still existed on the island and both land and sea were littered with uprooted trees and rotting corpses. [Philips was actually the person who recorded the Rajah’s eyewitness account we just read]
Averill: An estimated 117,000 people died as a direct result of the two April eruptions. Survivors who lived on the farther reaches of the island appeared to have emigrated en masse in the eruption’s aftermath. The islanders reached such depths of desperation that they started selling themselves as slaves to Sulawesi [SU-la-WEY-see] pirates as a survival strategy. Within a year of the event, half of Sumbawa’s population were dead or departed. Only years later did new groups arrive to repopulate and rebuild the island. Nearly all of Sumbawa’s buildings date to after 1815. At least two small kingdoms were lost entirely and we know very little about them. The Dutch and British East India companies, whose activities generated most of the documents we have about 19th-century Indonesia, knew little of the Tambora or Pekat people. Neither company had been successful at regulating trade in Sumbawa and other small islands in the archipelago by that time.
Mt. Sumbing, a Javanese volcano | Public Domain / Wikimedia Commons
Marissa: The Dutch had been in Indonesia since 1603 but focused their efforts on Java which was closer to the mainland. Sailors had used Mount Tambora as a landmark and guide in their journeys but their contact with the people on Sumbawa was minimal compared to interactions in the bustling ports of Java. The nearby islands of Lombok, Bali and East Java suffered considerable crop damage after the eruption but news of their struggles did not travel far. Unlike the explosion of Krakatoa in the 1880s, this eruption went largely unreported. The telegraph had not yet been invented and the volcano’s immediate damage was confined to the lesser colonized islands which were still comparatively insular.
Averill: What’s more is that until the eruption of Krakatoa (70 years later), scientists were unsure of the climatic impact of volcanic eruptions. For this reason, studies on Tambora and its impacts are all fairly recent. First-hand documentation of its 1815 eruption are incredibly rare so its death toll, and its immediate consequences went undetermined for decades. Most of the accounts we have from the time are the recorded observations of British and Dutch sailors in the area. In the last few years, scientists are beginning to understand that the deposits of ash, pumice and solidified lava flows have completely reshaped the island’s topography. The Mount Tambora eruption has long since been identified as the cause of the “Year without a Summer” but in 1816, few people had any idea that such a cataclysmic event had passed and no one knew that it would wreak havoc all over the world for the next three years.
Marissa: First I want to make sure we give our listeners an idea of the scale of this eruption. Krakatoa, which is often used as an example of the quintessential natural disaster, flung 4.5 cubic miles of pumice, rock, ash and other debris into the atmosphere, no small amount. But when Mount Tambora erupted, it expelled 36 cubic miles of debris into the atmosphere. There’s no comparison. Those of us who have seen Dante’s Peak, starring Pierce Brosnan and Linda Hamilton, might remember it was about the 1980 eruption of Mount St. Helens in Washington state. Tambora’s eruption was 100 times the size of the eruption of Mount St. Helens. It’s nearly impossible to imagine the scale of this disaster, really. When something is so massive and devastating, it almost starts to mean nothing… just numbers, right?
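To make those numbers a little more concrete, here is a quick arithmetic check using the figures quoted in this paragraph; the cubic-mile-to-cubic-kilometre conversion is standard, and the ~1 km^3 bulk volume assumed for Mount St. Helens is a round figure supplied for illustration, not one given by the hosts.

# Quick arithmetic check of the scale comparison above, using the transcript's
# figures of 36 vs. 4.5 cubic miles of ejecta (all values approximate).
CUBIC_MILE_IN_KM3 = 4.168  # 1 mi^3 is roughly 4.168 km^3

tambora_mi3 = 36.0
krakatoa_mi3 = 4.5
st_helens_km3 = 1.0  # assumed round figure for the 1980 eruption

tambora_km3 = tambora_mi3 * CUBIC_MILE_IN_KM3
krakatoa_km3 = krakatoa_mi3 * CUBIC_MILE_IN_KM3

print(f"Tambora:  ~{tambora_km3:.0f} km^3 of debris")   # roughly 150 km^3
print(f"Krakatoa: ~{krakatoa_km3:.0f} km^3 of debris")  # roughly 19 km^3
print(f"Tambora / Krakatoa:   ~{tambora_mi3 / krakatoa_mi3:.0f}x")
print(f"Tambora / St. Helens: ~{tambora_km3 / st_helens_km3:.0f}x")

On those assumptions the Tambora-to-St. Helens ratio comes out in the low hundreds, the same order of magnitude as the "100 times" comparison above.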
Dante’s Peak film poster, 1997 | Fair Use
Averill: One way we can measure Tambora’s destructiveness is by exploring its impact over the rest of the world in the following years. But keep in mind, at the time, no one knew that this eruption was to blame for the events that followed. The ash thrown up into the atmosphere by the violent explosion settled over the entire archipelago, blotting out the light for days after the event. This consequence was obviously perceived by people who were living at the time but what they could not have known was that the volcano had emitted 80 megatons (80 million metric tons) of sulphur dioxide which rose into the stratosphere, creating a band around the tropics. There, they oxidized into sulphate aerosol particles which were distributed globally over the next year. These aerosols were deposited on the ice covering both of the Earth’s poles, and these deposits continued for two years. Tambora’s emissions were preserved in ice, appearing as they did in the months after the eruption, and studied in ice cores extracted in 2009.
Marissa: These sulphate particles refract and absorb the sun’s light in such a way that it leads to a cooling effect on the ground. The world’s temperatures plummeted for the next 3 years. This, in addition to other weather anomalies triggered by the eruption, disrupted ecosystems all over the planet. I think we talked about proxies in the Little Ice Age episode? But a quick refresh: proxies are records of historical temperatures that still exist today. Scientists have used ice cores as we mentioned earlier, historical documentation by cultures all over the world, as well as dendrochronology — the reading of tree rings– to determine temperature patterns after the eruption. Being historians, we’re most interested in the document proxies, and to be honest, we’re hardly qualified to talk about the other more scientific ways of studying this phenomenon. You need to go over to Lady Science Pod for that.
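A commonly quoted rule of thumb from the climate literature (not something the hosts state, so treat the numbers as approximate) links the extra stratospheric aerosol optical depth \(\tau\) to a radiative forcing and, through a climate sensitivity parameter \(\lambda\), to a temperature response:

\[ \Delta F \approx -25\,\tau\ \mathrm{W\,m^{-2}}, \qquad \Delta T \approx \lambda\,\Delta F, \qquad \lambda \approx 0.5\text{ to }1\ \mathrm{K\,(W\,m^{-2})^{-1}} \]

A peak volcanic optical depth of a few tenths therefore implies several watts per square metre of negative forcing and on the order of a degree or more of cooling, broadly consistent with the post-Tambora temperature drops quoted earlier.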
Averill: In North America and Europe, 19th-century people had a nerdy habit of recording the temperature and other meteorological data every day. Thomas Jefferson was one of these nerds. Scholars have been able to use his records to prove Tambora’s impact on global temperatures. For example, on May 17, 1816, Jefferson wrote:
“[T]he spring has been unusually dry and cold. our average morning cold for the month of May in other years has been 63° of Farenheit. in the present month it has been to this day an average of 53° and one morning as low as 43°. repeated frosts have killed the early fruits and the crops of tobacco and wheat will be poor.”[Jefferson to David Baillie Warden, May 17, 1816, in PTJ:RS, 10:65.]
Two months later, New England experienced a summer snow storm that dropped 10 inches of snow on unsuspecting villages. Now just one document like this doesn’t mean much, but added to hundreds of other documents where people similarly record plummeting temperatures, they act as evidence that Tambora’s impact was far-reaching.
Marissa: The gazettes in Qing China reported the daily weather in enough detail that scholars have been able to measure Tambora’s impact on China’s weather between 1815 and 1819. They have also been able to use people’s personal diaries, etc to corroborate temperatures. During these years, China suffered from unseasonably cool and wet weather. Summer frosts and snow falls destroyed rice and buckwheat crops on such a scale that some areas, such as the southwestern Yunnan province, suffered severe famine. The Great Yunnan famine was the result of 3 successive crop failures. It was so severe that people were reportedly selling their children, committing murder-suicides, and eating clay in desperation.
Averill: Starvation and desperation in Yunnan killed many people. The survivors of the Yunnan famine were understandably scarred and resentful, and they sought ways to protect themselves from another such disaster. One way they did this was by turning to cash crops, specifically poppy. Poppy plants, used to produce opium, were hardy enough to withstand the temperature fluctuations resulting from Tambora. Even though they didn’t provide sustenance to those who grew them, they brought in cash which solved Yunnan’s food insecurity crisis at least temporarily. Farmers weren’t the only ones benefiting from this transition to cash crops. The state benefited as well because it was able to tax the lucrative crop and extract impressive revenue.
Marissa: What seemed like an innovative solution to famine at the time ended up having grave consequences. The famine contributed to the decline of the Qing dynasty and unfortunately coincided with the arrival of Western gun boats. Great Britain launched a strategy of gunboat diplomacy where it used intimidation in Chinese ports to force trade deals which unilaterally benefited them. Meanwhile, the opium problem worsened. Neither the farmers nor the state had any monetary incentive to stop growing poppy. Eventually the Yunnan province was entirely dedicated to the cultivation of poppy and was forced to import all of its grain from southeast Asia. So Yunnan was not producing any food at all… so much for increased food security! The opium-dependent population living in China’s ports bought foreign opium supplies in massive quantities which drained China of its silver (triggering the first and second Opium wars with Britain). It also created generations of Chinese people who struggled their entire lives with opium addiction. The Chinese state recognized widespread opium addiction as a national crisis as early as the 1830s.
The Daoguang Emperor with his empress, imperial consorts, and children in the palace | Public Domain / Wikimedia Commons
Averill: This course of events is known in Chinese history as the Daoguang Depression. It was crucial to shaping China’s interactions with the West. In centuries past, China had enjoyed economic stability, population growth and competent, widespread political influence. Tambora’s impact on the climate made the Chinese vulnerable to Western exploitation and interference. This set China on a path toward deterioration: the decline of the Qing, the Opium Wars, and the Taiping Rebellion followed.
Marissa: Some of the modern world’s most dangerous pathogens also owe their strength to Tambora’s eruption. India’s monsoons were delayed in 1816 and 1817 by Tambora’s sulphate gases. The dramatic alteration in moisture content in Indian towns and cities resulted in a mutation to the cholera bacterium. This mutation triggered the deadliest cholera epidemic in history, known as the Bengal Cholera. In November 1817, this mutated cholera killed 5,000 people in 5 days. The disease quickly spread and became a pandemic lasting, in Asia, until 1821. Death tolls are staggeringly high and a total has never been calculated. We know 10,000 British soldiers stationed in India died of cholera but estimates of Indian deaths are projected to be in the hundreds of thousands. The bacterium did not become any less deadly as it traveled across Asia. Bangkok, for example, reported 30,000 cholera deaths. By 1823, the mutated strain reached Europe, and then North America shortly after. Worldwide, experts estimate that the cholera pandemic triggered by Tambora’s gases killed millions.
Averill: For centuries India has been regarded as the “Homeland of Cholera.” Public health officials and sometimes even historians accused the Indian government of neglecting sanitation, and the Indian people of unhygienic practices which transformed the country into a vector of the highly contagious disease. The British Empire used India’s susceptibility to cholera to justify their colonial activities there. The British were able to frame India as a third-world country incapable of ruling itself and Britain as the modern, civilized world power willing to influence the Indian state for the better. (Gandhi disagrees lol) The impact of British colonialism in India is still being felt today. Studies of Tambora’s impact have rectified this myth somewhat. There was little that India could have done to prevent the mutation and spread of the cholera bacteria in light of the monsoon failures they endured.
Marissa: Another indirect impact of Tambora is the riots that ensued across the globe in response to widespread famine and disease. In most modern societies, it’s not as easy to see the connections between agricultural output and food security. But at the time of Tambora’s eruption, the vast majority of the world was still engaged in subsistence agriculture. So variations in agricultural output had direct impacts on how much food made it to the table. One crop failure was serious. Two crop failures in a row was dire. Most regions would have used any food stores to supplement the first failure. Three crop failures in a row, as we saw with Yunnan province in China, was an emergency. Gillen D’Arcy Wood, author of a new book about Tambora, put it well: “For three years following Tambora’s explosion, to be alive, almost anywhere in the world, meant to be hungry.” In many areas, food scarcity led to riots and other unrest, especially in Europe which was struggling with the aftermath of the Napoleonic Wars during the Year without a Summer.
Averill: In 1816 Ireland, for example, it rained four times more than was typical. Crop failure was so severe that many were forced to sell their clothes and hair for food. Wearing rags in the cold and wet weather made the Irish susceptible to typhus, which added to their misery. Ireland’s Chief Secretary Charles Grant wrote: “In the years 1816 and 1817, the state of the weather was so moist and wet, that the lower orders in Ireland were almost deprived of fuel werewith to dry themselves, and of food whereon to subsist. They were obliged to feed on esculent plants such as mustard seeds, nettles, potato-tops, and potato-stalks- a diet which brought on a debility of body and encouraged the disease more than anything else could have done.” In Ballina (County Mayo), protests ensued over the export of oatmeal. The riot became so violent that the military was deployed to protect the town. Three rioters were killed and many more were seriously wounded.
Marissa: Bavarian towns such as Augsburg and Memmingen were in turmoil for similar reasons. Rumors were circulating that authorities were exporting corn to Switzerland. The local newspapers illustrate the levels of desperation felt by Bavarian villagers. They reported that: “thousands of men and women, [were] ripping chunks out of a living chestnut mare… They’re sending our corn out to Switzerland.” The rumors of export sparked several riots that shut down the small cities.
In England, the East Midlands also experienced unrest related to food insecurity. Villagers in Pentrich, Derbyshire suffered high grain prices, and post-war unemployment on a large-scale. They amassed weapons and marched on the village, killing one servant whose master refused to join the rising. A spy in their midst informed on them and so magistrates were able to neutralize the uprising fairly quickly but dozens of men were indicted on treason. Three of them were executed publicly to dissuade other hungry and disaffected groups from doing the same.
Averill: Many of these riots revolved around the export of food in towns where the local populations were near starvation. In the Bavarian cases Marissa mentioned, magistrates were trying to provide relief to Switzerland, where Tambora’s impact was particularly severe. There, the price of grain quadrupled between 1815 and 1817. As in other parts of the world, cold weather led to unripened crops and wet weather caused the rest to rot in the fields. Snow fell in record amounts for the two winters following the eruption and there was 80% more rainfall than in an average year. Residents reported having to heat their homes throughout the summer months. Unseasonably cold weather in the summer of 1816 prevented the annual melting of the Alpine ice caps so that when the cooling subsided in 1817, there was more ice than usual and the Swiss experienced unprecedented flooding. Switzerland may have been unequally affected by Tambora but it’s also the best-studied area because it was the setting of an important cultural milestone for English literature.
A still from The Bride of Frankenstein, depicting Mary Shelley, Percy Bysshe Shelley, and Lord Byron in Geneva | Universal Pictures, Public Domain
Marissa: England’s youngest and most promising authors gathered at Lake Geneva in the Summer of 1816. Among them were Lord Byron, Percy Shelley, and Mary Godwin (soon to be Mary Shelley), the novelist and daughter of the badass feminist Mary Wollstonecraft. Lord Byron was escaping crushing debts and rumors of incest in England. [I just listened to the History Chicks episode on Ada Lovelace, who was Lord Byron’s daughter– so I have this family on my mind] But anyway, Byron was living in a villa on Lake Geneva. Mary, Percy, and Mary’s sister Claire visited Byron, intending to escape London’s dreary weather with a tour of Europe. But they were obviously unaware of Tambora’s impact on Europe’s climate. After witnessing the wet and wintry bleakness of a post-Tambora Swiss summer, Mary wrote “Never was a scene more awfully desolate.” The group holed up in a villa and challenged each other to pass the time by telling the best ghost stories. It was the only activity that seemed appropriate given the dreary weather. Several notable literary works emerged from this friendly storytelling competition. Mary Shelley’s Frankenstein, Lord Byron’s poem Darkness, and the seeds of a novel about a blood-sucking man, which was used later by John William Polidori to write The Vampyre.
Averill: This story might have been dramatized somewhat. But only in recent decades have scholars connected the Tambora eruption to this semi-mythical origin story for the Gothic movement in art and literature. But English literature was not the only cultural consequence of Tambora’s climatic impact. In the Chinese province of Yunnan which we discussed earlier, a new genre called famine poetry developed among the residents of the province. One poet, Li Yuyang, was forced to return home to save his parents from bankruptcy during the famine. He watched his neighbors commit beastly acts such as infanticide, child sales and murder in hungry desperation. He barely survived himself. Suffering from malnutrition and mental illness after the famine, Li died ten years later at the age of 42. Here is an excerpt from one of his poems, translated into English:
“People rush from falling houses in their thousands … (It) is worse than the work of thieves. Bricks crack. Walls fall. In an instant, the house is gone. My child catches my coat And cries out. I am running in the muddy road, then Back to rescue my money and grains from the ruins. What else to do? My loved ones must eat. He writes of parents selling their children for food. Still they know the price of a son Is not enough to pay for their hunger. And yet to watch him die is worse … The little ones don’t understand, how could they? But the older boys keep close, weeping.”
Marissa: Art historians have successfully correlated the post-Tambora aerosol optical depth with the 1816-1817 painting Greifswald in Moonlight by German painter Caspar David Friedrich. So we know with confidence that his painting would have been entirely different if it weren’t for Tambora’s eruption. English painter William Turner developed his painting style after observing the unique and spectacular sunsets of the Year Without a Summer. Ironically, his 1817 painting Eruption of Mt. Vesuvius is the best example of this influence even though Turner had no idea that he was witnessing volcanic skies himself.
Weymouth Bay, 1816, John Constable | c. Victoria and Albert Museum
Averill: Several inventions have also been connected to Tambora’s climatic impact. In 1817 German inventor Karl Drais patented the “walking machine” which is a walking bicycle.. so basically a bicycle without pedals. Drais perceived a need for alternate modes of transportation when the Year without a Summer made grain so expensive that few people could afford to feed horses anymore. During the heights of famine, many horses died of starvation or were killed for meat by their owners. People were in need of a device that would help them travel faster but one that they would not need to feed.
Marissa: This is interesting to me because it tells us that Drais, and his peers had no idea that this was going to pass. They might have thought this was the new normal… because they had no way of knowing that this was a temporary effect of a volcanic eruption.
I wonder how many people felt that way in Europe… that it was dark times and that it wasn’t going to improve. This mindset might have precipitated mass migration to America. The beginning of the first 19th-century wave of immigration to America coincides exactly with the eruption. And most people who came to America cited civil unrest and famine as their motivating factors for leaving Europe. The Irish and Swiss, and Germans made up a majority of this immigrant wave and those were areas that were particularly influenced by volcanic climate change.
Averill: There was also a lot of migration within North America. Land was becoming more scarce on the east coast and the crop failures following the Mount Tambora eruption sent settlers from the Eastern seaboard into the frontier in search of fertile land and resources. This westward movement triggered violent interactions with indigenous peoples which came to characterize the Wild West. So basically… without Tambora we would not have had Manifest Destiny and absolutely zero Spaghetti Westerns…. Just kidding… but really, this sequence of events illustrates how fragile our ecosystems really are. And that our ecosystems are interwoven with human systems in ways that we never realized in the past.
Marissa: We should mention that there was SOME idea that volcanic eruptions were related to weather patterns early on. Benjamin Franklin, for example, posited a correlation between volcanic emissions and weather anomalies. Swiss botanist Heinrich Zollinger was born a few years after Tambora erupted but in the 1830s he studied botany at the University of Geneva and became interested in volcanology. Ever since the Year without a Summer, Swiss scientists had suspected a correlation between the Tambora eruption and the following years’ weather anomalies. In the 1840s, Zollinger moved to Java and spent time studying Tambora. In 1847 he made a detailed drawing of the Tambora caldera and spoke with locals on the island, though he found it to still be depopulated. According to his report, Sumbawa communities were still recovering from the disaster even though it had been 30 years.
Averill: The Swiss never made any definitive connections between volcanic activity and climatic change but by the time of the Krakatoa eruption in 1883, the science community was eager to find such proof. This time, they were backed by a horrified and fascinated international press which turned the eruption into a global event. It helped that the first successful transatlantic telegraph cable had just been laid 17 years earlier. In 1815, it took 6 months for news of the Tambora eruption to reach Britain. When Krakatoa erupted, the entire world was notified within hours. Popular interest in the eruption encouraged geological studies and a public scientific discourse which led to the discovery of how volcanoes function and how they impact the atmosphere.
Marissa: It’s interesting to think of more subtle ways that inclement weather might have impacted culture. Were people generally more depressed for those few years? Was there an increase in mental illness? Vitamin D deficiency? Or even criminality and domestic violence? Does it have anything to do with the enthusiastic reception of Marxism decades later?
Sources and Further Reading:
Brönnimann, Stefan, and Daniel Krämer. Tambora and the “Year Without a Summer” of 1816: A Perspective on Earth and Human Systems Science. 2016.
Broad, William J. “A Summer Without Sun.” New York Times April 25, 2015, D1.
Dennis, Matthew, and Munger, Michael. 1816: “The Mighty Operations of Nature”: An Environmental History of the Year Without a Summer. 1816: “The Mighty Operations of Nature”: An Environmental History of the Year Without a Summer. University of Oregon, n.d. <http://hdl.handle.net/1794/12417>.
Harington, Charles Richard. The Year Without a Summer? World Climate in 1816. Ottawa: Canadian Museum of Nature, 1992.
|
The couple was surrounded by their belongings: furniture, iron tools, bronze bowls and pottery. Sigurdsson knew that this site had been preserved by history’s deadliest volcano, Mount Tambora, whose 1815 eruption changed history.
Marissa: The locals on Sumbawa knew little of this event which had occurred only 200 years ago on the island they call home. This is not, of course, because they were uneducated or disinterested. They knew nothing of the eruption because few who lived on the island, and no one who lived on the mountain at the time survived to tell the story. The Tambora people and their Rajah lived closest to the volcano before the eruption. One April evening, their culture, their language, and their lifestyle, became extinct within a matter of hours. The rest of the world was oblivious to the eruption for months. Even after news of the event reached the rest of the globe, they had no idea that they were already weathering its impact.
The year following the eruption, 1816 was known in England as the “Year without a Summer,” in New England as 18-hundred-and-froze-to-death, and “L’annee de la misere” or “Das Hungerhjar” in Switzerland. Germans dubbed 1817 as “the year of the beggar.” The Chinese and Indians had no name for it but the years following the massive eruption were remembered as ones of intense and widespread suffering. Scientists are, only today, uncovering the historical impacts of this ecological disaster. Suddenly we have climatic data which have reshaped our understanding of the events of 1815 and the years that followed. Now it is historians’ job to explore the social, political, and cultural influence of this catastrophic event. All this and more today as we explore the eruption of Mount Tambora in April 1815.
I’m Marissa Rhodes
And I’m Averill Earls
|
yes
|
Ufology
|
Was the Phoenix Lights incident a result of military flares?
|
yes_statement
|
the "phoenix" lights "incident" was a "result" of "military" "flares".. "military" "flares" caused the "phoenix" lights "incident".
|
https://en.wikipedia.org/wiki/Phoenix_Lights
|
Phoenix Lights - Wikipedia
|
Lights of varying descriptions were seen by thousands of people between 7:30 pm and 10:30 pm MST, in a space of about 300 miles (480 km), from the Nevada line, through Phoenix, to the edge of Tucson. Some witnesses described seeing what appeared to be a huge carpenter's square-shaped UFO containing five spherical lights. There were two distinct events involved in the incident: a triangular formation of lights seen to pass over the state, and a series of stationary lights seen in the Phoenix area.[3][4]
Both sightings were supposedly due to aircraft participating in Operation Snowbird, a pilot training program of the Air National Guard based at Davis-Monthan Air Force Base in Tucson, Arizona. The first group of lights were later identified as a formation of A-10 Thunderbolt II aircraft flying over Phoenix while returning to Davis-Monthan. The second group of lights were identified as illumination flares dropped by another flight of A-10 aircraft that were on training exercises at the Barry Goldwater Range in southwest Arizona. Fife Symington, governor of Arizona at the time, years later recounted witnessing the incident, describing it as "otherworldly."[5][4]
Reports of similar lights arose in 2007 and 2008, and were attributed to military flares dropped by fighter aircraft at Luke Air Force Base,[6] and flares attached to helium balloons released by a civilian, respectively.[7]
On March 13, 1997, at 7:55 pm MST, a witness in Henderson, Nevada, reported seeing a large, V-shaped object traveling southeast. At 8:15 pm, an unidentified former police officer in Paulden, Arizona, reported seeing a cluster of reddish-orange lights disappear over the southern horizon. Shortly afterwards, there were reports of lights seen over the Prescott Valley, Arizona. Tim Ley and his wife Bobbi, his son Hal and his grandson Damien Turnidge first saw the lights when they were about 65 miles (105 km) away from them.[8]
At first, the lights appeared to them as five separate and distinct lights in an arc shape, as if they were on top of a balloon, but they soon realized that the lights appeared to be moving towards them. Over the next ten or so minutes, the lights appeared to come closer, the distance between the lights increased, and they took on the shape of an upside-down V. Eventually, when the lights appeared to be a couple of miles away, the family said they could make out a shape that looked like a 60-degree carpenter's square, with the five lights set into it, with one at the front and two on each side.[9]
Soon, the object with the embedded lights appeared to be moving toward them, about 100 to 150 feet (30 to 46 m) above them, traveling so slowly that it gave the appearance of a silent hovering object, which seemed to pass over their heads and went through a V opening in the peaks of the mountain range towards Piestewa Peak Mountain and toward the direction of Phoenix Sky Harbor International Airport.[10]
Between 8:30 and 8:45 pm, witnesses in Glendale, a suburb northwest of Phoenix, saw the light formation pass overhead at an altitude high enough to become obscured by the thin clouds. Amateur astronomer Mitch Stanley in Scottsdale, Arizona, also observed the high altitude lights "flying in formation" through a telescope. According to Stanley, they were quite clearly individual airplanes.[11]
Approximately 10:00 pm that same evening, a large number of people in the Phoenix area reported seeing "a row of brilliant lights hovering in the sky, or slowly falling". A number of photographs and videos were taken, prompting author Robert Sheaffer to describe it as "perhaps the most widely witnessed UFO event in history".[12]
According to Sheaffer, what became known as "the Phoenix Lights" incident of 1997 "consists of two unrelated incidents, although both were the result of activities of the same organization: Operation Snowbird, a pilot training program operated in the winter by the Air National Guard, out of Davis-Monthan Air Force Base in Tucson, Arizona."[12] Tucson astronomer and retired Air Force pilot James McGaha said he also investigated the two separate sightings and traced them both to A-10 Thunderbolt II aircraft flying in formation at high altitude.[13]
The first incident, often perceived as a large “flying triangle” by witnesses, began at approximately 8:00 pm, and was due to five A-10 jets from Operation Snowbird following an assigned air traffic corridor and flying under visual flight rules. Federal Aviation Administration (FAA) rules concerning private and commercial aircraft do not apply to military aircraft, so the A-10 formation displayed steady formation lights rather than blinking collision lights. The formation flew over Phoenix and on to Tucson, landing at Davis-Monthan AFB about 8:45 pm.[12]
The second incident, described as "a row of brilliant lights hovering in the sky, or slowly falling", began at approximately 10:00 pm, and was due to a flare drop exercise by different A-10 jets from the Maryland Air National Guard, also operating out of Davis-Monthan AFB as part of Operation Snowbird.[12] The U.S. Air Force explained the exercise as utilizing slow-falling, long-burning LUU-2B/B illumination flares dropped by a flight of four A-10 aircraft on a training exercise at the Barry M. Goldwater Air Force Range in western Pima County, Arizona. The flares would have been visible in Phoenix and appeared to hover due to rising heat from the burning flares creating a "balloon" effect on their parachutes, which slowed the descent.[14] The lights then appeared to wink out as they fell behind the Sierra Estrella mountain range to the southwest of Phoenix.
A Maryland ANG pilot, Lt. Col. Ed Jones, responding to a March 2007 media query, confirmed that he had flown one of the aircraft in the formation that dropped flares on the night in question.[14] The squadron to which he belonged was at Davis-Monthan AFB on a training exercise at the time, and flew training sorties to the Goldwater Air Force Range on the night in question, according to the Maryland ANG. A history of the Maryland ANG published in 2000 asserted that the squadron, the 104th Fighter Squadron, was responsible for the incident.[15] The first reports that members of the Maryland ANG were responsible for the incident were published in The Arizona Republic in July 1997.[16]
Later comparisons with known military flare drops were reported on local television stations, showing similarities between the known military flare drops and the Phoenix Lights.[6] An analysis of the luminosity of LUU-2B/B illumination flares, the type which would have been in use by A-10 aircraft at the time, determined that the luminosity of such flares at a range of approximately 50–70 miles (80–113 km) would fall well within the range of the lights viewed from Phoenix.[17]
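As a rough plausibility check of that kind of luminosity claim (a minimal sketch only, not the cited analysis, and assuming a nominal output of about 1.8 million candela for an LUU-2-class flare, a figure treated here as an assumption), the inverse-square law gives the apparent brightness such a flare would present at those distances:

    import math

    INTENSITY_CD = 1.8e6   # assumed luminous intensity of an LUU-2-class flare, in candela
    MAG0_LUX = 2.5e-6      # approximate illuminance delivered by a magnitude-0 star

    for miles in (50, 70):
        d_m = miles * 1609.34                  # distance in metres
        lux = INTENSITY_CD / d_m ** 2          # inverse-square law
        mag = -2.5 * math.log10(lux / MAG0_LUX)
        print(f"{miles} mi: {lux:.1e} lux, roughly visual magnitude {mag:.1f}")

Under those assumptions the flare still presents as a conspicuous star-like point (roughly magnitude 2 to 3), so distance alone would not rule flares out as the source of naked-eye lights seen from Phoenix.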
During the Phoenix event, numerous still photographs and videotapes were made showing a series of lights appearing at a regular interval, remaining illuminated for several moments, and then going out. The images were later determined to be the result of mountains not visible by night that partially obstructed the view of aircraft flares from certain angles to create the illusion of an arc of lights appearing and disappearing one by one.[18][17]
Shortly after the 1997 incident, Arizona Governor Fife Symington III held a press conference, joking that "they found who was responsible" and revealing an aide dressed in an alien costume. Later in 2007, Symington reportedly told a UFO investigator he'd had a personal close encounter with an alien spacecraft but remained silent "because he didn't want to panic the populace". According to Symington, "I'm a pilot and I know just about every machine that flies. It was bigger than anything that I've ever seen. It remains a great mystery. Other people saw it, responsible people," Symington said Thursday. "I don't know why people would ridicule it".[13][19][20][21]
On April 21, 2008, lights were reported over Phoenix by local residents.[23] These lights reportedly appeared to change from square to triangular formation over time. A valley resident reported that shortly after the lights appeared, three jets were seen heading west in the direction of the lights. An official from Luke AFB denied any U.S. Air Force activity in the area.[23]
On April 22, 2008, a resident of Phoenix told a newspaper that the lights were nothing more than his neighbor releasing helium balloons with flares attached.[24] This was confirmed by a police helicopter.[24] The following day, a Phoenix resident, who declined to be identified in news reports, stated that he had attached flares to helium balloons and released them from his back yard.[7]
The Phoenix Lights...We Are Not Alone Documentary, Lynne D. Kitei, M.D., executive producer, in collaboration with Steve Lantz Productions. Based on the book, The Phoenix Lights...A Skeptic's Discovery That We Are Not Alone and featuring astronaut Edgar Mitchell and former Governor of Arizona Fife Symington.[25]
|
The first reports that members of the Maryland ANG were responsible for the incident were published in The Arizona Republic in July 1997.[16]
Later comparisons with known military flare drops were reported on local television stations, showing similarities between the known military flare drops and the Phoenix Lights.[6] An analysis of the luminosity of LUU-2B/B illumination flares, the type which would have been in use by A-10 aircraft at the time, determined that the luminosity of such flares at a range of approximately 50–70 miles (80–113 km) would fall well within the range of the lights viewed from Phoenix.[17]
During the Phoenix event, numerous still photographs and videotapes were made showing a series of lights appearing at a regular interval, remaining illuminated for several moments, and then going out. The images were later determined to be the result of mountains not visible by night that partially obstructed the view of aircraft flares from certain angles to create the illusion of an arc of lights appearing and disappearing one by one.[18][17]
Shortly after the 1997 incident, Arizona Governor Fife Symington III held a press conference, joking that "they found who was responsible" and revealing an aide dressed in an alien costume. Later in 2007, Symington reportedly told a UFO investigator he'd had a personal close encounter with an alien spacecraft but remained silent "because he didn't want to panic the populace". According to Symington, "I'm a pilot and I know just about every machine that flies. It was bigger than anything that I've ever seen. It remains a great mystery. Other people saw it, responsible people," Symington said Thursday. "I don't know why people would ridicule it".[13][19][20][21]
On April 21, 2008, lights were reported over Phoenix by local residents.[23] These lights reportedly appeared to change from square to triangular formation over time. A valley resident reported that shortly after the lights appeared, three jets were seen heading west in the direction of the lights.
|
yes
|
Ufology
|
Was the Phoenix Lights incident a result of military flares?
|
yes_statement
|
the "phoenix" lights "incident" was a "result" of "military" "flares".. "military" "flares" caused the "phoenix" lights "incident".
|
https://enigmalabs.io/library/ca2dbe16-51c6-4a89-91c6-007d6f85c011
|
Arizona, March 13, 1997 (The Phoenix Lights) | Enigma Labs
|
The Phoenix Lights is a well-known UFO sighting that took place in the skies over and around Phoenix, Arizona on the night of March 13, 1997. According to witness testimony, the Phoenix Lights actually consists of two separate events spanning from 7:30 PM local time to around 10:30 pm local time, covering approximately 300 miles from the Nevada state line, through Phoenix, and finally to the edge of Tucson, Arizona.
The first sightings occurred as early as 7:30 pm and up to 8:30 pm and involved a series of lights in a "V" shape that many witnesses describe as a boomerang-shaped craft that flew at a low speed directly over homes across much of the Mesa and Phoenix skyline. At present, there is only one video that purports to show that event.
The second sighting, which occurred around 10 to 10:30 pm, involved a line of up to nine bright lights in the sky southwest of Phoenix that was recorded by a number of witnesses. This event boasts over 20,000 direct witnesses, as well as a number of videos, and is often touted as the largest single mass UFO sighting in history.
In the two and a half decades since the pair of events, a number of ideas and theories have emerged to explain them. In the first sighting, the large number of witnesses who claim to have seen a physical, boomerang-shaped craft has been countered by a lone amateur astronomer who says he saw the lights through his telescope and was able to confirm they were simply planes flying in a V formation and not a single craft.
The second event has been more closely scrutinized, with analysis of the various recordings of the events and testimony from the Air National Guard offering some support for these lights being flares that were dropped by military aircraft. This report was ultimately confirmed by the Air National Guard, even though they had initially denied having any planes in the sky that night.
Following both sightings and the outpouring of concern by local citizens, then Arizona Governor Fife Symington held a press conference where his chief of staff came out in an alien costume and handcuffs. At the time, Symington said this was an effort to calm the frenzy that was building around the pair of sightings. Years later, Symington said he regretted the press conference, while also admitting that he was a witness to the V-shaped lights.
Symington's initially cavalier approach to the concerns of local citizens was in notable contrast to local Phoenix councilwoman Frances Emma Barwood and even Arizona Senator John McCain, both of whom asked for a formal investigation into the mysterious lights. In numerous interviews, Barwood says that, like Symington, her office was also flooded with calls from concerned citizens, leading her to demand the Air Force investigate the sightings.
Also notable, the lights were reported to the local FAA by a "general aviation pilot" who called from his private airplane just before landing to report the lights and see if the airport had anything on its radar. According to that witness, the radar operators said they did not have anything on their scopes. In 2017, actor Kurt Russell came forward as that general aviation pilot, explaining in a live television appearance that he had been arriving in Phoenix with his then 10-year-old son Oliver and that he was the pilot who called the lights in to the FAA.
The First Event: First Witnesses & The Ley Family
Starting around 8 pm local time, a number of Phoenix-area witnesses say they saw a large, boomerang-shaped craft fly slowly over the city, with many witnesses also saying the massive, silent craft flew directly over their heads.
The first report came in at about 6:55 pm PST (7:55 pm MST), from a man who reported seeing a V-shaped object above Henderson, Nevada. That witness said it was about the "size of a (Boeing) 747," that it sounded like "rushing wind," and had six lights on its leading edge.
Soon, other reports were coming in from Prescott valley residents, who said the object was definitely solid since it blocked out stars as it passed overhead. One of the most referenced witness accounts of this first event comes from local residents Tim and Bobbie Ley and their son Hal.
In an interview with the local Phoenix media, Mr. Ley said he was coming into the driveway with his son, Hal, when he looked out the side window and saw five distinct lights "far up in the direction of Prescott (Arizona)."
At that point, Hal ran into the house to get his mother, Bobbie. Mr. Ley says that he, his grandson (Damian), his wife Bobbie, and their son Hal continued to monitor the lights as they came closer to their family home.
"All of a sudden," said Mr. Ley, "instead of being five lights in a round, arc shape, the lead light seemed to come out in front, and now it looked like a V-formation flying towards us."
At this point, Ley says he started to think they might be a formation of helicopters. But as the lights held a perfect formation, Ley said that he and his family started to think it was some kind of object.
"When it was about just a couple of miles away ... All of a sudden, I caught the image of what it was. I could see its outline. It was almost the same exact color of the sky, but because it was passing over stars, the stars were being blocked out, and then come back after it passed. And it looked like a very geometric carpenter's square. Like an equilateral triangle without the bottom."
Bobbie Ley echoed this description.
"When it was coming toward us ... I could see it just as clear as a bell, that it was shaped, what I said at the time, was like a carpenter's square. It looked like a flying carpenter's square."
"The outer edge of this thing was so perfectly straight," added Mr. Ley, "dark, almost the same color as the sky. But you could see the stars, and then you couldn't see the stars."
He then describes the massive size of the object as it flew directly overhead, noting that it was so large he could no longer see the farthest edge of the craft. Hal Ley said it was also flying very low as it passed.
"My nephew Damian said that you probably could have thrown a tennis ball or a rock up and hit it. It was that close," he said.
Tim Ley also describes the brightness of the lights as they passed directly overhead, noting that they were extremely bright but didn't seem to illuminate the area around them. He then says the craft continued past them and toward some local mountain peaks before he and his family lost sight of it in the haze of the lights from the nearby airport.
The First Event: Nurse David Parker
Another witness who more or less supports the Leys' account is a nurse named David Parker, who recounted his observations on the 2022 television show Mysteries Decoded: The Phoenix Lights.
"I was driving back west, from Mesa to Phoenix ... and when I got off the freeway very close to here, there was this massive boomerang floating in the air, coming towards me very slow. Huge. As it came closer, it gently floated over me above my head. I could see the entire skin of the craft. It was like a gunmetal gray, slightly reflective, and it was covered with thousands of what looked like thumbprints, little oval divots, all over the bottom side of it."
When asked to estimate the size of the craft, Parker said, "I'm gonna say, from wingtip to wingtip, this boomerang-shaped craft, probably a mile to a mile and a half wide. I could see there was multiple lights at the bottom of it."
Parker also added that the craft took up about 70% of the horizon as it passed overhead.
"I have never seen anything like it," he said. "With the massive size that it was, and how it just drifted by. It didn't make any wind, it didn't blow my hair. It didn't make any sound."
Parker says he was initially very vocal about what he saw, but after Governor Symington minimized the event in the now notorious press conference, he decided to "shut up about it."
Amateur Astronomer Offers a Different Explanation for First Event
Alongside the numerous witnesses who say they saw a large, V-shaped craft, there is also a witness named Mitch Stanley who said he was able to view the craft through his telescope, and that it was not a single craft.
According to his testimony, Stanley had a Dobsonian telescope that employs a 10-inch mirror. This size telescope gathers 1,500 times as much light as the human eye. According to media reports, Stanley's telescope also had an eyepiece that magnified the sky 60 times.
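That light-gathering figure is easy to sanity-check (a quick sketch, assuming a 254 mm primary mirror and a dark-adapted pupil of roughly 6.5 to 7 mm; light grasp scales with the square of the aperture ratio):

    MIRROR_MM = 254.0             # 10-inch Dobsonian primary mirror
    for pupil_mm in (6.5, 7.0):   # typical dark-adapted human pupil diameters
        gain = (MIRROR_MM / pupil_mm) ** 2   # area ratio = (diameter ratio) squared
        print(f"pupil {pupil_mm} mm -> about {gain:,.0f}x the eye's light grasp")

With a roughly 6.5 mm pupil the ratio comes out near 1,500x, matching the figure quoted above; the 60x eyepiece is a separate effect, magnifying the image enough for individual aircraft to be resolved.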
After spotting the lights around 8:30 pm, Stanley says his mother asked what the formation was. He says he checked them out using this telescope, and that he could see that it was not a single object but a formation of individual objects, telling his mother that it was simply "Planes."
"They were planes," he said. "There's no way I could have mistaken that."
A number of other witnesses to the initial event also say they couldnât tell if it was a single object or simply aircraft flying in formation, but Stanley is the only witness to definitively state that the V-shaped lights were a formation of planes.
Only One Video of the First Event Exists
Since the 1997 sightings, numerous videos of the 10 pm event have surfaced; however, only one video purporting to show the first event has come to light. Mysteries Decoded had this video analyzed by audio and video expert and forensic analyst Jennifer Owen.
"The power of video processing is, now we can go frame by frame," explains Owen as she cues up the video in her analysis software. "And in doing so, we can see, it's definitely a singular mass, flying object."
"It's geometrically sound," she adds. "We can see the sky, but what about right here?"
Owen points to the center of the formation of the lights before continuing her analysis.
"There's nothing in between this mass right here. So, that would lead us to believe that this is an aircraft. This is a pretty big aircraft that holds its shape, and these five lights moving as a singular unit throughout the video."
Thus far, no other photos or videos of the first event have surfaced.
The Second Event
The second event to take place on the night of March 13, 1997, involved a string of lights along the southwestern edge of Phoenix toward Sierra Estrella. According to witness statements and numerous video recordings, the string of up to nine lights seemed to hang motionlessly in the sky for several minutes before winking out one by one. This event was witnessed by thousands of local residents and caused a flood of calls to local officials for an explanation.
The most famous of the videos of the second event, which has appeared in the majority of media coverage, came from local resident Mike Krzyston, who witnessed and recorded the lights from his Moon Valley home.
The local Air National Guard, stationed at Tucson's Davis-Monthan Air Force Base, initially said they had not dropped any flares that evening, even though flares had been offered early on as the most likely cause of the lights seen to the southwest of Phoenix. However, a 2017 article titled "The Phoenix Lights 20 Years Later: Still the Same Set of Planes and Flares over Arizona" says it was ultimately determined that the lights seen by thousands of residents were indeed flares.
"In June (1997), KPNX-TV Channel 12 reporter Blair Meeks filmed a drop of flares by military planes over the Air Force gunnery ranges southwest of Phoenix. The hovering lights looked remarkably like the 10 p.m. lights of March 13, and Meeks suggested it as a possible solution to that night's second event."
"Within days, Tucson Weekly broke the news that the Maryland Air National Guard, in Arizona for winter training, had a squad of A-10 fighters over the gunnery range that night (March 13), and they had dropped flares. An Arizona National Guard public information officer, Captain Eileen Bienz, had determined that the flares had been dropped at 10 p.m. over the North Tac range 30 miles southwest of Phoenix, at an unusually high altitude: 15,000 feet."
The same report notes that Captain Drew Sullins, spokesman for the Maryland Air National Guard, said that the A-10s, "which have squarish wings," never flew north of Phoenix, "so they could not have been responsible for the formation of planes seen at 8:30 p.m."
The U.S. Air Force later explained that the second event was slow-falling, long-burning LUU-2B/B illumination flares dropped by a flight of four A-10 Warthog aircraft on a training exercise at the Barry M. Goldwater Air Force Range in western Pima County.
Video Analysis
A self-described video expert named Jim Dilettoso determined that the lights seen around 10 pm could not have been flares, since his spectral analysis showed their light signature was inconsistent with flares. Numerous experts later countered Dilettoso's claims, noting that the video camcorder recordings he was analyzing did not possess the required data to do such a spectral analysis.
To bolster his claims, Dilettoso said he contacted Dr. Richard Powell at the University of Arizona, and that Powell supported his spectral analysis.
"He called here and I talked to him," Powell later conceded, "and I could not, for the life of me, understand him."
"I don't know how you take a photograph or a videotape after the fact and analyze it and get that information out," added Powell of Dilettoso's suspect claims. "We didn't say that his method was valid, we said we didn't have any other way that was any better."
After initially reporting Dilettoso's claim as a valid "spectral analysis," the Discovery Channel sought a second analysis of the Krzyston footage from Dr. Leonid Rudin at the Pasadena image-processing firm Cognitech. Rudin was also provided a daytime shot from Krzyston's yard which depicted the distant Sierra Estrella, a mountain range that is not visible in the nighttime video.
In his analysis, Rudin matched up the daytime and nighttime shots frame by frame, lining them up precisely using a distant ridge. As a result, he was able to show not only that the lights were above the Estrella, but also that each one blinked out at the precise time it would have reached the top of the mountains from the camera's vantage point. His ultimate conclusion was that this is consistent with the 10 pm lights being flares.
Governor Fife Symington Press Conference & Sighting
In a now infamous press conference the day after the sighting, then Arizona Governor Fife Symington addressed the matter to an anxious public. After some initial statements, Symington turned toward the back of the stage.
"And now I'll ask Doctor Stein and his colleagues to escort the accused into the room so that we may all look upon the guilty party."
At this point, Symington's chief of staff, who was dressed in a generic alien costume and handcuffed, was led to the stage. When the audience of reporters and concerned citizens, who appeared to be hoping for an actual explanation of the mysterious lights, sounded unamused, Symington shook his head, chuckled, and stated, "Now this just goes to show that you guys are entirely too serious."
Years later, Symington said he regretted the press conference and ultimately penned a piece for CNN chronicling his own experience, which included witnessing the first event.
"In 1997, during my second term as governor of Arizona, I saw something that defied logic and challenged my reality," Symington wrote.
"I witnessed a massive delta-shaped craft silently navigate over Squaw Peak, a mountain range in Phoenix, Arizona. It was truly breathtaking. I was absolutely stunned because I was turning to the west looking for the distant Phoenix Lights. To my astonishment this apparition appeared; this dramatically large, very distinctive leading edge with some enormous lights was traveling through the Arizona sky. As a pilot and a former Air Force Officer, I can definitively say that this craft did not resemble any man-made object I'd ever seen. And it was certainly not high-altitude flares because flares don't fly in formation."
In other reports, Symington described what he saw as "otherworldly."
In a 2007 interview with The Daily Courier in Prescott, Arizona, Symington said, "I'm a pilot and I know just about every machine that flies. It was bigger than anything that I've ever seen. It remains a great mystery. Other people saw it, responsible people. I don't know why people would ridicule it."
Conspiracy Theory
Over time, the majority of witnesses have seemingly accepted the answer that the 10 pm lights were most likely flares dropped by Air National Guard pilots. However, some have suggested these flares were only dropped to confuse witnesses to the V-shaped craft seen two hours earlier.
In the episode of Mysteries Decoded, host Jennifer Marshall explains this theory to former Air National Guard pilot Jeff Bucher, who was actually in the air in Phoenix on the night of March 13, 1997, and who says he dropped flares.
"There is a conspiracy theory that says there actually was some sort of event, whether it was extraterrestrial or foreign military in nature, and that the National Guard deployed the A-10s as a way to kind of cover up or divert attention away from the first event," says Marshall. "What are your thoughts on that?"
"A scenario that you are talking about, we would almost have to be on a strip alert of some sort, which we don't do, with the munitions loaded to make all of that happen," countered Bucher.
An anonymous Vietnam War fighter pilot who offered testimony about his experience that night isn't convinced that the flares seen at 10 pm weren't dropped to distract from the sighting of the V-shaped craft he and others saw.
"Why, on this night, were flares dropped on the very northern edge of the range on a night when there is this other sighting?" he asks, also stating he thought it might have been designed to confuse those who saw the initial craft.
Kurt Russell Comes Forward as General Aviation Pilot
Reports of the event often refer to a "general aviation pilot" who spotted the Phoenix Lights as he flew his small private plane in for a landing. That pilot's identity remained a mystery until 2017, when actor Kurt Russell came forward during a BBC television broadcast where the incident was being discussed and admitted he was the pilot who had contacted the radar tower to report the lights.
"The tail number of that plane was Bonanza Two Tango Sierra, and I was the pilot," said Russell.
Following the clearly shocked reactions from the rest of the panel, which included fellow actor Chris Pratt, the reporter who was discussing the event looked over his paperwork and exclaimed, "It doesn't say that in the briefing!"
Russell then proceeded to recount his experience that night.
"Oliver (his son) and I were flying in, I was flying him to go see his girlfriend. And we're on approach, and I saw six lights over the airport. In an absolute uniform, in a 'V' shape."
Russell said that after his son expressed concern for their safety, he called the lights in to the tower. According to Russell, the tower said that they were not "painting anything," meaning the lights he and his son were seeing were not showing up on the airport's radar.
"I said, 'Well, okay, I'm gonna declare it's unidentified,'" explained Russell, "but it is flying and it is six objects."
Russell says that at that point they landed without incident, and he put the encounter out of his mind. Two years later, his longtime partner, actor Goldie Hawn, was watching a TV show about the Phoenix Lights, and the incident suddenly came back to him.
"I'm kinda hearing this TV going and I stopped, and I started watching, and it was on that event. Now that was the most viewed event, over 20,000 people saw that. And I'm watching this, and I'm feeling like Richard Dreyfuss in Close Encounters of the Third Kind. I was like, 'Why do I know this?' And it's not coming to me. And then they said a pilot reported it, a general aviation pilot reported it on landing. I had never thought of it since then, and I said, 'That was me! That was me!' And I said, wait a minute, I'll go to my log books. And I go to my log books, and there was the flight, at that time, and I didn't mention anything about the UFO."
Russell points out that he had put it out of his mind and Oliver never mentioned it, so had Goldie not been watching that particular show at that particular time, he may have never made the connection and the identity of the general aviation pilot would have remained unknown.
"That, to me, was the weird part," concluded Russell.
Appearances in Media & Legacy
Due to the massive scale of the Phoenix Lights events, which included as many as 20,000 witnesses, a number of documentaries have recounted the events of March 13, 1997. These include a 2005 documentary titled "Phoenix Lights," an entire episode of the four-part J.J. Abrams docuseries "UFO," which appeared on Showtime in 2021, the Mysteries Decoded episode from 2022, and several others.
As of 2022, the Phoenix Lights case, particularly the first of the two events, is still considered unsolved. The second event, although potentially military flares, also remains historically significant as the largest mass sighting in UFO history. Both events continue to inspire books, documentaries, and TV shows to this day.
|
The Second Event
The second event to take place on the night of March 13, 1997, involved a string of lights along the southwestern edge of Phoenix toward Sierra Estrella. According to witness statements and numerous video recordings, the string of up to nine lights seemed to hang motionlessly in the sky for several minutes before winking out one by one. This event was witnessed by thousands of local residents and caused a flood of calls to local officials for an explanation.
The most famous of the videos of the second event, which has appeared in the majority of media coverage, came from local resident Mike Krzyston, who witnessed and recorded the lights from his Moon Valley home.
The local Air National Guard, stationed at Tucson's Davis-Monthan Air Force Base, initially said they had not dropped any flares that evening, even though flares had been offered early on as the most likely cause of the lights seen to the southwest of Phoenix. However, a 2017 article titled "The Phoenix Lights 20 Years Later: Still the Same Set of Planes and Flares over Arizona" says it was ultimately determined that the lights seen by thousands of residents were indeed flares.
"In June (1997), KPNX-TV Channel 12 reporter Blair Meeks filmed a drop of flares by military planes over the Air Force gunnery ranges southwest of Phoenix. The hovering lights looked remarkably like the 10 p.m. lights of March 13, and Meeks suggested it as a possible solution to that night's second event."
"Within days, Tucson Weekly broke the news that the Maryland Air National Guard, in Arizona for winter training, had a squad of A-10 fighters over the gunnery range that night (March 13), and they had dropped flares.
|
yes
|
Ufology
|
Was the Phoenix Lights incident a result of military flares?
|
yes_statement
|
the "phoenix" lights "incident" was a "result" of "military" "flares".. "military" "flares" caused the "phoenix" lights "incident".
|
https://tonyortega.org/the-phoenix-lights-20-years-later-still-the-same-set-of-planes-and-flares-over-arizona/
|
The 'Phoenix Lights': 20 years later, still the same set of planes and ...
|
The ‘Phoenix Lights’: 20 years later, still the same set of planes and flares over Arizona
[In 1998, on the one-year anniversary of the ‘Phoenix Lights,’ we published this lengthy cover story at the Phoenix New Times about what really happened in the skies over Arizona on March 13, 1997. With the 20-year anniversary upon us, we thought we’d post a copy of the story here at our own website in anticipation of what will likely be another wave of misinformation about the events of that night. Count on media outlets once again to fail in their most basic responsibility: To explain that there were TWO, very distinct incidents that happened that night. An earlier “vee” of lights traversed nearly the entire state, and was identified as a group of planes flying in formation by an astronomer in Phoenix using a powerful telescope. Later, a drop of flares was seen over a military range southwest of the city. Because news outlets never make this clear, people who saw the planes argue that flares can’t explain what they saw, and the people who saw the flares know that they didn’t see planes. We cleared up that confusion with this story, and we also profiled the people who were profiting from the confusion they were causing by promoting nonsense about it. Expect more confusion and profit-making now that the 20th anniversary is here. — Tony O.]
THE HACK AND THE QUACK
The “Phoenix Lights” made Frances Emma Barwood the darling of the global space-alien lobby. And it’s transformed computer geek Jim Dilettoso into a star in the UFO firmament.
by Tony Ortega
Jim Dilettoso is playing a duet on a piano with a man who has a cross made of his own crusty, drying blood on his forehead.
On Dilettoso’s own head is a mass of curly grayish hair. His mane dips and sways with the fluid rhythm he lays down, and his swaying locks, combined with his wire-rim glasses and the handsome seriousness of his face, evoke the eccentric genius and renowned UFO researcher he’s rumored to be.
Plucking out a tentative melody on the higher keys, a moon-faced Giorgio Bongiovanni beams as he tries to keep up. With his tangled brown locks, Bongiovanni might be taken for a Deadhead if it weren’t for the blackish dried blood decorating his forehead. Ridges of the finger-smoothed ocher make a crude cross a few inches wide; around the cross, a field of fresher, redder blood is smeared.
Bongiovanni’s blood sources are hidden beneath fingerless gloves. Eight years ago, Bongiovanni claims, the Virgin Mary visited him, delivered a message about Jesus consorting with space aliens, and, after Bongiovanni offered to help carry Christ’s message, the Virgin zapped his palms with lasers that came out of her eyes. He’s been carrying his stigmata ever since, rubbing the blood coming from his palms, feet and other sites onto his forehead to maintain his cross.
The duet draws a swarm of photographers who block the view of the other 500 people sitting at tables in the ballroom of the Gold River Casino in Laughlin, Nevada.
It’s the culminating Saturday-night banquet of the Seventh Annual International UFO Congress. There’s a giant blowup space alien in the parking lot. Extraterrestrials and E.T. hybrids disguised as middle-aged white people sit among the Earthly guests munching on a lasagna buffet. In the hall next door, you can get your aura photographed.
Sitting at the head table, naturally, is Arizona secretary of state hopeful and former Phoenix councilwoman Frances Emma Barwood, who is scheduled to address the gathering.
“This is all new to us,” Barwood’s husband, Mike Siavelis, says sheepishly as the evening descends into surreality.
Barwood merely smiles.
Her tablemates include Stephen Bassett, Barwood’s UFO political consultant who’s paid to work the space-alien side of her bid to become secretary of state. He’s busy introducing Barwood to the luminaries of the UFO community.
The man sitting across from Barwood, for example, Dr. Jim Harder, once taught electrical engineering at UC-Berkeley but today helps people, through hypnosis, recover memories of being abducted by aliens. Bassett speaks of Harder in hushed tones, clearly wanting Barwood to know that she’s in the presence of UFO royalty.
Harder’s wife, Cedar, leans over to make an even more startling revelation.
“My husband,” she says, “he’s an E.T.”
“Did he tell you that?” she’s asked.
“He didn’t have to. I realized it by observation.”
She should know. She reveals later that she recently recovered memories of being abducted by aliens herself.
Until Barwood’s speech caps off the night, the UFO Congress will entertain itself with bad stand-up comedy, a “song for the future” by a woman who says she learned it by channeling aliens, and several group photos.
But the highlight is a tribute given to Shari Adamiak, who recently died. Rather than eulogize Adamiak with a description of who she was or what she accomplished, a severe woman chooses instead to tell a remarkable episode from Adamiak’s life.
Adamiak had accompanied UFO researcher Steven Greer on an expedition into Mexico. There, in a remote area, the two were surprised by soldiers carrying AK-47 rifles. Suspiciously, the soldiers’ uniforms carried no insignia. Adamiak and Greer figured they were dead, but they prayed ardently to space aliens. In obvious answer to their plaint, the two spotted a flying saucer overhead.
The craft had no sooner passed when the soldiers, remarkably . . . .
At this point, the narrator halts, sensing that even in this atmosphere of abject credulity, her story is reaching ridiculous proportions. To make sure everyone gets the point, she says emphatically, as a challenge: “This is a true story.”
. . . the soldiers, under the beneficent influence of extraterrestrials, walked to a van, dropped their AK-47s, picked up guitars and began strumming, enabling Adamiak and Greer to make their escape.
“True story,” Bassett assures Barwood.
Truth by assertion: It’s in abundant supply at the UFO Congress, where people are more interested in discussing the implications of aliens living among us than looking for hard evidence of actual landings or abductions. As Cedar Harder will say later, the conventioneers have “moved beyond talking about the nuts and bolts of UFO investigation.”
Aliens are here. They are mating with humans.
And the lights that appeared over Phoenix last March couldn’t possibly have been anything of Earth.
——————–
It’s been a remarkable year since hundreds of Arizonans thrilled to lights seen over much of the state March 13, 1997.
When Barwood, then a councilwoman, asked the city to look into the sightings, she became a national media phenomenon and will no doubt bring much outside attention — and outside campaign donations — to her otherwise unglamorous race for secretary of state.
Jim Dilettoso’s own star has risen as a result of his proclamations that the lights over Phoenix could not have been flares, airplanes or anything else man-made. His scientific-sounding claims have made him and his Tempe firm, Village Labs, a regular in television, radio and newspaper reports.
A recent edition of Hard Copy and upcoming specials on Japanese television, the UPN network, and A&E all feature Dilettoso and the spectral analysis he claims to do from videotapes of the event.
“These were not flares,” he says with certainty.
For many, the assertions of truth are enough.
And for the media, such proclamations not only prove sufficient but make for good copy.
Perhaps no assertion has been as widely taken for proof that aliens visited Phoenix last March than Dilettoso’s claims that his “sophisticated optical analysis” eliminates more prosaic explanations for the March 13 lights. From the Discovery Channel to the Arizona Republic to USA Today, Dilettoso has been advertised as an expert who can divine the nature of lights with his bank of computers. Not one of the publications or programs has described the scientific principles behind Dilettoso’s claims.
With the arrival of the Phoenix Lights anniversary, news reports will no doubt mushroom, and Dilettoso and his techniques will receive more attention as reporters breathlessly tell the UFO story of the decade: how Phoenix has, in only a year, become the center of the UFO cosmos, the site of recurring visits by strange aliens, and home of a heroic political avatar.
What they won’t tell you is that Dilettoso employs the language of science to mask that, given the tools he uses, he is incapable of doing what he claims to be doing.
So what? you say. Does anyone really care if a few oddballs gain notoriety from science fiction? Who are they hurting?
Dr. Paul Scowen, a visiting professor of astronomy at Arizona State University, cares.
“I become quite offended when people pull this sort of nonsense,” Scowen says. “We in the science business make our living doing this stuff to the best ability we can, and applying all of the knowledge that humankind has assembled to this point in science to figure out what’s going on. . . .
“Why should people care? Because it’s been so high-profile and they’ve been told lies. That’s why people should care.”
——————–
[Dilettoso, ‘authenticating’ footage of alien visitors]
Many Valley residents had gone out last March 13 looking for a spectacular event in the night sky. Comet Hale-Bopp was near its closest approach to Earth, and that night it could be seen in the northwest, as bright a comet as has been seen in 20 years.
About 8:30, however, something else appeared — a vee pattern of lights that traveled nearly the entire length of the state in about 40 minutes.
The witnesses included New Times writers. David Holthouse and Michael Kiefer both saw the pattern of five lights move slowly overhead. Holthouse says he perceived that something connected the lights in a boomerang shape; Kiefer disagrees, saying they didn’t seem connected. Like other witnesses, both reported that the vee made no sound, and each saw slightly different colors in the lights. Both watched as the lights gradually made their way south and faded from view.
The many eyewitnesses have elaborated on this basic model: Some saw that the lights were not connected, others swear they saw a giant triangular craft joining them, some felt it was at high altitude, others claim it was barely over their heads and moving very slowly. All seem to be describing the same lights at the same time: About 8:15 the lights passed over the Prescott area, about 15 minutes later the vee moved over Phoenix, and at 8:45 it passed south of Tucson.
That’s about 200 miles in 30 minutes, which indicates that the lights were traveling about 400 miles per hour.
An alert owner of a home video camera caught the 8:30 vee pattern on tape. Terry Proctor filmed the vee for several minutes. The quality of the tape is poor, and even under enhancement the video shows nothing joining the five lights of the pattern. However, the pattern of lights changes over just a few seconds. The lights clearly move in relation to each other, proving that the lights represent five separate objects, rather than a solid body. This is consistent with witness reports from Prescott, where one light trailed the others temporarily.
But someone got an even better view than Proctor and his video camera.
That night, Mitch Stanley and his mother were in the yard of their Scottsdale home, where Stanley has a large Dobsonian telescope.
He and his mother noticed the vee pattern approaching from the northwest. Within seconds, Stanley was able to aim the telescope at the leading three lights of the pattern.
Stanley was using a 10-inch mirror which gathers 1,500 times as much light as the human eye, and an eyepiece which magnified the sky 60 times, effectively transporting him 60 times closer to the lights than people on the ground.
When Stanley’s mother asked him what he saw, he responded, “Planes.”
It was plain to see, Stanley says. Under magnification, Stanley could clearly see that each light split into pairs, one each on the tips of squarish wings. Even under the telescope’s power, the planes appeared small, indicating that they were flying high. Stanley says he followed the planes for about a minute, then turned his telescope to more interesting objects.
“They were planes. There’s no way I could have mistaken that,” he says.
The next day, when radio reports made Stanley aware that many thought they had seen something extraterrestrial, he told Jack Jones, another amateur astronomer, about his sighting. Jones later called both the Arizona Republic and Frances Emma Barwood. Neither called Jones or Stanley back.
Barwood says she passed on Stanley’s name to Dilettoso’s Village Labs, who didn’t call the young man until New Times first reported his story in June.
Although hundreds of Valley residents saw the vee formation, the media have paid much more attention to a separate event that occurred later that night.
At 10 p.m., up to nine bright lights were seen to appear, hover for several minutes, and then disappear southwest of Phoenix in the direction of the Sierra Estrella. Video cameras at points across the Valley caught the string of hovering lights. All nine were visible from some locations, others saw fewer.
Mike Krzyston, from the yard of his Moon Valley home, captured all nine on video. “I hit pay dirt, finally!” he exclaimed as the lights appeared. “This is a major sighting!” said another videographer as he taped five of the lights.
In June, however, KPNX-TV Channel 12 reporter Blair Meeks filmed a drop of flares by military planes over the Air Force gunnery ranges southwest of Phoenix. The hovering lights looked remarkably like the 10 p.m. lights of March 13, and Meeks suggested it as a possible solution to that night’s second event.
Within days, Tucson Weekly broke the news that the Maryland Air National Guard, in Arizona for winter training, had a squad of A-10 fighters over the gunnery range that night, and they had dropped flares. An Arizona National Guard public information officer, Captain Eileen Bienz, had determined that the flares had been dropped at 10 p.m. over the North Tac range 30 miles southwest of Phoenix, at an unusually high altitude: 15,000 feet. (Captain Drew Sullins, spokesman for the Maryland Air National Guard, says that the A-10s, which have squarish wings, never went north of Phoenix, so they could not have been responsible for the formation of planes seen at 8:30 p.m.)
Local UFO investigator Dick Motzer and others have shown that the initial appearance of the 10 p.m. lights, the number of lights seen from different elevations in the Valley, and the timing of the lights’ disappearances all correspond well with flares dropped at high altitude beyond the Sierra Estrella.
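The geometry behind that argument is straightforward sight-line math. A toy sketch (with made-up but representative numbers, not Motzer's actual figures): a flare dropped beyond the ridge stays visible only while it remains above the observer's line of sight over the ridge crest, so a string of falling flares appears to wink out one by one, and observers at higher elevations keep more of them in view for longer.

    # Toy line-of-sight model: heights in feet, distances in miles (flat-terrain approximation).
    RIDGE_MILES, RIDGE_FT = 25.0, 4500.0   # assumed distance to and height of the ridge crest
    FLARE_MILES = 35.0                     # assumed distance to the flare drop, beyond the ridge
    OBSERVER_FT = 1200.0                   # assumed observer elevation in the Valley

    # Similar triangles: the lowest flare altitude that still clears the ridge for this observer.
    cutoff_ft = OBSERVER_FT + (RIDGE_FT - OBSERVER_FT) * (FLARE_MILES / RIDGE_MILES)
    print(f"Flares sinking below about {cutoff_ft:,.0f} ft vanish behind the ridge for this observer")

Raising OBSERVER_FT lowers the cutoff, which is consistent with different numbers of lights being reported from different elevations around the Valley.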
But questions remain.
If Stanley saw that the 8:30 lights were airplanes, whose were they? And why did Tucson’s Davis-Monthan Air Force Base, where the Maryland Air National Guard’s A-10s returned that night, initially say it had no planes in the air at that time?
Krzyston and others who taped the 10 p.m. event insist that the 10 p.m. lights hovered in front of, not behind, the Estrella, where the gunnery ranges lie.
Most publicized objections to the 10 p.m. flares hypothesis have come from Jim Dilettoso, who claims that sophisticated tests performed at Village Labs show that the lights filmed by Krzyston and others could not have been flares — whatever caused the 10 p.m. event, Dilettoso claims, was like no source of man-made light.
Local and national media alike have found his statements irresistible.
While careful to tell the mainstream press that he makes no claims about extraterrestrials, that his research simply eliminates the possibility of flares, Dilettoso is perhaps feeling more bold as an increasing number of reporters seeks his opinions.
With all of the seriousness he could muster, he recently told Hard Copy: “These could be the most important events in 50 years.”
——————–
Dilettoso is needlessly conservative. If the lights of March 13 were of otherworldly origin, it would be one of the most significant events in human history.
That’s been the holy grail of a movement spawned decades ago that shows no sign of abating. But research into UFOs has changed considerably, much to the chagrin of investigators who still insist on a scientific approach to unexplained sightings.
Interest in “flying saucers” exploded in post-World War II America, prompting the Air Force to hire an astronomy professor, J. Allen Hynek, and others to investigate. For more than 20 years, Hynek and the rest of the Air Force’s Project Blue Book examined UFO sightings, the vast majority of which were easily explained as natural phenomena.
The military ended Hynek’s contract and Project Blue Book in 1969, and four years later Hynek, by then head of Northwestern University’s astronomy department, created the Center for UFO Studies. The center examined UFO claims scientifically and tabulated its results. In its initial studies, the center found, for example, that 28 percent of sightings were simply bright stars or planets (in 49 of those cases, witnesses estimated that the celestial objects were between 200 feet and 125 miles away).
Of 1,307 cases which the center examined in the early 1970s, only 20 seemed unexplainable. The center stopped short of claiming that those 20 were caused by alien spacecraft.
UFO investigator Philip J. Klass, in an article about Hynek, points out that few present researchers apply the same kinds of rigorous study to the subject. For today’s “investigators,” the slightest mystery is obvious proof of an extraterrestrial presence.
Hynek died in 1986 in Scottsdale. By then, the field he helped pioneer was changing radically.
Jim Marrs is a good example. Author of the best-selling Alien Agenda, Marrs is touted as both an expert on UFOs and the John F. Kennedy assassination (and, incredibly, connects the two in Alien Agenda, suggesting that Kennedy was killed for his knowledge of U.S.-space alien contacts). Oliver Stone mined Marrs’ 1990 book Crossfire for his conspiracy-minded film JFK.
Today, Jim Marrs is giving a sermon.
He’s a featured speaker at the Seventh Annual International UFO Congress. His message: There’s no question aliens are among us. The real question, he asserts, is what their “agenda” is.
“I feel like I’m preaching to the choir. I don’t think I need to explain anything to you,” he says in his Texas twang.
Marrs preaches about our moon, for example, asserting that it is “the original UFO,” and a great mystery. Marrs asserts that, unlike other celestial objects, the moon travels not in an ellipse but “in a nearly perfectly circular orbit.”
No one objects to this falsehood. In fact, the moon moves in a very respectable ellipse which can change its distance from Earth up to 50,000 kilometers.
To Marrs, the sum of this and other effects — which include several basic errors of astronomical knowledge from a best-selling author who claims to be an expert — lead to only one, unavoidable conclusion: It is obvious that an ancient, extraterrestrial race parked the moon in a perfect orbit around Earth.
No one in the audience laughs.
“I don’t have to explain this. You all believe this, right?” Marrs asks, and he gets a resounding “yes” from the choir.
Meanwhile, two women ignore Marrs as they talk about why aliens are abducting so many people. One says aliens want to create a hybrid human-alien race which will be able to operate the advanced technology aliens plan on bestowing us.
The second woman says that the hybrid race would be pan-dimensional, capable of disappearing into the fourth dimension.
Lights in the sky. Bizarre dreams. Objects whizzing by in video shots which look just like bugs out of focus. Memories of alien abductions “recovered” by suggestive hypnotherapists.
The movement barely resembles the field of inquiry taken seriously by the late Hynek.
With a heavy dose of New Age influence, the UFO movement increasingly grows less like a science and more like a religion. Some investigators point to an early case that marked this shift: the elaborate claims of a one-armed Swiss farmer named Eduard “Billy” Meier.
Since 1975, Meier has claimed to have had more than 700 contacts with aliens from the Pleiades star cluster. In most of those contacts, a female alien named Semjase has appeared to Meier, allowed him to photograph her spacecraft, taken him on rides in the craft, and even whisked him into the past to meet Jesus Christ, who was duly impressed with the advice Meier gave him. He has taken more than 1,000 photographs of Semjase’s craft (which Semjase only reveals to Meier when he is alone), as well as photos of alien women, closeups of famous celestial objects, and even the eye of God. Meier claims that he is the reincarnation of Christ and that his teachings, based on what Semjase tells him, will save mankind.
Arizonans were instrumental in promoting Meier-mania. Beginning in the late 1970s, Wendelle C. Stevens, a Tucson UFO enthusiast, and others began touting and publishing Meier’s photos (while playing down the messianic stuff).
Looking at Meier’s photos, it’s hard to believe he was ever taken seriously. Yet several Arizonans assured the UFO-hungry public that they had tested Meier’s photographs and had found them to be genuine.
One of these investigators included a young man who claimed that he had used computers to verify the authenticity of Meier’s photographs.
His name was Jim Dilettoso.
——————–
[Jim Dilettoso in 2015]
Kal Korff is one UFO researcher who believes Jim Dilettoso is a poseur.
Korff became interested in UFOs and began corresponding with Wendelle C. Stevens in the late 1970s. The two swapped UFO photos, and Korff studied the Billy Meier phenomenon. When the normally open Stevens refused to discuss certain aspects of the Meier case, Korff grew suspicious.
His doubts led him to write two books, one in 1980, the second in 1995, debunking the Meier case. In 1991, Korff traveled under an assumed name to Switzerland and inspected many unpublished Meier photographs. Korff’s investigation, revealed in his book Spaceships of the Pleiades, showed that Meier’s outer-space photographs were actually crude snapshots of TV science programs.
One photo is of two out-of-focus women who Meier insisted were aliens. In a tape-recorded interview with Korff, Jim Dilettoso claimed that the photo was authentic because the woman in the foreground had elongated ear lobes. But Korff showed that a clearer, unpublished photo taken by Meier revealed that the elongated ear lobes were actually lengths of the woman’s hair.
In one of Wendelle C. Stevens’ books of Meier photographs, futuristic-looking (for 1979) computer enhancements of the spaceship photos are accompanied by captions which purport to describe tests that authenticated Meier’s photos.
De Anza Systems, a San Jose company, was credited with providing the computers to do the analyses.
In 1981, Korff interviewed De Anza employee Ken Dinwiddie, who confirmed that Dilettoso had brought the Meier photos to his shop. But Dilettoso and another man had simply asked that De Anza make some sample enhancements of the photos as a demonstration.
“They came to De Anza under the pretext of wanting to buy our equipment. We demonstrated it, and they snapped many pictures and left. We made no data interpretations whatsoever,” Dinwiddie told Korff in the presence of two other investigators.
“What about the captions which appear in the [Meier] book under each photo? Are they correct?” Korff asked Dinwiddie.
“Those are their interpretations, not ours. Nothing we did would have defined what those results meant.”
It was clear to Dinwiddie, Korff writes, that Dilettoso and Stevens dreamed up the impressive-sounding captions despite that they had nothing to do with demonstrations De Anza had performed.
Korff showed Dinwiddie a caption below a Meier photo that purports to show a hovering spacecraft: “Thermogram — color density separations — low frequencies properties of light/time of day are correct; light values on ground are reflected in craft bottom; eliminates double exposures and paste-ups.”
“No, we put those colors in the photo!” Dinwiddie exclaimed. “Jim [Dilettoso] said, ‘Can you make the bottom of the object appear to reflect the ground below?’ I said yes, and we performed the operations that they asked for.”
Added Dinwiddie: “My impression of Jim Dilettoso is that he freely chooses to use whatever descriptive text he enjoys to describe things. He is not particularly versed in computer technology. He’s a pretty good piano player, though.”
Korff says that since his book was published in 1995, Dilettoso has made no efforts to dispute its contents.
Dilettoso tells New Times that he didn’t write the captions, but that they aren’t misleading. “If you talked to Ken Dinwiddie today, he would say we didn’t do this.”
New Times did talk to Ken Dinwiddie last week, and he remembers things the way Korff describes them.
Dilettoso has applied even more questionable methods in his “validation” of UFO photographs.
In 1987 and 1988, he worked for an Arizona affiliate of NASA; his work involved helping NASA technology get to the private sector, he says.
But he admits that he wasn’t working for NASA in 1991 when he provided Wendelle C. Stevens with a seven-page analysis of UFO photographs taken in Puerto Rico. On NASA stationery, Dilettoso writes that “this is not an official project,” but concludes that the photos of a flying saucer encountering an F-14 Tomcat are authentic.
Puerto Rican UFO investigator Antonio Huneeus says the case involved a man named Amaury Rivera who claimed he was abducted by aliens on his way home from work in 1988 and managed to get a picture of their spacecraft as it left with three Tomcat jets in hot pursuit. Huneeus says that UFO enthusiasts who were convinced of the truth of Rivera’s story early on now dismiss it as a hoax after, among other things, a photographer named German Gutierrez admitted that he had helped Rivera fake his snapshots.
But Huneeus points out that the case has still played prominently in Mexico, Germany, Hungary, Japan, Argentina and Taiwan, always with the startling revelation that NASA had confirmed the authenticity of Rivera’s photographs.
Dilettoso admits that he was no longer working for NASA when he gave his analysis to Stevens, but he says Stevens had lost the analysis he had done three years earlier when he had been employed by the space agency.
“He came into my office and asked me to write the letter and, you know, I did,” he says. “An Air Force colonel coming to me and asking for that letter, I at least took pause and said ahhh, all right, but this is not an official project.”
So Dilettoso did the favor for Stevens, who indeed is a former Air Force colonel. He’s also an ex-convict. Department of Corrections records show that he pleaded guilty to child molestation and spent five years in prison. He was released in 1988.
Jim Dilettoso is asked to explain how he can look at videotape of the March 13, 10 p.m. event and, using image analysis, declare that the lights are not flares.
He begins by explaining that the electromagnetic spectrum includes x-rays, infrared radiation, visible light.
And musical notes.
It’s one of the least preposterous things Dilettoso says during a two-hour interview.
——————–
He’s sitting in the conference room at Village Labs. In the next room, there’s a bank of computers which has become a fixture in television footage filmed at the Tempe firm. On the walls and spread out over the large table are charts and diagrams which suggest that complex work happens here.
Dilettoso has finished his explanations about music as a form of electromagnetic energy (it isn’t, of course, but it seems rude to interrupt), and he’s now explaining how a camcorder can, even from miles away, record the finest details of a light bulb, such as its glowing filament, if you just know how to extract that image from the recorded blob of light. His computers can do just that, Dilettoso says.
If this were possible, astronomers and other scientists would gladly beat a path to Dilettoso’s door. Unfortunately, there’s something that prevents a camcorder from recording such detail.
It’s called physics.
The power of a camcorder, telescope or other visual device to resolve a distant object is limited by its optics. The larger the mirror or lens used, the greater the power to resolve faraway things. That’s why astronomers crave bigger and bigger mirrors for observatories–the bigger the mirror, the farther into space a telescope can resolve details.
With a lens less than an inch across, the typical camcorder has a rather myopic view of the world. Any light source more than a mile or so away simply cannot be resolved with any detail. Distant lights — streetlights, flares, alien headlights, even — become “point sources.” Like the stars in the night sky, there’s no detail to be made out in them.
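The limitation is easy to put into numbers. Below is a back-of-the-envelope Python sketch; the focal length, photosite size and distance are illustrative assumptions for a 1990s consumer camcorder, not measurements taken from the Phoenix footage.

# Rough estimate of how much ground a single camcorder photosite covers
# at flare distances. All numbers below are illustrative assumptions.
focal_length_m = 0.05        # roughly 50 mm, a consumer camcorder near full zoom
pixel_pitch_m = 7e-6         # roughly 7-micron photosites on a small 1990s CCD
distance_m = 30 * 1609       # a light source about 30 miles away

# The smallest angle one photosite subtends; nothing finer survives recording.
pixel_angle_rad = pixel_pitch_m / focal_length_m

# How large a feature must be at that distance to span even one photosite.
min_feature_m = pixel_angle_rad * distance_m
print(f"One pixel covers roughly {min_feature_m:.0f} meters at that range.")

With those assumptions, a single pixel spans several meters of sky, so a flare, a streetlight and a light-bulb filament all collapse into the same blob of saturated pixels long before the signal ever reaches tape.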
The narrow lens of a camcorder focuses the light of a point source onto an electronic chip, which gets excited, so to speak, and releases a pattern of electrical charges, read out as pixels, that is translated into an analog signal and put on videotape. What eventually comes out is your television’s attempt to describe how the electronic chip reacted when it was struck by the light of a distant bonfire, for example.
The actual light from that bonfire is long gone, however, and has nothing physically to do with the electronic signal on your videotape.
Which is a shame. Astronomers have long known that you can learn amazing things from that original source of light.
Unable to reach the stars for tests, scientists figured out how to perform experiments on the light coming from them instead. Using prisms or gratings, astronomers separate that light into its constituent colors, called a spectrum, which allows them to determine a star’s chemical make-up. This process is called spectral analysis.
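By way of contrast, real spectroscopy works directly on the incoming light. Here is a minimal Python sketch of the grating relation; the grating pitch and the short line list are illustrative, and this has nothing to do with the Phoenix footage.

import math

# Diffraction grating: d * sin(theta) = m * wavelength.
lines_per_mm = 600                    # illustrative grating
d = 1e-3 / lines_per_mm               # slit spacing in meters
m = 1                                 # first-order spectrum

for name, wavelength_m in [("hydrogen-alpha", 656.3e-9),
                           ("sodium D", 589.0e-9),
                           ("hydrogen-beta", 486.1e-9)]:
    theta = math.degrees(math.asin(m * wavelength_m / d))
    print(f"{name}: {wavelength_m * 1e9:.1f} nm emerges at {theta:.1f} degrees")

Each wavelength leaves the grating at its own angle, which is why a star's chemistry can be read from its light, and why the instrument needs the light itself rather than a recording of it.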
Trying to do spectral analysis on the image produced by a camcorder, however, would be like testing a portrait of Abraham Lincoln for his DNA. The man and his image are two very separate things.
Still, Jim Dilettoso claims to perform just that kind of magic.
On a computer monitor, he brings up an image of Comet Hale-Bopp. The comet has a line segment cutting across it and, in another window, a corresponding graph with red, blue and green lines measuring the brightness of the slice.
He shows similar frames with similar line segments cutting through streetlights, the known flares captured by Channel 12, and the 10 p.m. lights of March 13.
Each results in a different graph.
It’s rather obvious that the graphs are simply measurements of pixel brightness in the cross-sections he’s taken.
But Dilettoso claims that the graphs show much more. To him, they represent the frequencies of light making up each of the images. He claims he’s doing spectral analysis, measuring the actual properties of the light sources themselves, and can show intrinsic differences between video images of streetlights, flares, and whatever caused the 10 p.m. lights.
Because the graph of a known flare is different than one of the 10 p.m. lights, Dilettoso concludes that they cannot be the same kinds of objects.
In fact, Dilettoso claims that the graphs of the 10 p.m. Phoenix Lights show that they are like no known light produced by mankind.
The fallacy in Dilettoso’s analysis is easily demonstrated. When he’s asked to compare the graph of one known flare to another one in the same frame, he gladly does so. But he admits that the two flares will produce different graphs.
In fact, Dilettoso admits, when he looks at different slices of the same flare image, he never gets the same graph twice. And when he produces some of those graphs on demand, many of them look identical to the graphs of the 10 p.m. lights.
When he’s asked to produce an average graph for a flare, or anything that he could show as a model that he uses to distinguish flares from other sources, he can’t, saying that he knows a flare’s graph when he sees it.
It’s an evasive answer which hints at the truth: Dilettoso is only measuring the way distant lights happen to excite the electronic chip in camcorders (which is affected by atmospheric conditions, camera movement and other factors), and not any real properties of the light sources themselves.
Met with skepticism, Dilettoso reacts by claiming that his methods have been lauded by experts.
“Dr. Richard Powell at the University of Arizona believes that my techniques are not merely valid but advanced to the degree where there was nothing more that they could add,” he says.
Powell, the UofA’s director of optical sciences, confirms that he spoke with Dilettoso. “He called here and I talked to him, and I could not, for the life of me, understand him,” Powell says.
“I don’t know how you take a photograph or a videotape after the fact and analyze it and get that information out. We didn’t say that his method was valid, we said we didn’t have any other way that was any better,” Powell says.
Hearing that Powell denies calling his techniques “advanced,” Dilettoso claims that Media Cybernetics, the company which sells Image Pro Plus, told him that the software package would do the kind of spectral analysis he does.
Jeff Knipe of Media Cybernetics disagrees. “All he’s simply doing is drawing a line profile through that point of light and looking at the histogram of the red, green and blue. And that’s really the extent of Image Pro. . . . Spectroscopy is a different field.”
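To make concrete what such a profile is, here is a minimal Python sketch of the operation Knipe describes, assuming Pillow and NumPy are installed; the file name and pixel coordinates are hypothetical.

from PIL import Image
import numpy as np

# Read one video frame and pull the red, green and blue values along a
# short horizontal cut through a bright light.
frame = np.asarray(Image.open("frame_10pm.png").convert("RGB"))

row = 240                        # scan line passing through the light
x0, x1 = 300, 360                # endpoints of the cut, in pixels
profile = frame[row, x0:x1, :]   # shape: (x1 - x0, 3)

for channel, name in zip(profile.T, ("red", "green", "blue")):
    print(name, channel.tolist())

The three curves describe how hard the light drove the camera's sensor at each pixel, and nothing more; they carry no information about the wavelengths present in the original light, which is what an actual spectrum records.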
New Times took audio and videotapes of Dilettoso describing his image processing to Dr. Paul Scowen, the visiting professor of astronomy at ASU. Scowen left Great Britain in 1987 and received his Ph.D. in Astronomy at Rice University in 1993; he now uses the Hubble Space Telescope to study star formation.
“All Dilettoso is doing is extracting a brightness profile. It makes no statement about frequency distribution. What he’s getting his knickers in a twist about is he’s heard the term ‘spatial frequency’ and he’s confusing it,” Scowen says. “He’s getting his terms mixed up. He knows the words, but he doesn’t understand the concepts behind them.”
Scowen notes that when Dilettoso is asked about the limitations of camcorders and videotape, he repeatedly responds: “It’s all I’ve got.”
“He’s not saying the rest — that it’s insufficient,” Scowen says.
Curious graduate students peek over Scowen’s shoulders, shaking their heads at the videotapes of the Phoenix Lights and Dilettoso’s claims about them.
“Nobody asks astronomers to take a look at these images. And that’s what we do for a living,” says Ph.D. candidate Steve Mutz.
Professor Rogier Windhorst walks in and asks what his students are poring over. Someone tells him Dilettoso claims to be doing spectral analysis from videotape.
“Oh, you can’t do that. It’s bullshit,” Windhorst barks.
“It’s a consensus now,” Mutz says with a laugh.
——————–
Among the true believers, Jim Dilettoso makes even more surprising claims. At the Seventh Annual International UFO Congress, Dilettoso compared the Phoenix Lights to other UFO sightings through the years and in many parts of the globe.
“If we theorize that the lights are intelligently guided, or perhaps that the lights are perhaps the intelligences themselves, we might find that this new activity is unrelated to disc-shaped flying saucers. . . . It may be that these are light-beings,” Dilettoso told his audience.
To the press, Dilettoso’s careful not to make such outrageous claims. He and his partner, Michael Tanner, instead disseminate a confusing seven-page summary of the many accounts of the 8:30 vee formation, and rather than deduce that different witnesses interpreted the same phenomenon in different ways (which humans have a tendency to do), they suggest that Arizonans actually saw different gigantic triangular crafts at different times and different places. Mitch Stanley is mentioned in a single line: “An amateur astronomer in Phoenix [actually Scottsdale] wrote it off as a formation of conventional airplanes.”
As for the 10 p.m. event, Dilettoso asserts that his video analyses tell him flares could not possibly be what Mike Krzyston and others captured on videotape, saying, “I don’t know what they were. I just know that they weren’t flares.”
A credulous media, more interested in hyping the Phoenix Lights mystery than in taking a sober look at the evidence, have repeatedly broadcast those claims. The Discovery Channel, in its October 26 program UFO’s Over Phoenix, reported the results of Dilettoso’s “high-tech sophisticated optical analysis” as if they were fact.
To its credit, the Discovery Channel did perform another, and apparently solid, test of the flare hypothesis. The network submitted Krzyston’s footage to Dr. Leonid Rudin at the Pasadena image-processing firm Cognitech. Rudin was also given a daytime shot from Krzyston’s yard showing the distant Sierra Estrella, which is invisible in the nighttime video. Rudin matched the day and night shots frame by frame, lining them up on a distant ridge. The result: an animation loop showing that the flares are not only above the Estrella, but blink out as they reach the top of the mountains, precisely as distant flares would.
In a “10-Files” episode, KSAZ Channel 10, however, questioned the Cognitech analysis. Krzyston insists to Channel 10 that the objects were hovering below the Estrella ridgeline and couldn’t have fallen behind the mountains. Channel 10 suggested cryptically that Cognitech purposely faked its test — “Has the footage been altered? And by whom and why? The mystery continues” — and showed its own test, which a Channel 10 production man claimed took “not long at all,” proving that the 10 p.m. lights in Krzyston’s video were well below the Estrella ridgeline.
New Times asked Scowen to perform the test himself, using two frames grabbed from Krzyston’s original video and a 35 mm daytime photo taken from Krzyston’s yard by UFO researcher Dick Motzer. After a half-hour of careful scaling, positioning, and rotation with imaging software, Scowen found a good match for the ridge visible in both shots. His results: The flares are just above the Estrella ridgeline or right at it, just as Rudin at Cognitech had found.
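For readers who want a sense of what that half-hour of work involves, a bare-bones version of the registration step can be sketched in Python with OpenCV. The file names and tie-point coordinates below are hypothetical, and this is only an outline of the kind of scale, rotation and translation fit Scowen and Cognitech describe, not their actual procedure.

import cv2
import numpy as np

# Day and night frames shot from the same spot (file names are hypothetical).
day = cv2.imread("krzyston_day.png")
night = cv2.imread("krzyston_night.png")

# Tie points picked by hand on features visible in both frames, such as
# ridge peaks or a rooftop corner. Coordinates are illustrative.
pts_night = np.float32([[120, 300], [410, 285], [600, 330]])
pts_day = np.float32([[118, 310], [415, 292], [607, 338]])

# Solve for the scale, rotation and translation that map night onto day,
# then warp the night frame and blend the two for a ridgeline comparison.
matrix, _ = cv2.estimateAffinePartial2D(pts_night, pts_day)
height, width = day.shape[:2]
night_aligned = cv2.warpAffine(night, matrix, (width, height))
overlay = cv2.addWeighted(day, 0.5, night_aligned, 0.5, 0)
cv2.imwrite("overlay.png", overlay)

Once the two frames share a coordinate system, whether the lights sit above or below the ridgeline becomes a straightforward pixel comparison.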
Afterward, Scowen was shown the “10-Files” episode and its claim that Channel 10 matched the frames quickly. He wonders how they could have checked several parameters in only a short time. “You have to make sure that the zoom is set the same way. If it’s a standard camcorder, there’s no numeric readout of the zoom. . . . Did the guy at Channel 10 match the scale? My guess is that he just laid the two pictures on top of each other.”
Rod Haberer, producer of the “10-Files” piece, says that he’s “comfortable with what we put on the air.” But when he’s asked what software the station used to match and scale the daytime and nighttime shots, he admits that they didn’t use a computer at all. Channel 10 simply laid one image from Krzyston’s video atop another in a digital editing machine.
Scowen says it doesn’t surprise him. “We’re used to dealing with this with the lay public. People do the minimum until they get the answer they want. In science you have to go back and check and recheck to make sure you’re correct. I think Cognitech did a great job,” Scowen says.
Rudin says his firm took its job seriously when the Discovery Channel asked it to match the images. “I testify in a court of law routinely; I’m a diplomate of several forensic societies,” Rudin says. “Basically, you’re talking to the guys who do this for a living.”
Told that an astrophysics professor found the Cognitech experiment more convincing, Haberer suggested that his station had merely presented a different point of view, as if the question of a flare falling either behind or in front of a mountain had more than one answer.
But that’s entertainment, which is what the nation is likely to get on March 11, when the UPN network devotes a half-hour to the Phoenix Lights in its program “UFO: Danger in the Skies.” Producer Hilary Roberts says that Dilettoso is featured prominently and that no, her network did not independently examine his claims. His “analysis” will be one of several voices presented uncritically in the program. “We want the viewer to decide who’s right,” she says, apparently unconcerned that the public can hardly decide what’s true when media deliver unexamined claims as fact.
Perhaps no news organization, however, has been as accommodating to Jim Dilettoso as the Arizona Republic. For weeks following the March 13 incident, the Republic promoted flying saucers in nearly every section. Dilettoso could be found on the front page, claiming to have found a drawing in his attic which, underneath another image, mysteriously depicts an alien autopsy; the article suggested that Dilettoso’s Shroud-of-Turin-like autopsy drawing has something to do with a flying saucer which supposedly landed in Paradise Valley in 1947.
But the Republic’s business section topped that story with a glowing July 1 account about Dilettoso and the cutting-edge things he does at Village Labs.
The paper reported that Dilettoso was on the verge of creating a massive supercomputer network which would give PC owners access to supercomputing power, and claimed that Village Labs and TRW had each invested $3 million in a computer called RenderRing1. One benefit would be the ability to send entire movies over phone lines at incredible speeds. His system would make Tempe the nexus of a special-effects processing center: Village Labs was already helping well-known firms with their special effects, Dilettoso claimed, and had a hand in the complex effects of the movie Titanic.
Dilettoso’s sales pitch sounds familiar. Five years ago, New Times profiled him and his futuristic plans (“High Tech’s Missing Link,” April 21, 1993). Back then, those ambitions were largely the same: Village Labs would develop massive computer networks that would change the movie industry.
Dilettoso also told New Times he had an undergraduate degree from the University of Hartford and a Ph.D. in biomedical engineering from McGill University in Montreal. But records at the University of Hartford showed that he had taken a single math class there; McGill University said it had never heard of him.
Today, Dilettoso denies that he ever claimed to have college degrees. “I have 160 to 180 college credits scattered all over the place. I tell people that all the time,” he says when the subject comes up.
There’s another version of the Village Labs story that Dilettoso is not as quick to tell: that rather than operating from income generated by his computer wizardry, Dilettoso has for years been the beneficiary of eccentric millionaire Geordie Hormel, the heir to the Spam fortune, who pays Village Labs’ bills.
Until last year, that is. Hormel pulled the plug on Village Labs in July 1997, and court records show that after Hormel stopped paying rent, the building’s owner, the Marchant Corporation of California, sued to kick Dilettoso out.
Marchant’s attorneys argued successfully that Hormel, not Dilettoso, was the lessee, and a Superior Court judge found in favor of Marchant, ordering Dilettoso and Village Labs to vacate the premises. But Dilettoso convinced Hormel to bail him out one last time; Hormel shelled out $62,000 for a bond that would allow Dilettoso to file an appeal–and he occupies the building in the meantime, the rent covered by the bond. Hormel says he now regrets paying for it.
Last week, Dilettoso’s appeal ran out. He says that Village Labs will vacate the building in a matter of days.
Hormel’s wife Jamie contends that Dilettoso and Village Labs have existed primarily through her husband’s largess: “[Geordie] has paid everything. He’s paid rent and salaries and lawsuits for when Jim didn’t pay salaries.”
Geordie Hormel confirms that since the company’s founding in 1993, he has put about $2 million into Village Labs. But he’s reluctant to criticize Dilettoso, afraid he won’t get any of his investment back.
His wife is less shy, saying, “[Dilettoso]’s just a liar . . . I mean, there was an article in the Republic in the business section on him and it was such a lie. . . . He tells Geordie that we’re going to get money from TRW in three more weeks, then strings him along for a few more weeks. It’s happened for years.”
Dilettoso defends the Republic article, saying that Village Labs had invested $3 million in the project with TRW. But he later admits that no actual money was put up by his firm; the $3 million figure was a total of Village Labs’ rent and salaries since its inception, most of which was supplied by Hormel. He also admits that Village Labs’ “design” work was unpaid.
TRW spokeswoman Linda Javier says that in fact neither side put up cash in the project. “We didn’t make any investments. We used a system that was built on our own with R&D funding.” Asked about Dilettoso’s claims, Javier responds, “He has a different way of looking at things.”
Says Jamie Hormel: “Supposedly he was working on that Titanic movie. [But] I haven’t seen him do one thing he was supposed to have done.”
Dilettoso claims that in Village Labs’ work on the special effects for Titanic, he collaborated with a Digital Domain engineer named “Wook.”
“Wook said that Mr. Dilettoso’s and Village Labs’ contribution to the production of Titanic was nothing,” says Digital Domain’s Les Jones. Wook concurs.
When he’s pressed about the claims made in the Republic story, Dilettoso says that it’s true the various deals have not materialized. But he says he was the victim of an elaborate conspiracy by a TRW executive who wanted to learn Village Labs’ techniques and then promote them as his own.
In the meantime, he continues to shop his plans of linking supercomputers, and entertains reporters in front of a bank of computer screens in a studiolike room which he uses for his UFO alchemy.
——————–
The Phoenix Lights would eventually make Frances Emma Barwood their political poster child. But Dilettoso meant for that poster child to be someone who already had global notoriety: Maricopa County Sheriff Joe Arpaio.
It was to Arpaio that Dilettoso steered an EXTRA film crew on May 6. When the crew found the sheriff out to lunch, they went to City Hall in search of another public official to interview. Frances Emma Barwood says she found their questions reasonable — why hadn’t local government done anything about the sightings? And she brought it up in that afternoon’s city council meeting. She wasn’t prepared for the avalanche of attention, praise and ridicule that would follow.
She also didn’t expect to see Arpaio grovel.
Barwood’s instant celebrity was the kind of attention Arpaio craves. So, at a veterans’ function a few days later, Barwood says Arpaio begged her to send him a letter, officially asking his office to investigate the March 13 lights.
She says she promised to do so. But only hours later, Arpaio aide David Hendershott called her and told her not to send it. She says he didn’t explain why.
Hendershott says Barwood remembers things incorrectly. He claims it was Barwood who asked if she could send a letter to Arpaio requesting the posse’s help interviewing witnesses.
“That’s not it at all,” counters Barwood, who says that Arpaio pleaded with her in front of veterans who later told her they were surprised to see him so agitated.
Barwood pressed on as the only public official asking why local, state and federal governments didn’t take an interest in what seemed to be a questionable use of Arizona airspace, at the very least. Barwood was told that the city had no air force and could do nothing about the sightings. The Air Force, meanwhile, told her that it had gotten out of the business of investigating UFOs and that it was a local matter.
Barwood and the many who saw the lights were understandably frustrated.
Davis-Monthan Air Force Base’s spokesman Lieutenant Keith Shepherd didn’t help matters. Shepherd told news organizations, including New Times, that the base had no planes in the air at the time of the 8:30 and 10 p.m. events. In her investigation, however, Captain Eileen Bienz of the Arizona National Guard later heard from National Guard helicopter pilots from a Marana air base that they had spotted a group of A-10s heading for Tucson at about 10 p.m.
Only after Bienz asked Davis-Monthan about the planes did Shepherd confirm that the Maryland Air National Guard had used the base for its winter exercises and had dropped flares southwest of Phoenix that night.
Shepherd told New Times that he had earlier spoken about the base’s own planes. Reporters had simply asked him the wrong question.
It’s no wonder that so many people believe the military maintains a UFO cover-up.
The military’s reluctance to divulge information also led to confusion about what was seen on radar that night. The media have widely circulated reports that the 8:30 and 10 p.m. lights were mysteriously invisible to radar.
But a formation of a craft or crafts traveling at high altitude over Phoenix would have been monitored by FAA radar operators in Albuquerque, not at Sky Harbor Airport, says air traffic controller Bill Grava, who was on duty at Sky Harbor that night and witnessed the later, 10 p.m. lights. Grava says that if five planes in a vee passed over Phoenix at 8:30 p.m., they would have been represented by a sole asterisk on consoles at Sky Harbor — not something that would have raised the curiosity of operators. As for the 10 p.m. event, Grava acknowledges that the North Tac range is beyond Sky Harbor’s radar; if planes dropped flares over the range, it’s no mystery why they would not have appeared on consoles at the airport.
Luke Air Force Base has more powerful radar systems. But Luke’s Captain Stacey Cotton says that radar operators at the base were asked if they had seen anything unusual that night, and answered no. She says that a formation of five planes — traveling at high altitude above Sky Harbor’s and outside of Luke’s restricted air spaces — would not have been considered unusual. Neither would a flare drop over the gunnery range.
Whether the 8:30 vee formation did register on the FAA’s radar monitored in Albuquerque will apparently never be known. Despite the fervent activities of UFO investigators in the days following the sightings, no one bothered to make a formal request with the Federal Aviation Administration’s regional office for radar tapes of the Phoenix area for March 13. If anyone had made such a request by March 28, there would be a permanent record for the public to examine, says the FAA’s Gary Perrin.
Meanwhile, no base or airport has come forward to identify the five planes that traveled over Arizona seen by so many people, including Mitch Stanley and his powerful telescope.
It’s hard to blame Barwood for calling for more openness in government.
On the other hand, Barwood lamely complains that she’s been unfairly labeled the UFO candidate. She asserts that her campaign really has nothing to do with space aliens.
She says this as she waits to speak at the International UFO Congress, sitting at a table with her paid UFO campaign consultant, while they’re entertained by the piano playing of a man who wears a cross of his own blood on his forehead in his efforts to spread his message that angels and space aliens are one and the same.
Her January 13 press conference to announce her candidacy was only slightly less weird.
Barwood was flanked by a collection of oddballs that included several UFO dignitaries as well as emissaries representing Arizona’s militias, patriot movement and anti-immigrant groups.
Barwood did her best to deflate the weirdness by talking about mundane, secular secretary of state things. Such tasks are the nominal reward for winning the post, but Barwood admits that she wants it simply because it would put her only a heartbeat away from the governorship. “If Arizona had a lieutenant governor, I’d run for that,” she says.
Barwood says she’s frustrated that reporters only want to hear about her thoughts on UFOs (she’s never seen one, but at the UFO Congress, she makes it clear she thinks the Phoenix Lights must have been some gigantic, triangular spacecraft or military project). The militia-friendly conservative tries to make reporters understand that she’s more interested in other issues, such as guaranteeing Arizonans the right to carry arms in any place and in any way.
But the UFOs will not go away.
When Barwood finishes her press conference, a woman ascends the podium to make her own, unscheduled announcement.
“I would like to speak to the press also. I know what the lights over Phoenix are. I know what’s going on with the federal government,” she says. “It’s my husband. Col. Berger J. Addington, who is the king of kings, the lord of lords. He flies the stealth. He builds cities. And he should flesh up here pretty soon in his multiracial skin. . . . He is the true president of the United States.”
The woman is politely led away from the podium, and Barwood can’t suppress a grin.
|
The ‘Phoenix Lights’: 20 years later, still the same set of planes and flares over Arizona
[In 1998, on the one-year anniversary of the ‘Phoenix Lights,’ we published this lengthy cover story at the Phoenix New Times about what really happened in the skies over Arizona on March 13, 1997. With the 20-year anniversary upon us, we thought we’d post a copy of the story here at our own website in anticipation of what will likely be another wave of misinformation about the events of that night. Count on media outlets once again to fail in their most basic responsibility: To explain that there were TWO, very distinct incidents that happened that night. An earlier “vee” of lights traversed nearly the entire state, and was identified as a group of planes flying in formation by an astronomer in Phoenix using a powerful telescope. Later, a drop of flares was seen over a military range southwest of the city. Because news outlets never make this clear, people who saw the planes argue that flares can’t explain what they saw, and the people who saw the flares know that they didn’t see planes. We cleared up that confusion with this story, and we also profiled the people who were profiting from the confusion they were causing by promoting nonsense about it. Expect more confusion and profit-making now that the 20th anniversary is here. — Tony O.]
THE HACK AND THE QUACK
The “Phoenix Lights” made Frances Emma Barwood the darling of the global space-alien lobby. And they’ve transformed computer geek Jim Dilettoso into a star in the UFO firmament.
by Tony Ortega
Jim Dilettoso is playing a duet on a piano with a man who has a cross made of his own crusty, drying blood on his forehead.
On Dilettoso’s own head is a mass of curly grayish hair.
|
yes
|
Ufology
|
Was the Phoenix Lights incident a result of military flares?
|
no_statement
|
the "phoenix" lights "incident" was not a "result" of "military" "flares".. "military" "flares" did not cause the "phoenix" lights "incident".
|
https://www.deviantart.com/evanvizuett/art/Phoenix-Lights-UFO-Redesign-938327400
|
Phoenix Lights UFO Redesign by EvanVizuett on DeviantArt
|
Phoenix Lights UFO Redesign
Description
This alien spacecraft was sighted during the series of widely publicized sightings known as the Phoenix Lights (sometimes called the "Lights Over Phoenix"), observed in the skies over the southwestern states of Arizona and Nevada on March 13, 1997.
Lights of varying descriptions were seen by thousands of people between 7:30 pm and 10:30 pm MST, in a space of about 300 miles (480 km), from the Nevada line, through Phoenix, to the edge of Tucson. Some witnesses described seeing what appeared to be a huge carpenter's square-shaped UFO containing five spherical lights. There were two distinct events involved in the incident: a triangular formation of lights seen to pass over the state, and a series of stationary lights seen in the Phoenix area. Both sightings were due to aircraft participating in Operation Snowbird, a pilot training program of the Air National Guard based in Davis-Monthan Air Force Base in Tucson, Arizona. The first group of lights were later identified as a formation of A-10 Warthog aircraft flying over Phoenix while returning to Davis-Monthan. The second group of lights were identified as flares dropped by another flight of A-10 aircraft that were on training exercises at the Barry Goldwater Range in southwest Arizona. Fife Symington, governor of Arizona at the time, years later recounted witnessing the incident, describing it as "otherworldly."
Reports of similar lights arose in 2007 and 2008 and were attributed to military flares dropped by fighter aircraft at Luke Air Force Base[5] and to flares attached to helium balloons released by a civilian, respectively.
1997 reports
On March 13, 1997 at 7:55 pm MST, a witness in Henderson, Nevada reported seeing a large, V-shaped object traveling southeast. At 8:15 pm, an unidentified former police officer in Paulden, Arizona reported seeing a cluster of reddish-orange lights disappear over the southern horizon. Shortly afterwards, there were reports of lights seen over the Prescott Valley. Tim Ley and his wife Bobbi, his son Hal and his grandson Damien Turnidge first saw the lights when they were about 65 miles (100 km) away from them. At first, the lights appeared to them as five separate and distinct lights in an arc shape, as if they were on top of a balloon, but they soon realized that the lights appeared to be moving towards them. Over the next ten or so minutes, the lights appeared to come closer, the distance between the lights increased, and they took on the shape of an upside-down V. Eventually, when the lights appeared to be a couple of miles away, the family said they could make out a shape that looked like a 60-degree carpenter's square, with the five lights set into it, with one at the front and two on each side. Soon, the object with the embedded lights appeared to be moving toward them, about 100 to 150 feet (30 to 45 meters) above them, traveling so slowly that it gave the appearance of a silent hovering object, which seemed to pass over their heads and went through a V opening in the peaks of the mountain range towards Piestewa Peak and toward the direction of Phoenix Sky Harbor International Airport. Between 8:30 and 8:45 pm, witnesses in Glendale, a suburb northwest of Phoenix, saw the light formation pass overhead at an altitude high enough to become obscured by the thin clouds. Amateur astronomer Mitch Stanley in Scottsdale, Arizona also observed the high altitude lights "flying in formation" through a telescope. According to Stanley, they were quite clearly individual airplanes.[7]
Approximately 10:00 pm that same evening, a large number of people in the Phoenix area reported seeing "a row of brilliant lights hovering in the sky, or slowly falling". A number of photographs and videos were taken, prompting author Robert Sheaffer to describe it as "perhaps the most widely witnessed UFO event in history".[8]
Explanations
According to Robert Sheaffer, what became known as "the Phoenix Lights" incident of 1997 "consists of two unrelated incidents, although both were the result of activities of the same organization: Operation Snowbird, a pilot training program operated in the winter by the Air National Guard out of Davis-Monthan Air Force Base in Tucson, Arizona".[8] Tucson astronomer and retired Air Force pilot James McGaha said he also investigated the two separate sightings and traced them both to A-10 aircraft flying in formation at high altitude.[9]
The first incident, often perceived as a large “flying triangle” by witnesses, began at approximately 8:00 pm, and was due to five A-10 jets from Operation Snowbird following an assigned air traffic corridor and flying under visual flight rules. Federal Aviation Administration rules concerning private and commercial aircraft do not apply to military aircraft, so the A-10 formation displayed steady formation lights rather than blinking collision lights. The formation flew over Phoenix and on to Tucson, landing at Davis-Monthan about 8:45 pm.[8]
The second incident, described as "a row of brilliant lights hovering in the sky, or slowly falling", began at approximately 10:00 pm, and was due to a flare drop exercise by different A-10 jets from the Maryland Air National Guard, also operating out of Davis-Monthan air base as part of Operation Snowbird.[8] The U.S. Air Force explained the exercise as utilizing slow-falling, long-burning LUU-2B/B illumination flares dropped by a flight of four A-10 Warthog aircraft on a training exercise at the Barry M. Goldwater Air Force Range in western Pima County. The flares would have been visible in Phoenix and appeared to hover due to rising heat from the burning flares creating a "balloon" effect on their parachutes, which slowed the descent.[10] The lights then appeared to wink out as they fell behind the Estrella mountain range to the southwest of Phoenix.
A Maryland Air National Guard pilot, Lt. Col. Ed Jones, responding to a March 2007 media query, confirmed that he had flown one of the aircraft in the formation that dropped flares on the night in question.[10] The squadron to which he belonged was in fact at Davis-Monthan AFB, Arizona, on a training exercise at the time and flew training sorties to the Goldwater Range on the night in question, according to the Maryland Air National Guard. A history of the Maryland Air National Guard published in 2000 asserted that the squadron, the 104th Fighter Squadron, was responsible for the incident.[11] The first reports that members of the Maryland Air National Guard were responsible for the incident were published in The Arizona Republic in July 1997.[12]
Later comparisons with known military flare drops were reported on local television stations, showing similarities between the known military flare drops and the Phoenix Lights.[5] An analysis of the luminosity of LUU-2B/B illumination flares, the type which would have been in use by A-10 aircraft at the time, determined that the luminosity of such flares at a range of approximately 50–70 miles would fall well within the range of the lights viewed from Phoenix.[13]
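As a rough illustration of that luminosity argument, the inverse-square falloff can be sketched in a few lines of Python; the candela figure is an approximate published value for LUU-2-series flares and is used here only as an assumption, and atmospheric extinction is ignored.

# Apparent brightness of a parachute flare seen from far away.
FLARE_INTENSITY_CD = 1.8e6              # candela, assumed approximate value

for miles in (50, 60, 70):
    distance_m = miles * 1609.34
    illuminance_lux = FLARE_INTENSITY_CD / distance_m ** 2
    print(f"{miles} miles: about {illuminance_lux:.1e} lux at the observer")

Under these assumptions the result is on the order of 1e-4 lux, roughly what Venus delivers at its brightest, so a string of such flares would be conspicuous to the naked eye and easily recorded on video from the Phoenix area.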
Photos and videos
During the Phoenix event, numerous still photographs and videotapes were made showing a series of lights appearing at a regular interval, remaining illuminated for several moments, and then going out. The images were later determined to be the result of mountains not visible by night that partially obstructed the view of aircraft flares from certain angles to create the illusion of an arc of lights appearing and disappearing one by one.[14][13]
Governor's response
Shortly after the 1997 incident, Arizona Governor Fife Symington III held a press conference, joking that "they found who was responsible" and revealing an aide dressed in an alien costume. Later in 2007, Symington reportedly told a UFO investigator he'd had a personal close encounter with an alien spacecraft but remained silent "because he didn't want to panic the populace". According to Symington, "I'm a pilot and I know just about every machine that flies. It was bigger than anything that I've ever seen. It remains a great mystery. Other people saw it, responsible people," Symington said Thursday. "I don't know why people would ridicule it".[9][15][16][17]
2007 reports
Lights were reported by observers and recorded by the local Fox News television station on February 6, 2007.[5] According to military officials and the Federal Aviation Administration, these were flares dropped by F-16 aircraft training at Luke Air Force Base.[18]
2008 reports
On April 21, 2008, lights were reported over Phoenix by local residents.[19] These lights reportedly appeared to change from square to triangular formation over time. A valley resident reported that shortly after the lights appeared, three jets were seen heading west in the direction of the lights. An official from Luke Air Force Base denied any United States Air Force activity in the area.[19] On April 22, 2008, a resident of Phoenix told a newspaper that the lights were nothing more than his neighbor releasing helium balloons with flares attached.[20] This was confirmed by a police helicopter.[20] The following day, a Phoenix resident, who declined to be identified in news reports, stated that he had attached flares to helium balloons and released them from his back yard.
|
|
yes
|
Ufology
|
Was the Phoenix Lights incident a result of military flares?
|
no_statement
|
the "phoenix" lights "incident" was not a "result" of "military" "flares".. "military" "flares" did not cause the "phoenix" lights "incident".
|
https://skepticalinquirer.org/2016/11/the-phoenix-lights-become-an-incident/
|
The 'Phoenix Lights' Become an 'Incident' | Skeptical Inquirer
|
The ‘Phoenix Lights’ Become an ‘Incident’
One of the best-known UFO sightings in recent years—the so-called “Phoenix Lights”—took place on the evening of March 13, 1997. They were very widely seen largely because that was one of the best nights to see the bright naked-eye Comet Hale-Bopp, and large numbers of people went outdoors to observe it. They were surprised to see something else in the sky. (There were later, unrelated Phoenix Lights events as well; see, for example, “The Mysterious Phoenix Lights,” SI, July/August 2008.)
The Phoenix Lights episode actually consists of two unrelated incidents, although both were the result of activities of the same organization: Operation Snowbird, a pilot training program operated in the winter by the Air National Guard out of Davis-Monthan Air Force Base in Tucson, Arizona. In the first incident, something described as a large “flying triangle” was sighted during the eight o’clock hour. Five A-10 jets from Operation Snowbird had flown from Tucson to Nellis Air Force Base near Las Vegas several days earlier, and because this was the final night of the operation, they were now returning. The A-10 jets were flying under VFR (visual flight rules), so there was no need for them to check in with airports along the route. They were following the main air corridor for air traffic traveling that route, the “highway in the sky.” (Why a UFO would follow U.S. air traffic corridors is a mystery.) Because they were flying in formation mode, they did not have on their familiar blinking collision lights but instead their formation lights, which look like landing lights (in any case, Federal Aviation Administration rules concerning private and commercial aircraft lights, flight altitudes, etc., do not apply to military aircraft). The A-10s flew over the Phoenix area and flew on to Tucson, landing at Davis-Monthan about 8:45 pm. Some witnesses claim that it was a single huge solid object, but the sole video existing of the objects shows them moving with respect to each other, and hence were separate objects.
In the second incident, starting around 10:00 pm that same evening, hundreds if not thousands of people in the Phoenix area witnessed a row of brilliant lights hovering in the sky, or slowly falling. Many photographs and videos were taken, making this perhaps the most widely witnessed UFO event in history. This was a flare drop practiced by different A-10 jets from the Maryland Air National Guard, also operating out of Davis-Monthan from Operation Snowbird. And since this was the last night of the operation, they seem to have had a lot of flares that needed dropping. On my Bad UFOs Blog, I have written a detailed analysis of each incident.
The “flare drop” explanation is less controversial than that for the “flying triangle,” but even the former is often challenged. Dr. Lynne D. Kitei, for one, isn’t having any of this “flare drop” business. On her website ThePhoenixLights.net (which claims to promote “Evolution to a New Consciousness,” whatever that means), she claims she was watching the Phoenix Lights two years before everyone else, and that her research proves “we are not alone.” By some complicated analysis, she claims to have proven that the objects photographed could not have been flares, although I haven’t run across anyone who understands what she’s saying. I heard her speak at the 2012 International UFO Congress near Phoenix, and some of her photos of UFOs appeared to me to be lights on the ground. Giving up her medical practice to become a full-time promoter of the story, “Dr. Lynne” (as she is sometimes called) has made a documentary film, The Phoenix Lights, and has often appeared on Coast to Coast AM, the well-known late-night paranormal and conspiracy-fest hosted by George Noory, to tell her version of the story. Each year in March around the anniversary of the incident (“We’re coming up on the twentieth anniversary next year!” she excitedly told me at this year’s UFO Congress), she hosts an event in an auditorium in Phoenix in which videos are shown, and witnesses new and old relate their stories. Dr. Lynne is a sweet lady who is unfailingly cheerful and polite, even if you disagree with her (or don’t understand what she’s saying). She has accumulated additional sighting reports from additional witnesses, including accounts of a giant UFO a mile wide hovering over Phoenix’s Sky Harbor Airport.
But now the Phoenix Lights are growing to even more gigantic proportions, if that is possible. A new motion picture, The Phoenix Incident, was being promoted in a big way at this year’s UFO Congress, with a large desk in the vendors’ area proclaiming “The Truth is Coming” and handing out cheesy little boomerangs labeled with the film title. According to the movie’s promotional material:
The Phoenix Incident is a fictionalized heart-pounding thriller based on this real-life event. Written and directed by gaming talent director Keith Arem (Call of Duty, Titanfall) and starring Troy Baker (famed gaming actor) this one-night event uses whistleblower testimony, recovered military footage and eyewitness accounts to create a sci-fi thriller that examines the US military’s alleged engagement of alien spacecrafts.
The movie received its premiere public showing at the UFO Congress at the close of the Friday session. It’s mostly shaky, dark “found footage,” supposedly left behind by four guys who were eaten by aliens. The plot: As Comet Hale-Bopp passes Earth, it is followed by a companion object, a UFO, which falls to Earth and lands in Arizona. Out pour scary aliens, looking somewhat like the creatures in Alien, who start to eat people. Somehow the military covers it all up. The irony is this: while everyone was inside watching the premiere of this silly movie, the Air National Guard was busy dropping flares again over the Barry Goldwater range. And we didn’t see them.
Until now, the Phoenix Lights were simply that: they were just lights in the sky, skeptics and proponents could agree. But this movie, by mixing actual photos and video of the lights and actual witnesses’ accounts with dramatic fictional elements, has succeeded in muddying the waters. In the movie, four men disappear in the desert, becoming lunch for sinister-looking aliens, while the footage they supposedly left behind becomes the basis for this mockumentary. The military somehow knows all about these aliens and apparently drives them off. Operation Snowbird appears in the film—not as the pilot-training program it is but instead as a sinister coverup agency that is sent out to disseminate confusion and falsehood whenever aliens pop up. Relaxing outdoors at the UFO Congress the evening after the showing of this film, I heard a certain know-it-all discussing it and telling the people who had gathered around him, “Our planes engaged the Triangle!” In other words, he claimed that U.S. Air Force jets fought off a gigantic alien triangular craft nineteen years ago.
The claims of a “companion object” following Comet Hale-Bopp were made by an amateur astronomer who claimed to have a photo of it. The claim was promoted on the Coast to Coast AM all-night, all-high-weirdness radio show, then hosted by Art Bell, and set off a sensation lasting two months. The photo shows nothing more than a misidentified star, but this was enough to trigger thirty-nine members of the Heaven’s Gate UFO cult, led by Marshall Applewhite, to take their own lives on March 26, 1997, so they could “rise up” and join the object supposedly following the comet.
Jacques Vallee, a Silicon Valley venture capitalist who sits on the board of a half-dozen such firms, wants you to send him money. Vallee, a leading UFO author for over fifty years, is crowdsourcing funds for 500 copies of the new (and hopefully revised) “collector’s limited edition” of the 2009 book he coauthored with Chris Aubeck, Wonders in the Sky. The book deals with unexplained reports of things reportedly seen in the sky before the modern UFO era, going all the way back to ancient Rome and Greece. Vallee says that he will present the book, with its “facsimile commemorative coin” and “artistic beauty and scientific merit,” “to science” to show that UFO sightings have been around for a long time and should be taken seriously. I don’t think “science” will ever get to see this purportedly marvelous book, with only 500 copies of it ever to be printed, and all of them presumably in the hands of people who have contributed $220 to the effort. This fundraising scarcely seems necessary since the $110,000 this effort is hoped to bring in ought to be small change to someone like Vallee.
And that part about the “scientific merit” is also pretty dubious. Blogger Jason Colavito, who has been studying the claims in Wonders in the Sky, calls it a “demonstrably false and generally quite unreliable anthology of badly translated and frequently fictitious documents recording premodern UFO sightings. . . . [Vallee] wasn’t able to sell more than 150 of the 500 future copies of Wonders in the Sky he put up for sale late last year”.
Since that was written, Vallee and Aubeck have sold two more; there are now only 348 copies remaining for subscription. For the specifics of Colavito’s criticisms, see http://goo.gl/X1VrfN. Researcher Martin Kottmeyer noted that alleged sightings of “Neith,” a supposed moon of Venus, were cited nine times in the book as unknowns. However:
Neith had been debunked in Nature magazine back in 1887. The Nature author looked into 33 observations/claims that Venus had a satellite. All but one had a good solution along the lines of either the positions of known stars or suspicions of optical ghosts and artifacts of the telescope lenses in use. The final one was guessed to be a minor asteroid passing near Earth.
As for Vallee’s coauthor Chris Aubeck, he recently posted this to a Facebook discussion of apparent errors in the book:
Over the last eight years my interest in UFOs has changed so that I approach the subject as an observer/folklorist/historian/archivist of the evolution of ufology itself, not to defend individual cases. I am deeply involved in plotting the historical roots and development of UFO mythology, so whether anomalous phenomena have acted as stimuli or not isn’t as relevant to me as it was in 2009.
This statement sounds like Aubeck backtracking and washing his hands of Vallee’s claim that this material represents a Challenge to Science (an inside joke; that’s the title of one of Vallee’s early books). Kottmeyer has also shown that the “primary source” consulted for Vallee and Aubeck’s description of a sighting of anomalous objects by the famous French astronomer Charles Messier (entry # 358) was not any contemporary eighteenth-century source but Charles Fort’s Book of the Damned. The description of the incident in Vallee and Aubeck differs from that in actual primary sources but matches Fort’s fanciful description of it. So much for a book boldly heralded by its authors as “a breakthrough in UFO research”!
In other news, UFOlogist Richard Dolan recently declared his belief in chemtrail conspiracies. On March 30, he wrote on his Facebook page:
All day long, I have been watching the aircraft stream across Rochester’s skies. Most of them have been leaving behind trails that do not go away, simply spreading across the sky. For those who do not pay attention, these look like ordinary clouds that have come in. But most of this is not natural. . . . I believe that geo-engineering is real. When I grew up in the 1970s, this type of nonsense did not occur. And I lived just outside New York City, watching major airline traffic every day go over my house. Such artificial clouds never existed back then. This phenomenon is real.
UFO buffs sometimes describe Dolan as “cautious” and “thoughtful,” even though he has long been promoting loopy stuff such as the “secret space program.” Last year, he took a big hit from his participation in promoting the “Roswell slides” (see this column, September/October, 2015). I don’t think we’ll be hearing that kind of talk about Dolan any longer.
Robert Sheaffer
Robert Sheaffer’s “Psychic Vibrations” column has appeared in the Skeptical Inquirer for the past thirty years. He is also author of UFO Sightings: The Evidence (Prometheus 1998). He blogs at www.badUFOs.com.
|
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://www.nps.gov/articles/000/ordovician-period.htm
|
Ordovician Period—485.4 to 443.8 MYA (U.S. National Park Service)
|
Ordovician Time Span
Ordovician age fossil brachiopods, Mississippi National River and Recreation Area, Minnesota.
NPS image
Introduction
The naming of the Ordovician Period is tangled with the Cambrian Period. Suffice it to say that a Welsh tribe—Ordovices—inspired the name of this geologic period. The Ordovician System rounded out the threefold division of early Paleozoic rocks (i.e., Cambrian, Ordovician, and Silurian), which are all named for Welsh tribes. Recognizing the Ordovician between the Cambrian and Silurian ended a 40-year controversy, eliminated an “overlapping system,” and created a new interval of time in its own right.
Significant Ordovician events
Beginning in the Ordovician Period, a series of plate collisions resulted in Laurentia, Siberia, and Baltica becoming assembled into the continents of Laurussia by the Devonian and Laurasia by the Pennsylvanian (also see Cambrian Period). Meanwhile, the southern remains of Rodinia (i.e., Gondwana) rotated clockwise and moved northward to collide with Laurasia. The eventual result was the supercontinent Pangaea (“all land”), stretching from pole to pole by Permian time.
During the Ordovician, many new species replaced their Cambrian predecessors. In addition, primitive plants called lycophytes began to move onto land, which was barren until then. Later, in the Devonian, other types of plants colonized terrestrial habitats. Flowering plants, the most prolific type today, appeared even later, during the Cretaceous Period. Also, during the Late Ordovician, massive glaciers formed on Gondwana at the South Pole, causing shallow seas to drain and sea level to drop, which may be a factor in the period ending with a mass extinction that affected many marine communities.
Learn more about events in the Ordovician Period
Though less famous than the Cambrian explosion, marine fauna increased fourfold during the Ordovician, resulting in 12% of all known Phanerozoic marine fauna (Dixon et al. 2001). The sea teemed with life different from its Cambrian predecessors such as bivalves, gastropods (snails), bryozoans (moss animals), and crinoids (sea lilies). Bryozoans first appeared in the Ordovician and comprise an important group of colonial, marine organisms that still exist today. The first coral reefs also appeared during the Ordovician, though solitary corals date back to at least the Cambrian. Additionally the Ordovician is marked by a sudden abundance of trace fossils.
Rocks from the Ordovician Period contain evidence that plants began colonizing dry land at this time. Most experts agree that the ancestors of land plants first evolved in a marine environment, then moved into a freshwater environment and finally onto land.
Movement of life onto land was a major evolutionary step by both plants and animals. Development of land plants was accompanied by migration of modified forms of arthropods onto the land, which were apparently the first forms of animal life to leave the ocean (Rogers 1993).
The Appalachian Mountains are part of a collisional mountain range that includes the Ouachita Mountains of Arkansas and Oklahoma, and the Marathon Mountains of western Texas. The mountain-building event, or orogeny, that created these ranges started in mid-Ordovician time and continued through the Pennsylvanian Period. The tectonic history of the Appalachian Mountains involves the opening of an ancient ocean along a divergent plate boundary, the closing of the ocean during plate convergence, and then more divergence that opened the Atlantic Ocean (Lillie 2005).
As sea level rose during the Cambrian Period and into the Ordovician, the coastline in eastern North America gradually receded to the west. Thick sequences of sediment, especially carbonate rocks such as limestone, were deposited along the edge of the continent. At this time the East Coast was a passive margin. A change in plate motion during the middle of the Ordovician Period set the stage for the ensuing mountain-building event. Moreover, the ocean to the east, which geologists call the Iapetus Ocean, began to close through the process of subduction. The once-quiet Appalachian passive margin changed to a very active plate boundary when a neighboring oceanic plate collided with and began sinking beneath the North American craton. The process of subduction not only destroys the sinking plate, but leads to volcanic activity in the overriding continental plate, and also results in any areas of non-oceanic crust on the sinking plate (such as islands) being “scraped off” and attached to the continental plate. With the birth of this new subduction zone, the early Appalachians were born.
Detailed studies of the southern Appalachians indicate that the formation of this mountain belt was more complex than once thought. Rather than forming during a single continental collision, the Appalachians resulted from several distinct episodes of mountain building that occurred over a period of nearly 300 million years. The final orogeny occurred about 250 million years ago when Africa and Europe collided with North America. The Valley and Ridge physiographic province (Lutgens and Tarbuck 1992), which is present in Shenandoah National Park and or Blue Ridge Parkway, highlights this mountain-building event.
The extinction that occurred at the end of the Ordovician Period devastated marine communities. This extinction is the first major extinction event recorded in the rock record. An estimated 25% of the known (marine) taxonomic families were lost, including the disappearance of one-third of all brachiopod and bryozoan families, as well as numerous groups of conodonts (eel-like animals related to vertebrates), trilobites, and graptolites (colonial worm-like animals).
Geologists have theorized that the extinction at the end of the Ordovician was the result of a single event—the glaciation of the supercontinent Gondwana. Evidence for this glaciation is provided by glacial deposits in the Saharan Desert. When Gondwana passed over the South Pole, continental-size glaciers formed, which resulted in a lowering of sea level because large amounts of water became tied up in ice sheets. In conjunction with the cooling caused by the glaciation, the fall in global sea level, which reduced prime habitat on continental shelves, are likely driving forces for the Ordovician extinction.
Visit—Ordovician Parks
Every park contains some slice of geologic time. Below, we highlight selected parks associated with the Ordovician Period. This is not to say that a particular park has only rocks from the specified period. Rather, rocks in selected parks exemplify a certain event or preserve fossils or rocks from a certain geologic age.
|
The eventual result was the supercontinent Pangaea (“all land”), stretching from pole to pole by Permian time.
During the Ordovician, many new species replaced their Cambrian predecessors. In addition, primitive plants called lycophytes began to move onto land, which was barren until then. Later, in the Devonian, other types of plants colonized terrestrial habitats. Flowering plants, the most prolific type today, appeared even later, during the Cretaceous Period. Also, during the Late Ordovician, massive glaciers formed on Gondwana at the South Pole, causing shallow seas to drain and sea level to drop, which may be a factor in the period ending with a mass extinction that affected many marine communities.
Learn more about events in the Ordovician Period
Though less famous than the Cambrian explosion, marine fauna increased fourfold during the Ordovician, resulting in 12% of all known Phanerozoic marine fauna (Dixon et al. 2001). The sea teemed with life different from its Cambrian predecessors such as bivalves, gastropods (snails), bryozoans (moss animals), and crinoids (sea lilies). Bryozoans first appeared in the Ordovician and comprise an important group of colonial, marine organisms that still exist today. The first coral reefs also appeared during the Ordovician, though solitary corals date back to at least the Cambrian. Additionally the Ordovician is marked by a sudden abundance of trace fossils.
Rocks from the Ordovician Period contain evidence that plants began colonizing dry land at this time. Most experts agree that the ancestors of land plants first evolved in a marine environment, then moved into a freshwater environment and finally onto land.
Movement of life onto land was a major evolutionary step by both plants and animals. Development of land plants was accompanied by migration of modified forms of arthropods onto the land, which were apparently the first forms of animal life to leave the ocean (Rogers 1993).
|
no
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://en.wikipedia.org/wiki/Ordovician
|
Ordovician - Wikipedia
|
The Ordovician, named after the Welsh tribe of the Ordovices, was defined by Charles Lapworth in 1879 to resolve a dispute between followers of Adam Sedgwick and Roderick Murchison, who were placing the same rock beds in North Wales in the Cambrian and Silurian systems, respectively.[11] Lapworth recognized that the fossil fauna in the disputed strata were different from those of either the Cambrian or the Silurian systems, and placed them in a system of their own. The Ordovician received international approval in 1960 (forty years after Lapworth's death), when it was adopted as an official period of the Paleozoic Era by the International Geological Congress.
Life continued to flourish during the Ordovician as it did in the earlier Cambrian Period, although the end of the period was marked by the Ordovician–Silurian extinction events. Invertebrates, namely molluscs and arthropods, dominated the oceans, with members of the latter group probably starting their establishment on land during this time, becoming fully established by the Devonian. The first land plants are known from this period. The Great Ordovician Biodiversification Event considerably increased the diversity of life. Fish, the world's first true vertebrates, continued to evolve, and those with jaws may have first appeared late in the period. About 100 times as many meteorites struck the Earth per year during the Ordovician compared with today.[12]
A number of regional terms have been used to subdivide the Ordovician Period. In 2008, the ICS erected a formal international system of subdivisions.[13] There exist Baltoscandic, British, Siberian, North American, Australian, Chinese, Mediterranean and North-Gondwanan regional stratigraphic schemes.[14]
The Ordovician Period in Britain was traditionally broken into Early (Tremadocian and Arenig), Middle (Llanvirn (subdivided into Abereiddian and Llandeilian) and Llandeilo) and Late (Caradoc and Ashgill) epochs. The corresponding rocks of the Ordovician System are referred to as coming from the Lower, Middle, or Upper part of the column. The faunal stages (subdivisions of epochs) from youngest to oldest are:
The Tremadoc corresponds to the (modern) Tremadocian. The Floian corresponds to the lower Arenig; the Arenig continues until the early Darriwilian, subsuming the Dapingian. The Llanvirn occupies the rest of the Darriwilian, and terminates with it at the base of the Late Ordovician.
The Sandbian represents the first half of the Caradoc; the Caradoc ends in the mid-Katian, and the Ashgill represents the last half of the Katian, plus the Hirnantian.[15]
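To make the correspondence described in the two paragraphs above easier to scan, here is the same mapping rendered as a small lookup table (a convenience sketch, not taken from the source; the dictionary name is arbitrary, and the entries are approximate because the British and ICS schemes do not line up exactly):

    # Approximate mapping of the traditional British series to modern ICS stages,
    # as described in the text above. Values are lists because boundaries overlap.
    BRITISH_TO_ICS = {
        "Tremadoc": ["Tremadocian"],
        "Arenig":   ["Floian", "Dapingian", "early Darriwilian"],
        "Llanvirn": ["remainder of Darriwilian"],
        "Caradoc":  ["Sandbian", "early Katian"],
        "Ashgill":  ["late Katian", "Hirnantian"],
    }

    print(BRITISH_TO_ICS["Caradoc"])  # ['Sandbian', 'early Katian']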
Paleogeographic map of the Earth in the middle Ordovician, 470 million years ago
During the Ordovician, the southern continents were assembled into Gondwana, which reached from north of the equator to the South Pole. The Panthalassic Ocean, centered in the northern hemisphere, covered over half the globe.[17] At the start of the period, the continents of Laurentia (in present-day North America), Siberia, and Baltica (present-day northern Europe) were separated from Gondwana by over 5,000 kilometres (3,100 mi) of ocean. These smaller continents were also sufficiently widely separated from each other to develop distinct communities of benthic organisms.[18] The small continent of Avalonia had just rifted from Gondwana and began to move north towards Baltica and Laurentia, opening the Rheic Ocean between Gondwana and Avalonia.[19][20][21] Avalonia collided with Baltica towards the end of Ordovician.[22][23]
Other geographic features of the Ordovician world included the Tornquist Sea, which separated Avalonia from Baltica;[18] the Aegir Ocean, which separated Baltica from Siberia;[24] and an oceanic area between Siberia, Baltica, and Gondwana which expanded to become the Paleoasian Ocean in Carboniferous time. The Mongol-Okhotsk Ocean formed a deep embayment between Siberia and the Central Mongolian terranes. Most of the terranes of central Asia were part of an equatorial archipelago whose geometry is poorly constrained by the available evidence.[25]
The period was one of extensive, widespread tectonism and volcanism. However, orogenesis (mountain-building) was not primarily due to continent-continent collisions. Instead, mountains arose along active continental margins during accretion of arc terranes or ribbon microcontinents. Accretion of new crust was limited to the Iapetus margin of Laurentia; elsewhere, the pattern was of rifting in back-arc basins followed by remerger. This reflected episodic switching from extension to compression. The initiation of new subduction reflected a global reorganization of tectonic plates centered on the amalgamation of Gondwana.[26][18]
The Taconic orogeny, a major mountain-building episode, was well under way in Cambrian times.[27] This continued into the Ordovician, when at least two volcanic island arcs collided with Laurentia to form the Appalachian Mountains. Laurentia was otherwise tectonically stable. An island arc accreted to South China during the period, while subduction along north China (Sulinheer) resulted in the emplacement of ophiolites.[28]
The ash fall of the Millburg/Big Bentonite bed, at about 454 Ma, was the largest in the last 590 million years. This had a dense rock equivalent volume of as much as 1,140 cubic kilometres (270 cu mi). Remarkably, this appears to have had little impact on life.[29]
There was vigorous tectonic activity along northwest margin of Gondwana during the Floian, 478 Ma, recorded in the Central Iberian Zone of Spain. The activity reached as far as Turkey by the end of Ordovician. The opposite margin of Gondwana, in Australia, faced a set of island arcs.[18] The accretion of these arcs to the eastern margin of Gondwana was responsible for the Benambran Orogeny of eastern Australia.[30][31] Subduction also took place along what is now Argentina (Famatinian Orogeny) at 450 Ma.[32] This involved significant back arc rifting.[18] The interior of Gondwana was tectonically quiet until the Triassic.[18]
Towards the end of the period, Gondwana began to drift across the South Pole. This contributed to the Hirnantian glaciation and the associated extinction event.[33]
The Ordovician meteor event is a proposed shower of meteors that occurred during the Middle Ordovician Epoch, about 467.5 ± 0.28 million years ago, due to the break-up of the L chondrite parent body.[34] It is not associated with any major extinction event.[35][36][37]
External mold of Ordovician bivalve showing that the original aragonite shell dissolved on the sea floor, leaving a cemented mold for biological encrustation (Waynesville Formation of Franklin County, Indiana).
Unlike Cambrian times, when calcite production was dominated by microbial and non-biological processes, animals (and macroalgae) became a dominant source of calcareous material in Ordovician deposits.[41]
The Early Ordovician climate was very hot, with intense greenhouse conditions and sea surface temperatures comparable to those during the Early Eocene Climatic Optimum.[42] By the late Early Ordovician, the Earth cooled,[43] giving way to a more temperate climate in the Middle Ordovician,[44] with the Earth likely entering the Early Palaeozoic Ice Age during the Sandbian,[45][46] and possibly as early as the Darriwilian[47] or even the Floian.[43] Evidence suggests that global temperatures rose briefly in the early Katian (Boda Event), depositing bioherms and radiating fauna across Europe.[48] Further cooling during the Hirnantian, at the end of the Ordovician, led to the Late Ordovician glaciation.[49]
The Ordovician saw the highest sea levels of the Paleozoic, and the low relief of the continents led to many shelf deposits being formed under hundreds of metres of water.[41] The sea level rose more or less continuously throughout the Early Ordovician, leveling off somewhat during the middle of the period.[41] Locally, some regressions occurred, but the sea level rise continued in the beginning of the Late Ordovician. Sea levels fell steadily due to the cooling temperatures for about 3 million years leading up to the Hirnantian glaciation. During this icy stage, sea level seems to have risen and dropped somewhat. Despite much study, the details remain unresolved.[41] In particular, some researchers interpret the fluctuations in sea level as pre-Hirnantian glaciation,[50] but sedimentary evidence of glaciation is lacking until the end of the period.[23] There is evidence of glaciers during the Hirnantian on the land we now know as Africa and South America, which were near the South Pole at the time, facilitating the formation of the ice caps of the Hirnantian glaciation.
Endoceras, one of the largest predators of the Ordovician.
Fossiliferous limestone slab from the Liberty Formation (Upper Ordovician) of Caesar Creek State Park near Waynesville, Ohio.
The trilobite Isotelus from Wisconsin
On the whole, the fauna that emerged in the Ordovician were the template for the remainder of the Palaeozoic. The fauna was dominated by tiered communities of suspension feeders, mainly with short food chains. The ecological system reached a new grade of complexity far beyond that of the Cambrian fauna, which has persisted until the present day.[41] Though less famous than the Cambrian explosion, the Ordovician radiation (also known as the Great Ordovician Biodiversification Event)[18] was no less remarkable; marine faunal genera increased fourfold, resulting in 12% of all known Phanerozoic marine fauna.[52] Several animals also went through a miniaturization process, becoming much smaller than their Cambrian counterparts.[53] Another change in the fauna was the strong increase in filter-feeding organisms.[54] The trilobite, inarticulate brachiopod, archaeocyathid, and eocrinoid faunas of the Cambrian were succeeded by those that dominated the rest of the Paleozoic, such as articulate brachiopods, cephalopods, and crinoids. Articulate brachiopods, in particular, largely replaced trilobites in shelf communities. Their success epitomizes the greatly increased diversity of carbonate shell-secreting organisms in the Ordovician compared to the Cambrian.[55]
Ordovician geography had its effect on the diversity of fauna; Ordovician invertebrates displayed a very high degree of provincialism.[56] The widely separated continents of Laurentia and Baltica, then positioned close to the tropics and boasting many shallow seas rich in life, developed a distinct trilobite fauna from the trilobite fauna of Gondwana, and Gondwana developed distinct fauna in its tropical and temperate zones. However, tropical articulate brachiopods had a more cosmopolitan distribution, with less diversity on different continents. During the Middle Ordovician, beta diversity began a significant decline as marine taxa began to disperse widely across space.[57] Faunas became less provincial later in the Ordovician, though they were still distinguishable into the late Ordovician.[58]
Trilobites in particular were rich and diverse. Trilobites in the Ordovician were very different from their predecessors in the Cambrian. Many trilobites developed bizarre spines and nodules to defend against predators such as primitive eurypterids and nautiloids while other trilobites such as Aeglina prisca evolved to become swimming forms. Some trilobites even developed shovel-like snouts for ploughing through muddy sea bottoms. Another unusual clade of trilobites known as the trinucleids developed a broad pitted margin around their head shields.[59] Some trilobites such as Asaphus kowalewski evolved long eyestalks to assist in detecting predators whereas other trilobite eyes in contrast disappeared completely.[60] Molecular clock analyses suggest that early arachnids started living on land by the end of the Ordovician.[61] Although solitary corals date back to at least the Cambrian, reef-forming corals appeared in the early Ordovician, including the earliest known octocorals,[62][63] corresponding to an increase in the stability of carbonate and thus a new abundance of calcifying animals.[41] Brachiopods surged in diversity, adapting to almost every type of marine environment.[64][65][66] Even after GOBE, there is evidence suggesting that Ordovician brachiopods maintained elevated rates of speciation.[67]Molluscs, which appeared during the Cambrian or even the Ediacaran, became common and varied, especially bivalves, gastropods, and nautiloid cephalopods.[68][69] Cephalopods diversified from shallow marine tropical environments to dominate almost all marine environments.[70] Graptolites, which evolved in the preceding Cambrian period, thrived in the oceans. This includes the distinctive Nemagraptus gracilis graptolite fauna, which was distributed widely during peak sea levels in the Sandbian.[71][23][23] Some new cystoids and crinoids appeared. It was long thought that the first true vertebrates (fish — Ostracoderms) appeared in the Ordovician, but recent discoveries in China reveal that they probably originated in the Early Cambrian.[72] The first gnathostome (jawed fish) may have appeared in the Late Ordovician epoch.[73] Chitinozoans, which first appeared late in the Wuliuan, exploded in diversity during the Tremadocian, quickly becoming globally widespread.[74][75] Several groups of endobiotic symbionts appeared in the Ordovician.[76][77]
In the Early Ordovician, trilobites were joined by many new types of organisms, including tabulate corals, strophomenid, rhynchonellid, and many new orthid brachiopods, bryozoans, planktonic graptolites and conodonts, and many types of molluscs and echinoderms, including the ophiuroids ("brittle stars") and the first sea stars. Nevertheless, the arthropods remained abundant; all the Late Cambrian orders continued, and were joined by the new group Phacopida. The first evidence of land plants also appeared (see evolutionary history of life).
In the Middle Ordovician, the trilobite-dominated Early Ordovician communities were replaced by generally more mixed ecosystems, in which brachiopods, bryozoans, molluscs, cornulitids, tentaculitids and echinoderms all flourished, tabulate corals diversified and the first rugose corals appeared. The planktonic graptolites remained diverse, with the Diplograptina making their appearance. One of the earliest known armoured agnathan ("ostracoderm") vertebrates, Arandaspis, dates from the Middle Ordovician.[78] During the Middle Ordovician there was a large increase in the intensity and diversity of bioeroding organisms. This is known as the Ordovician Bioerosion Revolution.[79] It is marked by a sudden abundance of hard substrate trace fossils such as Trypanites, Palaeosabella, Petroxestes and Osprioneides. Bioerosion became an important process, particularly in the thick calcitic skeletons of corals, bryozoans and brachiopods, and on the extensive carbonate hardgrounds that appear in abundance at this time.
Upper Ordovician edrioasteroid Cystaster stellatus on a cobble from the Kope Formation in northern Kentucky with the cyclostome bryozoan Corynotrypa in the background
Green algae were common in the Late Cambrian (perhaps earlier) and in the Ordovician. Terrestrial plants probably evolved from green algae, first appearing as tiny non-vascular forms resembling liverworts, in the middle to late Ordovician.[81] Fossil spores found in Ordovician sedimentary rock are typical of bryophytes.[82]
Colonization of land would have been limited to shorelines.
Among the first land fungi may have been arbuscular mycorrhiza fungi (Glomerales), playing a crucial role in facilitating the colonization of land by plants through mycorrhizal symbiosis, which makes mineral nutrients available to plant cells; such fossilized fungal hyphae and spores from the Ordovician of Wisconsin have been found with an age of about 460 million years ago, a time when the land flora most likely only consisted of plants similar to non-vascular bryophytes.[83]
The Ordovician–Silurian extinction events occurred approximately 447–444 million years ago and mark the boundary between the Ordovician and the following Silurian Period. At that time all complex multicellular organisms lived in the sea, and about 49% of genera of fauna disappeared forever; brachiopods and bryozoans were greatly reduced, along with many trilobite, conodont and graptolite families.
The most commonly accepted theory is that these events were triggered by the onset of cold conditions in the late Katian, followed by an ice age, in the Hirnantian faunal stage, that ended the long, stable greenhouse conditions typical of the Ordovician.
The ice age was possibly not long-lasting. Oxygen isotopes in fossil brachiopods show its duration may have been only 0.5 to 1.5 million years.[84] Other researchers (Page et al.) estimate more temperate conditions did not return until the late Silurian.
The late Ordovician glaciation event was preceded by a fall in atmospheric carbon dioxide (from 7000 ppm to 4400 ppm).[85][86] The dip may have been caused by a burst of volcanic activity that deposited new silicate rocks, which draw CO2 out of the air as they erode.[86] Another possibility is that bryophytes and lichens, which colonized land in the middle to late Ordovician, may have increased weathering enough to draw down CO2 levels.[81] The drop in CO2 selectively affected the shallow seas where most organisms lived. As the southern supercontinent Gondwana drifted over the South Pole, ice caps formed on it, which have been detected in Upper Ordovician rock strata of North Africa and then-adjacent northeastern South America, which were south-polar locations at the time.
As glaciers grew, the sea level dropped, and the vast shallow intra-continental Ordovician seas withdrew, which eliminated many ecological niches. When they returned, they carried diminished founder populations that lacked many whole families of organisms. They then withdrew again with the next pulse of glaciation, eliminating biological diversity with each change.[87] Species limited to a single epicontinental sea on a given landmass were severely affected.[40] Tropical lifeforms were hit particularly hard in the first wave of extinction, while cool-water species were hit worst in the second pulse.[40]
Those species able to adapt to the changing conditions survived to fill the ecological niches left by the extinctions. For example, there is evidence the oceans became more deeply oxygenated during the glaciation, allowing unusual benthic organisms (Hirnantian fauna) to colonize the depths. These organisms were cosmopolitan in distribution and present at most latitudes.[58]
At the end of the second event, melting glaciers caused the sea level to rise and stabilise once more. The rebound of life's diversity with the permanent re-flooding of continental shelves at the onset of the Silurian saw increased biodiversity within the surviving Orders. Recovery was characterized by an unusual number of "Lazarus taxa", disappearing during the extinction and reappearing well into the Silurian, which suggests that the taxa survived in small numbers in refugia.[88]
An alternate extinction hypothesis suggested that a ten-second gamma-ray burst could have destroyed the ozone layer and exposed terrestrial and marine surface-dwelling life to deadly ultraviolet radiation and initiated global cooling.[89]
Recent work considering the sequence stratigraphy of the Late Ordovician argues that the mass extinction was a single protracted episode lasting several hundred thousand years, with abrupt changes in water depth and sedimentation rate producing two pulses of last occurrences of species.[90]
Charles Lapworth (1879), "On the Tripartite Classification of the Lower Palaeozoic Rocks," Geological Magazine, new series, 6: 1–15. From pp. 13–14: "North Wales itself — at all events the whole of the great Bala district where Sedgwick first worked out the physical succession among the rocks of the intermediate or so-called Upper Cambrian or Lower Silurian system; and in all probability, much of the Shelve and the Caradoc area, whence Murchison first published its distinctive fossils — lay within the territory of the Ordovices; … Here, then, have we the hint for the appropriate title for the central system of the Lower Paleozoic. It should be called the Ordovician System, after this old British tribe."
Ramos, Victor A. (2018). "The Famatinian Orogen Along the Protomargin of Western Gondwana: Evidence for a Nearly Continuous Ordovician Magmatic Arc Between Venezuela and Argentina". The Evolution of the Chilean-Argentinean Andes. Springer Earth System Sciences: 133–161. doi:10.1007/978-3-319-67774-3_6. ISBN 978-3-319-67773-6.
|
Nevertheless, the arthropods remained abundant; all the Late Cambrian orders continued, and were joined by the new group Phacopida. The first evidence of land plants also appeared (see evolutionary history of life).
In the Middle Ordovician, the trilobite-dominated Early Ordovician communities were replaced by generally more mixed ecosystems, in which brachiopods, bryozoans, molluscs, cornulitids, tentaculitids and echinoderms all flourished, tabulate corals diversified and the first rugose corals appeared. The planktonic graptolites remained diverse, with the Diplograptina making their appearance. One of the earliest known armoured agnathan ("ostracoderm") vertebrates, Arandaspis, dates from the Middle Ordovician.[78] During the Middle Ordovician there was a large increase in the intensity and diversity of bioeroding organisms. This is known as the Ordovician Bioerosion Revolution.[79] It is marked by a sudden abundance of hard substrate trace fossils such as Trypanites, Palaeosabella, Petroxestes and Osprioneides. Bioerosion became an important process, particularly in the thick calcitic skeletons of corals, bryozoans and brachiopods, and on the extensive carbonate hardgrounds that appear in abundance at this time.
Upper Ordovician edrioasteroid Cystaster stellatus on a cobble from the Kope Formation in northern Kentucky with the cyclostome bryozoan Corynotrypa in the background
Green algae were common in the Late Cambrian (perhaps earlier) and in the Ordovician. Terrestrial plants probably evolved from green algae, first appearing as tiny non-vascular forms resembling liverworts, in the middle to late Ordovician.[81]
|
no
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
http://hyperphysics.phy-astr.gsu.edu/hbase/Geophys/geotime.html
|
Geological time scale
|
Geological time scale
The vast expanse of geological time has been separated into eras, periods, and epochs. The numbers included below refer to the beginnings of the division in which the title appears. The numbers are in millions of years. The named divisions of time are for the most part based on fossil evidence and principles for relative dating over the past two hundred years. Only with the application of radiometric dating have numbers been obtained for the divisions observed from field observations.
Era
Period
Epoch
Plant and Animal Development
Cenozoic
Quaternary
Holocene (.01)
Humans develop
"Age of mammals"
Extinction of dinosaurs and many other species.
Pleistocene (1.8)
Tertiary
Pliocene (5.3)
Miocene (23.8)
Oligocene (33.7)
Eocene (54.8)
Paleocene (65.0)
Mesozoic
Cretaceous (144)
"Age of Reptiles"
First flowering plants
First birds
Dinosaurs dominant.
Jurassic (206)
Triassic (248)
Paleozoic
Permian (290)
"Age of Amphibians"
Extinction of trilobites and many other marine animals
First reptiles
Large coal swamps
Large Amphibians abundant.
Carboniferous: Pennsylvanian (323)
Carboniferous: Mississippian (354)
Devonian (417)
"Age of Fishes"
First insect fossils
Fishes dominant
First land plants
Silurian (443)
Ordovician (490)
"Age of Invertibrates"
First fishes
Trilobites dominant
First organisms with shells
Cambrian (540)
Precambrian - comprises about 88% of geologic time (4500)
First multicelled organisms
First one-celled organisms
Origin of Earth
Adapted from Lutgens and Tarbuck. They cite the Geological Society of America as the source of the data.
There is another kind of time division used - the "eon". The entire interval of the existence of visible life is called the Phanerozoic eon. The great Precambrian expanse of time is divided into the Proterozoic, Archean, and Hadean eons in order of increasing age.
The names of the eras in the Phanerozoic eon (the eon of visible life) are the Cenozoic ("recent life"), Mesozoic ("middle life") and Paleozoic ("ancient life"). The further subdivision of the eras into 12 "periods" is based on identifiable but less profound changes in life-forms. In the most recent era, the Cenozoic, there is a further subdivision of time into epochs.
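As an aside for readers who want to use the table programmatically (a minimal sketch, not taken from Lutgens & Tarbuck or any other source cited here; the function name period_for_age is made up for illustration), the column above can be treated as a simple lookup structure that maps a fossil's age to the division containing it:

    # Minimal sketch: the geologic column above as a lookup table.
    # Ages are the rounded beginnings of each division, in millions of years (Myr),
    # copied from the table.
    PERIODS = [
        ("Quaternary", 1.8), ("Tertiary", 65.0), ("Cretaceous", 144),
        ("Jurassic", 206), ("Triassic", 248), ("Permian", 290),
        ("Pennsylvanian", 323), ("Mississippian", 354), ("Devonian", 417),
        ("Silurian", 443), ("Ordovician", 490), ("Cambrian", 540),
        ("Precambrian", 4500),
    ]

    def period_for_age(age_myr: float) -> str:
        """Return the first division, scanning youngest to oldest, whose
        beginning is at least as old as the given age."""
        for name, begins_myr in PERIODS:
            if age_myr <= begins_myr:
                return name
        return "older than the accepted age of the Earth"

    # Example: a 460-Myr-old brachiopod fossil falls in the Ordovician.
    print(period_for_age(460))   # Ordovician
    print(period_for_age(0.5))   # Quaternary

Note that the boundary dates carry uncertainties and are periodically revised, as the following section points out, so this is only a reading aid for the table, not a reference.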
Geologic Time and the Geologic Column
This approach to the sweep of geologic time follows that in "The Grand Canyon" (C. Hill et al., eds.) to organize the different periods of life since the beginning of the Cambrian period. The time data from radiometric dating is taken from that source. The times are in millions of years.
Some descriptive information about the different divisions of geologic time is given below. Lutgens & Tarbuck take on the task of surveying Earth history in one chapter, Chapter 19 of Essentials of Geology. The brief outline below draws from that material and elsewhere to provide a brief sketch of Earth history.
Note that the dates in millions of years are representative values. Research publications would give error bars for such division dates - it is not implied here that these boundaries are known to 3 or 4 significant digits. The division of the geologic column into different periods is largely based upon the varieties of fossils found, taken as indicators of a time period in Earth's history.
Quaternary Period, Cenozoic Era, Phanerozoic Eon [1.8 Myr - 0 ]
In the time scale of Lutgens & Tarbuck, the Quaternary Period is further divided into the Pleistocene Epoch from 1.8 to 0.01 Myr and the most recent Holocene Epoch from 0.01 Myr to the present.
By the beginning of the Quaternary Period, most of the major plate tectonic movements which formed the North American continent had taken place, and the main modifications after that were those produced by glacial action and erosion processes. Human beings emerged during this period.
Neogene Period, Cenozoic Era, Phanerozoic Eon [23 Myr - 1.8 Myr ]
In the time scale of Lutgens & Tarbuck, the Neogene Period and the Paleogene Period below are combined and called the Tertiary Period. Calling this span from roughly 66 Myr to 1.8 Myr the Tertiary Period is fairly common in geologic literature. It is sometimes referred to as the "age of mammals".
Lutgens & Tarbuck further subdivide this Neogene Period into the Miocene Epoch from 23.8 to 5.3 Myr and the Pliocene Epoch from 5.3 to 1.8 Myr.
Paleogene Period, Cenozoic Era, Phanerozoic Eon [66 Myr - 23 Myr ]
The Paleogene Period (or the early part of the Tertiary Period) represents the time period after the major extinction that wiped out the dinosaurs and about half of the known species worldwide. Lutgens & Tarbuck further subdivide this time period into the Paleocene Epoch (65-54.8Myr), the Eocene Epoch (54.8-33.7Myr), and the Oligocene Epoch (33.7-23.8 Myr).
Cretaceous Period, Mesozoic Era, Phanerozoic Eon [145 Myr - 66 Myr ]
The Cretaceous Period is perhaps most familiar because of the major extinction event which marks the Cretaceous-Tertiary boundary. It is typically called the K-T extinction, using the first letter of the German spelling of Cretaceous, and it marked the end of the dinosaurs. There is large body of evidence associating this extinction with the large impact crater at Chicxulub, Yucatan Peninsula, Mexico.
The Cretaceous, Jurassic and Triassic Periods are collectively referred to as the "age of reptiles".
The first flowering plants appeared near the beginning of the Cretaceous Period.
Evidence suggests that a vast shallow sea invaded much of western North America, and the Atlantic and Gulf coastal regions during the Cretaceous Period. This created great swamps and resulted in Cretaceous coal deposits in the western United States and Canada.
Jurassic Period, Mesozoic Era, Phanerozoic Eon [201 Myr - 145 Myr ]
The distinctive fossil progression characteristic of this period was first found in the Jura Mountains, on the border between France and Switzerland.
Dinosaurs and other reptiles were the dominant species. The Jurassic Period saw the first appearance of birds.
It appears that a shallow sea again invaded North America at the beginning of the Jurassic Period. But next to that sea vast continental sediments were deposited on the Colorado plateau. This includes the Navajo Sandstone, a white quartz sandstone that appears to be windblown and reaches a thickness near 300 meters.
The early Jurassic Period at about 200 Myr saw the beginning of the breakup of Pangaea and a rift developed between what is now the United States and western Africa, giving birth to the Atlantic Ocean. The westward-moving North American plate began to override the Pacific plate. The continuing subduction of the Pacific plate contributed to the western mountains and to the igneous activity that resulted in the Rocky Mountains.
Triassic Period, Mesozoic Era, Phanerozoic Eon [252 Myr - 201 Myr ]
Dinosaurs became the dominant species in the Triassic Period.
In North America there is not much marine sedimentary rock of this period. Exposed Triassic strata are mostly red sandstone and mudstones which lack fossils and suggest a land environment.
Permian Period, Paleozoic Era, Phanerozoic Eon [299 Myr - 252 Myr ]
The Permian Period is named after the Perm region of Russia, where the types of fossils characteristic of that period were first discovered by geologist Roderick Murchison in 1841. The Permian, Pennsylvanian and Mississippian Periods are collectively referred to as the "age of amphibians". By the end of the Permian Period the once dominant trilobites are extinct along with many other marine animals. Lutgens & Tarbuck label this extinction "The Great Paleozoic Extinction" and comment that it was the greatest of at least five major extinctions over the past 600 million years.
The modeling of plate tectonics suggests that at the end of the Permian Period the continents were all together in the form called Pangaea, and that the separations that have created today's alignment of continents have all occurred since that time. There is much discussion about the causes of the dramatic biological decline of that time. One suggestion is that having just one vast continent may have made seasons much more severe than today.
Pennsylvanian Period, Paleozoic Era, Phanerozoic Eon [323 Myr - 299 Myr ]
The Pennsylvanian Period saw the emergence of the first reptiles. It also saw the development of large tropical swamps across North America, Europe and Siberia, which are the source of great coal deposits. The period is named after the area of fine coal deposits in Pennsylvania.
Mississippian Period, Paleozoic Era, Phanerozoic Eon [359 Myr - 323 Myr ]
Amphibians became abundant in this period, and toward the end of it there is evidence of large coal swamps.
Devonian Period, Paleozoic Era, Phanerozoic Eon [419 Myr - 359 Myr ]
The Devonian and Silurian Periods are referred to as the "age of fishes". In the Devonian Period fishes were dominant. Primitive sharks developed. Toward the end of the Devonian there is evidence of insects with the first insect fossils. Land plants, which earlier were finger-sized and confined to the coasts, grew larger and spread inland. By the end of the Devonian, fossil evidence suggests forests with trees tens of meters high. The Devonian period is named after Devon in the west of England.
By late Devonian, two groups of bony fishes, the lung fish and the lobe-finned fish had adapted to land environments, and true air-breathing amphibians developed. The amphibians continued to diversify with abundant food and minimal competition and became more like modern reptiles.
Ordovician Period, Paleozoic Era, Phanerozoic Eon [485 Myr - 444 Myr ]
The Ordovician and Cambrian Periods are referred to as the "age of invertebrates", with trilobites abundant. In this period, brachiopods became more abundant than the trilobites, though the vast majority of brachiopod species are extinct today. In the Ordovician, large cephalopods developed as predators of size up to 10 meters. They are considered to be the first large organisms. The later part of the Ordovician saw the appearance of the first fishes.
Data suggest that much of North America was under shallow seas during the Ordovician Period. There are large bodies of evaporite rock salt and gypsum which attest to shallow seas.
Cambrian Period, Paleozoic Era, Phanerozoic Eon [541 Myr - 485 Myr ]
The beginning of the Cambrian is the time of the first organisms with shells. Trilobites were dominant toward the end of the Cambrian Period, with over 600 genera of these mud-burrowing scavengers.
The Cambrian Period marks the time of emergence of a vast number of fossils of multicellular animals, and this proliferation of the evidence for complex life is often called the "Cambrian Explosion".
Models of plate tectonic movement suggest a very different world at the beginning of the Cambrian, with that plate which became North America largely devoid of life as a barren lowland. Shallow seas encroached and then receded.
Proterozoic Eon [2500 Myr - 541 Myr ]
Near the end of the Precambrian, there is fossil evidence of diverse and complex multicelled organisms. Most of the evidence is in the form of trace fossils, such as trails and worm holes. It is judged that most of Precambrian life forms lacked shells, making the detection of fossils more difficult. Plant fossils were found somewhat earlier than animal fossils.
There is no coal, oil or natural gas in Precambrian rock.
Rocks from the middle Precambrian, 1200 - 2500 Myr hold most of the Earth's iron ore, mainly as hematite (Fe2O3). This can be taken as evidence that the oxygen content of the atmosphere was increasing during that period, and that it was abundant enough to react with the iron dissolved in shallow lakes and seas. The process of oxidizing all that iron may have delayed the buildup of atmospheric oxygen from photosynthetic life. There is an observable end to this formation of iron ore, so the increase in atmospheric oxygen would have been expected to accelerate at that time.
Fossilized evidence for life is much less dramatic in the pre-Cambrian time frame, which accounts for about 88% of Earth's history. The most common Precambrian fossils are stromatolites, which became common about 2000 Myr in the past. Stromatolites are mounds of material deposited by algae. Bacteria and blue-green algae fossils have been found in Gunflint Chert rocks at Lake Superior, dating to 1700 Myr. These represent prokaryotic life. Eukaryotic life has been found at about 1000 Myr at Bitter Springs, Australia, in the form of green algae.
Archean Eon [4000 Myr - 2500 Myr ]
Evidence for prokaryotic life such as bacteria and blue-green algae has been found in southern Africa, dated to 3100 Myr. Banded iron formations have been dated to 3700 Myr, and presuming that this requires oxygen and that the only source of molecular oxygen in this era was photosynthesis, this makes a case for life in this time period. There are also stromatolites dated to 3500 Myr.
Hadean Eon [4500 Myr - 4000 Myr ]
The age of the Earth is projected to be about 4500 Myr from radiometric dating of the oldest rocks and meteorites. There is evidence of a time of intense bombardment of the Earth in the time period from about 4100 to 3800 Myr in what is called the "late heavy bombardment". There is ongoing discussion about what may have caused this time of intense impacts (see Wiki). There is no evidence for life in this Eon whose name translates to "hellish".
Principles for Relative Dating of Geological Features
From over two hundred years of careful field explorations by geologists, a number of practical principles for determining the relative dates of geologic features have emerged. The assignment of numerical ages to these relative dates had to await the development of radiometric dating.
Law of Superposition: For sedimentary rocks, each bed is older than the one above it and younger than the one below it.
Principle of Original Horizontality: Layers of sediment are generally deposited in a horizontal position. This is useful even if beds of sedimentary rock have been subsequently tilted.
Principle of Cross-Cutting Relationships: A fault or intrusion is younger than the rocks affected by it.
Inclusions: the rock mass containing the inclusions is older than the rock providing the included material.
Unconformities: interruptions of sedimentation with removal of material by erosion and then a resumption of deposition can place rock strata in contact that have a gap of time and material between them.
Principle of fossil succession: fossil organisms succeed one another in a definite order. The fossils observed help to identify the time period in which the organism lived.
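The principles above establish only the relative order of events; the numerical ages attached to the boundaries throughout this section come from radiometric dating. As background (this is the standard textbook decay relation, not something taken from the sources quoted here), the age t of a mineral follows from the exponential decay law, assuming the sample started with no daughter atoms and has remained a closed system:

\[ N(t) = N_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}} \]

so that, with D accumulated daughter atoms and P surviving parent atoms,

\[ t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right). \]

For instance, a sample in which equal numbers of parent and daughter atoms are measured (D/P = 1) gives t = t_{1/2}; measured isotope ratios and laboratory-determined half-lives are what convert the relative ordering above into the absolute dates quoted in this document.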
|
Jurassic (206)
Triassic (248)
Paleozoic
Permian (290)
"Age of Amphibians"
Extinction of trilobites and many other marine animals
First reptiles
Large coal swamps
Large Amphibians abundant.
Carboniferous: Pennsylvanian (323)
Carboniferous: Mississippian (354)
Devonian (417)
"Age of Fishes"
First insect fossils
Fishes dominant
First land plants
Silurian (443)
Ordovician (490)
"Age of Invertibrates"
First fishes
Trilobites dominant
First organisms with shells
Cambrian (540)
Precambrian - comprises about 88% of geologic time (4500)
First multicelled organisms
First one-celled organisms
Origin of Earth
Adapted from Lutgens and Tarbuck. They cite the Geological Society of America as the source of the data.
There is another kind of time division used - the "eon". The entire interval of the existence of visible life is called the Phanerozoic eon. The great Precambrian expanse of time is divided into the Proterozoic, Archean, and Hadean eons in order of increasing age.
The names of the eras in the Phanerozoic eon (the eon of visible life) are the Cenozoic ("recent life"), Mesozoic ("middle life") and Paleozoic ("ancient life"). The further subdivision of the eras into 12 "periods" is based on identifiable but less profound changes in life-forms. In the most recent era, the Cenozoic, there is a further subdivision of time into epochs.
Geologic Time and the Geologic Column
This approach to the sweep of geologic time follows that in "The Grand Canyon", C.Hill, et al., eds.
|
no
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://press.uchicago.edu/books/excerpt/2015/Shaw_Planet_Bugs.html
|
"One Small Step for Arthopods, One Giant Leap for Arthopod-Kind ...
|
“The 165-million-year-long era when dinosaurs roamed the Earth shouldn’t be called the Age of Reptiles. Nor should the era that followed, which extends to the present, be christened the Age of Mammals. Just ask an insect guy. In Planet of the Bugs, Shaw . . . makes a good case that Earth has long been dominated by insects. . . . In a chapter-by-chapter march through time, [he] engagingly chronicles the evolutionary innovations that have rendered insects so successful. . . . Drawing from field studies and the fossil record, Planet of the Bugs is a fascinating look at the rise and proliferation of creatures that shape ecosystems worldwide.”–Science News
Silurian Landfall
All things have beauty, just not all people are able to see it. anonymous (fortune cookie wisdom)
If strength and size were everything, then the lion would not fear the scorpion. (more fortune cookie wisdom)
People of my age vividly remember the events of July 1969 when humans first walked on the moon. We regard them as historically important, and justifiably so. For the first time in nearly four billion years, individuals of a species from earth set their feet in another place entirely, a place so distant and hostile that the challenges of surviving there, even for a short visit, were enormous. Like many of my generation, I remember sitting in front of our grainy black–and–white television, waiting for Neil Armstrong to step off his ladder onto the dusty gray lunar surface. For those of you who are unfamiliar with the term “black–and–white TV,” isn’t it even more noteworthy that we accomplished this feat at a time when most earthbound viewers didn’t have color on their screens? Armstrong’s boot prints are so ingrained in our cultural psyche, I’d bet you could sketch their picture. We’ve all seen them time and again, in books, magazines, posters, and on television.
I propose that there was another day in our history, this one lost in the depths of time, when another set of equally historic footprints were made. But we seldom celebrate or hear about this day in the news. It took place 443 million years ago or more, and like the big bang or a supernova explosion, it was a singular event—the moment when a living organism, an animal, first stepped on the earth.
These earthly footsteps were far more monumental than going to the moon. For the first animals emerging from the oceans and moving onto land, the dry earth was harsh and forbidding. They needed a structural vehicle capable of making the trip: a skeletal system able to sustain the stresses of the terrestrial environment and a locomotion system able to carry them there and back again. They also needed the necessary life–support systems to keep them alive: surface protection from solar radiation as well as extremes of heat and cold, and to prevent water loss, and a respiratory system capable of functioning in a gaseous as well as a liquid environment. Finally, they needed a reason to go there. Life was comfortable enough in the oceans for a long time. What factors motivated animals to move into what seems to have been an impossibly hostile place?
One Small Step for Arthropods
The story of land colonization is usually considered to be the story of the Silurian period, 444 to 419 million years ago. There is evidence that some living things may have been on land before that time. There is even a debate about what it means to be on “land.” We’ll get back to that point. Suffice it to say that the Silurian is the first age of life where we find abundant fossil evidence of both land animals and land plants. By the end of the period these groups had formed terrestrial ecosystems, at least in marginal, wet marshlands. Nevertheless, these simple ecological systems undoubtedly gave rise to all the later land–based communities of life.
I’m surprised by how often people equate the word “animal” with the word “vertebrates.” Recently I came across a science article claiming to be about the “first land animals,” but it was about lungfish. Let me make one thing abundantly clear. The arthropods are animals, and they were the first to lift their little legs and step on land, at least by the Early Silurian. The arthropods were best equipped to make the journey. They had the necessary protective gear (external skeletons) and locomotion system (jointed legs) since the Cambrian years. Those lazy, slow–witted, slimy, lumbering lungfish ancestors of ours didn’t manage to crawl onto land until sometime during the Devonian—a full forty million years after the arthropods accomplished it. The fact that they were able to do it at all is another contingent event, requiring that some fish just happened to develop enough bony structure in its fins to possibly support its bulky weight on land. It’s another coincidence of history, without which none of us terrestrial vertebrates would be here. Again, our mere presence in this story seems nothing less than miraculous.
But as the cartoonist Larry Gonick has adroitly pointed out, we descendants of the lungfishes are the ones who write the history books. And once again it becomes necessary to point out the very subtle human-centrist bias that we have crafted into the history of life, simply by calling the Silurian period the “age of land colonization.” We casually and nonchalantly overlook the glaring fact that vertebrates played no role in this drama. For tens of millions of years we continued to paddle around in the oceans, and now we have the unmitigated nerve to imply that the stage was somehow being set for us. Life proceeded quite nicely on land for tens of millions of years without us, and it might easily have done so forever.
Also, by calling the Silurian the time of land colonization, we subtly distract attention from the other major ecosystem: the oceans. We glorify the colonization of land simply because it is a necessary step in the processes leading to the evolution of humans. But the real Silurian news story is the glorious diversity of life in the oceans. The Silurian marks the time of the first coral reefs. These weren’t composed of corals like the ones we have today, but of ancient rugose and tabulate coral species that later became extinct. The trilobites didn’t go away yet, either; there were still lots of those, along with huge numbers of ammonoid shelled squids and brachiopods and a diversity of fishes. These fishes were mostly jawless, but the Silurian also included the first jawed fishes, the first armor–plated fishes (called “placoderms”— some were up to thirty feet long), and the first freshwater fishes, all of which were also jawless. Before returning to the land, we need to acknowledge that the real pinnacle of biological systems of that time— the peak of Silurian diversity and ecosystem complexity—remained out there in the oceans. We should probably call the Silurian the “age of the first coral reefs.”
Although the trilobites were declining in species richness, some of the remaining species were quite common in the Silurian coral reef ecosystems. One particularly abundant trilobite was Calymene celebra, which is now celebrated as the state fossil of Wisconsin. During Silurian times, what is now Wisconsin was located south of the equator and entirely covered by shallow seas teeming with trilobites. As a result of these ancient warm seas, the limestone formations of southern Wisconsin are layered with Silurian trilobites, mollusks, brachiopods, and corals. The Wisconsin trilobite Calymene was a bottom–feeder that had the ability to roll into a ball to protect itself from predators, a defensive behavior that may have contributed to its continued success.
Wisconsin isn’t the only state to honor a Silurian animal. New York has declared a sea scorpion, Eurypterus remipes, as their official fossil. Sea scorpions lived from the Cambrian through the Permian periods, a span of about 250 million years, and although they originated in the oceans, some colonized brackish and freshwater habitats. Sea scorpions are quite notable as probably the largest arthropods that ever lived. Some of the largest species grew to monstrous body lengths of seven to eight feet long. These animals were not true scorpions but more like a predatory version of a modern horseshoe crab. They had a long, sharp, spinelike tail—hence the name “sea scorpion”—but there is no indication that they could sting. They did have large spiny legs for grasping prey, and some had pincerlike claws. So these were probably the first predators that could efficiently feed on the hard–shelled trilobites and brachiopods.
More than 300 sea scorpion species have been discovered from all around the world, but the New York fossils remain particularly important. The first sea scorpion ever discovered was found in 1818 in Silurian rock layers from that state. Around 420 million years ago, the entire area between Poughkeepsie and Buffalo was covered by shallow Silurian seas, and so the rock formations there are so full of their remains that the region is called the “sea scorpion graveyard.” Without question, these creatures were among the most spectacular residents of the Silurian coral reefs.
Far less spectacular, but far more abundant and diverse, were the brachiopods, which evolved some thirty thousand species in the ancient oceans. Their common name—lamp shells—comes from the fact that the shells of some brachiopods resemble the shape of an ancient Roman lamp. They also resemble clams, but the resemblance is only superficial, as the two shells of a clam are similar to each other in size, while brachiopods have a smaller top shell and a larger bottom one. Lamp shells peaked in diversity during the Ordovician but retained high species richness over Silurian times. Some of the brachiopods cemented their shells to surfaces to keep them in place, so they were important in building the structure of Silurian reefs. So abundant were the lamp shells in Paleozoic seas that they now are probably the most common fossils in the middle–eastern United States. The very first fossil that I discovered as a child was a brachiopod lamp shell, found protruding from a rock along the banks of the Mississippi River. The state of Kentucky has declared any brachiopod as its state fossil, not bothering to name any particular genus or species; there are just too many of them.
One Giant Leap for Arthropod–Kind
The coral reef ecosystems may have been the biological pinnacle of Silurian times, but since insects are fundamentally terrestrial animals, the story of land colonization must still be told, with a slightly more arthropodan bias. It may have taken tens of millions of years, but eventually species richness on land did outpace that of the oceans; the complexity of our tropical forest ecosystems has vastly outstripped the complexity of our ocean reefs ever since. The pitter–patter of those little arthropod feet echoes loudly across the ages and had profound implications in shaping life’s subsequent diversity.
Many biologists have long assumed that plants needed to colonize the land first and to establish ecosystems for animals to occupy. Good evidence suggests that may not be the case: trace fossils of arthropod footprints (fossilized tracks) have been found in sediments dating to the Late Ordovician. Even if terrestrial plants were present then, it’s clear from the footprints that arthropods were walking out on the open wet soils, quite separate from plants, at the earliest of times on land.
If arthropods were strolling on the beaches more than 443 million years ago, what were they doing there? They may have been avoiding deepwater predators. We must assume that the very first animals to walk on land were arthropods that lived in the shallowest waters, in the intertidal zones. Our longtime companion the moon played a significant role in the evolution of life by creating the tides and the tidal pools they leave behind. When the tides ebbed and flowed, any arthropods that could survive on the moist shorelines at low tide would have benefited greatly, simply by avoiding the big predators. As the Silurian progressed, the coral reefs presented an increasingly hostile environment. When the tide moved out, predators that breathed with gills, such as sea scorpions, cephalopods, fish, and even large trilobites, swam into the deeper waters. The little arthropods that survived along the shorelines enjoyed a peaceful safe haven, perhaps.
figure 3.1. A coiled millipede is a quintessential example of a myriapod: a long, multisegmented arthropod with lots of legs. Creatures somewhat like these were among the first animals to colonize land. (Photo by Kenji Nishida.)
Two groups of arthropods appear to have colonized the shorelines at about the same time: the arachnids and the myriapods. The arachnids were the scorpions and the group from which spiders, mites, and their relatives are descended; the myriapods were long, multisegmented, multilegged creatures, the group from which millipedes, centipedes, and insects evolved. Let’s look at each of these animals in turn and consider how and why they might have migrated to the beaches.
Sting Time on the Beach
Among the oldest fossils of terrestrial animals are the first scorpions, dating from the Late Silurian. We may call the Silurian scorpions “terrestrial” because they clearly moved and foraged outside the water along the shorelines, but the prevailing opinion is that they were essentially semiaquatic. They breathed with numerous flat respiratory plates layered like the pages of a book, which are called “book gills.” These breathing plates must remain wet to function, so the Silurian scorpions must have moved in and out of the water to keep their gills moist. It was not until much later, in the Devonian, that arachnids developed similar but internalized “book lungs” and became fully terrestrial. Like many modern semiaquatic organisms, Silurian scorpions could probably venture along the shores for extended periods, as long as their gills remained wet.
We can learn a lot about these early land colonists by looking not only at Silurian scorpion fossils but also at modern living scorpions. That’s because the living world includes a composite of organisms that evolved at various times in history. Different species evolve at different rates, depending on how they interact with their environments. Well–adapted organisms may not change significantly over long periods of time, so ones that first evolved long ago, like horseshoe crabs and scorpions, are known as “living fossils.” That’s not to say that scorpions haven’t evolved and changed over time. They have. At some point in the Early Silurian there was only 1 scorpion species, and it was aquatic. In the modern world there are more than 1,100 species, and each has unique characteristics. They are all terrestrial, and some have adapted to life in some of the driest conditions, in deserts. But others still require moist living conditions, preferring the earth’s tropical rainforests. Still, when you look at a scorpion you are seeing a body form that originated in the Early Silurian with some of the first land colonists.
Scorpions are nocturnal. By day they hide in cracks and crevices, under rocks, and beneath other objects. If the first scorpions were active at night as well, then their pioneering steps onto land were probably taken in the moonlight, to avoid the sun’s intense ultraviolet radiation. Remember that the ancient scorpions breathed with book gills and could venture out of the water only for as long as the gills stayed wet.
Scorpions are predatory. They never feed on plants, so these arthropods, at least, could easily have colonized the land well before plants did. Modern scorpions feed extensively on insects, which didn’t exist during the Silurian period. What did they eat? If the myriapods occupied the shorelines at the same time, then the scorpions probably ate a lot of them. But if not, there were still plenty of meal choices in the rocky intertidal zone. At low tide, numerous small animals would have been trapped in shallow tidal pools, just as they are today. Soft–bodied animals like annelid worms, small fish, and molting trilobites would have been easy pickings for scorpions, which feed with claw–like chelicerate mouthparts by ripping and tearing their prey to shreds. Scorpions also have large pincerlike claws called pedipalps, capable of manipulating prey and pulling soft tissues from hard shells, and a venomous sting capable of paralyzing small animals. Since brachiopods would have been abundant in the Silurian’s intertidal zones, they too were possibly among the early scorpions’ prey; if a scorpion could hit a brachiopod’s soft parts with its sting, it could then use its pincers to pull the animal’s body from its shell.
It is no secret: scorpions suffer from a major public relations problem. We almost universally loathe them, probably for very good reasons. All scorpions possess potent venoms used to paralyze and subdue prey. At the very least their sting is quite painful to humans, while at the very worst it is sometimes deadly. That, coupled with their habit of moving around only in the darkness where we can’t see them coming, makes them not very much fun to be around. If you travel in the tropics, you really do need to learn to shake out your shoes in the morning, since scorpions like to hide there.
Some scientists have suggested that humans have an instinctive fear of certain dangerous animals like snakes and spiders. We should probably add scorpions to that list, because the mere sight of one quickly sends many of us into a panic. Maybe we retain some primal, genetically programmed fear of these creatures. Consider the situation for our fishy Silurian ancestors. In the deeper waters, by the coral reefs, they had to contend with the likes of the monstrous eurypterid sea scorpions, and in the balmy shallow waters, they had to contend with the likes of the stinging scorpions. The Silurian was not a very pleasant time for our vertebrate ancestors, and once again, we were lucky to have survived it.
Having said all those nasty things about scorpions, I’m going to give you a reason to like them. The females are really nice mothers. In fact, they may provide the oldest case of parental care. Unlike most female arthropods, which simply lay eggs and let the young fend for themselves, female scorpions carry fertilized eggs inside themselves. The eggs take many months to develop, and eventually a female gives live birth to anywhere from six to ninety tiny baby scorpions. Looking like miniature versions of their mother, they crawl onto her back, where they ride around for a week or more. The baby scorpions stay under mom’s protection until they have completed their first molt, then they wander off on their own adventures.
figure 3.2. A mother scorpion with her babies onboard. (Photo by Piotr Naskrecki.)
Just because they are nice mothers doesn’t mean that female scorpions are necessarily nice wives, however. In addition to being dangerous, they tend to be larger than the males, who seem to show an appropriate amount of caution and respect when attempting to mate with them. During their elaborate courtship ritual, a male and female face each other, raise their tails, and move in circles for hours, or even days. Mating eventually occurs indirectly. Male scorpions produce a packet of sperm cells wrapped in a membrane: a spermatophore. When a male deems the time ready, instead of coupling with a female and transferring his sperm cells to her directly, he places his spermatophore on the ground, then attempts to lead her over it. This ancient behavior doesn’t sound very efficient, but it seems to work well enough for scorpions, and we see it preserved in some of the most primitive living insects.
The scorpions’ reproductive behaviors may provide insight into their Silurian landfall. The spermatophore’s membrane helps to slow desiccation, but it needs to remain moist or the sperm cells will dry out and die. Since solar radiation could damage these cells, spermatophore transfer can be more safely done under the cover of darkness. This suggests that scorpions may have initially colonized shorelines not only to seek food but also to fool around on romantic, moonlit Silurian beaches. The fact that female scorpions retain developing eggs inside their body and give birth to maternally protected live young, however, suggests that the Silurian strands were still dangerous. The shorelines may have been comparatively safer than the deep water, but there were still predators, such as large centipedes, other scorpions, and even larger individuals of the same species, that would have eaten the scorpions’ eggs and young.
She’s Got Legs …
The myriapods, multilegged relatives of the insects, have been present in the background of our story, but we haven’t said much about them. You may remember that back in the Cambrian oceans, in the Burgess Shale fauna, a few of these leggy creatures scurried along in the bottom sediments. Their body design was very simple: a head up front with one pair of antennae, followed by lots of segments, each with a pair of legs. It’s the simplest body plan, and from it a huge range of arthropod forms can readily evolve by a process we’ve already discussed with the trilobites: tagmosis. By fusing segments, functional body regions can be formed. By modifying legs, an assortment of feeding appendages or mating structures can also be developed. The myriapods, with their versatile body plan, now become key players in our story, because they are the ancestors from which modern insects evolved.
Three groups of myriapods are worth mentioning here. The first two are quite familiar: the centipedes and the millipedes. The third is a rare tropical group: the symphylans. All three respire tracheally, by transporting air through internal tubes. This suggests that tracheal respiration was an innovation of the first myriapods which adapted to life on land, and that the myriapods passed it along to the insects. Although the centipedes and millipedes tell us a lot about the early colonization of land, they each have specialized in their own ways and evolved into classes distinct from the insects. The tropical symphylans, on the other hand, have a simpler body plan that more closely resembles the anatomy of the ancestors from which insects developed.
The centipedes are perhaps the most familiar myriapod group. There are more than three thousand species, mostly tropical, and they are active mainly at night. Centipedes have thirty or more legs, two per segment, and they really know how to use them: most can run very quickly. Unlike insects, centipedes do not have a waxy cuticle to prevent water loss. They can dry out rather easily, so they tend to stay in moist habitats near soil and avoid direct sunlight. All centipedes are predators, and they capture small animals with their fanglike front legs, which house venom glands. Most feed on other small arthropods, but some large tropical species, up to ten inches long, are capable of killing small vertebrates. Similar to the predatory scorpions, centipedes were certainly capable of surviving in the rocky intertidal zone and feeding on various other small animals long before plants colonized the land.
The millipedes, the leggiest arthropods, are called “diplopods” because they have evolved a unique body type: each segment has two pairs of legs rather than one, and contains two pairs of nerve bundles and heart valves. This shows that their segments formed when two primitive segments, each with one pair of legs, fused together. There are more than seventy–five hundred millipede species, and although they live primarily in the tropics, they can be found all around the world.
Millipedes are a lot nicer than centipedes. If you want a Silurian pet, I’d highly recommend one. They are friendly, they do not have venom or bite humans, and these days it’s not too unusual to find some of the giant African species for sale in pet shops. Like the centipedes, however, millipedes prefer to stay out of the sunlight, and so they hide in moss, tunnel in soil or under loose rocks, or live in caves. A few species are known to prey upon other soft–bodied arthropods and worms, but most are scavengers that eat decaying vegetation in addition to fungal or bacterial accumulations. It appears that the millipedes are yet another arthropod group that was perfectly capable of colonizing the beaches well before land plants evolved; these scavengers would have been able to feed on lots of non–plant–based organic material such as decaying green algae mats, fungi, and bacterial blooms in Silurian microbial soils.
figure 3.3. A white millipede (order Polydesmida) illustrates a unique characteristic of these leggy myriapods: each segment is equipped with four legs. Polydesmids are the largest order of millipedes, with over 2,700 species known. (Photo by Kenji Nishida.)
The symphylans have escaped the notice of most people, but they are very important to the insects’ story because they most closely resemble the ancestral kind of myriapod from which insects evolved: namely, a short creature with fewer segments than millipedes and centipedes and only two unmodified legs per segment. The symphylans are quite small, only about 2 to 10 millimeters long (less than half an inch). There are about 120 known species, and they mostly inhabit the tropics. Like the millipedes, symphylans live secretively in soil, moss, and decaying vegetation and avoid the sunlight. Modern symphylans feed mainly on decaying vegetation, but like the millipedes, they were capable of living on organic materials in microbial soils before land plants appeared.
These mysterious dwellers in the mosses have a very unusual method of reproduction. Male symphylans produce spermatophores, which they leave on top of long plant stalks. Females need to wander around and find them. Upon discovering a spermatophore, a female symphylan bites it, but instead of digesting it she stores the sperm cells inside her cheeks in special pouches. When she lays an egg, she reaches around and picks it up with her mouthparts, fertilizes it, and proceeds to glue the fertilized egg to a piece of moss.
Green Tide: Plants Colonize the Shorelines
Toward the end of the period, new, taller plants joined the myriapods in transforming the Silurian landscape. Two lines of evidence give us a good idea of what they were like. Preserved fossils from approximately 420-million-year-old Late Silurian sediments contain the archaic rhyniophyte plants, which are named after an early Devonian genus, Rhynia, discovered in Rhynie, Scotland. The oldest one, Cooksonia, was the very first vascular plant, and it grew only a few inches tall. Very simple and semiaquatic, the rhyniophytes lived along marginal habitats and had parts that could emerge out of the water. They did not have leaves, flowers, or deep roots, and the more advanced early Devonian species were also relatively short—about 50 or 60 centimeters long (mostly less than 2 feet). The rhyniophytes had creeping stems that ran sideways along the shore, anchored tiny root hairs in the soil below, and sent shoots upward from multiple points along their top. Each vertical shoot forked once or twice, forming reproductive structures called “sporangia” at the upper tips. The rhyniophytes’ lateral stems, supplied by vascular fluid–transporting tissues, allowed them to spread thickly over moist shorelines.
The second line of evidence comes from plant DNA. Molecular studies support the long–held assumption that land plants evolved from photosynthetic green algae and that the nonvascular plants— liverworts and mosses—evolved first, around the Silurian, followed later by primitive vascular plants, such as ferns. Liverworts and mosses require a lot of moisture to survive and decompose rapidly when they die, so they did not fossilize well; however, we can be sure that the Late Silurian shorelines were full of them, as well as the rhyniophytes and a diversity of soil fungi.
If I haven’t said much about plants up to now, it’s because the terrestrial arthropods were able to thrive for millions of years before plants developed the capacity to survive on land. Arthropods had the initial advantage, because they developed their hard structural parts much earlier. More importantly, being mobile, these animals could pick and choose the time of their land expeditions. Because they’re nocturnally active and can easily avoid the sun’s harmful rays, the arthropods didn’t have to wait for the ozone layer to form before they colonized the land. They just did so under the cover of darkness.
Plants, on the other hand, need sunlight. They didn’t have the option of moving ashore at night and hiding by day. This means that plants were not able to survive on land until two things happened: they had to wait for a sufficient ozone layer to develop so they could remain safely exposed all day, and following this they had to develop structural support mechanisms. By the Late Silurian they solved the problem of structural support by evolving the complex molecules lignin and cellulose, and arranging the tough stuff into fluid–transporting bundles. Some scientists have suggested that plants must have colonized land first because they create the oxygen that terrestrial animals require, but the cyanobacteria and green algae had been producing this gas for billions of years before the plants moved inland. Ironically, it was the plants, not the animals, that needed elevated oxygen levels, both for the ozone layer’s ultraviolet filtering effect and to build lignin and cellulose.
It’s fascinating to compare and contrast plants with insects, in terms of how they coped with the difficulties of life on land. Both faced the serious problem of potential water loss, so both evolved cuticles that resist water flow. Since a dense cuticle is impervious to oxygen and carbon dioxide, plants evolved breathing pores, called stomata, which allow gas transfer and can be opened or closed to prevent desiccation. These are directly analogous to insect spiracles. Plants needed to develop a water transport mechanism internally, so they hardened cell walls with water–resistant lignin and built internal pipelines, the tracheids. This is similar to the insects’ open circulatory system, a simple arrangement where the internal organs are awash in fluids. Just as insects developed a skeletal system for structural support, plants built woody tissues with lignin and toughened cell walls with cellulose.
But because plants didn’t have the option of avoiding sunlight, they evolved complex molecules, the flavonoid compounds, which act as sunscreen and protect living cells from excess ultraviolet radiation. To protect their spores, which were exposed at the plants’ highest points, they also evolved another type of sunscreen, sporopollenin.
Some of these plant adaptations influenced insect evolution. Because lignin and cellulose are tough and highly indigestible, they protected early plant stems from potential herbivores. Tens of millions of years elapsed before arthropods figured out ways to consume woody tissues in bulk. The flavonoid sunscreens would have also deterred herbivores. Eventually insects would develop digestive mechanisms to cope with such compounds, and even to build them into their own body defenses, but again that would take tens of millions of years. Only the spores of early plants provided a nutritious, ready food source. The plants defended themselves, however, by placing the spore–forming structures up high, away from millipedes and the like hiding in the soil layer. They also used an herbivore–swamping strategy, producing spores to excess and flooding the environment with more than the plant–feeding arthropods could eat. Millions of years later, in the Devonian period, these nutritious spores may have stimulated the evolution of wings and flight by luring ancient insects high above the ground and giving them a reason to be there.
For a long time, however, the first land animals and plants coexisted peacefully. None of the early terrestrial arthropods were true herbivores. Instead, like scorpions and centipedes, they were predators, or, like millipedes and symphylans, they were scavengers that ate accumulating organic materials in the microbial soils, and maybe some rhyniophyte spores. Modern millipedes and symphylans love to burrow in moss, so the ancient land animals undoubtedly moved into the moss as soon as it arrived. But no evidence suggests that they ate whole plants. My botanist colleagues might get agitated when they hear this, but I like to say that “plants provide a substrate for arthropods.” The mosses gave the myriapods a pleasant place to live in and shelter from the sun. The benefit was mutual because in the process of burrowing and feeding, the myriapods loosened and turned the soil, cycled nutrients through it, and conditioned it for the colonizing plants. Contrary to conventional wisdom, the animals may have moved ashore long before the plants, and in order to move inland, the plants needed the animal communities to prepare the soil.
By the Late Silurian, 419 million years ago, the first terrestrial ecosystems had been established. To us they wouldn’t have looked like much: the inland areas were still windswept, dry, and barren of life, except for microbes in the soil, while along the shorelines mats of green algae and carpets of mosses and liverworts were studded with rhyniophyte stems rising a few feet up. Nevertheless, while the Silurian rhyniophyte marshlands were not tall by our standards, they provided a virtual miniature jungle for the scorpions, centipedes, millipedes, symphylans, and other arthropod residents. But after nearly 26 million years, the Silurian was coming to an end. The Devonian was approaching, and what changes it would bring. Finally, the plants swept across the lands and rose up tall, and the first forests were established. The planet turned green, and the first insect communities arose. And finally, tens of millions of years after those brave arthropods first stepped on land, our lazy ancestors, the tetrapod lungfishes, hardened their fins, took a deep breath, poked their heads out of the water, and wondered … “What’s going on up there?”
|
The story of land colonization is usually considered to be the story of the Silurian period, 444 to 419 million years ago. There is evidence that some living things may have been on land before that time. There is even a debate about what it means to be on “land.” We’ll get back to that point. Suffice it to say that the Silurian is the first age of life where we find abundant fossil evidence of both land animals and land plants. By the end of the period these groups had formed terrestrial ecosystems, at least in marginal, wet marshlands. Nevertheless, these simple ecological systems undoubtedly gave rise to all the later land–based communities of life.
I’m surprised by how often people equate the word “animal” with the word “vertebrate.” Recently I came across a science article claiming to be about the “first land animals,” but it was about lungfish. Let me make one thing abundantly clear. The arthropods are animals, and they were the first to lift their little legs and step on land, at least by the Early Silurian. The arthropods were best equipped to make the journey. They had possessed the necessary protective gear (external skeletons) and locomotion system (jointed legs) since Cambrian times. Those lazy, slow–witted, slimy, lumbering lungfish ancestors of ours didn’t manage to crawl onto land until sometime during the Devonian—a full forty million years after the arthropods accomplished it. The fact that they were able to do it at all is another contingent event, requiring that some fish just happened to develop enough bony structure in its fins to possibly support its bulky weight on land. It’s another coincidence of history, without which none of us terrestrial vertebrates would be here. Again, our mere presence in this story seems nothing less than miraculous.
But as the cartoonist Larry Gonick has adroitly pointed out, we descendants of the lungfishes are the ones who write the history books.
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
http://digitalfirst.bfwpub.com/life_11e_animation/asset/sinauer_life11e_animation_scripts/life11e_2401_continents_scr.html
|
Script
|
Evolution of the Continents
INTRODUCTION
The continents on which we live are on the move, albeit at an average rate of only several centimeters each year. The continents move because they ride on top of gigantic plates that, in turn, float on a molten layer of Earth, called the mantle. Energy, released from radioactive decay in Earth's core, heats up the mantle and sets up convection currents that propel the plates around Earth's surface. The movement of the plates, and the continents that ride on them, is called continental drift.
At times in Earth's history, the continents have coalesced into giant landmasses, but at other times they have traveled away from each other. The positions of the continents affect Earth's climate, the sea levels, the distributions of organisms, as well as the birth and extinction of species. In addition to depicting continental drift, this animation also provides a summary of the state of life at each corresponding period in Earth's history.
ANIMATION SCRIPT
The continents lie on massive plates that are in constant motion on Earth's surface. Their movement, called continental drift, sometimes forces continents together and other times pulls them apart. The continents as we know them today are still in motion, so millions of years in the future their arrangement will look much different.
Let's go back 540 million years, to the beginning of the Cambrian period, when Earth's continents were mostly in the southern hemisphere and were coalescing into larger landmasses. The largest, Gondwana, included the future South America, Africa, India, Australia, and Antarctica.
Around this time, during a period of about 60 million years, a rapid diversification of life took place, known as the Cambrian explosion. Many of the major animal groups represented by species alive today first appeared during these evolutionary radiations.
Over the next 200 million years, the large landmasses continued to approach each other. At the end of the Ordovician period, massive glaciers formed over the southern continents, sea levels dropped about 50 meters, and ocean temperatures dropped. About 75 percent of all animal species became extinct, probably because of these major environmental changes.
During the Silurian period, marine life rebounded, and the first vascular plants and terrestrial arthropods (scorpions and millipedes) evolved. Fishes diversified as bony armor gave way to the less rigid scales of modern fishes, and the first jawed fishes appeared.
In the Devonian period, many animal groups radiated on land and in the sea. Fish diversified and some became formidable predators. Tall trees with fernlike leaves dominated newly appearing forests. The end of the Devonian is marked by a massive extinction of about 75 percent of all marine species.
In the Carboniferous period, about 350 million years ago, extensive swamp forests grew on the tropical continents. The fossilized remains from the forests formed the coal we now mine for energy. The diversity of terrestrial animals increased greatly. Amniotes, with their well protected eggs, evolved, as did giant amphibians and winged insects—the first animals to fly.
During the Permian period, the continents merged into a single supercontinent called Pangaea. On land, the amniotes split into two lineages: the reptiles, and a second lineage that would eventually lead to the mammals. Toward the end of the Permian, conditions for life deteriorated. Massive volcanic eruptions produced ash and gases that blocked sunlight and cooled the climate. The resulting death and decay of forests rapidly used up atmospheric oxygen. In addition, much of Pangaea was located close to the South Pole by the end of the Permian. All of these factors combined to produce the most extensive continental glaciers since the "snowball Earth" hundreds of millions of years earlier.
At the low O2 concentrations at the end of the Permian period, most animals would not be able to survive at elevations above 500 meters, so about half of the land area would have been uninhabitable. Scientists estimate that about 96 percent of all multicellular species became extinct. The few organisms that survived the Permian mass extinction found themselves in a relatively empty world.
In the Mesozoic era, atmospheric oxygen concentrations gradually rose. Pangaea remained largely intact through the Triassic period. On land, conifers became dominant plants, and frogs and reptiles began to diversify. The radiations of reptiles eventually gave rise to crocodilians, dinosaurs, and birds. The first mammals appeared during the Triassic. The end of the Triassic was marked by a mass extinction that eliminated about 65 percent of the species on Earth.
Pangaea began to break up, and by the late Jurassic period it had fully divided into two large continents: Laurasia, which drifted northward, and Gondwana in the south. Most of the large terrestrial predators and herbivores of the period were dinosaurs. The earliest known fossils of flowering plants are from late in this period.
During the Cretaceous period, Laurasia and Gondwana themselves broke apart. Earth was warm and humid. Flowering plants began the diversification that led to their current dominance of the land. By the end of the Cretaceous, many radiations of animal groups, on both land and sea, had occurred. A meteorite caused a mass extinction at the end of the Cretaceous. On land, almost all animals larger than about 25 kg in body weight, including the non-avian dinosaurs, became extinct.
Several significant continental collisions occurred during the Tertiary period. By about 35 mya, the Indian Plate ran fully into the Eurasian Plate, and the Himalayas began to be pushed up as a result. Africa subsequently collided with Eurasia, and South America with North America. In this period, flowering plants dominated on land, and a rapid radiation of mammals occurred. Although the early Tertiary was hot and humid, Earth's climate began to cool, and grasslands spread over much of Earth.
We are living in the Quaternary period, during which four major and about 20 minor "ice ages" have occurred. During ice ages, massive glaciers spread across the continents, and the ranges of animal and plant populations shifted toward the equator.
The last of these glaciers retreated from temperate latitudes less than 15,000 years ago. Organisms are still adjusting to this change. Many high-latitude ecological communities have occupied their current locations for no more than a few thousand years.
It was during the Quaternary period that divergence within one group of mammals, the primates, resulted in the evolution of the hominoid lineage, eventually leading to our modern human species—Homo sapiens. It may be hard to visualize, but humans are incredibly recent arrivals on this 4.5 billion-year-old planet. Even the Cambrian explosion, at about 500 million years ago, could be considered a recent series of events. Life has been evolving on Earth for about 3.8 billion years. The continents and Earth's biological communities have been constantly changing during this time and will continue to change into the future.
CONCLUSION
The changing positions of the continents have had dramatic effects on the Earth's climate and on its living organisms. For example, when the enormous landmass Gondwana formed over the South Pole 500 million years ago, the Earth entered a period of glaciation. Water became trapped in the frozen glaciers, which lowered the sea level dramatically. As the sea level dropped, continental shelves (submerged parts of the continents) became exposed, and the organisms that thrived there would have either died, adapted, or moved to other locations. The temperature of the oceans also dropped. During this time in Earth's history, 75% of marine species became extinct.
The positions of continents over time also explain some interesting features of the distribution of flora and fauna around the globe. For example, the island of Madagascar, currently near the southern tip of Africa, is home to animals that are remarkably similar to animals living in India. India and Madagascar are separated by 4000 km (~2500 miles) of ocean, much too great a distance for land animals to cross. However, at one time, India and Madagascar lay adjacent to each other. Until 90 million years ago, when India and Madagascar split, they could share species.
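As a quick back-of-the-envelope check, the India-Madagascar figures quoted above imply an average drift rate of a few centimeters per year, consistent with the rate given in the introduction. A minimal sketch in Python, using only the two values from the text:

```python
# Back-of-the-envelope check of the drift rate implied by the figures above.
# Both inputs (4000 km of separation, a split about 90 million years ago)
# come from the text; nothing else is assumed.
separation_km = 4000
split_age_years = 90e6

rate_cm_per_year = separation_km * 1e5 / split_age_years  # 1 km = 100,000 cm
print(f"Average separation rate: about {rate_cm_per_year:.1f} cm per year")
# -> roughly 4-5 cm/year, matching "several centimeters each year"
```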
|
During the Silurian period, marine life rebounded, and the first vascular plants and terrestrial arthropods (scorpions and millipedes) evolved. Fishes diversified as bony armor gave way to the less rigid scales of modern fishes, and the first jawed fishes appeared.
In the Devonian period, many animal groups radiated on land and in the sea. Fish diversified and some became formidable predators. Tall trees with fernlike leaves dominated newly appearing forests. The end of the Devonian is marked by a massive extinction of about 75 percent of all marine species.
In the Carboniferous period, about 350 million years ago, extensive swamp forests grew on the tropical continents. The fossilized remains from the forests formed the coal we now mine for energy. The diversity of terrestrial animals increased greatly. Amniotes, with their well protected eggs, evolved, as did giant amphibians and winged insects—the first animals to fly.
During the Permian period, the continents merged into a single supercontinent called Pangaea. On land, the amniotes split into two lineages: the reptiles, and a second lineage that would eventually lead to the mammals. Toward the end of the Permian, conditions for life deteriorated. Massive volcanic eruptions produced ash and gases that blocked sunlight and cooled the climate. The resulting death and decay of forests rapidly used up atmospheric oxygen. In addition, much of Pangaea was located close to the South Pole by the end of the Permian. All of these factors combined to produce the most extensive continental glaciers since the "snowball Earth" hundreds of millions of years earlier.
At the low O2 concentrations at the end of the Permian period, most animals would not be able to survive at elevations above 500 meters, so about half of the land area would have been uninhabitable. Scientists estimate that about 96 percent of all multicellular species became extinct. The few organisms that survived the Permian mass extinction found themselves in a relatively empty world.
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://www.ncpedia.org/anchor/natural-history-north
|
The Natural History of North Carolina
|
The Natural History of North Carolina
The land that is North Carolina existed long before humans arrived -- billions of years before, in fact. Based on the age of the oldest rocks found on earth as well as in meteorites, scientists believe that the earth was formed about 4,500 million years (4.5 billion years) ago. The landmass under North Carolina began to form about 1,700 million years ago, and has been in constant change ever since. Continents broke apart, merged, then drifted apart again. As landmasses came together, the Appalachian mountains (and other mountain ranges on the earth) were formed -- and wind and water immediately began to wear them down by erosion. After North Carolina found its present place on the eastern coast of North America, the global climate warmed and cooled many times, melting and re-freezing the polar ice caps and causing the seas to rise and fall, covering and uncovering the Coastal Plain. Recent geologic processes formed the Sand Hills, the Uwharrie Mountains, and the Outer Banks.
The first single-celled life forms appeared as early as 3,800 million years ago. It then took 2,000 million years for the first cells with nuclei -- complex single-celled organisms -- to develop, and another 500 million years for multi-celled organisms to evolve. As life forms grew more complex, they diversified. Plants and animals became distinct. Gradually life crept out from the oceans and took over the land. Seed-bearing plants developed, then flowering plants, and finally grasses. Animals developed hard exterior shells for protection, then interior skeletons. Flying insects, amphibians, reptiles, dinosaurs, birds, and finally mammals emerged. Sudden changes in climate caused mass extinctions that wiped out most of the species on earth, making room for new species to evolve and take their places. The ancestors of humans began to walk upright only a few million years ago, and our species, Homo sapiens, emerged only about 120,000 years ago. The first humans arrived in North Carolina just 10,000 years ago -- and continued the process of environmental change through hunting, agriculture, and eventually development.
To help you understand the vastness of the time scales we're talking about, consider this: If the history of our planet were condensed into a single day, humans would have emerged just 2.3 seconds before midnight, and would have arrived in North Carolina two tenths of a second before midnight -- literally the blink of an eye. And if that last two tenths of a second of human habitation were expanded into a full day, Europeans would have arrived at 11:02 pm, and a student now in eighth grade would have been born at 11:58 pm!
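For readers who want to reproduce the arithmetic behind this analogy, here is a minimal Python sketch. The ages are the ones quoted above; the roughly 400-year figure for the European arrival and the 14-year age of an eighth grader are illustrative assumptions, not values stated in the text.

```python
# A minimal sketch of the "Earth's history as a single day" arithmetic.
EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_DAY = 24 * 60 * 60

def seconds_before_midnight(years_ago, total_years=EARTH_AGE_YEARS):
    """Map an age (years before present) onto a 24-hour compression of that history."""
    return years_ago / total_years * SECONDS_PER_DAY

print(f"Homo sapiens: {seconds_before_midnight(120_000):.1f} s before midnight")      # ~2.3 s
print(f"First North Carolinians: {seconds_before_midnight(10_000):.2f} s")            # ~0.19 s

# Expanding the last 10,000 years of human habitation into its own 24-hour day
# (the ~400-year and 14-year inputs are assumptions for illustration):
print(f"European arrival (~400 years ago): "
      f"{seconds_before_midnight(400, total_years=10_000) / 60:.0f} min before midnight")  # ~58 min, about 11:02 pm
print(f"An eighth grader (~14 years old): "
      f"{seconds_before_midnight(14, total_years=10_000) / 60:.1f} min before midnight")   # ~2 min, about 11:58 pm
```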
Natural history at a glance
The history of all of these processes -- geologic, climatic, environmental, biological -- is called natural history. Scientists have divided the natural history of the planet into chunks of time called eons, eras, periods, and epochs. These chunks of time have names and approximate dates that correspond to events in geologic or fossil records. As scientists find new evidence, they revise these dates, and they don't always agree on how to do so. The science of natural history, like natural history itself, is an evolutionary process.
This chart summarizes the major events in North Carolina's natural history. Dates are listed in Mya (Million years ago).
Note: the structure of this table is borrowed from Wikipedia. The names and dates of eons, eras, periods, and epochs are also from that page, which is in turn drawn from the time scale agreed upon in 2004 by the International Commission on Stratigraphy. Most of the information in the table is drawn from Fred Beyer, North Carolina: The Years Before Man, a Geologic History (Durham, N.C.: Carolina Academic Press, 1991).
Eon
Era
Period
Epoch
Major events
Start
Phanerozoic
Cenozoic
Neogene
Holocene
The climate stablized as the glaciers retreated, making agriculture possible. Human civilization emerged.
c. 9000 BCE
Phanerozoic
Cenozoic
Neogene
Pleistocene
Many large mammals flourished, then became extinct. Anatomically modern humans evolved.
The Sand Hills formed during this time. Streams eroded the Piedmont and Blue Ridge, carrying sediment to the Coastal Plain. There, water seeped through those sediments, carrying heavier clay downward and leaving behind sands that were piled into dunes by winds.
The polar ice caps melted, and the sea level rose more than 300 feet above its present level. The resulting shoreline can be seen today in an escarpment -- a sharp drop-off -- that runs through Scotland, Hoke, and Cumberland counties. When the seas receded, that sudden change in elevation caused rivers to fall rapidly. The town of Cross Creek, which became Fayetteville, would be located along this "fall line."
About 1.7 million years ago, the present "Ice Age" began. As glaciers and polar ice caps re-formed, sea level fell, exposing the Coastal Plain. Several periods of glaciation (the forming of glaciers) and melting followed, with corresponding falls and rises in sea level. A series of escarpments can now be seen at various points on the Coastal Plain where the shoreline once lay.
The glaciers began to recede for the last time about 18,000 years ago. The rising seas left a ridge above water, creating the modern barrier islands.
Between 10,000 and 15,000 years ago, as the climate warmed, North Carolina's forests began to look as they do today, with pine, spruce, and fir in the cooler Blue Ridge and oak and hickory more common in the Piedmont.
1.8 Mya
Phanerozoic
Cenozoic
Neogene
Pliocene
Homo habilis, the first species of the genus Homo to which humans belong, appeared.
The land surfaces of the Blue Ridge and Piedmont now appeared essentially as they do today. A dry climate with short rainy seasons caused grasslands to flourish in the Piedmont. Shallow sea covered the eastern half of the Coastal Plain, then receded again.
5.3 Mya
Phanerozoic
Cenozoic
Neogene
Miocene
Modern mammal and bird families became recognizable. Grasses spread across the globe, and the first apes appeared.
The ocean retreated completely from the modern Coastal Plain. Rapid erosion in the Piedmont was uneven, and left the Uwharrie Mountains behind.
23.0 Mya
Phanerozoic
Cenozoic
Paleogene
Oligocene
Animals, especially mammals, evolved rapidly and became more diverse. Modern types of flowering plants evolved and spread.
About 31 million years ago, the ocean advanced west as far as present-day New Bern.
33.9 Mya
Phanerozoic
Cenozoic
Paleogene
Eocene
The first grasses appeared. Some of the first modern families of mammals emerged, and primitive whales diversified. An ice cap developed on Antarctica.
The crust under the Coastal Plain began to sink again, and the ocean pushed as far west as the modern Piedmont. The calcium-rich shells of microscopic algae sank to the ocean floor, where over time they became limestone. By the end of the Eocene, the seas had again retreated.
55.8 Mya
Phanerozoic
Cenozoic
Paleogene
Paleocene
Early mammals diversified, and the first large mammals appeared. The world's climate was still tropical, but gradually began to cool.
By the end of the Paleocene, the entire Coastal Plain of North Carolina was again above sea level.
65.5 Mya
Phanerozoic
Mesozoic
Cretaceous
Flowering plants proliferated, along with new types of insects that pollinate them. Many new types of dinosaurs (e.g. Tyrannosaurs, Titanosaurs, duck bills, and horned dinosaurs) evolved on land, as did modern crocodilians (crocodiles and alligators). Modern sharks appeared in the sea. Primitive birds gradually replaced pterosaurs.
The eastern portion of the modern Coastal Plain of North Carolina again lay under water, but the ocean receded late in this period. Elsewhere, the southern landmasses broke up, creating the continents of Africa and South America as well as the southern Atlantic Ocean. The youngest ranges of the Rocky Mountains formed.
At the end of the Cretaceous, 65 million years ago, a mass extinction occurred, and the dinosaurs disappeared.
145 Mya
Phanerozoic
Mesozoic
Jurassic
Conifers and ferns were common. Dinosaurs were diverse, including sauropods, carnosaurs, and stegosaurs. Mammals were common but small. The first birds and lizards appeared. Ichthyosaurs and plesiosaurs were diverse in the oceans.
As the North American continent drifted to the northwest, its trailing edge sank under water, and the Atlantic Ocean formed between North America and Africa. The shore was located near the present Outer Banks.
The Appalachians continued to erode, leaving the flat land that now exists in the eastern Piedmont.
200 Mya
Phanerozoic
Mesozoic
Triassic
Dinosaurs appeared and became dominant, as did ichthyosaurs and nothosaurs in the seas and pterosaurs in the air. The first mammals and crocodilia (ancestors of crocodiles and alligators) also appeared.
As soon as they had formed, the Appalachians began to erode. Wind and rain wore away the rock and carried it as sediment to lower-lying land or to the sea. Meanwhile, the continents began to move apart again.
At this time, North Carolina probably lay near the equator, and had a tropical climate in which a great diversity of life must have flourished.
251 Mya
Phanerozoic
Paleozoic
Permian
Amphibians remained common but small. Reptiles, though, grew larger and diversified. Beetles and flies evolved. A number of invertebrates that no longer exist, such as trilobites, flourished in the oceans.
As the climate cooled, the scale trees, which had flourished in near-tropical conditions, declined and nearly became extinct. Conifers thrived in the cooler climates and dominated the forests.
By 260 million years ago, the Appalachian mountains were complete. The resulting mountain range was 620 miles long, stretching from Canada, Great Britain, Greenland, and Scandinavia all the way south to Louisiana, and the mountains were as high as the highest mountains in the world today. Most likely, the tallest peaks were in what is now the eastern Piedmont and Coastal Plain.
A mass extinction occurred 251 million years ago, marking the end of the Permian period. Some 95 percent of life on Earth became extinct, including 75 percent of amphibian species and 80 percent of reptiles. No one knows why this extinction occurred, but some scientists speculate that changing climate and massive mountain building as the continents collided caused great changes to the environment, in which highly specialized species could no longer survive.
299 Mya
Phanerozoic
Paleozoic
Carboniferous/
Pennsylvanian
Winged insects spread, including very large species. Amphibians were common and diverse. The first reptiles appeared.
About 320 million years ago, the North American and Euro-African continents collided, resulting in the last period of Appalachian mountain building. The land under the Piedmont and Coastal Plain was also pushed upward. The continents were united in a "supercontinent" that geologists call Pangaea.
318 Mya
Phanerozoic
Paleozoic
Carboniferous/
Mississippian
In wetland forests, ferns thrived and primitive trees called scale trees grew more than 100 feet high. Their decayed remains became coal. The portions of the Appalachian region where coal is mined today were then covered in such forests.
Meanwhile, the first vertebrates appeared on land, in coastal swamps, and early sharks were common in the oceans.
359 Mya
Phanerozoic
Paleozoic
Devonian
Plants took over the land. The first horsetails and ferns appeared, as did the first seed-bearing plants, the first trees, and the first (wingless) insects. Fish were common and diverse. The first lungfish, which could breathe air, appeared, followed by the first amphibians.
416 Mya
Phanerozoic
Paleozoic
Silurian
The first vascular plants appeared -- plants with specialized tissues for conducting water and nutrients -- along with the first plants on land. The first millipedes appeared on land. Primitive fish, including the first fish with jaws as well as armoured jawless fish, populated the seas. Sea-scorpions reached a large size. Trilobites and mollusks were diverse.
As the continents of North America and Europe/Africa moved together, more rock was pushed upwards, and over the next 100 million years, the Appalachian mountains were formed. As the Appalachians rose, streams carried sand and mud westward and filled the sea.
444 Mya
Phanerozoic
Paleozoic
Ordovician
In the seas, invertebrates diversified into many new types, and the first tiny vertebrates appeared. The first green plants and fungi appeared on land.
488 Mya
Phanerozoic
Paleozoic
Cambrian
The "Cambrian Explosion" saw a major diversification of life. Many fossils survive from this time. The most modern phyla -- the broadest groupings of animals and plants -- appeared, including the first chordates (ancestors of vertebrates). Trilobites, worms, sponges, brachiopods, and many other animals flourished, as did some giant predators.
By this time, the eastern coast of North America lay somewhere in middle Tennessee; except for islands and volcanoes, North Carolina was under water. About 750 million years ago, the landmasses of North America and Europe/Africa had begun moving towards each other again. The Kings Mountain Belt was formed about 540 million years ago as the Piedmont slowly moved into the rest of the continent.
542 Mya
Proterozoic
Neo- proterozoic
The first fossils of multi-celled animals survive from this period. Very simple multi-celled life forms, built from eukaryotic cells, appeared as early as 1000 million years ago, and worm-like animals and the first sponges by about 600 million years ago.
The land under North Carolina was pulled apart, and inland seas emerged. Island volcanoes developed, first along the North Carolina-Virginia border, then in an arc from Virginia to Georgia. Rocks formed by those volcanoes extend today over a wide area of the Piedmont and Coastal Plain. Fossilized tracks of primitive worms have been found in those volcanic rocks, formed about 620 million years ago.
1000 Mya
Meso-proterozoic
Green algae colonies appeared in the seas.
About 1,300 million years ago, the first mountains were formed in North Carolina. Called the Grenville Mountains, they eroded long ago, but rocks formed at this time lie underneath the Appalachians and are exposed in parts of the Piedmont and Coastal Plain.
1600 Mya
Paleo- proterozoic
As oxygen-producing bacteria proliferated, the atmosphere became oxygenic -- filled with oxygen -- for the first time. By about 1800 million years ago, the first complex single-celled life-forms -- cells with nuclei -- emerged.
About 1700 million years ago, the land that would become North Carolina began to form.
2500 Mya
Archean
Simple single-celled life emerged as early as 3,800 million years ago. The first oxygen-producing bacteria emerged -- prior to this time, the earth's atmosphere had much carbon dioxide and little oxygen. The oldest microscopic fossils that have been found are about 3,400 million years old.
The landmass that would become North America began to form. Rocks that survive from this time show evidence of erosion by the first glaciers. As the earth's liquid interior -- the mantle -- continued to move around its solid core, it created forces that shifted the crust -- the thin, rigid surface of the earth. The crust broke into plates that formed the basis of the first continents. Ever since, they have slowly moved around the earth's surface by a process called plate tectonics.
3800 Mya
Hadean
The earth formed about 4,500 million years ago, as a cloud of gas and dust gradually collapsed into the sun and other bodies of our solar system. By 4,000 million years ago, the earth had a stable crust with oceans and a primitive atmosphere, which probably consisted of water vapor, methane, ammonia, carbon dioxide, and only a tiny amount of oxygen.
The first life forms -- probably self-replicating RNA molecules -- may have evolved as early as 4,000 million years ago.
4500 Mya
How do scientists know...
...the age of rocks and fossils?
Radioactive forms of certain elements such as carbon-14 and uranium-235 are not chemically stable; they slowly decay into stable elements by radiating away particles. Scientists have determined through experiment the rate at which these elements decay. Based on the amount of radioactive material left in a rock, fossil, or artifact, they can determine how long ago it was created.
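The age calculation itself follows directly from the half-life: each half-life halves the amount of the radioactive parent isotope, so the age equals the half-life multiplied by the number of halvings needed to reach the measured fraction. A minimal sketch, using standard published half-lives and an invented measurement for illustration:

```python
import math

# Age from radioactive decay: age = half_life * log2(original / remaining).
# The half-lives are standard published values; the 25% figure below is an
# invented example measurement, not data from this article.
HALF_LIVES_YEARS = {
    "carbon-14": 5_730,
    "uranium-235": 703_800_000,
}

def age_from_fraction_remaining(fraction_remaining: float, isotope: str) -> float:
    """Return the age in years of a sample given the fraction of parent isotope left."""
    half_life = HALF_LIVES_YEARS[isotope]
    return half_life * math.log2(1.0 / fraction_remaining)

# A sample retaining 25% of its original carbon-14 is two half-lives old:
print(age_from_fraction_remaining(0.25, "carbon-14"))  # ~11,460 years
```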
...the age of the earth?
The oldest rocks found on earth are 4.4 billion years old, so the earth must have formed at least that long ago. The oldest rocks found in meteorites and brought back from the moon are between 4.5 and 4.6 billion years old, and scientists use that figure as an estimate of when the solar system was formed, and with it the earth.
...when water appeared on the earth?
Rocks found in Greenland have been found to be 3,800 million years old. The rocks are metamorphic -- they were changed by heat and pressure. That process can only occur in the presence of liquid water, and so geologists estimate that by this time the earth had oceans -- and an atmosphere, because otherwise the oceans would have evaporated.
...where the continents and oceans used to be?
In some cases, we can look at the fossil record. For example, if fossils of ocean-dwelling animals are found on dry land, we know that when that animal lived, the land must have been under water. When animals found on different parts of the globe have similar ancestors, scientists may surmise that those parts of the earth were once connected by land. Scientists can also determine how fast and in what direction the earth's plates are moving now, and use that information to develop theories about what happened in the past.
...what the climate was like in the distant past?
During the Ice Age, glaciers left telltale signs in the rocks they covered. Sometimes mineral deposits are laid down only in certain climatic conditions -- for example, salt deposits are laid down primarily when the climate is warm and dry (when water evaporates most quickly). In other cases, the fossil record indicates that the earth (or a particular location on it) must have been warm or cool.
...when various kinds of plants and animals appeared?
Based on dating of fossils, we know when various plants and animals lived. Often, though, fossils are incomplete -- they show only part of a species, and scientists have to make educated guesses about the rest. And the fossil record itself is not complete -- we certainly haven't found fossils of every life form that ever existed, or from the entire period that a given life form existed. So while scientists know that certain species existed at certain times, there is a tremendous amount they don't know.
...how different species are related?
Scientists classify species based on their ancestry and evolution: two species are more closely related if they have a more recent common ancestor. The most obvious way to guess that two species have a common ancestor is their morphology -- what they look like and how they are constructed. But relying on morphology alone can be dangerous: although dolphins and sharks look much alike, they are only distantly related -- sharks evolved hundreds of millions of years ago from early fishes, while dolphins evolved much more recently from land-dwelling mammals. Sometimes a complete fossil record shows stages in a species' evolution, through which it can be traced to a more distant ancestor. More recently, scientists have used DNA testing: by comparing the genomes (genetic makeup) of two species, they can determine how closely related the two species are.
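As a toy illustration of the DNA-comparison idea, the sketch below counts the fraction of differing sites between short, already-aligned sequences. The sequences are invented, and real analyses use long alignments and statistical models of substitution, but the principle is the same: fewer differences suggest a more recent common ancestor.

    def p_distance(seq_a, seq_b):
        """Fraction of aligned sites that differ between two equal-length sequences."""
        if len(seq_a) != len(seq_b):
            raise ValueError("sequences must be aligned to the same length")
        differences = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
        return differences / len(seq_a)

    # Hypothetical aligned fragments: species A is genetically closer to B than to C.
    species_a = "ATGCTAGCTAGGCTA"
    species_b = "ATGCTAGCTAGGTTA"
    species_c = "ATGATAGCTCGGTTA"
    print(p_distance(species_a, species_b))  # ~0.07 -> more recent common ancestor
    print(p_distance(species_a, species_c))  # 0.20  -> more distant relative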
Trilobite
The structure of a trilobite, including the antennae and legs, can be seen in the Burgess Shale, a rock formation found in the Canadian Rockies in 1909.
|
Amphibians were common and diverse. The first reptiles appeared.
About 320 million years ago, the North American and Euro-African continents collided, resulting in the last period of Appalachian mountain building. The land under the Piedmont and Coastal Plain was also pushed upward. The continents were united in a "supercontinent" that geologists call Pangaea.
318 Mya
Phanerozoic
Paleozoic
Carboniferous/Mississippian
In wetland forests, ferns thrived and primitive trees called scale trees grew more than 100 feet high. Their decayed remains became coal. The portions of the Appalachian region where coal is mined today were then covered in such forests.
Meanwhile, the first vertebrates appeared on land, in coastal swamps, and early sharks were common in the oceans.
359 Mya
Phanerozoic
Paleozoic
Devonian
Plants took over the land. The first horsetails and ferns appeared, as did the first seed-bearing plants, the first trees, and the first (wingless) insects. Fish were common and diverse. The first lungfish, which could breathe air, appeared, followed by the first amphibians.
416 Mya
Phanerozoic
Paleozoic
Silurian
The first vascular plants appeared -- plants with specialized tissues for conducting water and nutrients -- along with the first plants on land. The first millipedes appeared on land. Primitive fish, including the first fish with jaws as well as armoured jawless fish, populated the seas. Sea-scorpions reached a large size. Trilobites and mollusks were diverse.
As the continents of North America and Europe/Africa moved together, more rock was pushed upwards, and over the next 100 million years, the Appalachian mountains were formed.
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://www.miguasha.ca/mig-en/land-based_communities.php
|
Land-based communities - Miguasha
|
Land-based communities
Vertebrates, particularly the first tetrapods, are often credited with conquering land. Yet invertebrates were the first true invaders, having crept onto the continents long before the vertebrates.
Clues left behind in the form of trace fossils tell us that some invertebrates made timid excursions out of the water as early as Ordovician time. It was in the Silurian Period, however, that significant numbers of small arthropods were evolving in the open air after developing respiratory structures. They included spiders, acarids (mites), springtails (so-called garden fleas) and millipedes. Their emergence followed soon after plants began to spread beyond the aquatic world, and fossilized arthropod excrement and digestive systems reveal that many of the arthropods ate these first land plants, including their spores. In doing so, they assisted in the decomposition of organic matter and helped to form the first soils.
The emergence of arthropods onto land occurred more than just once. Evidence from fossils demonstrates that separate groups made the transition from their ancestral aquatic home at different times. Beginning as small creatures highly dependent on damp environments at the start of the Devonian, arthropods became more impressive by the Middle Devonian. The fossil record from that time includes land scorpions, primitive spiders called trigonotarbids, large millipedes measuring several centimetres long, and even the first insects, which resemble today's silverfish.
The first land snails also appeared during the Devonian, and were the only group of molluscs to conquer land.
The inventory of the small animals that once populated the shores of the ancient Miguasha estuary is still incomplete. We know with certainty that scorpions and millipedes were among them, but others have only left partial evidence: acid dissolution has revealed abundant pieces of chitin from as yet unidentified arthropods in specific layers of the Escuminac Formation.
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://www.nature.com/articles/d41586-022-03080-1
|
A trove of ancient fish fossils helps trace the origin of jaws
|
Hear the latest from the world of science with Benjamin Thompson.
00:45 Piecing together the early history of jawed vertebrates
A wealth of fossils discovered in southern China sheds new light on the diversity of jawed and jawless fish during the Silurian period, more than 400 million years ago. Nature editor Henry Gee explains the finds and what they mean for the history of jawed vertebrates like us.
11:27 A lack of evidence in transgender policy making
Around the world, many laws are being proposed – and passed – regarding the rights of transgender people to participate in various aspects of society. We talk to Paisley Currah, who has written a World View for Nature arguing that these policies are frequently not backed up by data, and that policy affecting trans people’s lives needs to take a more evidence-based approach.
Transcript
Welcome back to the Nature Podcast. This week: what a trove of fossil fish reveals about the evolution of jawed vertebrates, and why policy affecting trans people’s lives needs to take a more evidence-based approach. I’m Benjamin Thompson.
[Jingle]
Host: Benjamin Thompson
Humans are – according to Nature editor Henry Gee – a particular kind of specialised bony fish. Go back far enough, and our distant ancestors were swimming the seas, having successfully evolved bones, while another group were exploring the options of a cartilaginous skeleton – the ancestors of today’s sharks and rays. The period in which a lot of this evolution and diversification was happening is called the Silurian, which began 443.8 million years ago. But exactly what was going on with fish during the Silurian has never quite been clear. Reporter Shamini Bundell spoke to Henry Gee to find out about some new fossils that could shed light on the matter.
Interviewer: Shamini Bundell
Thanks for chatting to me, Henry. I wanted to get you on because we have four papers coming through in Nature this week about new fossils from the Silurian period.
Interviewee: Henry Gee
That's correct, yeah. Not one, not two, not three, but four papers.
Interviewer: Shamini Bundell
And so, the Silurian is pretty key to this story, I think. So, this was a period when, on land, there were sort of plants and arthropods and insects kind of taking over, and the sea was full of weird and wonderful fish creatures, which is mostly what we're going to be talking about. But I wondered if you could sort of set the scene a bit for this sort of Silurian story and maybe introduce us to what was going on at that time and some of the kinds of characters we might be talking about?
Interviewee: Henry Gee
Cast your mind back, if you will, to the Devonian period, which was after the Silurian period, which began about 490 million years ago. That's generally known as the age of fishes. But the Silurian period just before that was when a lot of fishes originated, but we haven't got many really good fossils of them. They tend to be a bit scruffy and scrappy and fragmentary, which is inconvenient because a lot of the major evolution in early fishes was happening about then.
Interviewer: Shamini Bundell
And a particularly successful evolutionary invention of this kind of time is jaws, basically, so jawed fish.
Interviewee: Henry Gee
The earliest vertebrates didn't have jaws. Their mouths were kind of suckers. And only two kinds of jawless vertebrates survive today – they're the lamprey and the hagfish.
Interviewer: Shamini Bundell
So, we want to know what was going on in the Silurian that led to all the varieties of jawed fish and jawed creatures now, which includes us, that we see today.
Interviewee: Henry Gee
This is what these papers are all about. Now, these days, there are two kinds of jawed vertebrates. There are the bony fishes, so cod, halibut, sturgeons, seahorses. They're all bony fishes. And then there are the cartilaginous fishes, the sharks and rays. But back in the day, there were two other extinct groups of jawed vertebrates. There were these little tiddly fish called acanthodians, or spiny sharks. They have no internal bony skeleton so they're quite hard to get a grasp on. And the other group are placoderms. These were armoured jawed fishes, and it is likely that all the other groups of jawed vertebrates arose from somewhere in the radiation of placoderms.
Interviewer: Shamini Bundell
So, let's get on to these four papers then. So, this is a team of palaeontologists in China who have discovered several new fossils. How did this come about?
Interviewee: Henry Gee
Yeah, Min Zhu and his crew have hit on a fantastic fish bed – a layer of bones dating from the early Silurian near Chongqing in South China. And rather than just the usual scrappy mess, they found entire articulated skeletons of fishes that really allow us to get a much better view of early fish life.
Interviewer: Shamini Bundell
What's your favourite finding of the four fossil papers?
Interviewee: Henry Gee
Well, one of the most intriguing is going to be a fish called Xiushanosteus, which is a placoderm – one of those early jawed vertebrates probably from which all the others sprang somehow. This one was only a tiddler, about 3 centimetres long.
Interviewer: Shamini Bundell
Tiny.
Interviewee: Henry Gee
Yeah, this one was quite small, but there are lots of different kinds of placoderms, and the relationships between all of them is very kind of contested. But this one, it seems to combine in one body a lot of features of several otherwise disparate groups of placoderms, and also shows signs of bony fishness, and this is going to cause a certain amount of head scratching in the fossil fish community.
Interviewer: Shamini Bundell
So, that's the first of the papers. Tell me a bit about the other fossils.
Interviewee: Henry Gee
There’s Shenacanthus – another little fish that looks like a very early representative of the cartilaginous fish, but it has big dorsal fin plates like you don't see in sharks. This is a placoderm thing. And then we got Qianodus, which is basically a load of teeth. These are the earliest jawed vertebrate teeth anywhere in the fossil record. And then there's another one called Fanjingshania, which looks like one of these spiny sharks or acanthodians, but there are no teeth associated with it. So, when this was going through review, the referee said, ‘Hang about, maybe Qianodus is the teeth that Fanjingshania hasn't got,’ but Zhu et al. convinced everyone that actually that couldn't be the case. So, Fanjingshania, is a kind of toothless acanthodian. And then there's another great thing which is not a jawed fish at all. It's a jawless fish called a galeaspid. Now, the thing about galeaspids is they were only known from their head shields. No other soft part has ever been known. But this one has got the whole fish. And on each side, on the kind of bottom edge of each side, is a fold, like a fold of fins on each side like go faster stripes, and this is kind of interesting because it looks like a precursor to what happened in jawed vertebrates, which is the evolution of paired fins – two at the front and two at the back – which turned into our arms and legs. And the fun thing is they made some reconstructions of this fish and tested it aerodynamically, and it would have generated lift. It would have been a useful thing. And this is the first time from this unique fish bed that we have a whole one.
Interviewer: Shamini Bundell
Does it confuse at all the story of their relations because our sort of modern understanding, we have a very clear tree of evolutionary relationships, where you've got first they invented the jaws, then you've got the placoderms and then bony fish and cartilaginous fish. Has that picture changed at all?
Interviewee: Henry Gee
Oh, very, yes, Shamini. This picture is always changing. Many of these early fishes, they're not clearly definable into modern bony fish, modern cartilaginous fish and so on. They tend to have mixtures of features. So, this early tiny shark thing, Shenacanthus, had bony plates like a placoderm. And this early placoderm had characteristics of bony fish. So, you tend to see that evolution was kind of experimenting. All these groups hadn't quite parted ways yet. They hadn't become as distinct as they are now. And these new finds, to use that well-worn phrase, will raise more questions than they answer.
Interviewer: Shamini Bundell
So, this Silurian story is still being rewritten, with new characters popping up all the time.
Interviewee: Henry Gee
This discovery of this new locality is going to produce lots more, I'm sure, in the future, and it's an entirely new, refreshing window into the past.
Host: Benjamin Thompson
That was Henry Gee, a senior editor at the Nature journal, talking with Shamini Bundell. You can find links to all four of the papers they discussed in the show notes. Next up on the podcast, Dan Fox is here with this week’s Research Highlights.
[Jingle]
Dan Fox
Some people with a rare genetic condition have heightened musical and verbal abilities. And now, thanks to studies in mice, we might know why. Williams-Beuren syndrome is a condition caused by the absence of a specific chunk of genome. This can erase one copy of up to 27 genes and lead to cognitive deficits. But it can also enhance music skills. Researchers studied the auditory cortex – the brain’s sound-processing centre – in mice missing the equivalent genes. They found that the loss of one copy of one gene enhanced the rodents’ ability to distinguish different sound frequencies. The team suggest that losing this gene reduces the levels of a specific protein, which changes the function of some auditory cortex neurons. This change made the mice highly sensitive to small shifts in the frequency of a tone. Read that research in full in Cell.
[Jingle]
Dan Fox
High-resolution imaging has revealed the secrets of how honeybees build their honeycombs. Honeycombs are one of nature's best engineered structures, offering strength for minimal material use. They weigh less than a sheet of paper when empty, but can hold several kilograms of bees’ honey and nectar. To watch a comb’s evolution, researchers used high-energy X-rays to create 3D images with micrometre-scale resolution. These revealed that bees first create a corrugated vertical structure that acts as the comb’s foundation. Its bumps and depressions form a pattern of hexagons on which bees deposit bulbs of wax, and then stretch the wax like pizza dough to build the honeycomb cells’ walls. Construction goes from top to bottom, with the bees reinforcing the foundation with more wax as the comb grows. The team say that these insights could lead to improvements in the structural design of synthetic materials. You don't need to comb the web for that research. It’s in Advanced Materials.
[Jingle]
Host: Benjamin Thompson
Next up, reporter Adam Levy is looking at the lack of evidence in transgender policy.
Interviewer: Adam Levy
What does good policy look like? Of course, any policy has to take into account a host of considerations, from the ethical to the social. But for many, good policies should also be backed up by evidence. What are the impacts of a policy, and are the justifications for a policy supported by the data? And this question is particularly relevant right now for trans people. Transgender people are people whose gender doesn’t line up with the gender they were assigned at birth. So, for example, a trans woman is a woman who was assigned male at birth. That’s as compared to a cisgender person, whose gender is the same as they were assigned at birth. Around the world, many laws are being proposed and passed on the rights of transgender people to participate in various aspects of society. But how well does the available evidence back up all these new policies? Well, not so well, according to Paisley Currah, researcher of political science and women's and gender studies. He's got a World View out in this week's Nature, arguing that policy over trans people's lives needs to take a more evidence-based approach. I caught up with him, and we started by discussing what kinds of policies we're talking about, particularly in the US.
Interviewee: Paisley Currah
In the United States, we've seen, for example, a raft of bills in state legislatures – almost 20 have passed – that have said trans girls can't play on women's and girls' teams in women's and girls' sports. Another example would be some of the legislatures and more local libraries and school boards banning the teaching of anything related to transgender people. So, we have quite a lot of stuff happening on that end.
Interviewer: Adam Levy
Now, for sports, for example, just how big a thing even is this in the first place, the idea of trans people and especially trans girls playing sports with other children of their gender, other girls?
Interviewee: Paisley Currah
Right, well, in terms of the political legislative response, it's really like a solution in search of a problem. So, for example, in Utah, they have 75,000 students in Utah who play high school sports. They have one transgender girl, and there were no issues raised about that transgender girl, yet the Utah legislature passed a bill banning trans girls from playing girls’ sports. So, when the governor of Utah, who is a Republican, vetoed the bill, he said never had so much ire been directed at so few. And unfortunately, the legislature overrode his veto and it became law. The political response is out of whack with what's going on, on the ground.
Interviewer: Adam Levy
And recently, actually, in August, a judge reversed this law, and so there's been a lot of back and forth here. But regardless, a lot of laws governing the lives of trans people are being pushed forward and in many cases are passing. In your World View, you argue that these policies are running counter to the research, counter to the data. Take a particular spate of policies around which people can use which bathrooms. What does the data say here?
Interviewee: Paisley Currah
Right, well, there's no good solid data saying that allowing trans people to use the bathroom associated with their gender identity poses a problem. For example, there was a study that looked at jurisdictions in Massachusetts that had laws that ensured that people could use the bathroom associated with their gender identity, and they compared those jurisdictions with jurisdictions that didn't have that. And they found no evidence that these laws put anyone, including women, at risk, and that the fears of safety were completely unfounded.
Interviewer: Adam Levy
The other issue we mentioned was which people can play which sports. This question is often asked in a very blanket way: should trans people, especially trans women, be able to compete in sports with people of their own gender? What kind of research is there in this area?
Interviewee: Paisley Currah
Unfortunately, the discussion sometimes isn't about comparing cisgender women to transgender women and what advantages transgender women might have or what disadvantages they might have. Some of the policymaking actually focuses on comparing cisgender men with cisgender women, and that's like a different question. Trans women are not the same as cis men.
Interviewer: Adam Levy
Now, what about something really fundamental – healthcare and healthcare in particular for transgender people. For example, hormone therapies, which are designed to alter an individual's hormones to levels that better reflect their gender identity. How is policy now aligning, or perhaps I should say misaligning, with research?
Interviewee: Paisley Currah
I think maybe that's the most extreme in the bans on gender-affirming care we see, especially in the United States. For example, policymakers will describe hormone therapy as experimental and not proven and unsafe. And like 22 major medical associations have said, ‘No, we've been doing this for a long time. It's not experimental. It's safe. It leads to good outcomes.’ But like with other issues where science is involved, it becomes a he said she said thing. And what's interesting is that conservative judges, sometimes appointed by President Trump even, when they look at these issues in a court and they're faced with the evidence, they've ruled against the conservative lawmakers again and again. So, it's really kind of a sign that the lawmakers are kind of turning transgender issues into a political issue that’s got very little to do with the evidence.
Interviewer: Adam Levy
Now, some people will say that this is purely a political issue and perhaps question why we're talking about it on the Nature Podcast and talking about it in the pages of Nature. Why do you think these questions are questions that the research community ought to think about?
Interviewee: Paisley Currah
I think the research community is concerned with people and the harms that people suffer. So, I do think it's a mistake and it doesn't really help anyone. We get into these abstract fights about what is gender and what is sex, and that becomes politicised, it becomes all about culture. I think it's so important for us to focus on the harms that actual people face when they're denied gender-affirming care, or they're like some seventh-grade volleyball player who's told that they can't play in the girls’ volleyball team. And then when you bring it down to that level, we can see the importance of research to show that these policies are really not needed.
Interviewer: Adam Levy
Do you think that research and data are the only things that should be informing these kinds of policies?
Interviewee: Paisley Currah
No, because I think ultimately, it's also a human rights issue. So, I think we have to always kind of keep that as a backdrop to make sure that every individual has the right to affirm their gender identity and to express their gender. So, for example, with bathrooms, the biggest victims of harassment in bathrooms are often gender non-conforming cisgender women. They're the ones who are threatened, assaulted, chased out of bathrooms, and we don't need research to say that gender non-conforming cisgender women should be allowed to use the women’s bathroom. That's just a human rights point.
Host: Benjamin Thompson
That was Paisley Currah from the City University of New York. To check out his World View, look for a link in the show notes. And that's all we've got time for this week. But just before we go, time to mention a new video on our YouTube channel about research trying to crack the nature of consciousness by dosing volunteers with psychedelic drugs and scanning their brains. Look out for a link to that in the show notes. Don't forget, you can keep in touch with us over on Twitter – we're @NaturePodcast – or you can send us an email to [email protected] I'm Benjamin Thompson. See you all next time.
|
Interviewer: Shamini Bundell
Thanks for chatting to me, Henry. I wanted to get you on because we have four papers coming through in Nature this week about new fossils from the Silurian period.
Interviewee: Henry Gee
That's correct, yeah. Not one, not two, not three, but four papers.
Interviewer: Shamini Bundell
And so, the Silurian is pretty key to this story, I think. So, this was a period when, on land, there were sort of plants and arthropods and insects kind of taking over, and the sea was full of weird and wonderful fish creatures, which is mostly what we're going to be talking about. But I wondered if you could sort of set the scene a bit for this sort of Silurian story and maybe introduce us to what was going on at that time and some of the kinds of characters we might be talking about?
Interviewee: Henry Gee
Cast your mind back, if you will, to the Devonian period, which was after the Silurian period, which began about 490 million years ago. That's generally known as the age of fishes. But the Silurian period just before that was when a lot of fishes originated, but we haven't got many really good fossils of them. They tend to be a bit scruffy and scrappy and fragmentary, which is inconvenient because a lot of the major evolution in early fishes was happening about then.
Interviewer: Shamini Bundell
And a particularly successful evolutionary invention of this kind of time is jaws, basically, so jawed fish.
Interviewee: Henry Gee
The earliest vertebrates didn't have jaws. Their mouths were kind of suckers. And only two kinds of jawless vertebrates survive today – they're the lamprey and the hagfish.
Interviewer: Shamini Bundell
So, we want to know what was going on in the Silurian that led to all the varieties of jawed fish and jawed creatures now, which includes us, that we see today.
Interviewee: Henry Gee
This is what these papers are all about. Now, these days,
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://earthlyuniverse.com/silurian-earth-first-breath-of-air/
|
Silurian Earth – The First Breath of Air - Earthly Universe
|
Silurian Earth – The First Breath of Air
Marine life thrived throughout the Silurian period, with nautiloids being among the most successful and largest predators of the time.
Following the devastating mass extinctions at the end of the Ordovician Period, the glaciers covering the ancient land of Gondwana receded, and another period of intense global warming began. Afterwards, the very first animals started to settle the land, forming the earliest terrestrial ecosystems and jumpstarting a new phase in the evolution of our planet. In the sixth part of my “A Journey through the History of Earth” series, we’ll be exploring the first of these primordial land-based biomes.
Despite being by far the shortest of the Palaeozoic periods, lasting less than 25-million years, the Silurian saw one of the most important evolutionary events in the history of our world. Although the Ordovician had seen the first primordial mosses colonise coastal areas around the world and the first curious arthropods had started to explore the land, it wasn’t until the middle of the Silurian that the first terrestrial ecosystems became developed enough to function independently of the sea.
The Silurian is the third geological period of the Phanerozoic aeon and the third of the Palaeozoic Era. Like the Ordovician and the Cambrian before it, the name ‘Silurian’ was inspired by the country of Wales where many fossils dating from this time have been identified. The Silurian period was first described and identified in 1835, and it was named after the ancient Celtic Silures tribe, who were contemporaries of the Ordovices some 2,000 years ago.
Highlights of the Silurian
Rapid global warming
Evolution of the first bony fish
First vascular plants settle the land
The first sharks
Giant fungus dominates terrestrial ecosystems
First creature to take a breath of air
Global Warming Redefines the Path of Evolution
443.8-million years ago, the glaciers of the Late Ordovician ice age started to melt, and the sea level rose rapidly, reaching a peak some 590 feet (180 m) higher than it is today. Once again, Earth went through a period of intense global warming, lifting the shackles on evolution and allowing early arthropods and other invertebrates to once again continue their exploration of the land. At this time, by far the largest continent was Gondwana, which included what is now Antarctica and Australia and lay across the southern hemisphere. The smaller continents of Siberia and Baltica shrank with the rising sea levels, gradually drifting further north into the vast Panthalassic Ocean.
During the Early Silurian, the only known multicellular organisms that had permanently adapted to life on land were tiny liverwort-like plants forming mossy growths around the shorelines. Nonetheless, the spread of such organisms formed an essential foundation for the first truly land-based ecosystems. Until then, the primitive terrestrial plant life of the Ordovician and Early Silurian was still heavily reliant on the water.
Oxygen levels in the Earth’s atmosphere continued to rise slowly but steadily thanks to the continued spread of photosynthetic organisms. At the same time, early plant life made its journey from the tidal shallows and gradually spread further inland as it became less dependent on the ocean’s waters for sustenance and reproduction. Nonetheless, oxygen still only accounted for 14% of the atmosphere during the Silurian, which is some 30% less than it is today.
Earth continued to warm throughout the first half of the Silurian period, eventually reaching an average global temperature some 3 °C higher than it is today. As the planet recovered from the ice age, life once again started to thrive and evolve, and the dark times of the Late Ordovician extinction event, one of the most severe in Earth’s history, were long behind.
Miniature Forests Crawl across the Land
Cooksonia is by far the best-known and most iconic plant fossil of the Silurian period. In real life, the plant was extremely small.
Cooksonia is perhaps the most iconic of all Silurian fossils. One of the earliest known true plants, this tiny leafless organism quickly colonised shorelines in many parts of the world during the middle of the Silurian period. Several species have been identified, and it’s widely believed that they grew in great abundance. Nonetheless, the largest were no longer than a couple of inches (5 cm), forming expanses of miniaturised ‘forests’ in swampy areas. Cooksonia is most notable for being one of the earliest known vascular plants (tracheophytes), the group that includes trees and all other land plants with specialized tissues for conducting water and nutrients, as well as waxy layers that prevent water from escaping – something that’s essential for land-based life.
Arthur Weasley
Guiyu oneiros is one of the earliest bony fish known. Living during the Late Silurian around 419-million years ago, it was also one of the largest fish of its time.
While tiny plants were crawling out of the shallows, life in the oceans continued to expand and diversify, with coral reefs stretching far and wide and giving rise to ever more sophisticated ecosystems. Silurian sea life included the first bony fish, the foot-long (30 cm) Guiyu oneiros being one of the largest and best known. Most notable, however, were the eurypterid sea scorpions, a highly successful order of marine predators distantly related to arachnids. The ancestors of sharks also appeared during the Silurian, although there is evidence that the earliest sharks had their beginnings in the Ordovician. Other already well-established groups, such as nautiluses, marine gastropods, trilobites and brachiopods, also continued to thrive and diversify throughout the Silurian.
The First Ever Breath of Air
Pneumodesmus newmani is the first known animal to have lived permanently on land. The tiny creature was no larger than a woodlouse, and probably fed on mosses.
In 2004, palaeontologists in Scotland found definitive evidence of the earliest animal known to live on dry land. The fossil was 428 million years old, and it belonged to a millipede one centimetre long. Even more remarkably, this discovery pushed back the date of the first terrestrial animal by some 20 million years. Named Pneumodesmus newmani after its amateur palaeontologist discoverer Martin Newman, this animal was one of the earliest to breathe air, representing a profound step forward in the evolution of life. Indeed, it might have just been a tiny millipede, but it’s incredible to think that, if it hadn’t been for this enterprising little character, evolution may have taken a very different course.
The latter half of the Silurian was moderately warm, although there was probably still a southern polar icecap covering a part of what is now Africa. Oxygen levels were continuing to increase due to the spread of early land plants, and high carbon dioxide levels kept the world in a strong greenhouse climate with high sea temperatures. These factors combined, along with the essential role played by the lunar tides, to encourage the evolution of larger animals that would eventually migrate out of the tidal shallows and colonise the land to such an extent that they would transform it beyond recognition in the following Devonian period.
Joining Cooksonia in its conquest of the land was a now long-extinct clubmoss known as Baragwanathia, another vascular plant and one that grew to over a metre in length. Like the otherwise unrelated Cooksonia, it spread its spores on the wind to reproduce, meaning that it was independent of the oceans.
Giant Mushrooms Take Over
Prototaxites was long assumed to be a primitive plant until it was eventually determined to be a tree-sized fungus.
In the mid-nineteenth century, a bizarre discovery was made of what looked like an extremely ancient fossilized trunk of a conifer dating from the Late Silurian. For almost 150 years, it was assumed to be a very early tree, but the fact that it was much, much bigger than any other terrestrial organism of the time kept everyone baffled. Prototaxites, as it was named, grew up to 26 feet (8 metres) in height and had a trunk-like structure up to 3 feet (1 metre) wide.
A century and a half after its discovery, Prototaxites was eventually determined to be a fungus, probably belonging to the phylum Nematophyta, which included land-based algae from as early as the Cambrian period. A lot of unanswered questions remain surrounding this incredibly bizarre lifeform, but one thing seems certain: the Late Silurian landscape was dominated by spire-shaped pillars of life that were among the largest fungi that have ever existed.
Conclusion
The Silurian ended 419.2-million years ago with the end of the Přídolí Epoch, so named after a region near the Czech capital Prague where extensive fossils of cephalopods, bivalves and trilobites were found. Although terrestrial life was still scarce, and had yet to make a significant impact on regions further inland, that was about to change dramatically. Soon, the alien world that was the Silurian Earth would end up being covered by vast swathes of primordial forests, characterising the beautifully colourful Devonian period that we'll be exploring in the next episode.
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://pressbooks-dev.oer.hawaii.edu/biology/chapter/early-plant-life/
|
Early Plant Life – Biology
|
Learning Objectives
Describe the timeline of plant evolution and the impact of land plants on other living things
The kingdom Plantae constitutes large and varied groups of organisms. There are more than 300,000 species of catalogued plants. Of these, more than 260,000 are seed plants. Mosses, ferns, conifers, and flowering plants are all members of the plant kingdom. Most biologists also consider green algae to be plants, although others exclude all algae from the plant kingdom. The reason for this disagreement stems from the fact that only green algae, the Charophytes, share common characteristics with land plants (such as using chlorophyll a and b plus carotene in the same proportion as plants). These characteristics are absent in other types of algae.
Evolution Connection
Algae and Evolutionary Paths to Photosynthesis
Some scientists consider all algae to be plants, while others assert that only the Charophytes belong in the kingdom Plantae. These divergent opinions are related to the different evolutionary paths to photosynthesis selected for in different types of algae. While all algae are photosynthetic—that is, they contain some form of a chloroplast—they didn’t all become photosynthetic via the same path.
The ancestors to the green algae became photosynthetic by endosymbiosing a green, photosynthetic bacterium about 1.65 billion years ago. That algal line evolved into the Charophytes, and eventually into the modern mosses, ferns, gymnosperms, and angiosperms. Their evolutionary trajectory was relatively straight and monophyletic. In contrast, the other algae—red, brown, golden, stramenopiles, and so on—all became photosynthetic by secondary, or even tertiary, endosymbiotic events; that is, they endosymbiosed cells that had already endosymbiosed a cyanobacterium. These latecomers to photosynthesis are parallels to the Charophytes in terms of autotrophy, but they did not expand to the same extent as the Charophytes, nor did they colonize the land.
The different views on whether all algae are Plantae arise from how these evolutionary paths are viewed. Scientists who solely track evolutionary straight lines (that is, monophyly), consider only the Charophytes as plants. To biologists who cast a broad net over living things that share a common characteristic (in this case, photosynthetic eukaryotes), all algae are plants.
Plant Adaptations to Life on Land
As organisms adapted to life on land, they had to contend with several challenges in the terrestrial environment. Water has been described as “the stuff of life.” The cell’s interior is a watery soup: in this medium, most small molecules dissolve and diffuse, and the majority of the chemical reactions of metabolism take place. Desiccation, or drying out, is a constant danger for an organism exposed to air. Even when parts of a plant are close to a source of water, the aerial structures are likely to dry out. Water also provides buoyancy to organisms. On land, plants need to develop structural support in a medium that does not give the same lift as water. The organism is also subject to bombardment by mutagenic radiation, because air does not filter out ultraviolet rays of sunlight. Additionally, the male gametes must reach the female gametes using new strategies, because swimming is no longer possible. Therefore, both gametes and zygotes must be protected from desiccation. The successful land plants developed strategies to deal with all of these challenges. Not all adaptations appeared at once. Some species never moved very far from the aquatic environment, whereas others went on to conquer the driest environments on Earth.
To balance these survival challenges, life on land offers several advantages. First, sunlight is abundant. Water acts as a filter, altering the spectral quality of light absorbed by the photosynthetic pigment chlorophyll. Second, carbon dioxide is more readily available in air than in water, since it diffuses faster in air. Third, land plants evolved before land animals; therefore, until dry land was colonized by animals, no predators threatened plant life. This situation changed as animals emerged from the water and fed on the abundant sources of nutrients in the established flora. In turn, plants developed strategies to deter predation: from spines and thorns to toxic chemicals.
Early land plants, like the early land animals, did not live very far from an abundant source of water and developed survival strategies to combat dryness. One of these strategies is called tolerance. Many mosses, for example, can dry out to a brown and brittle mat, but as soon as rain or a flood makes water available, mosses will absorb it and are restored to their healthy green appearance. Another strategy is to colonize environments with high humidity, where droughts are uncommon. Ferns, which are considered an early lineage of plants, thrive in damp and cool places such as the understory of temperate forests. Later, plants moved away from moist or aquatic environments using resistance to desiccation, rather than tolerance. These plants, like cacti, minimize the loss of water to such an extent they can survive in extremely dry environments.
The most successful adaptation solution was the development of new structures that gave plants the advantage when colonizing new and dry environments. Four major adaptations are found in all terrestrial plants: the alternation of generations, a sporangium in which the spores are formed, a gametangium that produces haploid cells, and apical meristem tissue in roots and shoots. The evolution of a waxy cuticle and a cell wall with lignin also contributed to the success of land plants. These adaptations are noticeably lacking in the closely related green algae—another reason for the debate over their placement in the plant kingdom.
Alternation of Generations
Alternation of generations describes a life cycle in which an organism has both haploid and diploid multicellular stages ([link]).
Alternation of generations between the 1n gametophyte and 2n sporophyte is shown. (credit: Peter Coxhead)
Haplontic refers to a lifecycle in which there is a dominant haploid stage, and diplontic refers to a lifecycle in which the diploid is the dominant life stage. Humans are diplontic. Most plants exhibit alternation of generations, which is described as haplodiplodontic: the haploid multicellular form, known as a gametophyte, is followed in the development sequence by a multicellular diploid organism: the sporophyte. The gametophyte gives rise to the gametes (reproductive cells) by mitosis. This can be the most obvious phase of the life cycle of the plant, as in the mosses, or it can occur in a microscopic structure, such as a pollen grain, in the higher plants (a common collective term for the vascular plants). The sporophyte stage is barely noticeable in lower plants (the collective term for the plant groups of mosses, liverworts, and hornworts). Towering trees are the diplontic phase in the lifecycles of plants such as sequoias and pines.
Protection of the embryo is a major requirement for land plants. The vulnerable embryo must be sheltered from desiccation and other environmental hazards. In both seedless and seed plants, the female gametophyte provides protection and nutrients to the embryo as it develops into the new generation of sporophyte. This distinguishing feature of land plants gave the group its alternate name of embryophytes.
Sporangia in Seedless Plants
The sporophyte of seedless plants is diploid and results from syngamy (fusion) of two gametes. The sporophyte bears the sporangia (singular, sporangium): organs that first appeared in the land plants. The term “sporangia” literally means “spore in a vessel,” as it is a reproductive sac that contains spores [link]. Inside the multicellular sporangia, the diploid sporocytes, or mother cells, produce haploid spores by meiosis, where the 2n chromosome number is reduced to 1n (note that many plant sporophytes are polyploid: for example, durum wheat is tetraploid, bread wheat is hexaploid, and some ferns are 1000-ploid). The spores are later released by the sporangia and disperse in the environment. Two different types of spores are produced in land plants, resulting in the separation of sexes at different points in the lifecycle. Seedless non-vascular plants produce only one kind of spore and are called homosporous. The gametophyte phase is dominant in these plants. After germinating from a spore, the resulting gametophyte produces both male and female gametangia, usually on the same individual. In contrast, heterosporous plants produce two morphologically different types of spores. The male spores are called microspores, because of their smaller size, and develop into the male gametophyte; the comparatively larger megaspores develop into the female gametophyte. Heterospory is observed in a few seedless vascular plants and in all seed plants.
Spore-producing sacs called sporangia grow at the ends of long, thin stalks in this photo of the moss Esporangios bryum. (credit: Javier Martin)
When the haploid spore germinates in a hospitable environment, it generates a multicellular gametophyte by mitosis. The gametophyte supports the zygote formed from the fusion of gametes and the resulting young sporophyte (vegetative form). The cycle then begins anew.
The spores of seedless plants are surrounded by thick cell walls containing a tough polymer known as sporopollenin. This complex substance is characterized by long chains of organic molecules related to fatty acids and carotenoids: hence the yellow color of most pollen. Sporopollenin is unusually resistant to chemical and biological degradation. In seed plants, which use pollen to transfer the male sperm to the female egg, the toughness of sporopollenin explains the existence of well-preserved pollen fossils. Sporopollenin was once thought to be an innovation of land plants; however, the green alga Coleochaete forms spores that contain sporopollenin.
Gametangia in Seedless Plants
Gametangia (singular, gametangium) are structures observed on multicellular haploid gametophytes. In the gametangia, precursor cells give rise to gametes by mitosis. The male gametangium (antheridium) releases sperm. Many seedless plants produce sperm equipped with flagella that enable them to swim in a moist environment to the archegonia: the female gametangium. The embryo develops inside the archegonium as the sporophyte. Gametangia are prominent in seedless plants, but are very rarely found in seed plants.
Apical Meristems
Shoots and roots of plants increase in length through rapid cell division in a tissue called the apical meristem, which is a small zone of cells found at the shoot tip or root tip ([link]). The apical meristem is made of undifferentiated cells that continue to proliferate throughout the life of the plant. Meristematic cells give rise to all the specialized tissues of the organism. Elongation of the shoots and roots allows a plant to access additional space and resources: light in the case of the shoot, and water and minerals in the case of roots. A separate meristem, called the lateral meristem, produces cells that increase the diameter of tree trunks.
Addition of new cells in a root occurs at the apical meristem. Subsequent enlargement of these cells causes the organ to grow and elongate. The root cap protects the fragile apical meristem as the root tip is pushed through the soil by cell elongation.
Additional Land Plant Adaptations
As plants adapted to dry land and became independent from the constant presence of water in damp habitats, new organs and structures made their appearance. Early land plants did not grow more than a few inches off the ground, competing for light on these low mats. By developing a shoot and growing taller, individual plants captured more light. Because air offers substantially less support than water, land plants incorporated more rigid molecules in their stems (and later, tree trunks). In small plants such as single-celled algae, simple diffusion suffices to distribute water and nutrients throughout the organism. However, for plants to evolve larger forms, the evolution of vascular tissue for the distribution of water and solutes was a prerequisite. The vascular system contains xylem and phloem tissues. Xylem conducts water and minerals absorbed from the soil up to the shoot, while phloem transports food derived from photosynthesis throughout the entire plant. A root system evolved to take up water and minerals from the soil, and to anchor the increasingly taller shoot in the soil.
In land plants, a waxy, waterproof cover called a cuticle protects the leaves and stems from desiccation. However, the cuticle also prevents intake of carbon dioxide needed for the synthesis of carbohydrates through photosynthesis. To overcome this, stomata or pores that open and close to regulate traffic of gases and water vapor appeared in plants as they moved away from moist environments into drier habitats.
Water filters ultraviolet-B (UVB) light, which is harmful to all organisms, especially those that must absorb light to survive. This filtering does not occur for land plants. This presented an additional challenge to land colonization, which was met by the evolution of biosynthetic pathways for the synthesis of protective flavonoids and other compounds: pigments that absorb UV wavelengths of light and protect the aerial parts of plants from photodynamic damage.
Plants cannot avoid being eaten by animals. Instead, they synthesize a large range of poisonous secondary metabolites: complex organic molecules such as alkaloids, whose noxious smells and unpleasant taste deter animals. These toxic compounds can also cause severe diseases and even death, thus discouraging predation. Humans have used many of these compounds for centuries as drugs, medications, or spices. In contrast, as plants co-evolved with animals, the development of sweet and nutritious metabolites lured animals into providing valuable assistance in dispersing pollen grains, fruit, or seeds. Plants have been enlisting animals to be their helpers in this way for hundreds of millions of years.
Evolution of Land Plants
No discussion of the evolution of plants on land can be undertaken without a brief review of the timeline of the geological eras. The early era, known as the Paleozoic, is divided into six periods. It starts with the Cambrian period, followed by the Ordovician, Silurian, Devonian, Carboniferous, and Permian. The major event to mark the Ordovician, roughly 450 to 485 million years ago, was the colonization of land by the ancestors of modern land plants. Fossilized cells, cuticles, and spores of early land plants have been dated as far back as the Ordovician period in the early Paleozoic era. The oldest-known vascular plants have been identified in deposits from the Devonian. One of the richest sources of information is the Rhynie chert, a sedimentary rock deposit found in Rhynie, Scotland ([link]), where embedded fossils of some of the earliest vascular plants have been identified.
This Rhynie chert contains fossilized material from vascular plants. The area inside the circle contains bulbous underground stems called corms, and root-like structures called rhizoids. (credit b: modification of work by Peter Coxhead based on original image by “Smith609”/Wikimedia Commons; scale-bar data from Matt Russell)
Paleobotanists distinguish between extinct species, as fossils, and extant species, which are still living. The extinct vascular plants, classified as zosterophylls and trimerophytes, most probably lacked true leaves and roots and formed low vegetation mats similar in size to modern-day mosses, although some trimerophytes could reach one meter in height. The later genus Cooksonia, which flourished during the Silurian, has been extensively studied from well-preserved examples. Imprints of Cooksonia show slender branching stems ending in what appear to be sporangia. From the recovered specimens, it is not possible to establish for certain whether Cooksonia possessed vascular tissues. Fossils indicate that by the end of the Devonian period, ferns, horsetails, and seed plants populated the landscape, giving rise to trees and forests. This luxuriant vegetation helped enrich the atmosphere in oxygen, making it easier for air-breathing animals to colonize dry land. Plants also established early symbiotic relationships with fungi, creating mycorrhizae: a relationship in which the fungal network of filaments increases the efficiency of the plant root system, and the plants provide the fungi with byproducts of photosynthesis.
Career Connection
Paleobotanist
How organisms acquired traits that allow them to colonize new environments—and how the contemporary ecosystem is shaped—are fundamental questions of evolution. Paleobotany (the study of extinct plants) addresses these questions through the analysis of fossilized specimens retrieved from field studies, reconstituting the morphology of organisms that disappeared long ago. Paleobotanists trace the evolution of plants by following the modifications in plant morphology: shedding light on the connection between existing plants by identifying common ancestors that display the same traits. This field seeks to find transitional species that bridge gaps in the path to the development of modern organisms. Fossils are formed when organisms are trapped in sediments or environments where their shapes are preserved. Paleobotanists collect fossil specimens in the field and place them in the context of the geological sediments and other fossilized organisms surrounding them. The activity requires great care to preserve the integrity of the delicate fossils and the layers of rock in which they are found.
One of the most exciting recent developments in paleobotany is the use of analytical chemistry and molecular biology to study fossils. Preservation of molecular structures requires an environment free of oxygen, since oxidation and degradation of material through the activity of microorganisms depend on its presence. One example of the use of analytical chemistry and molecular biology is the identification of oleanane, a compound that deters pests. Up to this point, oleanane appeared to be unique to flowering plants; however, it has now been recovered from sediments dating from the Permian, much earlier than the current dates given for the appearance of the first flowering plants. Paleobotanists can also study fossil DNA, which can yield a large amount of information, by analyzing and comparing the DNA sequences of extinct plants with those of living and related organisms. Through this analysis, evolutionary relationships can be built for plant lineages.
Some paleobotanists are skeptical of the conclusions drawn from the analysis of molecular fossils. For example, the chemical materials of interest degrade rapidly when exposed to air during their initial isolation, as well as in further manipulations. There is always a high risk of contaminating the specimens with extraneous material, mostly from microorganisms. Nevertheless, as technology is refined, the analysis of DNA from fossilized plants will provide invaluable information on the evolution of plants and their adaptation to an ever-changing environment.
The Major Divisions of Land Plants
The green algae and land plants are grouped together into a subphylum called the Streptophytina, and thus are called Streptophytes. In a further division, land plants are classified into two major groups according to the absence or presence of vascular tissue, as detailed in [link]. Plants that lack vascular tissue, which is formed of specialized cells for the transport of water and nutrients, are referred to as non-vascular plants. Liverworts, mosses, and hornworts are seedless, non-vascular plants that likely appeared early in land plant evolution. Vascular plants developed a network of cells that conduct water and solutes. The first vascular plants appeared in the late Ordovician and were probably similar to lycophytes, which include club mosses (not to be confused with the mosses) and the pterophytes (ferns, horsetails, and whisk ferns). Lycophytes and pterophytes are referred to as seedless vascular plants, because they do not produce seeds. The seed plants, or spermatophytes, form the largest group of all existing plants, and hence dominate the landscape. Seed plants include gymnosperms, most notably conifers (Gymnosperms), which produce “naked seeds,” and the most successful of all plants, the flowering plants (Angiosperms). Angiosperms protect their seeds inside chambers at the center of a flower; the walls of the chamber later develop into a fruit.
Art Connection
This table shows the major divisions of green plants.
Which of the following statements about plant divisions is false?
Lycophytes and pterophytes are seedless vascular plants.
All vascular plants produce seeds.
All nonvascular embryophytes are bryophytes.
Seed plants include angiosperms and gymnosperms.
Section Summary
Land plants acquired traits that made it possible to colonize land and survive out of the water. All land plants share the following characteristics: alternation of generations, with the haploid plant called a gametophyte, and the diploid plant called a sporophyte; protection of the embryo, formation of haploid spores in a sporangium, formation of gametes in a gametangium, and an apical meristem. Vascular tissues, roots, leaves, cuticle cover, and a tough outer layer that protects the spores contributed to the adaptation of plants to dry land. Land plants appeared about 500 million years ago in the Ordovician period.
Art Connections
[link] Which of the following statements about plant divisions is false?
|
It starts with the Cambrian period, followed by the Ordovician, Silurian, Devonian, Carboniferous, and Permian. The major event to mark the Ordovician, more than 500 million years ago, was the colonization of land by the ancestors of modern land plants. Fossilized cells, cuticles, and spores of early land plants have been dated as far back as the Ordovician period in the early Paleozoic era. The oldest-known vascular plants have been identified in deposits from the Devonian. One of the richest sources of information is the Rhynie chert, a sedimentary rock deposit found in Rhynie, Scotland ([link]), where embedded fossils of some of the earliest vascular plants have been identified.
This Rhynie chert contains fossilized material from vascular plants. The area inside the circle contains bulbous underground stems called corms, and root-like structures called rhizoids. (credit b: modification of work by Peter Coxhead based on original image by “Smith609”/Wikimedia Commons; scale-bar data from Matt Russell)
Paleobotanists distinguish between extinct species, as fossils, and extant species, which are still living. The extinct vascular plants, classified as zosterophylls and trimerophytes, most probably lacked true leaves and roots and formed low vegetation mats similar in size to modern-day mosses, although some trimerophytes could reach one meter in height. The later genus Cooksonia, which flourished during the Silurian, has been extensively studied from well-preserved examples. Imprints of Cooksonia show slender branching stems ending in what appear to be sporangia. From the recovered specimens, it is not possible to establish for certain whether Cooksonia possessed vascular tissues. Fossils indicate that by the end of the Devonian period, ferns, horsetails, and seed plants populated the landscape, giving rise to trees and forests. This luxuriant vegetation helped enrich the atmosphere in oxygen, making it easier for air-breathing animals to colonize dry land.
|
no
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
yes_statement
|
the "silurian" "period" was the "birth" of the first "land" "plants".. "land" "plants" first appeared during the "silurian" "period".
|
https://academic.oup.com/sysbio/article/62/1/93/1657273
|
Phylogenomic Insights into the Cambrian Explosion, the ...
|
Abstract
The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.
Arthropoda, as the largest animal phylum, may be considered the most successful group of living animal species. It includes the largest class of metazoans on earth, Insecta, which comprise more than half of all described species (Grimaldi and Engel 2005). Beyond sheer numbers, arthropods also exhibit an incredible range of morphological and ecological diversity and can be found in nearly all habitats on our planet. Understanding the evolutionary history of Arthropoda is therefore central to understanding the tempo and mode of evolution on earth. However, resolving the evolutionary history of the early arthropods is difficult as the fossil record is generally scant and episodic (Grimaldi and Engel 2005; Budd and Telford 2009). The timing of the origin of arthropods and its relation to the Cambrian explosion is still controversial, as are several macroevolutionary events, such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth.
Origins: A Short- or Long-Fuse Cambrian Explosion?
Prior to the Cambrian, very few metazoan body or trace fossils have been identified (Briggs and Fortey 2005). This paucity of metazoan fossils in the strata of Earth is broken by the sudden appearance of highly developed metazoan fossils in the Cambrian, a pattern colloquially referred to as the Cambrian evolutionary “explosion” (Conway Morris 2006). The cause of this “explosion” remains an unsolved question (Briggs and Fortey 2005), interpreted by some as evidence of a very dramatic evolutionary radiation, with short evolutionary divergence times among the ancestral lineages of the Cambrian fauna. Continuing with the explosion metaphor, this interpretation is referred to as a “short-fuse” scenario (Fig. 1a; Gould 1989; Conway Morris 2006). Others view this “explosion” as a historical artifact and argue for a deeper taxonomic age of arthropods, with longer divergence times among ancestral lineages extending back into the Precambrian. This alternative is referred to as a “long-fuse” scenario (Briggs and Fortey 2005). Although the short-fuse scenario necessarily requires a dramatic evolutionary radiation, the long-fuse scenario allows for either a gradual diversification or an older, yet still rapid radiation (Fig. 1b, c).
Representation of the competing hypotheses for the ancient evolutionary radiation of the Arthropoda. Squares are the first instance of a given taxonomic group in the fossil record. In gray are indicated the temporal positions of the primary Cambrian Konservat-Lagerstätte sites of the Burgess Shale and Chengjiang, along with the termination of the last global glaciation event (i.e., snowball earth). a) short-fuse, b) long-fuse, gradual, c) long-fuse, rapid. Phylogenetic relationships from Regier et al. (2010), fossil dates from Briggs and Fortey (2005).
Numerous studies over the past 2 decades have worked to illuminate the origins of the Cambrian explosion using molecular data. Generally, molecular estimates of metazoan ages have been older than fossil-based estimates, and among them there are considerable differences in the length of their Cambrian fuse, much of which can be attributed to sampling and analytical biases (Bromham 2006). Significant advances in molecular dating with the advent of relaxed clock methods suggest the potential for analyses that are significantly less biased than previous studies (Drummond et al. 2006; Battistuzzi et al. 2010). In fact, a series of relaxed clock studies have reported molecular-based estimates claiming congruence with Cambrian fossil estimates (Aris-Brosou and Yang 2002; Aris-Brosou and Yang 2003; Peterson et al. 2004; Peterson et al. 2008). These results have been welcomed as long-sought evidence of a possible congruence between molecular and fossil data (Briggs and Fortey 2005; Budd 2008). However, subsequent re-analyses of these studies have shown these claims either to derive from spurious results (Blair and Hedges 2005; Ho et al. 2005) or to result from excessive restrictions on priors that overly bias posterior estimates (Sanders and Lee 2010).
A very recent study took a different approach by exploiting EST databases, sacrificing even taxon coverage for large numbers of orthologous gene regions (Rehm et al. 2011). The times of divergence in that study were based on a fixed topology taken from a previous study (Meusemann et al. 2010b), and the evolution of amino acids was calibrated by giving 6 nodes minimum ages and one node both a minimum and a maximum age, but the distributions of these age priors are not clear from the paper. The results suggest that the arthropods originated in the Precambrian some 590 Ma. However, several aspects of this study complicate interpretation. First, an EST approach can result in a dataset with a high percentage of missing data that can be problematic for analyses (Sanderson et al. 2010). Second, the taxon coverage of the several important taxa is restricted to 1 or 2 individuals (e.g., Myriapoda, Pycnogonida and Pulmonata) or is missing crucial taxa (e.g., Xenocarida of Pancrustacea). Third, the use of hard constraints for age priors can lead to overparameterization and result in an inability of the actual molecular data to have meaningful effects on posterior estimates (Ho and Phillips 2009; Heled and Drummond 2012).
When Arthropods Colonized Land and Evolved Flight
Arthropods appear to be the first multicellular animals to colonize land and 3 groups are abundant in many ecosystems even today, i.e., hexapods, chelicerates, and myriapods. However, uncertainty remains regarding when (Labandeira 2005) and how many times ancient arthropods colonized land. The first identifiable terrestrial animal fossils, which were the ancestors of chelicerates and myriapods, appear in the late Silurian (ca. 419 Ma; Jeram et al. 1990). The first fossilized tracks that are clearly terrestrial occur earlier, in the early Ordovician (ca. 490 Ma), and were most likely left by chelicerate ancestors (MacNaughton et al. 2002). The phylogenetic position of the hexapods, whose fossils appear in the Devonian (400 Ma), has been controversial, with one hypothesis suggesting that they are sister to the myriapods, which would imply one colonization of land by their common ancestor (Averof and Akam 1995). However, it is becoming increasingly clear that insects are a clade within crustaceans, suggesting independent colonization of land by myriapods and insects, as well as by chelicerates (Regier et al. 2010). Crustaceans, such as woodlice, land crabs, and coconut crabs, have presumably colonized land more recently and multiple times (Wildish 1988; Schubart et al. 1998; Labandeira 2005).
Few molecular studies have explicitly addressed the question of when and how many times land was colonized. Pisani et al. (2004) suggest that the ancestor of chelicerates and myriapods colonized land relatively late in their evolutionary history, i.e., very close to the time suggested by the fossil record. However, that study suffers from sparse taxonomic sampling (n = 8 species), which has been shown to affect estimates of times of divergences (Hug and Roger 2007). Other studies do not explicitly address the colonization of land (Sanders and Lee 2010; Rehm et al. 2011), but their estimates of times of divergence of the 3 major groups suggest that land was colonized much earlier, i.e., in the Cambrian.
Winged insects (Pterygota) are by far the most successful group of terrestrial arthropods (they comprise 98.5% of known hexapods; Mayhew 2003; Grimaldi and Engel 2005), evidently due to their ability to fly. Insects were the first group of organisms to take wing and their flight appears to have evolved only once. Although a diverse range of winged insect fossils are found in the late Carboniferous (ca. 325 Ma), and insect fossils are known before this period at ~420 and 390 Ma, the intervening time period is one of exceptional fossil paucity across nearly all life, known as Romer's Gap (see Fig. 1 in Ward et al. 2006). As a result, very little is known about when flight evolved (Mayhew 2003). The recent study by Rehm et al. (2011) suggests that Pterygota diverged from its sister group ~450 Ma, which is in strong conflict with the fossil record.
Here, we use an established Bayesian-relaxed molecular clock approach (Drummond et al. 2006) and explicitly avoid and quantify the previously mentioned biases by (i) using soft bounds on priors (Ho and Phillips 2009), (ii) assessing the effect of priors on posterior estimates, (iii) conducting simulations to assess how autocorrelated rate variation could affect our posterior estimates of the Cambrian explosion [as this violates the underlying assumptions of our analysis (Battistuzzi et al. 2010)], and (iv) assessing the effect of our fossil calibrations on temporal estimates (Hug and Roger 2007). Our data sampling takes advantage of phylogenomic advances in Arthropoda (Table 1) by using an extended phylogenomic dataset based on the study by Regier et al. (2010) for a computationally intensive, simultaneous estimation of phylogenetic relationships and times of divergence.
Materials and Methods
Data
Sixty-two genes, identified as single copy within Arthropoda (Regier et al. 2010), were used in BLAST searches to find homologous sequences in 3 different types of databases [whole-genome sequence (WGS), WGS predicted gene sets, and assembled transcriptomes from EST sequencing projects; see Table S1 for details, available online at: http://datadryad.org/review?wfID=3040&token=836f474a-dd66-4000-af86-59ee83a90139]. We use data from all of the publicly available arthropod genomes (n = 25) and several EST databases (n = 17). Species scientific name, common name, length of DNA sequence obtained, data source (genomic vs. transcriptomic), database, and source are all provided (Supplementary Table S1).
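As a rough sketch of this search step (not the study's actual pipeline), the loop below runs a BLAST+ search of the reference genes against one pre-built database per species. The file names, the choice of blastn, and the E-value cutoff are placeholders invented for the example.

    import subprocess
    from pathlib import Path

    REFERENCE_GENES = "regier_62_genes.fasta"   # hypothetical FASTA of the 62 reference genes
    DB_DIR = Path("blast_dbs")                  # hypothetical: one BLAST database per species,
                                                # built beforehand with makeblastdb

    for db in sorted(DB_DIR.glob("*.fasta")):
        out_file = f"hits_{db.stem}.tsv"
        # Tabular output (-outfmt 6) keeps downstream parsing simple; the actual
        # BLAST program and settings used in the study may have differed.
        subprocess.run(
            ["blastn", "-query", REFERENCE_GENES, "-db", str(db),
             "-evalue", "1e-5", "-outfmt", "6", "-out", out_file],
            check=True,
        )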
Gene sequences from WGS were obtained by first identifying the beginning and end of the chromosomal region containing the gene of interest, which was then clipped and fed into the gene prediction program Genescan [http://genes.mit.edu/GENSCAN.html; (Burge and Karlin 1997)]. Using custom python scripts, identified genes collected from each species database were aligned with the reference sequence using the BlastAlign software, which maintained codon positions, and then concatenated. The final alignment was double-checked by eye and third codon positions were removed.
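The last masking step is easy to reproduce. The sketch below assumes a concatenated, in-frame FASTA alignment (the file names are invented for the example) and writes a copy with every third codon position removed.

    def drop_third_positions(seq):
        """Keep codon positions 1 and 2; drop every third site.
        Assumes the alignment starts in frame and its length is a multiple of 3."""
        return "".join(base for i, base in enumerate(seq) if i % 3 != 2)

    def read_fasta(path):
        records, name, chunks = {}, None, []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith(">"):
                    if name is not None:
                        records[name] = "".join(chunks)
                    name, chunks = line[1:], []
                elif line:
                    chunks.append(line)
        if name is not None:
            records[name] = "".join(chunks)
        return records

    if __name__ == "__main__":
        # "concatenated.fasta" stands in for the codon-aware alignment from the BlastAlign step.
        aln = read_fasta("concatenated.fasta")
        with open("concatenated_12.fasta", "w") as out:
            for name, seq in aln.items():
                out.write(f">{name}\n{drop_third_positions(seq)}\n")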
Time
Bayesian inference of phylogeny and times of divergence were performed using the BEAST v1.5.4 software package (Drummond and Rambaut 2007). Datasets were analyzed as one partition under the GTR+Γ model with a relaxed clock allowing branch lengths to vary according to an uncorrelated log-normal distribution (Drummond et al. 2006). The branch lengths were also allowed to vary according to an exponential distribution to assess the effects of changing priors on results. The tree prior was set to the birth–death process. Initial runs with BEAST showed that arthropods did not remain monophyletic with regard to the outgroup taxa Tardigrada and Onychophora. Because the monophyly of Arthropoda is not in question, we constrained it to be monophyletic in the analyses. All other priors, except calibration points described below, were left to the defaults in BEAST. Parameters were estimated using 26 independent runs of 10–20 million generations each, with parameters sampled every 2000 generations. Convergence and effective sample sizes of parameter estimates were checked in the Tracer v1.4.6 program; trimmed datasets were combined to yield output files with 147,465 sampled generations, from which summary trees were generated using TreeAnnotator v1.5.3.
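The effective sample size reported by Tracer is, in essence, an autocorrelation-corrected count of MCMC samples. The function below is a minimal numpy version of that idea, truncating the autocorrelation sum at the first negative lag; it is not Tracer's exact estimator.

    import numpy as np

    def effective_sample_size(trace):
        """Rough ESS: N / (1 + 2 * sum of positive-lag autocorrelations)."""
        x = np.asarray(trace, dtype=float)
        n = len(x)
        x = x - x.mean()
        acov = np.correlate(x, x, mode="full")[n - 1:] / n   # lags 0, 1, ..., n-1
        rho = acov / acov[0]
        s = 0.0
        for k in range(1, n):
            if rho[k] < 0:          # truncate at the first negative autocorrelation
                break
            s += rho[k]
        return n / (1.0 + 2.0 * s)

    # Example: a well-mixed chain should have an ESS close to its length.
    rng = np.random.default_rng(0)
    print(effective_sample_size(rng.normal(size=5000)))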
Fossil Calibration Points
Analyses were based on 8 calibration points with their priors modeled as a normal distribution, with means set to the estimated age of the fossil and standard deviations about these means set to give confidence intervals of ±5% of the fossil age. The use of a mean distribution for priors, without hard upper or lower constraints, reflects the uncertainty in the fossil record and allows posterior estimates to vary in either direction based on their interactions with the other calibration points during analysis (Sanders and Lee 2007; Ho and Phillips 2009). The fossil-calibrated nodes are as follows: (i) the first split in Pycnogonida set to a mean of 425 Ma (SD 11), based on the fossil Haliestes (Arango and Wheeler 2007), (ii) the first split in Pulmonata (spiders and scorpions) with a mean of 417 Ma (SD 11) based on the fossil Proscorpio (Dunlop, Tetlie and Lorenzo 2008), (iii) the node defining Communostraca (barnacles, crabs, lobsters, and wood lice) with a mean of 425 Ma (SD 11) based on the fossil Rhamphoverritor (Briggs et al. 2005), (iv) the first split in Chilopoda (centipedes) at 417 Ma (SD 11) based on the fossil Crussolum (Edgecombe and Giribet 2007), (v) the first split in Vericrustacea (fairy shrimp, copepods, barnacles, and crabs) with a mean of 516 Ma (SD 11) based on the fossils Yicaris and Rehbachiella (Olesen 2004; Zhang et al. 2007; Møller et al. 2008), (vi) the first split in Hexapoda was given a mean of 425 Ma (SD 7), based on the first known fossil of a hexapodan from the late Silurian/early Devonian (Grimaldi and Engel 2005), (vii) the first split in Holometabola with a mean of 300 Ma (SD 11) based on a fossil gall on a fern frond attributed to a holometabolous insect (Labandeira and Phillips 1996), and (viii) the first split in Diptera with a mean of 230 Ma (SD 11) based on the fossil Grauvogelia arzvilleriana (Krzeminski et al. 1994). We note that calibrations 1–5 have been used in Sanders and Lee (2010), calibrations 6 and 8 have been used by Rehm et al. (2011), and calibration 8 was used by Wiegmann et al. (2009).
We also tested the effects of using uniform priors instead of normal priors for the calibration points. In these cases, the above times were given as minimum bounds and a maximum bound was given as 100 myr older than the minimum bound.
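For concreteness, the snippet below encodes the calibration ages and shows one plausible reading of the "±5%" rule, namely that ±5% of the fossil age spans a 95% interval (1.96 standard deviations on each side); under that reading a 425 Ma calibration gets an SD of roughly 11, close to most of the values quoted above, although not every published SD follows the rule exactly. The uniform alternative simply takes the fossil age as the minimum bound and adds 100 myr for the maximum.

    FOSSIL_CALIBRATIONS_MA = {        # node: fossil age in Ma, from the list above
        "Pycnogonida": 425, "Pulmonata": 417, "Communostraca": 425,
        "Chilopoda": 417, "Vericrustacea": 516, "Hexapoda": 425,
        "Holometabola": 300, "Diptera": 230,
    }

    def normal_prior_sd(age_ma, frac=0.05):
        # Assumption: "+/-5%" is read as a 95% interval, i.e. 1.96 SD on each side.
        return frac * age_ma / 1.96

    def uniform_prior_bounds(age_ma, width_myr=100):
        # Sensitivity analysis: fossil age as the minimum bound, maximum 100 myr older.
        return age_ma, age_ma + width_myr

    for node, age in sorted(FOSSIL_CALIBRATIONS_MA.items()):
        print(f"{node}: normal(mean={age}, sd={normal_prior_sd(age):.1f}), "
              f"uniform{uniform_prior_bounds(age)}")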
Simulation Analyses
Previous assessment of the accuracy of relaxed clock methods for temporal inference has explored the effects of node constraints (i.e., fossil dates), tree topology, and taxonomic sampling (Hug and Roger 2007). Taxonomic subsets were found to give nearly identical temporal inferences to the full datasets when enough taxa for phylogenetic inference were used (i.e., the tree topology was the same in the taxon-subset dataset) and coupled with the same node constraints (Hug and Roger 2007). This is an important observation when assessing the temporal inferences from large phylogenomic datasets, as the BEAST runs of our full dataset (122 taxa for 27,984 bp) with a sufficient number of chains for convergence and sufficient effective sample sizes (147,465 sampled generations) took ~3600 h on a high-performance computer cluster. In comparison, a taxonomic subset consisting of 21 taxa for 27,984 bp, representing the minimal taxa necessary to implement our 8 temporal constraints, generally finished in 40 h on the same cluster. This subset approach was necessary because simulating and running the full dataset is currently computationally prohibitive: running the subset data alone required roughly 400,000 CPU hours for the final simulation analyses used in this article (40 different parameter settings × 10 runs per parameter setting × 2 modeled scenarios × 12 CPUs per run × 40 h per run).
All simulations used the same topology as our full dataset. However, there were 2 types of trees generated, one with branch lengths based on our observed data (OD) and the other reflecting a Cambrian explosion scenario (CE). CE branch lengths were identical to the OD tree, except for branches >520 myr old, which were shortened so that the radiation of the basal branches occurred within the 20 myr window between 540 and 520 Ma (Supplementary Fig. S1). BEAST is very robust to a range of violations of its lineage rate variation assumptions, except under conditions of autocorrelated rate variation analyzed with an uncorrelated log-normal model (Drummond et al. 2006), which we used in our analyses. Thus, by examining the potential effect of autocorrelated variation on our analyses, we assessed the conditions most likely to have the largest negative influence on our findings.
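The text does not spell out how the basal node ages were compressed into the 540-520 Ma window, so the helper below is only one way to do it: ages at or below 520 Ma are left untouched, and older ages are mapped into the window while preserving their relative order.

    def compress_to_explosion(node_ages, cutoff=520.0, window=(520.0, 540.0)):
        """Map node ages (in Ma) older than `cutoff` into `window`, keeping their order."""
        old = sorted(age for age in node_ages.values() if age > cutoff)
        lo, hi = window
        step = (hi - lo) / (len(old) + 1) if old else 0.0
        squeezed = {}
        for node, age in node_ages.items():
            if age <= cutoff:
                squeezed[node] = age
            else:
                squeezed[node] = lo + (old.index(age) + 1) * step
        return squeezed

    # Toy ages loosely based on the basal nodes of the observed tree (Table 2).
    ages = {"Arthropoda": 706.0, "Euchelicerata_vs_Mandibulata": 675.0,
            "Myriapoda_vs_Pancrustacea": 639.0, "Vericrustacea": 516.0}
    print(compress_to_explosion(ages))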
Autocorrelated rate change assumes the heritability of rate change, with closely related species having more similar rates. Autocorrelated rate change among branches was modeled by modifying the starting tree's branch lengths. Beginning with the basal branch, new descendant lineage rates were drawn from an exponential distribution having a mean equal to the rate of their ancestral lineage as implemented in RateEvolver (Ho et al. 2005) (which was kindly modified by its creator Simon Ho to fit a topology matching our taxonomic subset). The variance of the distribution from which new rates were drawn can be modified, as can the probability of a given branch experiencing a rate change and the mutation rate. All simulations had a probability of rate change per branch of 1, except for clock simulations. Our starting tree had branch lengths equal to millions of years, which were multiplied by the new autocorrelated rates depending on the simulation and settings, and the resulting modified trees then served as templates for simulations of DNA sequence evolution.
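A stripped-down version of this rate-evolution step is sketched below: each branch inherits its parent's rate, perturbed with some probability by drawing a new rate from an exponential distribution whose mean equals the parental rate, and branch lengths in millions of years are then multiplied by these rates to give expected substitutions per site. The toy tree, root rate, and probability are placeholders, and the adjustable variance mentioned above is not modeled.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy rooted tree: child -> parent, with branch lengths in millions of years.
    PARENT = {"B": "A", "C": "A", "D": "B", "E": "B"}
    BRANCH_MYR = {"B": 100.0, "C": 150.0, "D": 60.0, "E": 80.0}

    def evolve_autocorrelated_rates(root_rate, p_change=1.0):
        """Assign a rate to every branch; a child's new rate is drawn from an
        exponential distribution with mean equal to its parent's rate."""
        rates = {"A": root_rate}
        for node in ["B", "C", "D", "E"]:          # parents listed before children
            parent_rate = rates[PARENT[node]]
            rates[node] = rng.exponential(parent_rate) if rng.random() < p_change else parent_rate
        return rates

    rates = evolve_autocorrelated_rates(root_rate=5e-4)   # substitutions/site/myr (toy value)
    subst_branch_lengths = {n: BRANCH_MYR[n] * rates[n] for n in BRANCH_MYR}
    print(subst_branch_lengths)   # such rescaled trees are what the sequence simulator receives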
DNA sequences were allowed to evolve with branch lengths proportional to mutation rate, starting with the root ancestor, as implemented in SeqGen (Rambaut and Grassly 1997) using the graphical user interface SGRunner, written by T.P. Wilcox and provided as part of the Seq-Gen package. DNA mutation parameters were determined using likelihood ratio testing on our observed subset data as implemented in Modeltest 3.7 (Posada and Crandall 1998). The best-fit model (GTR + I + G) selected had base frequencies = (0.3138 0.2269 0.2397), substitution model rate matrix = (2.8601 2.7620 1.5287 1.6254 4.7010), gamma distribution shape parameter = 0.8990, and proportion of invariable sites = 0.4251, with the rate of transitions and transversions equal. Using these parameters and modeling gamma rate heterogeneity with 10 categories, SeqGen was used to generate datasets of 21 taxa, for 27,984 nucleotides (i.e., size and mutation parameters were identical to our subset data). BEAST analyses of these simulated datasets followed the previously described analyses of the observed datasets, but with chains run for 50 million generations, sampled every 2000 generations.
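To make the Modeltest output concrete, the snippet below builds the corresponding GTR rate matrix: the exchangeabilities fill the off-diagonal entries, each column is weighted by its equilibrium base frequency, and the matrix is normalized to one expected substitution per site per unit time. The ordering of the five exchangeabilities (AC, AG, AT, CG, CT, with GT fixed at 1) and of the three base frequencies (A, C, G, with T as the remainder) follows the usual Modeltest convention and is assumed here.

    import numpy as np

    # Base frequencies (A, C, G reported; T is the remainder).
    pi = np.array([0.3138, 0.2269, 0.2397, 1.0 - (0.3138 + 0.2269 + 0.2397)])

    # Exchangeabilities in assumed order AC, AG, AT, CG, CT; GT fixed to 1.
    ac, ag, at, cg, ct, gt = 2.8601, 2.7620, 1.5287, 1.6254, 4.7010, 1.0
    R = np.array([[0.0, ac,  ag,  at],
                  [ac,  0.0, cg,  ct],
                  [ag,  cg,  0.0, gt],
                  [at,  ct,  gt,  0.0]])

    Q = R * pi                                  # Q[i, j] = r_ij * pi_j for i != j
    np.fill_diagonal(Q, -Q.sum(axis=1))         # rows of a rate matrix sum to zero
    Q /= -(pi * np.diag(Q)).sum()               # normalize to 1 substitution/site per time unit
    print(np.round(Q, 3))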
For each set of rate variation and variance parameters, 10 random realizations of autocorrelated branch length manipulations were performed and analyzed with BEAST. Output files were combined only after each replicate was checked for posterior asymptotic behavior and an appropriate “burn-in” selected. Replicates failing to asymptote after 40 million generations were discarded. The trimmed output was then combined to yield posterior estimates of mean rate, coefficient of variation in rates, ucld.mean, ucld.stdev, and the mean and 95% highest posterior density (HPD) for the age of the arthropods. Please see the schematic diagram provided for more details (Supplementary Fig. S1).
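Summarizing combined samples into a mean and a 95% HPD is itself a small computation: the helper below finds the narrowest interval containing 95% of the samples, which is the standard definition of an HPD interval. The simulated trace is only a stand-in for real BEAST output.

    import numpy as np

    def hpd_interval(samples, mass=0.95):
        """Narrowest interval containing `mass` of the samples (the HPD interval)."""
        x = np.sort(np.asarray(samples, dtype=float))
        n = len(x)
        k = int(np.ceil(mass * n))              # number of samples the interval must cover
        widths = x[k - 1:] - x[: n - k + 1]     # width of every candidate interval
        i = int(np.argmin(widths))
        return x[i], x[i + k - 1]

    rng = np.random.default_rng(1)
    age_trace = rng.normal(706.0, 40.0, size=10000)   # stand-in posterior for a node age (Ma)
    print(age_trace.mean(), hpd_interval(age_trace))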
Results
Phylogenetic Relationships
Available arthropod databases were searched for orthologs of the 62 nuclear genes used by Regier et al. (2008, 2010). We identified a total of 854,786 bp sequence data in 42 new species (average = 19,879 bp per species), 41 of which are from Insecta, represented by 25 species from genome sequence databases and 17 species from expressed sequence tag databases (Table S1). Sequences were aligned and concatenated with the Regier et al. dataset, for a total of 122 taxa in our study, and a total of 28% missing data (our new 42 species had an average of 45% missing data). Our sampling increased the representation of the most species-rich group of macro-organisms, the insects, by 300% compared with the Regier et al. (2010) dataset (which contained 14 species of Insecta). All gene regions included are protein-coding, single copy, and orthologous (Regier et al. 2008), and thus alignment was relatively trivial. For analyses, third codon positions were removed, as they proved to be too variable to be phylogenetically informative (Regier et al. 2008). Analyses concentrated on estimating times of divergence, but topology was estimated concomitantly using the software BEAST (Drummond et al. 2006). The resulting tree file is provided, containing detailed information on node support and age estimates (Supplementary Text S1).
A stable topology is essential to having meaningful estimates of divergence times. Our estimated topology is essentially congruent with that reported by Regier et al. (2010), differing at a few poorly supported nodes (Fig. 2). Major lineages that were previously unrepresented, such as the social insects (Hymenoptera), beetles (Coleoptera), and the flies (Diptera), show expected relationships and reinforce the placement of Hymenoptera as the most basal branching lineage within the holometabolous insects (Fig. 2; Savard et al. 2006; Consortium 2007; Wiegmann et al. 2009; Mutanen et al. 2010). In addition, we find Paraneoptera and Polyneoptera to both be monophyletic. Thus, increased taxon sampling had no significant effects on topology, while providing greater resolution within Insecta, indicating a stable backbone for temporal inference.
Simultaneously estimated phylogenetic relationships and temporal divergences of the arthropods. Posterior probability support for each node is indicated above the branch to the left of the node. The 95% HPD of the node age estimate is given for each node. Nodes that were calibrated with fossils are indicated with orange 95% HPD bars. Taxa with Whole-Genome Sequences available are highlighted yellow; those with comprehensive Expressed Sequence Tag libraries available are highlighted orange. Named clades are discussed in the text.
Dates: An Assessment of Prior Bias on Posterior Estimates
Eight calibration points can be placed with relative certainty on the arthropod phylogeny (Fig. 2; Grimaldi and Engel 2005; Wiegmann et al. 2009; Sanders and Lee 2010). These were modeled as true soft constraints, using a normal distribution centered on the fossil age estimate, reflecting the bidirectionality of uncertainty inherent in such calibrations (Ho and Phillips 2009). Mean estimates and their 95% HPD for the age of clades of interest, and for the last common ancestor between pairwise comparisons among model genomic arthropods, were tabulated for easy access (Tables 2 and 3; a detailed pairwise species level list is also included in Supplementary Table S2).
We next assessed how our priors had influenced the posteriors by comparing our data-driven posterior estimates to the results of a “null analysis,” which was based solely upon the fossil priors (i.e., without the DNA data). Of the 8 calibration points, the posterior distributions of age estimates were approximately the same as the prior distributions for 4 nodes (Vericrustacea, Communostraca, Diptera, and Pycnogonida), whereas the data “updated” the remaining nodes, resulting in posteriors that were younger for 2 nodes (Pulmonata and Chilopoda; Fig. 3) and older for 2 nodes (Hexapoda and Holometabola; Fig. 3). Such effects were also reflected in analyses using uniform priors, with the first 4 nodes mentioned above having a posterior distribution whose 95% HPD tails were within the uniform prior bounds (not shown), the second 2 having posterior distributions that are highly skewed up against the lower bound, and the latter 2 similarly skewed to the upper bound (Fig. 3). In sum, our data did significantly influence our posterior estimates, suggesting our soft priors did not result in any overparameterization bias in our posterior estimates for other nodes in our tree (Heled and Drummond 2012).
Comparison of the effect of different priors on posterior temporal estimates for 4 representative fossil-calibrated nodes. Two posterior density distributions are shown in each plot, resulting from either a normal or uniform prior, along with the prior normal distribution (derived from running the model without DNA). In the Hexapoda and Holometabola plots, the distributions, ordered from left to right, are the prior, the normal posterior, and the uniform posterior. Both posterior estimates are older than the normal prior. In the Chilopoda and Pulmonata plots, the distributions are, from left to right, the normal posterior, the normal prior, and the uniform posterior. The normal posterior estimate is younger than both its prior and the lower limit of the uniform prior. The x-axes show the relevant range of time in millions of years, while the y-axis is a normalized value representing the peak and spread of each set of posterior estimates.
Estimated Ages of Divergences in Arthropoda
The topological robustness and generally low level of rate variation among the branches of our resolved tree (Fig. 4) together provide an excellent foundation for temporal estimation of the age of Arthropoda. No maximal temporal constraints were set in order to minimize bias in our estimate, with the oldest of the 8 calibration points being from the Middle Cambrian at 516 Ma for Vericrustacea (a clade containing fairy shrimp, copepods, barnacles and crabs). Based on these constraints, our estimated age for the crown group of Arthropoda was 706 Ma with a 95% HPD of 631–787 Ma (Table 2). The next divergence between Euchelicerata and Mandibulata is estimated to have happened 675 Ma (95% HPD: 608–746 Ma), followed by the divergence between Myriapoda and Pancrustacea at 639 Ma (95% HPD: 580–702 Ma). Euchelicerata is estimated to have begun diversifying 533 Ma (95% HPD: 476–592 Ma), while Myriapoda first diversified 538 Ma (95% HPD: 469–614 Ma) and Hexapoda at 432 Ma (95% HPD: 420–445 Ma).
Variation in rates of molecular evolution across branches of the reconstructed arthropod phylogeny. Posterior estimates of branch rates vary between 0.0002 (dark blue) to 0.0013 (red) mutations per myr.
Test of the Cambrian Fuse Length: “Short” Versus “Long-Gradual”
Our estimates for the age of Arthropoda and the branch lengths for the subsequent diversification significantly reject alternative hypotheses in favor of the “long-gradual fuse” scenario. The 95% HPD for the Arthropoda crown group is entirely within the Cryogenian, with the 3 subsequent bifurcations leading to the major lineages of arthropods having all, or nearly all, of their 95% HPD entirely in the Ediacaran, with mean values for these nodes spanning a range of nearly 150 myr (Fig. 4; Table 2). Given these findings, we used simulations to explore how much confidence we can place in this rejection of the “short-fuse” hypothesis, given the possible violations of the underlying lineage rate variation model. In other words, we investigated how much confidence we could place in the rejection of the “short-fuse” scenario under “worst case” scenarios of our estimation method.
Analysis of our “short-fuse” and “long-gradual fuse” simulations focused on the estimated age of Arthropoda, assessing how close to the fossil Cambrian explosion event it was located. For both simulations, the 95% HPD always contained the simulated age of Arthropoda and the width between the lower and upper 95% HPD values increased significantly with higher levels of autocorrelated rate variation (Fig. 5, Table S3). Such positive effects of autocorrelated rate variation on the 95% HPD interval reflect increasing uncertainty in posterior estimations as autocorrelated rate variance increases, as previously observed (Drummond et al. 2006; Battistuzzi et al. 2010). However, this increase in the 95% HPD interval was not random in our simulations (Fig. 5). In the “short-fuse” scenario, only the upper 95% HPD increased dramatically with increases in autocorrelated rate variation, while the lower 95% HPD remained very stable, which likely derives from the calibration points in the younger areas of the dataset (Fig. 5a). In the “long-gradual fuse” scenario, a similar effect was observed although there was a greater decrease in the lower 95% HPD at high levels of autocorrelated rate variation (Fig. 5b). Importantly, while our observed 95% HPD does not include the Cambrian explosion event, the 95% HPD of the “short-fuse” simulations always contain the Cambrian explosion event, even at levels of autocorrelated rate variation much higher than the rate variation in our data. In contrast, simulations of the “long-gradual fuse” scenario, which is based on the branch lengths of our observed tree, only included the Cambrian explosion event at levels of autocorrelated rate variation that are much higher than our observed levels, and in this region the 95% HPD interval is extremely wide, reflecting the large uncertainty in estimation.
Simulation results showing posterior age estimates and the 95% HPD for Arthropoda under alternative Cambrian evolution scenarios. Each simulation result (mean (circle), lower (square) and upper (triangle) 95 % HPD) is derived from the combination of 10 independent topological replicates at a given level of autocorrelated rate variation (except when rate = 0). a) Modeling of the “short-fuse” scenario has all branch lengths <520 Ma and prior age constrains identical to our observed results. Branch lengths of basal lineages >520 Ma were modified to conform to a Cambrian explosion event between 540 and 530 Ma (shown in gray on both panels). b) Simulation results using observed branch lengths. Black filled symbols are results from the observed dataset. The x-axis shows increasing levels of autocorrelated rate variation among branches of the resulting simulated analyses. See Figure S1 for schematic.
In sum, our simulations have allowed us to assess the potential negative effects of autocorrelated rate variation on our analyses, as well as the potential limitations of our fossil calibration points. We find our observed branch-length estimates unaffected by autocorrelation at levels both below and well above the observed level. Simulations indicate that a “short-fuse” Cambrian explosion could be detected at levels of autocorrelated rate variation nearly 4 times higher than observed levels. Thus, we conclude that we were able to significantly distinguish between the “short-fuse” and “long-gradual fuse” alternatives for the origin of the Arthropods (Fig. 1), and find strong evidence rejecting the “short-fuse” scenario (Fig. 5; Supplementary Table S3).
Discussion
Estimating times of divergence using molecular data is becoming a routine part of studies in molecular systematics and powerful algorithms designed to perform these analyses continue to be developed. However, the amount of data needed to estimate ancient times of divergence within and between phyla is not clear at the moment. Our empirical results complement simulation results (Battistuzzi et al. 2010) in suggesting that datasets comprising about 60 protein coding gene regions with a wide distribution of fossil calibration points are enough to generate meaningful estimates of times of divergence within a phylum.
The Age of Arthropoda and Its Ancient Lineages
Our estimates of the times of the deeper divergences in Arthropoda are younger than most previous attempts to date Precambrian divergences using molecular data and Bayesian relaxed clock methods (e.g., Pisani et al. 2004; Sanders and Lee 2010; Schaefer et al. 2010). Results of the recent phylogenomic analysis (Rehm et al. 2011) are the most comparable to our results based on taxon and character sampling, and we find that some nodes are estimated to be younger and others to be older in our study. We note that Rehm et al. (2011) did not sample some key taxa, based their times of divergence on rates of amino acid changes on a fixed topology taken from Meusemann et al. (2010), and were forced to place an age prior on the root of the topology due to the algorithm they used. In contrast, our study has a more inclusive taxon sampling scheme for basal divergences in Arthropoda (based on Regier et al. 2010), we estimate times of divergence on rates of nucleotide changes while simultaneously estimating topology, and we have no restrictions on the age of the root. Importantly, our phylogenetic hypothesis differs at some crucial nodes compared with that of Meusemann et al. (2010). For instance, Meusemann et al. (2010) sample only 2 Myriapoda and find them to be sister to Pycnogonida+Euchelicerata. They also find Xiphosura (1 sampled taxon) to be sister to Araneae (1 sampled taxon) and these 2 sister to Acari (9 sampled taxa), i.e., horseshoe crabs are found to be within the chelicerate clade. In contrast, Regier et al. (2010), based on maximum likelihood and our study based on Bayesian inference, find Myriapoda (11 sampled taxa) to be sister to Pancrustacea with strong support and Xiphosura (2 sampled taxa) to be sister to the chelicerate clade (which includes 14 sampled taxa of mites, scorpions, and spiders). We find Pycnogonida (5 sampled taxa) to be sister to the rest of Arthropoda, whereas Regier et al. (2010) have it as sister to Euchelicerata. The latter 2 relationships are weakly supported in all analyses.
Our results strongly suggest a Precambrian origin for the arthropods, which is consistent with new fossil finds indicating that spongiform animals had already diverged from eumetazoans at 630 Ma (Maloof et al. 2010). We estimate that the arthropods began diversifying in the Precambrian ~706 Ma (95% HPD: 631–787), while Rehm et al. (2011) estimate a much younger age for the first split in Arthropoda at 562 Ma (95% confidence interval: 523–640), a result that may be influenced by their root priors.
The Paleozoic is an important era for the diversification of lineages leading to present day classes. Xiphosura (horseshoe crabs) diverged from their sister group Arachnida (mites, scorpions, and spiders) during the Middle Cambrian (Table 2, clade Euchelicerata), as did lineages leading to modern day classes of Myriapoda (centipedes and millipedes; Table 2). The times of divergence of these clades are not entirely comparable to Rehm et al. (2011), as their phylogenetic hypothesis is quite different at these nodes, but in general, their times are fairly similar to our estimates. For instance, their estimate for the divergence of centipedes from millipedes is almost identical to ours, placing it in the late Cambrian.
Early divergences in Pancrustacea (crabs, waterfleas, and insects) appear to have happened just prior to the Cambrian, although Cambrian divergences are not ruled out (Table 2). Rehm et al. (2011) do not sample any Oligostraca, thus they are likely missing the earliest divergence in Pancrustacea. Within Pancrustacea our results suggest that the clade Oligostraca needs attention, as Ostracoda is not monophyletic with regard to Mystacocarida, Pentastomida, and Branchiura. This is in contrast to Regier et al. (2010), who recovered Ostracoda as a monophyletic entity, although with little support. The divergence between Branchiura and Pentastomida is estimated to be in the Permian (Table 2, clade Ichthyostraca), ~200 myr younger than the recent estimate of Sanders and Lee (2010), although of note here is that the relationships are very different for this region of the topology compared with the latter study. Both Branchiura and Pentastomida are parasitic and have highly modified morphologies, thus the fossils attributed to Pentastomida from the Cambrian (Waloszek et al. 2006) may represent a stem group of the common ancestor of Branchiura and Pentastomida, which we estimate to have diverged from ostracodans in the early Ordovician or even the late Cambrian (445 Ma, 95% HPD: 362–526).
Our taxon sampling scheme covers a large number of insect lineages including all of the early divergences (Grimaldi and Engel 2005). The initial divergence between Entognatha (springtails) and Insecta (insects) appears to have happened in the Silurian (Table 2, clade Hexapoda), and the early divergences in Insecta during the Devonian (Fig. 2), leading to Archaeognatha, Zygentoma, and Pterygota (winged insects). Pterygota diversified during the late Devonian and early Carboniferous (Fig. 2). This is in stark contrast to Rehm et al. (2011), who found that Pterygota diversified in the late Silurian or early Devonian, despite giving this node a minimum age in the late Carboniferous. Similarly, we find that the most diverse extant group, Holometabola, diversified into the extant orders during the late Carboniferous and early Permian (Fig. 2), whereas Rehm et al. (2011) suggest that they did so much earlier in the late Devonian. As in Regier et al. (2010), our analysis supports the Paleoptera hypothesis of Ephemeroptera (mayflies) being the sister group of Odonata (dragonflies), and the estimated ages of these groups as early Carboniferous (Table 2) is in accordance with the fossil record (Grimaldi and Engel 2005). Again, Rehm et al. (2011) suggest that this group is much older. Our estimate of the age of the divergence of Dermaptera (earwigs) from the common ancestor of Orthoptera (grasshoppers) and Blattodea (cockroaches) is somewhat younger (early Permian; Table 2) than the fossil record would suggest (early Carboniferous), although our credibility interval for the divergence does encompass the early Carboniferous. We estimate that Phthiraptera (lice) diverged from Hemiptera (aphids, bugs) in the early Permian (Table 2) and the Hemiptera diverged into Sternorrhyncha (aphids, white flies, scale insects) and Heteroptera (true bugs) in the early Triassic (Fig. 2). The latter is in line with fossil evidence (Grimaldi and Engel 2005).
The Colonization of Land and the Origins of Flight: Insights from Molecular Data
Our estimated times of divergence provide insights into the 3 independent colonizations of land by arthropods (Fig. 6). The first lineage likely to have colonized land was the common ancestor of myriapods in the early Cambrian (ca. 538 Ma), although the 95% HPD does stretch into the Ordovician (Table 2). Colonization by the chelicerate clade Arachnida appears to have occurred during the Cambrian or early Ordovician (Fig. 6, Table 2). These estimates for Arachnida are very close to the proposed ages for the fossilized tracks of potential chelicerates at ~490 Ma (MacNaughton et al. 2002). A recent study on mite evolution and their colonization of land based on limited sampling of only one gene (Schaefer et al. 2010) suggests that the common ancestor of mites existed some 570 Ma, which is older than our estimate of the colonization of land by the common ancestor of all arachnids (to which mites belong). The hexapodans appear to have colonized land later, during the Ordovician or Silurian (Fig. 6, Table 2), although our estimate is earlier than the earliest fossil hexapodans from the early Devonian (Grimaldi and Engel 2005). The crown clade of Hexapoda was one of our calibrated nodes, and the posterior estimates of the time of the first divergence are essentially the same, but somewhat older than the prior used to calibrate the node. Interestingly, a fossil insect believed to be a pterygote has been recorded from the early Devonian (Engel and Grimaldi 2004), suggesting that the age of insects in general may go back to the Silurian or indeed the Ordovician.
Multiple independent land invasions of the Arthropods. Clades that are aquatic are dark gray, while clades that are terrestrial are diagonal-line filled. Gray shaded areas give the 3 intervals during which colonization of land may have happened (not taking 95% HPD intervals into account), except in the case of Malacostraca, where the lower limit of the colonization is not possible to ascertain based on the taxon sampling.
Given the overlap in the 95% HPDs among the myriapod and chelicerate lineages, we cannot exclude the possibility that the 2 colonizations happened at about the same time, although this appears unlikely. Rather, based on our mean estimates of the ages of the crown clades, myriapods appear to have left the aquatic life before plants, because fossil evidence for the first terrestrial plants appears roughly 60 myr later at 475 Ma (Gray 1985; Wellman et al. 2003). Arachnida also appears earlier than these plants at 501 Ma. Hexapods on the other hand appeared after the terrestrial plant fossils in the late Ordovician at around 433 Ma (Wellman et al. 2003; Gensel 2008).
Grimaldi and Engel (2005) state that “[h]ow, when, and why insect wings originated is one of the most perplexing conundrums in evolution” (p. 158). Insect flight, which attains some of the highest mass-specific aerobic metabolic rates known to science (Sacktor 1976), has been hypothesized to have evolved during a period of hyperoxic conditions from the Late Devonian to the Late Carboniferous [ca. 375–250 Ma; Dudley 1998]. This period is also known for insect gigantism (Briggs 1985), even among flying insects (Carpenter 1992). The higher O2 conditions during this period, indicated in the geochemical record (Berner 1999), are argued to have facilitated the evolution of flight by simultaneously providing higher amounts of O2 for passive uptake by insects and increasing air density for greater lift (Dudley 1998). Although suggestive, molecular data are needed to provide independent confirmation of whether the origins of the Pterygota occurred during these exceptionally high ancient atmospheric oxygen levels or predated this event.
Based on our results, it is clear that the evolution of insect wings happened much earlier than the fossil record would suggest and led to a relatively rapid radiation of insects (Fig. 7). The ancestor to all Pterygota diverged from the common ancestor of Zygentoma (silverfish) in the late Devonian (ca. 384 Ma). The putative Devonian pterygote fossil (Engel and Grimaldi 2004) is very much in line with our estimate of the origin of Pterygota. The initial divergence was followed by a series of divergences leading to clades with large numbers of species, i.e., the lineages leading to Paleoptera (367 Ma), Polyneoptera (346 Ma), Paraneoptera (330 Ma), Hymenoptera (308 Ma), Coleoptera (288 Ma), and finally Lepidoptera and Diptera (267 Ma). Within 120 myr, the basis for today's incredible diversity of flying insects was established. Intriguingly, the first 100 myr of this period coincides with the period of increasing atmospheric oxygen levels (Berner 1999), which began rising in the late Devonian and peaked at about the same time as the major holometabolan lineages diverged from each other in the early Permian. This rise in atmospheric oxygen is attributed to the appearance and spread of large and woody vascular plants, resulting in an increased burial of organic carbon that in turn formed the most abundant coal deposits in Earth's history, from which the time period's name Carboniferous derives (Berner 1999). However, although flight clearly did evolve during a period of increasing oxygen levels and impressively large forests, the initial steps in the evolution of flight are primarily associated with the Devonian just prior to Romer's Gap, when atmospheric O2 levels were relatively low although fossils of large forests were well established (Willis and McElwain 2002; Ward et al. 2006).
The origins and evolution of insect flight. The Insecta chronogram is taken from Fig. 2. The evolution of oxygen content of the atmosphere over time is shown below the chronogram and is based on (Berner 1999). The darker shaded area gives the interval during which major lineages of flying insects diverged.
Dating with Confidence
Our findings suggest that in order to date with confidence (Drummond et al. 2006), size does matter. The necessity of multiple calibration points has been discussed by a number of authors (Hug and Roger 2007; Ho and Phillips 2009), but the size of a dataset, either number of taxa or number of characters sampled, has not received as much attention in studies estimating times of divergence (Wertheim and Sanderson 2011). As pointed out in the Introduction section, most previous studies have sampled either very few taxa and many gene regions or many taxa and few gene regions. Typical of early studies based on few taxa and few gene regions were much older estimates of divergence times than one would expect based on the fossil record. Increasing the number of gene regions has not appeared to alleviate this problem (Sanders and Lee 2010), unless very strong priors are used to calibrate the tree (Peterson et al. 2008), which may decrease informative input from the data itself (Sanders and Lee 2010). Increasing the number of taxa and relaxing prior constraints in order to get a better representation of the basal, ancient divergences does appear to help in arriving at more realistic estimates of when they have happened (e.g., Wahlberg et al. 2009; Bell et al. 2010).
Several previous studies have estimated the age of various clades within the arthropods; while using somewhat similar analysis methods to our own, they had much smaller phylogenetic breadth, taxonomic depth, and gene sampling (see Introduction section). The exception to this is the recent study based on EST libraries (Rehm et al. 2011). The study by Rehm et al. (2011) differs in several respects from our study, as we have noted above. Our results conflict with theirs in several important respects: our estimate for the age of the basal split in Arthropoda is much older than theirs (ca. 700 vs. ca. 560 Ma), and our estimates for the ages of the basal splits in insects are much younger than theirs (between 50 and 100 myr younger). However, both studies do suggest that arthropods began diversifying in the Precambrian and our simulations suggest that this is a robust result. These age discrepancies for the major insect lineages are probably explained by the way in which calibrations were implemented. Rehm et al. (2011) appear to have used hard minimum bounds with no maximum limits (either soft or hard), except at one node describing the age of the first split in Diptera, which has both a hard maximum and a hard minimum bound. Compared with the use of these hard bounds, we feel the use of soft bounds with mean ages provides for more interaction between our data and uncertainty in the calibration points during analysis. However, given the exceptional taxonomic diversity of the Arthropoda, estimations of the age of this clade will always be based on an extremely limited subset of taxa. Whether larger datasets of taxa and genes continue to disagree or converge upon consensus remains to be determined in the coming years.
Substantial changes in substitution rates along branches could, in theory, complicate our ability to date with confidence, especially when rate changes are autocorrelated (Drummond et al. 2006; Battistuzzi et al. 2010). Using simulation analyses, we directly addressed the potential negative effects of such variation across a range of rate changes, from lower to much higher than observed levels (Fig. 5). We found no effect on the ability to detect a “short-fuse” Cambrian explosion with increasing levels of autocorrelated rate variation, with our observed results being significantly older than the Cambrian explosion. Exploration of similar levels of autocorrelated rate variation in our observed dataset, which is consistent with a “long-fuse, gradual” scenario of arthropod evolution, also finds no significant overlap with the Cambrian explosion unless autocorrelated rates are simulated at much higher levels.
Dating complications could also arise should rate changes be associated with basal nodes close to the Cambrian explosion. Indeed, previous studies reported an increase in substitution rate among the basal branches of Arthropoda (Aris-Brosou and Yang 2002; Aris-Brosou and Yang 2003). However, these increases in rate appear to have been spurious results (Ho et al. 2005). Our findings suggest that genome-wide rate variation across the arthropod tree is low when averaged across a large phylogenomic dataset (Fig. 4), especially among the basal branches of the tree. However, a previous study using a 5-taxon analysis of genomic data did identify an increased rate of molecular evolution for an internal region of the arthropod phylogeny, in the branches between Coleoptera and Diptera (Savard et al. 2006). Our results agree. The increased taxonomic sampling of our dataset localizes this rate increase to the basal branches of insects with a significant increase in the lineages leading to Diptera and Lepidoptera (Fig. 4). Further detailed sampling coupled with additional fossil data is needed to more accurately resolve the relationship between divergence timing and evolutionary rates among these ancestral branches.
A final concern is that of the age constraint structure and fossil placements when estimating times of divergence (Hug and Roger 2007; Ho and Phillips 2009), both of which can have large effects on results. Our use of multiple true soft constraints, relatively evenly spread throughout the topology and modeled as normal distributions, has allowed us to observe how these constraints affect the posterior results. Looking at the individual nodes with the prior constraints, we find no systematic bias in the effects, with some posterior estimates of ages being older than the priors, some being similar and some being younger than our imposed priors (Fig. 3). By allowing such shifts by using soft constraints, we feel, as others do, that it provides a more robust analysis of the data (Sanders and Lee 2007; Ho and Phillips 2009). Moreover, by including all of these constraints in our simulation analyses, we have simultaneously assessed the interaction of these constraints with various amounts of autocorrelated rate change, finding them sufficient for our temporal investigations.
Given these findings, should the arthropods have arisen quickly under a “short-fuse” Cambrian explosion scenario, our analyses would have detected this event even under some “worst case” scenarios of molecular evolution. Complementary to this, our observed estimate of the crown age of Arthropoda was significantly older than the Cambrian explosion event (Fig. 5a). In sum, our observations and simulations are consistent with a “long-fuse” scenario of gradual evolution of the Arthropoda during the Precambrian, possibly beginning in the Cryogenian.
Conclusions
The results presented here provide estimates of times of divergence in the megadiverse phylum Arthropoda using what we believe to be the most robust estimation methods available. In addition, we explicitly test the Cambrian explosion hypothesis using these methods on simulated data. Our molecular-based results provide independent temporal estimates for the study of macroevolutionary events that are complementary to fossil data. Knowing the age of the arthropods, as well as when subsequent major lineages appeared, provides a powerful tool for studying macroevolutionary events fundamental to our understanding of evolution. Recent fossil finds from the Ediacaran suggest that Metazoans are older than previously thought (Maloof et al. 2010; Yuan et al. 2011), and such discoveries are part of an ongoing process that has continually pushed Metazoan origins deeper in time. This growing body of empirical data is concordant with our results in suggesting that conclusions based on the absence of data, such as the paucity of fossils in the Ediacaran, may be substantially revised over time with new fossil finds.
Supplementary Material
Supplementary material, including data files and/or online-only appendices, can be found in the Dryad data repository at http://datadryad.org, doi:10.5061/dryad.3r4j45d2.
Funding
Funding for this work was provided by the Academy of Finland (grant numbers 131155, 129811).
Acknowledgements
A special thank you to R. Robertson and the excellent guides at the Burgess Shale Geoscience Foundation for a tour of the Walcott Quarry, which inspired much of this work. We thank Simon Ho for help with his program RateEvolver, and Joona Lehtomäki and Jussi Nokso-Koivisto for help with Python. We are also grateful for access to the resources of CSC - IT Center for Science, Finland, which were used to perform the Bayesian analyses. We thank associate editor Brian Wiegmann, Karl Kjer and an anonymous reviewer for constructive comments on previous versions of this article.
References
Arango C.P., Wheeler W.C. Phylogeny of the sea spiders (Arthropoda, Pycnogonida) based on direct optimization of six loci and morphology
|
6, Table 2), although our estimate is earlier than the earliest fossil hexapodans from the early Devonian (Grimaldi and Engel 2005). The crown clade of Hexapoda was one of our calibrated nodes, and the posterior estimates of the time of the first divergence are essentially the same, but somewhat older than the prior used to calibrate the node. Interestingly, a fossil insect believed to be a pterygote has been recorded from the early Devonian (Engel and Grimaldi 2004), suggesting that the age of insects in general may go back to the Silurian or indeed the Ordovician.
Multiple independent land invasions of the arthropods. Aquatic clades are shown in dark gray, while terrestrial clades are filled with diagonal lines. Gray shaded areas give the intervals during which colonization of land may have happened (not taking 95% HPD intervals into account), except in the case of Malacostraca, where the lower limit of the colonization is not possible to ascertain based on the taxon sampling.
Given the overlap in the 95% HPDs among the myriapod and chelicerate lineages, we cannot exclude the possibility that the 2 colonizations happened at about the same time, although this appears unlikely. Rather, based on our mean estimates of the ages of the crown clades, myriapods appear to have left the aquatic life before plants, because fossil evidence for the first terrestrial plants appears roughly 60 myr later at 475 Ma (Gray 1985; Wellman et al. 2003). Arachnida also appears earlier than these plants at 501 Ma. Hexapods on the other hand appeared after the terrestrial plant fossils in the late Ordovician at around 433 Ma (Wellman et al. 2003; Gensel 2008).
Grimaldi and Engel (2005) state that “[h]ow, when, and why insect wings originated is one of the most perplexing conundrums in evolution” (p. 158).
|
no
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
no_statement
|
the "silurian" "period" was not the "birth" of the first "land" "plants".. "land" "plants" did not emerge during the "silurian" "period".
|
https://www.nature.com/articles/s41559-022-01885-x
|
Divergent evolutionary trajectories of bryophytes and tracheophytes ...
|
Abstract
The origin of plants and their colonization of land fundamentally transformed the terrestrial environment. Here we elucidate the basis of this formative episode in Earth history through patterns of lineage, gene and genome evolution. We use new fossil calibrations, a relative clade age calibration (informed by horizontal gene transfer) and new phylogenomic methods for mapping gene family origins. Distinct rooting strategies resolve tracheophytes (vascular plants) and bryophytes (non-vascular plants) as monophyletic sister groups that diverged during the Cambrian, 515–494 million years ago. The embryophyte stem is characterized by a burst of gene innovation, while bryophytes subsequently experienced an equally dramatic episode of reductive genome evolution in which they lost genes associated with the elaboration of vasculature and the stomatal complex. Overall, our analyses reveal that extant tracheophytes and bryophytes are both highly derived from a more complex ancestral land plant. Understanding the origin of land plants requires tracing character evolution across a diversity of modern lineages.
Main
The origin and early evolution of land plants (embryophytes) constituted a formative episode in Earth history, transforming the terrestrial landscape, the atmosphere and the carbon cycle1,2. Along with bacteria, algae, lichens and fungi3, land plants were fundamental to the creation of the earliest terrestrial ecosystems, and their subsequent diversification has resulted in more than 370,000 extant species4. Embryophytes form a monophyletic group nested within freshwater streptophyte algae5 and their move to land, while providing a new ecological niche, presented new challenges that required adaptation to water loss and growth against gravity6. Early innovations that evolved in response to these challenges include a thick waxy cuticle, stomata and a means of transporting water from the roots up vertically growing stems2,5,7,8. Modern land plants comprise two main lineages, vascular plants (tracheophytes) and non-vascular plants (bryophytes), that have responded to these evolutionary challenges in different ways.
The evolutionary origins of many gene families, including those of key transcription factors, have been shown to predate the colonization of land9,10. However, studies of gene family evolution within land plants have typically been restricted to individual gene families or sets of genes that encode single traits11,12,13,14,15,16. A lack of genome-scale data from non-flowering plants has also hindered efforts to reconstruct patterns of genome and gene content evolution more broadly across land plants17, although this challenge has been mitigated by the publication of large transcriptomic datasets18. Progress has also been made towards resolving the ambiguous phylogenetic relationships at the root of land plants15,18,19,20,21,22,23. The bryophyte fossil record has also undergone a radical reinterpretation such that there are now many more records with the potential to constrain the timescale of early land plant evolution24,25,26. Finally, new methods have been developed for timetree calibration based on the relative time constraints informed by horizontal gene transfer (HGT) events27.
Here we seek to exploit these advances in elucidating early land plant evolution. We first infer a rooted phylogeny of land plants using outgroup-free rooting methods and both concatenation and coalescent approaches. We then estimate an updated timescale of land plant evolution incorporating densely sampled fossil calibrations that reflect a revised interpretation of the fossil record. We extend this analysis using gene transfer events to better calibrate the timescale of hornwort evolution, a poorly constrained region of the land plant tree. By building on this dated phylogeny, we reconstruct the gene content evolution of bryophytes, tracheophytes and the ancestral embryophyte, revealing how key genes, pathways and genomes diverged during early land plant evolution.
Results
Complementary rooting approaches support the monophyly of bryophytes
A rooted phylogenetic framework is required to infer the nature of the ancestral embryophyte and to trace changes in gene content during the evolution of land plants. To that end, we compiled a comprehensive dataset of the published genome and transcriptome data from embryophytes and their algal relatives, and we inferred species trees using concatenation (PhyloBayes and IQ-TREE) and coalescent (ASTRAL) approaches (Supplementary Information). When the tree was rooted with an algal outgroup, we recovered bryophyte monophyly and a root between bryophytes and tracheophytes with high support across all analyses (Extended Data Fig. 1), in agreement with recent work15,18,20,22,23,28. However, rooting phylogenies with an outgroup can influence the ingroup topology due to long-branch attraction (LBA)29,30,31, where distantly related or fast-evolving taxa artifactually branch with the outgroup. LBA resulting from the large evolutionary distance between land plants and their algal relatives has previously been suggested as a possible cause of the difficulty in resolving the land plant phylogeny32. Indeed, outgroup-rooting analyses using different models20,33, datasets and molecules (that is, chloroplast, mitochondrial or nuclear sequences22,28) have provided support for conflicting hypotheses about the earliest-branching lineages and the nature of the ancestral land plant. LBA is thus a known artefact when recovering the land plant phylogeny.
To address the impact of LBA and complement traditional outgroup-rooting analyses, we used two outgroup-free rooting methods—amalgamated likelihood estimation (ALE) and STRIDE34,35—to infer root placement on a dataset of 24 high-quality embryophyte genomes without the inclusion of an algal outgroup (Fig. 1). ALE calculates gene family likelihoods for a given root position under a model of gene duplication, transfer and loss (DTL)34; support for candidate root positions can then be evaluated by comparing their summed gene family likelihoods. STRIDE first identifies putative gene duplications in unrooted gene trees that can act as synapomorphies for post-duplication clades. The root of the species tree is then estimated using a probabilistic model that accounts for conflict among the inferred duplications35. Across 18,560 orthogroups, STRIDE recovered three most parsimonious roots: between bryophytes and tracheophytes, between liverworts and the remaining land plants and between hornworts and the remaining land plants (Fig. 1). Of these, the rooting on hornworts was assigned a 0.2% probability, on liverworts a 59.8% probability and between bryophytes and tracheophytes a 39.9% probability. To estimate root likelihoods using the ALE approach, we first used the divergence time estimates from the molecular clock analysis to convert branch lengths into units of geological time, allowing us to perform time-consistent reconciliations (that is, to prevent reconciliations in which gene transfers occur into the past). We reconciled 18,560 gene families under the 12 rooted and dated embryophyte trees (Fig. 1a) and used an approximately unbiased (AU) test (Fig. 1b) to evaluate support for the tested root positions. The AU test rejected 9 of 12 roots (P < 0.05; Fig. 2b and Supplementary Table 3), resulting in a credible set of three roots: the hornwort stem, the moss stem and a root between bryophytes and tracheophytes. These three credible roots are in close proximity on the tree, and root positions further from this region are rejected with increasing confidence (Fig. 1b and Supplementary Table 1). To evaluate the nature of the root signal for these three branches, we performed a family-filtering analysis in which families with high DTL rates were sequentially removed and the likelihood re-evaluated. The rationale for this analysis is that the evolution of these families may be poorly described by the model, and so they may contribute misleading signals36. In this case, the root order did not change after the removal of the high-DTL-rate families (Supplementary Fig. 1), suggesting broad support for these root positions from the data and analysis. Note that, in the ALE analysis, the moss and hornwort stems were accorded a higher summed gene family likelihood than was the branch separating bryophytes and tracheophytes, although the difference was not significant (hornwort stem log-likelihood, −824,522.9, P = 0.624; moss stem log-likelihood, −824,606.5, P = 0.475; bryophyte stem log-likelihood, −824709.1, P = 0.277). In a secondary analysis, we also used ALE to compare support for these different root positions in a smaller dataset of 11 genomes that included algal outgroups; in this analysis, all roots were rejected except for a root between tracheophytes and bryophytes (Extended Data Fig. 2, P < 0.05).
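As an illustration of the bookkeeping behind this root comparison (not the authors' pipeline; the input file layout below is hypothetical), a sketch like the following can sum per-family reconciliation log-likelihoods for each candidate root and rank the roots. The formal significance assessment in the study uses an AU test, which is not reproduced here.

```python
# Minimal sketch (hypothetical file layout, not the authors' pipeline): rank candidate
# root positions by their summed gene-family log-likelihoods from ALE reconciliations.
from collections import defaultdict
import csv

def summed_loglik(table_path):
    """table_path: CSV with columns root_id,family_id,loglik (one row per reconciliation)."""
    totals = defaultdict(float)
    with open(table_path) as fh:
        for row in csv.DictReader(fh):
            totals[row["root_id"]] += float(row["loglik"])
    return dict(totals)

if __name__ == "__main__":
    totals = summed_loglik("ale_root_logliks.csv")  # hypothetical input file
    best = max(totals.values())
    for root, ll in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"root {root:>2s}: sum logL = {ll:,.1f}  (delta vs best = {best - ll:,.1f})")
```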
Fig. 1: Investigating the root of embryophytes using outgroup-free rooting.
a, An unrooted maximum likelihood tree was inferred from an alignment of 24 species and 249 single-copy orthogroups under the LG + C60 + G4 + F model69. Twelve candidate root positions for embryophytes were investigated using both ALE and STRIDE. For the ALE analysis, the unrooted tree was rooted in each of the 12 positions and scaled to geological time on the basis of the results of the divergence time analysis, and 18,560 gene clusters were reconciled using the ALEml algorithm88. The green circles highlight supported roots following the ALE analysis, while the red circles denote supported nodes in the STRIDE analysis. b, The likelihood of the 12 embryophyte roots was assessed with an AU test. The AU test significantly rejected 9 of the 12 roots, with roots on hornworts, moss and monophyletic bryophytes (root positions 9, 12 and 8, respectively) comprising the credible set. c, Phylogenetic trees constrained to the credible roots were inferred in IQ-TREE69 under the LG + C60 + G + F model. An AU test was used to evaluate the likelihood of each of the constrained trees90, with the root resulting in monophyletic bryophytes being the only one not to be significantly rejected.
Fig. 2: Divergence times in millions of years as inferred using a molecular clock model, 68 fossil calibrations and an HGT. The inference that the common ancestor of embryophytes lived during the Cambrian is robust to the choice of maximum age constraints (Supplementary Methods). The divergence times of hornworts are constrained by an HGT into polypod ferns, with the result that the hornwort crown is inferred to have diverged during the Permian–Triassic. The nodes are positioned on the mean age, and the bars represent the 95% highest posterior density.
Finally, we constrained the topology of the tree inferred from the concatenated alignment to be in accordance with the three credible roots and computed the likelihood of sequence data along those trees. Trees with embryophyte roots constrained to hornworts and moss were significantly rejected (P < 0.05, AU test; Supplementary Table 2). The agreement between three rooting methods using different sources of information (outgroup placement, gene duplications alone and DTL events more broadly) therefore provides the most compelling support for a root between bryophytes and tracheophytes from our analyses. Taking our analyses together with other recent work15,20,22,23,28 suggests that a root between monophyletic tracheophytes and bryophytes is the best-supported hypothesis of land plant phylogeny. Bryophyte monophyly is therefore the default hypothesis with which to interpret land plant evolution.
Combined fossil and genomic evidence, including an ancient HGT, calibrate the timescale of land plant evolution
We estimated divergence times on the resolved land plant phylogeny (Fig. 2). We assembled a set of 68 fossil calibrations, representing every major lineage of land plant and notably sampling more bryophyte fossils than previous studies (Supplementary Methods). Despite this increased sampling, the fossil record of hornworts remains particularly sparse, and no fossils unambiguously calibrate the deepest branches within the clade. To ameliorate the limitations of the fossil record, we implemented a relative node age constraint based on the horizontal transfer of the chimaeric photoreceptor NEOCHROME from hornworts into ferns37. To account for uncertainty in the timing of the gene transfer, we evaluated the impacts of several possible scenarios on our analyses (Extended Data Fig. 3). In the absence of direct fossil calibrations for hornworts, this gene transfer provides a relative constraint that ties the history of hornworts to that of ferns, for which more fossils are available.
Our results are congruent with those of previous studies38 but offer greater precision on many nodes and in some cases greater accuracy (Supplementary Fig. 2). This improvement has been achieved through a denser sampling of fossil calibrations, improved taxonomic sampling (especially among bryophytes), relative calibration of hornworts using the NEOCHROME HGT, and the ability to condition divergence times on a single topology.
The role and influence of fossil calibrations in molecular clock studies, especially maximum age calibrations, remain controversial23,39,40. While the fossil record is an incomplete representation of past diversity, our analyses account for this uncertainty in the form of soft minima and maxima. Morris et al.38 inferred a relatively young age for the embryophyte crown ancestor (515–470 million years ago (Ma)), making use of a maximum age constraint based on the absence of embryophyte spores in strata for which fossilization conditions were such that spores of non-embryophyte algae have been preserved. Hedges et al.39 and Su et al.23 argued against the suitability of this maximum age constraint on the basis that calibrations derived from fossil absences are unreliable and that the middle Cambrian maximum age exerts too great an influence on the posterior estimate8,41. To assess the sensitivity of our approach to the effect of maximum age calibrations, we repeated the clock analyses with less informative maximum age calibrations (Supplementary Methods). Removing the maximum age constraint on the embryophyte node produced highly similar estimates to when the maximum is employed (Extended Data Fig. 4). Relaxing all maxima did result in more ancient estimates for the origin of embryophytes, although still considerably younger than recent studies23, extending the possible origin for land plants back to the Ediacaran (540–597 Ma; Extended Data Fig. 4). The older ages estimated in Su et al.23 seem to reflect, in part, differences in the phylogenetic assignment of certain fossils (Supplementary Methods), such as the putative algae Proterocladus antiquus and the liverwort Ricardiothallus devonicus, rather than a dependence on the maximum age calibration. Our results reject the possibility that land plants originated during the Neoproterozoic, instead supporting an origin of the land plant crown group during the mid-late Cambrian, 515–493 Ma, with crown tracheophytes and crown bryophytes originating 452–447 Ma (Late Ordovician) and 500–473 Ma (late Cambrian to Early Ordovician), respectively. Within bryophytes, the divergence between Setaphyta (mosses + liverworts) and hornworts occurred by 479–450 Ma (Ordovician), with the radiation of crown mosses by 420–364 Ma (latest Silurian to Late Devonian) and crown liverworts 440–412 Ma (early Silurian to Early Devonian). Among tracheophytes, the crown ancestor of lycophytes is dated to the middle Silurian to Early Devonian, 431–411 Ma, coincident with that of euphyllophytes 432–414 Ma.
The calibration of hornwort diversification using the NEOCHROME HGT had a substantial impact on inferences of stem and crown group age. In the absence of fossil calibrations on deep nodes, hornworts are characterized by an ancient stem lineage and the youngest crown lineage among land plants38,42. The effect of the relative age constraint is to make the crown group older (294–214 Ma; Fig. 2) and thus shorten the length of the stem, with divergence times within the crown group all moving older. We repeated the analysis with alternative placements for the relative time constraint, with the age of crown hornworts becoming increasingly ancient when the transfer was placed into the ancestor of more inclusive clades, Cyatheales + Polypodiales (258–419 Ma) or before the divergence of Gleicheniales from the Cyatheales + Polypodiales clade (331–445 Ma), respectively (these scenarios are illustrated in Extended Data Fig. 3). All of these estimates considerably predate the earliest unequivocal fossils assigned to hornworts. However, given the scarcity of hornwort fossils, it seems likely that this clade is older than a literal reading of the fossil record might suggest.
Gene content of the embryophyte common ancestor
We used gene-tree/species-tree reconciliation to estimate the gene content of the embryophyte common ancestor (Supplementary Tables 3–5). We used the genome dataset from the ALE rooting analysis with the addition of five algal genomes, to better place the origin of families that predate the origin of embryophytes (Supplementary Fig. 3). The tree was dated following the same methodology as the larger dating analysis while using an applicable subset of calibrations, allowing the use of a dated reconciliation algorithm (ALEml) to improve the estimation of DTL events (Supplementary Fig. 4).
The analysis of ancestral gene content highlighted considerable gene gain along the ancestral embryophyte branch (Fig. 3a and Supplementary Table 3). A substantial number of duplications defined this transition, with fewer transfers and losses observed. Our analysis suggests that the common ancestor of embryophytes and Zygnematales had more of the building blocks of plant complexity than extant Zygnematales, which have undergone a loss of 1,442 gene families since their divergence, the largest loss observed on the tree (Fig. 3a). Functional characterization of the genes lost in the Zygnematales using the KEGG database identified gene families involved in the production of cytoskeletons, exosomes and phenylpropanoid synthesis (Supplementary Table 6). Exosomes and complex cytoskeletons are essential for multicellular organisms to function43,44, and the inferred loss of these gene families is consistent with the hypothesis that the body plan of the algal ancestor of embryophytes was multicellular5, rather than possessing the single-cell or filamentous architecture observed in extant Zygnematales. The more complex cytoskeleton could be associated with increased rigidity, helping overcome the gravitational and evaporative pressures associated with the transition to land6. Interestingly, phenylpropanoids are associated with protection against UV irradiance45 and homiohydry5, suggesting that the common ancestor may have been better adapted to a terrestrial environment than extant Zygnematales.
Fig. 3: Gene content reconstruction of the ancestral embryophyte.
a, Ancestral gene content was inferred for the internal branches of the embryophyte tree. A maximum likelihood tree was inferred from an alignment of 30 species of plants and algae, comprising 185 single-copy orthologues and 71,855 sites, under the LG + C60 + G4 + F model in IQ-TREE69, and rooted in accordance with our previous phylogenetic analysis. A timescale for the tree was then calculated using a subset of 18 applicable fossil calibrations in MCMCtree. We reconciled 20,822 gene family clusters, inferred using Markov clustering87, against the rooted dated species tree using the ALEml algorithm88. The summed copy number of each gene family (under each branch) was determined using custom Python code (branchwise_number_of_events.py). Branches with reduced copies from the ancestral node are coloured in red. The numbers of DTL events are represented by purple, blue and red circles, respectively. The sizes of the circles are proportional to the summed number of events (the scale is indicated by the grey circle). b, The number of DTL events scaled by time for four clade-defining branches in the embryophyte tree. c, The number of shared gene families between the ancestral embryophyte, liverwort and angiosperm. The ancestral embryophyte shares more gene families with the ancestral angiosperm than with the ancestral liverwort.
We also observed greater gene loss along the bryophyte stem lineage (Fig. 3a and Supplementary Tables 3, 7 and 8), with the rate of gene loss (in terms of gene families per year) substantially greater than in all other major clades (Fig. 3b). It is important to note that inferences of gene loss from large-scale analyses are sensitive to the approach used to cluster sequences and define gene families; current approaches are not consummate. We therefore sought to evaluate the robustness of our conclusions using a range of sensitivity analyses (Supplementary Figs. 5–8). These suggested that, while the number of inferred gene losses on the bryophyte stem varies, it remains an event of major gene loss under all conditions tested. We also observed considerable losses along the tracheophyte stem, countered by a greater number of duplications (Supplementary Table 9). This suggests a period of genomic upheaval on both sides of the embryophyte phylogeny. Gene Ontology (GO) term functional annotation of the gene families lost in bryophytes reveals reductions in shoot and root development from the ancestral embryophyte (Supplementary Table 7 and Extended Data Fig. 5). To investigate the evolution of genes underlying morphological differences between tracheophytes and bryophytes, we evaluated the evolutionary history of gene families containing key Arabidopsis genes for vasculature and stomata (Supplementary Table 10). Gene families associated with both vasculature and stomatal function exhibited lineage-specific loss in bryophytes (Supplementary Figs. 9 and 10). Specifically, four orthologous gene families that are involved in the determination of the Arabidopsis body plan, containing WOX4, SPCH/MUTE/FAMA, AP2 and ARR, were inferred to be lost on the bryophyte stem (Supplementary Table 10). To investigate these inferred losses in more detail, we manually curated sequence sets and inferred phylogenetic trees for these families (Supplementary Methods and Extended Data Fig. 6). These analyses of individual gene families corroborated the pattern of loss along the branch leading to bryophytes. The loss of these orthologous gene families strengthens the hypothesis that ancestral embryophytes had a more complex vasculature system than that of extant bryophytes8. Overall, the loss of gene families (Fig. 3) and the change in GO term frequencies (Extended Data Fig. 5) suggest a widespread reduction in complexity in bryophytes, and the ancestral embryophyte being more complex than previously envisaged. Indeed, gene loss defines the bryophytes early in their evolutionary history, but large numbers of duplication and transfer events are observed following the divergence of the setaphytes and hornworts (Supplementary Table 3), with (for example) extant mosses boasting a similar gene copy number to tracheophytes (Fig. 3).
Discussion
We have presented a time-scaled phylogeny for embryophytes, which confirms the growing body of evidence that bryophytes form a monophyletic group (Fig. 1), and our precise estimates of absolute divergence times provide a robust framework to reconstruct genome evolution across early land plant lineages (Fig. 2). Our results confirm that many well-characterized gene families predate the origin of land plants9,10,15,46,47. However, our analyses also show that extensive gene loss has characterized the evolution of major embryophyte groups. Reductive evolution in bryophytes has been demonstrated previously, where the loss of several genes has resulted in the lack of stomata15,48.
Our results suggest that these patterns of gene loss are not confined to stomata but are instead pervasive across bryophyte (and tracheophyte) genomes, and that much of the genome reduction occurred during a relatively brief period of ~20 million years following their divergence from tracheophytes during the Cambrian. While the balance of evidence favours bryophyte monophyly, it is interesting to note that the inference of high levels of gene loss in bryophytes is not contingent on this hypothesis: extensive within-bryophyte gene loss was inferred under all three of the roots within the credible region identified in the ALE analysis (Supplementary Table 11). These findings point to contrasting dynamics of genome evolution between the two major land plant lineages, with bryophytes demonstrating a net loss of genes, whereas gene loss is balanced by duplication in tracheophytes. The evolutionary pressures that underlay this ‘Cambrian implosion’ and the ways in which gene loss contributed to the evolution of the bryophyte body plan (such as the loss of genes associated with vasculature) remain unclear. It has been proposed that the radiation of vascular plants, heralded by the increased diversity of trilete spores in the palynological record, relegated bryophytes to a more marginal niche49. However, it seems possible that bryophytes independently evolved to exploit this niche, shedding the molecular and phenotypic innovations of embryophytes where they were no longer necessary. A large body of research has focused on the importance of gene and whole-genome duplication in generating evolutionary novelty in land plant evolution50,51,52,53. However, gene loss is an important driver of phenotypic evolution in other systems54,55,56, notably in flying and aquatic mammals57 and yeast58. It has also been shown that rates of genome evolution, rather than absolute genome size, correlate with diversification across plants59. Extant bryophytes remain highly diverse, and it is possible that bryophytes represent another example of specialization and evolutionary success via gene loss.
Bryophytes have sometimes been used as models in physiological and genetic experiments to infer the nature of the ancestral land plant. Our analysis suggests that modern bryophytes are highly derived: in terms of gene content, our analysis suggests that the ancestral angiosperm may have shared more genes with the ancestral land plant than did the ancestral liverwort (Fig. 3c). Such differences in gene content between species can be visualized as an ordination, where the two-dimensional distances between species represent dissimilarity in gene content. Reconstructed gene content at ancestral nodes can be projected into this space, showing the evolution of gene content along the phylogeny (Fig. 4). These genome disparity analyses reveal that the genomes of bryophytes and tracheophytes are both highly derived. Neither lineage occupies an ancestral position, with lineage-specific gene gain and loss events driving high disparity in both bryophytes and tracheophytes, reinforcing the view that there are no extant embryophytes that uniquely preserve the ancestral state20,21,60. Despite the paucity of data for some groups, these analyses reveal that the diversity among bryophyte genomes is comparable to that among tracheophyte genomes. These results are perhaps unsurprising given that bryophytes have been evolving independently of tracheophytes since the Cambrian and the similarly ancient divergence of each of the major bryophyte lineages, but they emphasize the point that, in general terms, bryophytes serve as no better a proxy for the ancestral land plant than do tracheophytes. Our results therefore agree that a view of bryophytes as primitive plants may mislead inferences of ancestral gene content or character evolution20,61. Instead, the best model organism(s) for investigating the nature of early plants will depend on the trait being investigated, alongside a careful appraisal of the phylogenetic diversity, including algal outgroups. Likewise, interpretations of the early land plant fossil record have been contingent on the first land plants appearing more like extant bryophytes than tracheophytes. That the ancestral embryophyte may have been more complex than living bryophytes is in keeping with many early macrofossils being more complex than bryophytes and possessing a mosaic of tracheophyte and bryophyte traits8,62.
Fig. 4: Genome disparity analysis demonstrates that the gene content of both tracheophytes and bryophytes is highly derived.
Non-metric multidimensional scaling (NMDS) analysis of the presence and absence of gene families. The presence or absence of each gene family was determined from the ALE analysis for each tip and internal node in the phylogeny. The presence/absence data were used to calculate the Euclidean distances between species and nodes, which were then ordinated using NMDS. Branches were drawn between the nodes of the tree, with convex hulls fitting around members of each major lineage of land plants.
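A minimal sketch of this kind of disparity ordination, assuming a simple presence/absence matrix and using scipy and scikit-learn rather than the authors' own scripts, might look like the following.

```python
# Minimal sketch of the genome-disparity ordination described in the Fig. 4 legend
# (not the authors' code): Euclidean distances on a gene-family presence/absence
# matrix, ordinated with non-metric multidimensional scaling.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical presence/absence matrix: rows = species or ancestral nodes,
# columns = gene families (1 = present, 0 = absent).
labels = ["anc_embryophyte", "anc_bryophyte", "moss", "liverwort", "lycophyte", "angiosperm"]
pa = np.random.default_rng(1).integers(0, 2, size=(len(labels), 500))

dists = squareform(pdist(pa, metric="euclidean"))

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=10, max_iter=500, random_state=1)
coords = nmds.fit_transform(dists)

for name, (x, y) in zip(labels, coords):
    print(f"{name:18s} NMDS1={x:+.3f}  NMDS2={y:+.3f}")
```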
Methods
Sequence data
An amino acid sequence dataset was assembled for the outgroup rooting analysis composed of 177 species, with 23 algae and 154 land plants (Supplementary Table 12). The sequence data were obtained from published transcriptomes18,63 or whole-genome sequences from the NCBI repository64. For the outgroup-free rooting, a second dataset of 24 whole genomes consisting solely of land plants was constructed (Supplementary Table 13). A further 6 genomes, comprising 1 land plant and 5 algae, were used to infer the ancestral gene content across land plants (Supplementary Table 13). The completeness of each genome or transcriptome was assessed using the BUSCO algorithm and the Viridiplantae library65, with completeness measured as the percentage of present BUSCO genes (Supplementary Tables 12 and 13 and Supplementary Figs. 11–14).
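For readers wishing to reproduce this kind of completeness check, a hedged sketch using the commonly documented BUSCO command-line interface (protein mode with the Viridiplantae lineage set) is shown below; paths, run names and thread counts are placeholders, and flags should be verified against the installed BUSCO version.

```python
# Minimal sketch (not the authors' script): run BUSCO in protein mode with the
# Viridiplantae lineage dataset for each proteome in a directory.
import subprocess
from pathlib import Path

PROTEOMES = Path("proteomes")          # hypothetical directory of *.faa files

for faa in sorted(PROTEOMES.glob("*.faa")):
    outname = f"busco_{faa.stem}"
    subprocess.run(
        ["busco",
         "-i", str(faa),               # input proteome
         "-l", "viridiplantae_odb10",  # Viridiplantae BUSCO lineage set
         "-m", "proteins",             # protein mode
         "-o", outname,                # output run name
         "-c", "8"],                   # CPU threads
        check=True,
    )
```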
Software
All custom Python scripts used in the current study are available at https://github.com/ak-andromeda/ALE_methods/. Software usage is described in the PDF document ALE_methods_summary.pdf in the GitHub folder along with a demonstration dataset.
Phylogenetics
Supermatrices
We aligned 160 single-copy gene families using MAFFT67, and poorly aligning sites were identified and removed with BMGE using the BLOSUM30 matrix68. For the maximum likelihood analyses, we used the best-fitting substitution model as selected by the Bayesian information criterion (LG + C60 + G4 + F) in IQ-TREE (version 1.6.12)69,70; the Bayesian analyses were performed under the CAT + GTR + G4 model in PhyloBayes version 2.3 (ref. 71,72). These models accommodate site-specific amino acid compositions via a fixed number of empirical profiles (C60) or an infinite mixture of profiles (CAT)73,74.
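A sketch of these supermatrix steps, driven from Python and using typical command-line invocations of MAFFT, BMGE, IQ-TREE 1.6 and PhyloBayes, is given below; file names are placeholders and the flags may differ between tool versions, so this is an illustration rather than the authors' exact commands.

```python
# Minimal sketch (not the authors' pipeline) of the supermatrix steps: align each
# single-copy family with MAFFT, trim with BMGE (BLOSUM30), then analyse the
# concatenate with IQ-TREE (LG+C60+G4+F) and PhyloBayes (CAT-GTR).
import subprocess

def run(cmd, **kw):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, **kw)

# 1. Align one gene family (repeat per family).
with open("og0001.aln.fasta", "w") as out:
    run(["mafft", "--auto", "og0001.fasta"], stdout=out)

# 2. Remove poorly aligned sites with BMGE using the BLOSUM30 matrix.
run(["java", "-jar", "BMGE.jar",
     "-i", "og0001.aln.fasta", "-t", "AA",
     "-m", "BLOSUM30", "-of", "og0001.trim.fasta"])

# (Trimmed families are then concatenated into supermatrix.fasta, e.g. with a custom script.)

# 3. Maximum likelihood tree under LG+C60+G4+F with ultrafast bootstraps (IQ-TREE 1.6).
run(["iqtree", "-s", "supermatrix.fasta",
     "-m", "LG+C60+G4+F", "-bb", "1000", "-nt", "AUTO"])

# 4. Bayesian analysis under CAT-GTR in PhyloBayes (run at least two chains and check convergence).
run(["pb", "-d", "supermatrix.phy", "-cat", "-gtr", "chain1"])
```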
Supertrees
Individual maximum likelihood gene trees were inferred for each of the 160 single-copy gene families in IQ-TREE69, using the best-fitting model, selected individually for each gene using the Bayesian information criterion. A supertree was then inferred using ASTRAL version 5.7.6 (ref. 75).
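The corresponding gene-tree and supertree steps might be scripted as follows; this is a sketch with placeholder file names (including the ASTRAL jar path), not the authors' exact commands.

```python
# Minimal sketch (not the authors' pipeline): one ML gene tree per family with IQ-TREE
# (model selected by ModelFinder), pool the trees, then estimate a species tree with ASTRAL.
import glob
import subprocess

families = sorted(glob.glob("trimmed/*.fasta"))

# 1. One gene tree per single-copy family; "-m MFP" runs ModelFinder (BIC by default).
for fam in families:
    subprocess.run(["iqtree", "-s", fam, "-m", "MFP", "-nt", "2"], check=True)

# 2. Pool all gene trees into a single newline-delimited file.
with open("gene_trees.tre", "w") as pooled:
    for fam in families:
        with open(fam + ".treefile") as fh:
            pooled.write(fh.read())

# 3. ASTRAL species tree from the gene trees (jar path is a placeholder).
subprocess.run(["java", "-jar", "astral.5.7.6.jar",
                "-i", "gene_trees.tre", "-o", "species_tree.tre"], check=True)
```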
Divergence time estimation
Molecular clock methods represent one of the only credible means of obtaining an evolutionary timescale, integrating molecular and palaeontological evidence bearing on the phylogenetic and temporal relationships of living clades. Molecular clock methods see through the gaps in the fossil record to the timing of divergence of molecular loci. One feature of any molecular clock analysis is that, in the absence of admixture or gene transfer, the divergence of gene lineages must logically occur prior to the divergence of the organismal lineages that contain them76. Molecular clock branch lengths inferred from concatenates represent an average across loci, and the distinction between gene and lineage divergences is not modelled. The discrepancy between the two ages is unclear, but it is probably small and encompassed by the uncertainties associated with molecular clock estimates.
Estimates of the origins of major lineages of land plants have proven robust to different phylogenetic hypotheses38,39, but not to different interpretations of the fossil record23,38,39. Some recent studies of the timing of land plant evolution have argued that fossil calibrations should not exert undue influence over divergence time estimates23,40. However, in the absence of fossil calibrations, relaxed molecular clocks fail to distinguish rate and time, and fossil calibrations are therefore important across the tree to inform rate variation and in turn increase the accuracy of age estimates77. Our approach thus sought to maximize the information in the fossil record and increase the sampling of fossil calibrations over previous studies23,38.
Minimum age calibrations were defined on the basis of the oldest unequivocal evidence of a lineage. Specifying a maximum age calibration is considered controversial by some23,39, yet maximum ages are always present, either as justified user-specified priors or incidentally as part of the joint time prior78,79. On this basis, we defined our maxima following the principles defined in Parham et al.80, and fossil calibrations were defined as minimum and maximum age constraints, in each case modelled as uniform distributions between minima and maxima, with a 1% probability of either bound being exceeded (Supplementary Methods). We fixed the tree topology to that recovered by the Bayesian analysis and used the normal approximation method in MCMCtree (v. 4.9i)81, with branch lengths first estimated under the LG + G4 model in codeml (v. 4.9i)81. We divided the gene families into four partitions according to their rate, determined on the basis of the maximum likelihood distance between Arabidopsis thaliana and Ginkgo biloba. We implemented a relaxed clock model (uncorrelated; independent gamma rates), where the rates for each branch are treated as independent samples drawn from a lognormal distribution. The shape of the distribution is assigned a prior for the mean rate (μ) and for the variation among branches (σ), each modelled as a gamma-distributed hyperprior. The gamma distribution for the mean rate was assigned a diffuse shape parameter of 2 and a scale parameter of 10, on the basis of the pairwise distance between Arabidopsis thaliana and Ginkgo biloba, assuming a divergence time of 350 Ma38. The rate variation parameter was assigned a shape parameter of 1 and a scale parameter of 10. The birth and death parameters were each set to 1, specifying a uniform kernel82. Four independent Markov chain Monte Carlo runs were performed, each running for four million generations to achieve convergence. Convergence was assessed in Tracer (v. 1.7.1)83 by comparing posterior parameter estimates across all four runs and by ensuring that the effective sample sizes exceeded 200.
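To illustrate the shape of these priors, a small sketch follows. This is an illustration only, not MCMCtree itself: the calibration bounds are example values, the soft tails are modelled loosely, and the '10' in the rate hyperprior is interpreted as a gamma rate parameter (as in MCMCtree's rgene_gamma convention); all of these are assumptions.

```python
# Minimal sketch (illustration only, not MCMCtree): a soft-bounded uniform fossil
# calibration and the gamma hyperpriors on the relaxed-clock mean rate and
# among-branch rate variance. All numeric values are illustrative.
from scipy import stats

def calibration_density(t, t_min, t_max, tail=0.01):
    """Uniform between the bounds, with ~1% probability mass in a soft tail beyond each bound."""
    if t_min <= t <= t_max:
        return (1.0 - 2 * tail) / (t_max - t_min)
    scale = 0.1 * (t_max - t_min)      # loose choice of tail decay
    if t < t_min:
        return tail * stats.expon.pdf(t_min - t, scale=scale)
    return tail * stats.expon.pdf(t - t_max, scale=scale)

# Example crown calibration (ages in Ma; bounds are illustrative, not the paper's exact values).
for age in (470, 500, 515, 530):
    print(f"age {age} Ma -> prior density {calibration_density(age, 469.0, 515.5):.5f}")

# Hyperpriors on the relaxed clock (time unit assumed to be 100 Myr, the usual MCMCtree convention).
mean_rate_prior = stats.gamma(a=2, scale=1 / 10)  # shape 2; '10' treated as a gamma rate parameter
rate_var_prior = stats.gamma(a=1, scale=1 / 10)   # shape 1; prior on among-branch log-rate variance
print("prior mean rate:", mean_rate_prior.mean(),
      "prior rate variance (sigma^2):", rate_var_prior.mean())
```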
Temporal constraint from a hornwort-to-fern HGT
HGT events provide information about the order of nodes on a species phylogeny in time over and above the ancestor–descendent relationships imposed by a strictly bifurcating phylogenetic species tree. Consequently, inferred HGT events can be used as relative node order constraints between divergent scions27; this is especially useful when fossil calibrations are not uniformly distributed across a tree. We used the horizontal transfer of the chimaeric neochrome photoreceptor (NEO) from hornworts to a derived fern lineage (Polypodiales)84 as an additional source of data about divergence times in hornworts, a lineage that diverged early in plant evolution but is poorly represented in the fossil record. We inferred a new gene tree for NEO using the expanded sampling of lineages now available, which confirmed the donor and recipient lineages originally reported84 (Extended Data Fig. 7). The gene tree topology for the NEOCHROME family reveals discordance between the species and gene trees for some relationships within the ferns, with copies present in some earlier-diverging lineages, including gleichenioid and tree ferns (Extended Data Fig. 7). This suggests that some duplication and loss, or perhaps within-fern transfer, may have occurred in this family. As a result, while the gene was most likely acquired in the common ancestor of Polypodiales, transfers into Gleicheniales or Cyatheales cannot be excluded entirely. We repeated the analysis with the relative time constraint reflecting each of these possibilities.
This relative node order constraint was used together with the 66 fossil calibrations in a Bayesian inference program (mcmc-date, https://github.com/dschrempf/mcmc-date) to infer a species tree with branch lengths measured in absolute time. In contrast to MCMCtree, mcmc-date uses the posterior distribution of branch lengths estimated by PhyloBayes, as described above, together with a multivariate normal distribution accounting for correlations between branches, to approximate the phylogenetic likelihood. Furthermore, an exponential hyperprior with mean 1.0 was used for the birth and death rates, as well as for the mean and variance of the gamma prior of the branch rates. A tailored set of random-walk proposals executed in random order per iteration, and the Metropolis-coupled Markov chain Monte Carlo algorithm85 with four parallel chains, resulted in near independence of consecutive samples. After a burn-in of approximately 5,000 iterations, 15,000 iterations were performed. All inferred parameters and node ages have effective sample sizes above 8,000 as calculated by Tracer. Subsequently, the relative node dating analysis and the partitioned molecular clock analysis were combined by using the posterior distributions for the divergence times within hornworts from the relative node dating as a prior for the partitioned analysis in MCMCtree.
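Conceptually, the relative constraint simply requires that, in any joint sample of node ages, the donor node (roughly, crown hornworts) is at least as old as the recipient node within ferns. The sketch below illustrates this with invented marginal age samples; it is not the mcmc-date implementation.

```python
# Minimal sketch (illustration, not mcmc-date): the NEOCHROME transfer implies the donor
# node must be at least as old as the recipient node. Given joint samples of node ages,
# keep only those consistent with the constraint. Age distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

crown_hornworts = rng.normal(250, 40, n)      # hypothetical marginal samples (Ma)
crown_polypodiales = rng.normal(200, 30, n)   # hypothetical marginal samples (Ma)

ok = crown_hornworts >= crown_polypodiales    # relative node-order constraint
print(f"samples satisfying constraint: {ok.mean():.1%}")
print(f"constrained hornwort crown age: {np.mean(crown_hornworts[ok]):.0f} Ma "
      f"(95% interval {np.percentile(crown_hornworts[ok], 2.5):.0f}-"
      f"{np.percentile(crown_hornworts[ok], 97.5):.0f} Ma)")
```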
Gene-tree/species-tree reconciliation
Modelling of gene DTL with ALE was used to assess the most likely root of embryophytes. We constructed a dataset comprising 24 genomes with the highest BUSCO completion for each lineage sampled (Supplementary Figs. 13 and 14 and Supplementary Table 13). An unrooted species tree was constructed using IQ-TREE under the LG + C60 + G4 + F model, as described in the ‘Phylogenetics’ section. The unrooted species tree was then manually rooted on 12 candidate branches, with each alternatively rooted tree scaled to geological time using the mean node ages from the dating analysis. Gene family clusters were inferred by an all-versus-all DIAMOND BLAST86 with an e-value threshold of 10−5, in combination with Markov clustering with an inflation parameter of 2.0 (ref. 87). All gene family clusters were aligned (MAFFT) and trimmed (BMGE), and bootstrap tree distributions were inferred using IQ-TREE as described above. Gene family clusters were reconciled under the 12 candidate root position trees using the ALEml algorithm88. The likelihood of each gene family under each root was calculated; the credible roots were determined using an AU test89,90. A detailed description of the ALE implementation can be found at https://github.com/ak-andromeda/ALE_methods/.
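A sketch of this clustering-and-reconciliation pipeline, using typical invocations of DIAMOND, MCL and the ALE suite, is shown below; file names are placeholders, and the flags and exact ALE commands should be checked against the installed versions rather than taken as the authors' commands.

```python
# Minimal sketch (not the authors' pipeline): all-vs-all DIAMOND search, MCL clustering
# (inflation 2.0), then ALE reconciliation of each family's bootstrap trees against a
# rooted, time-scaled species tree.
import glob
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. All-vs-all protein search with DIAMOND (e-value threshold 1e-5).
run(["diamond", "makedb", "--in", "all_proteomes.faa", "-d", "all_proteomes"])
run(["diamond", "blastp", "-q", "all_proteomes.faa", "-d", "all_proteomes",
     "-e", "1e-5", "--outfmt", "6", "qseqid", "sseqid", "evalue",
     "-o", "all_vs_all.tsv"])

# 2. Convert the hit table to a three-column "abc" file (query, subject, weight),
#    e.g. using -log10(e-value) as the edge weight, then cluster with MCL (inflation 2.0).
run(["mcl", "all_vs_all.abc", "--abc", "-I", "2.0", "-o", "gene_families.mcl"])

# 3. For each family: ALEobserve builds the .ale file from the bootstrap trees, and ALEml
#    reconciles it against the rooted, dated species tree.
for ufboot in glob.glob("families/*.ufboot"):
    run(["ALEobserve", ufboot])
    run(["ALEml", "species_tree_dated.newick", ufboot + ".ale"])
```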
Ancestral gene content reconstruction
Gene family clusters for the genomic dataset were inferred using the same methods as described above, but the dataset was expanded to contain the genomes of five algal outgroups to allow inference of gene content evolution prior to the embryophyte root (Supplementary Figs. 3 and 4). Ancestral gene content and instances of gene duplication, loss and transfer were determined by reconciling the gene family clusters with the rooted species tree under the ALEml model. We repeated the analyses using different approaches to filter the data for low-quality gene families (Supplementary Methods). A custom Python script called Ancestral_reconstruction_copy_number.py was used to identify the presence and absence of gene families on each branch of the tree from the ALE output (Supplementary Methods). To functionally annotate the gene families, we inferred the consensus sequence of each gene family alignment using hidden Markov modelling91. Consensus sequences were functionally annotated using eggNOG-mapper92, and GO terms were summarized using the custom Python script make_go_term_dictionary.py. For deeper nodes of the tree where GO terms were infrequent, genes were annotated with the KEGG database using BlastKOALA93. KEGG annotations were summarized using the Python script kegg_analysis.py. Additionally, the numbers of DTL events per branch were calculated using the custom Python script branchwise_number_of_events.py.
Acknowledgements
T.A.W., J.W.C. and A.M.H. are supported by a Leverhulme Trust Research Project Grant (no. RPG-2019-004). T.A.W. is also supported by a Royal Society University Research Fellowship (no. URF\R\201024). B.J.H. is supported by a PhD studentship from the New Phytologist Trust. P.C.J.D. was funded by a Natural Environment Research Council grant (no. NEP013678/1), part of the Biosphere, Evolution, Transitions and Resilience programme, which is cofunded by the Natural Science Foundation for China; as well as a Biotechnology and Biological Sciences Research Council grant (no. BB/T012773/1) and a Leverhulme Trust Research Fellowship (no. 2022-167). This work was supported by the Gordon and Betty Moore Foundation through grant no. 10.37807/GBMF9741 to T.A.W., G.J.S. and P.C.J.D. G.J.S. and D.S. are supported by the European Research Council under the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 714774.
Contributions
B.J.H., J.W.C., D.S., G.J.S., P.C.J.D., A.M.H. and T.A.W. conceived the study and designed the experiments. All experiments were performed by B.J.H., J.W.C. and D.S. All authors contributed to the interpretation of the results and the drafting of the manuscript.
Extended data
Extended Data Fig. 1: a, Phylogenetic tree inferred from a concatenated alignment of 30,919 sites consisting of 160 single-copy orthogroups using the CAT-GTR model (Blanquart and Lartillot, 2008). Branch colour is proportional to the posterior probability; black branches received maximum support, and red branches received support less than the maximum but greater than 0.9. The grey bars assigned to each species are proportional to the percentage of gaps in the alignment. Species with more than 50% gaps in the alignment have their labels coloured blue. The branches of the tree are not drawn to scale. b, Summarised maximum likelihood tree inferred from the same alignment as above using the LG + C60 + G4 + F model, which accounts for site heterogeneity in the substitution process. All major nodes received maximum bootstrap support. c, Phylogenetic tree inferred using ASTRAL; gene trees were inferred from the 160 single-copy orthogroups used to construct the concatenate. All branches except the one defining bryophytes received maximum coalescent support, although that branch still received strong support (0.95). The sizes of the circles in a and b are proportional to the sample size of the lineage they represent.
Extended Data Fig. 2: Unrooted maximum likelihood tree inferred from an alignment of 11 species and single-copy orthogroups under the LG + C60 + G4 + F model. Four candidate root positions for embryophytes were investigated using ALE. For the ALE analysis, the unrooted tree was rooted in each of the four positions and scaled to geological time based on the results of the divergence time analysis, and gene clusters were reconciled using the ALEml algorithm. The likelihood of the four embryophyte roots was assessed with an approximately unbiased (AU) test. The AU test significantly rejected 3 out of the 4 roots, favouring only a root between bryophytes and tracheophytes.
Extended Data Fig. 3: The NEOCHROME horizontal gene transfer is predicted to have occurred from hornworts into the ancestor of polypod ferns. However, topological uncertainty in the NEOCHROME gene tree allows the possibility that the transfer could have occurred into a more ancient lineage (A). We placed the relative node calibration such that hornworts must be more ancient than (i) Polypodiales, (ii) Cyatheales + Polypodiales and (iii) Gleicheniales + Cyatheales + Polypodiales. The 95% highest posterior density (HPD) for the molecular clock analysis under each scenario is shown as a bar in (B), with a dot for the mean age. 95% HPDs were calculated from 2,000 post-burnin samples over 2,000,000 MCMC generations.
Extended Data Fig. 4: Calibrations were altered by variously relaxing maximum age calibrations on the age of embryophytes (Strategy B) and embryophytes and tracheophytes (Strategy C). The width of the red band across the phylogenies represents the 95% highest posterior density (HPD) interval. 95% HPDs were calculated from 2,000 post-burnin samples over 2,000,000 MCMC generations.
Extended Data Fig. 5: Left, overall change in GO term frequency between the ancestral embryophyte and the ancestral bryophyte/tracheophyte. GO terms on average become less frequent in bryophytes. Right, change in the frequency of specific GO terms between the ancestral embryophyte and the ancestral bryophyte/tracheophyte. Bryophytes have a reduction in gene families associated with shoot and root development, while we see an increase in gene families associated with these GO terms in the tracheophyte ancestor.
Extended Data Fig. 6: Gene trees were constructed from BLAST searches of an expanded taxon set. Each gene tree was inferred under the best-fitting model in IQ-TREE determined via the Bayesian Information Criterion. The trees were rooted using algal outgroups. In each case, the branches where bryophytes appear to have undergone loss are marked by a yellow dot.
Extended Data Fig. 7: The Arabidopsis thaliana protein sequence for PHOT1 was used to BLAST a database of genomes and transcriptomes from 177 species of plants and algae. The homologous sequences were aligned with MAFFT and trimmed with BMGE. A maximum likelihood tree was inferred in IQ-TREE under the best-fitting substitution model selected with the Bayesian Information Criterion. Eight fern genes were resolved within the hornworts and were inferred to have undergone horizontal gene transfer (coloured red). This transfer was previously characterised (Li et al., 2014), and we corroborate this finding with maximum bootstrap support.
|
To assess the sensitivity of our approach to the effect of maximum age calibrations, we repeated the clock analyses with less informative maximum age calibrations (Supplementary Methods). Removing the maximum age constraint on the embryophyte node produced highly similar estimates to when the maximum is employed (Extended Data Fig. 4). Relaxing all maxima did result in more ancient estimates for the origin of embryophytes, although still considerably younger than recent studies23, extending the possible origin for land plants back to the Ediacaran (540–597 Ma; Extended Data Fig. 4). The older ages estimated in Su et al.23 seem to reflect, in part, differences in the phylogenetic assignment of certain fossils (Supplementary Methods), such as the putative algae Proterocladus antiquus and the liverwort Ricardiothallus devonicus, rather than a dependence on the maximum age calibration. Our results reject the possibility that land plants originated during the Neoproterozoic, instead supporting an origin of the land plant crown group during the mid-late Cambrian, 515–493 Ma, with crown tracheophytes and crown bryophytes originating 452–447 Ma (Late Ordovician) and 500–473 Ma (late Cambrian to Early Ordovician), respectively. Within bryophytes, the divergence between Setaphyta (mosses + liverworts) and hornworts occurred by 479–450 Ma (Ordovician), with the radiation of crown mosses by 420–364 Ma (latest Silurian to Late Devonian) and crown liverworts 440–412 Ma (early Silurian to Early Devonian).
|
no
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
no_statement
|
the "silurian" "period" was not the "birth" of the first "land" "plants".. "land" "plants" did not emerge during the "silurian" "period".
|
https://opengeology.org/textbook/8-earth-history/
|
8 Earth History – An Introduction to Geology
|
8 Earth History
Spider Rock, within Canyon de Chelly National Monument, not only has a long human history with the Diné tribe, but also has a long geologic history. The rocks are Permian in age, and formed in the desert conditions that dominated North America toward the end of the Paleozoic through the middle Mesozoic. Erosion of the canyon occurred in the Cenozoic.
KEY CONCEPTS
By the end of this chapter, students should be able to:
Explain the big-bang theory and origin of the elements
Explain the solar system’s origin and the consequences for Earth.
Describe the turbulent beginning of Earth during the Hadean and Archean Eons
Identify the transition to modern atmosphere, plate tectonics, and evolution that occurred in the Proterozoic Eon
Describe the Paleozoic evolution and extinction of invertebrates with hard parts, fish, amphibians, reptiles, tetrapods, and land plants; and tectonics and sedimentation associated with the supercontinent Pangea
Describe the Mesozoic evolution and extinction of birds, dinosaurs, and mammals; and tectonics and sedimentation associated with the breakup of Pangea
Describe the Cenozoic evolution of mammals and birds, paleoclimate, and tectonics that shaped the modern world
Geologic time on Earth, represented circularly, to show the individual time divisions and important events. Ga = billion years ago, Ma = million years ago.
Entire courses and careers have been based on the wide-ranging topics covering Earth’s history. Throughout the long history of Earth, change has been the norm. Looking back in time, an untrained eye would see many unfamiliar life forms and terrains. The main topics studied in Earth history are paleogeography, paleontology, paleoecology, and paleoclimatology: respectively, past landscapes, past organisms, past ecosystems, and past environments. This chapter will briefly cover the origin of the universe and the 4.6-billion-year history of Earth. It will focus on the major physical and biological events in each Eon and Era.
8.1 Origin of the Universe
The Hubble Deep Field. This image, released in 1996, is a composite long-exposure picture of one of the darkest parts of the night sky. Every light on this image that does not have diffraction spikes is believed to be an entire galaxy, with hundreds of billions of stars, demonstrating the immense size and scope of the universe.
The universe appears to have an infinite number of galaxies and solar systems and our solar system occupies a small section of this vast entirety. The origins of the universe and solar system set the context for conceptualizing the Earth’s origin and early history.
8.1.1 Big-Bang Theory
Timeline of the expansion of the universe.
The mysterious details of events prior to and during the origin of the universe are subject to great scientific debate. The prevailing idea about how the universe was created is called the big-bang theory. Although the ideas behind the big-bang theory feel almost mystical, they are supported by Einstein's theory of general relativity. Other scientific evidence, grounded in empirical observations, supports the big-bang theory.
The big-bang theory proposes the universe was formed from an infinitely dense and hot core of material. The bang in the title suggests there was an explosive, outward expansion of all matter and space that created atoms. Spectroscopy confirms that hydrogen makes up about 74% of all matter in the universe. Since its creation, the universe has been expanding for 13.8 billion years and recent observations suggest the rate of this expansion is increasing.
Spectroscopy
The electromagnetic spectrum and properties of light across the spectrum.
Spectroscopy is the investigation and measurement of spectra produced when a material interacts with or emits electromagnetic radiation. Spectra is the plural of spectrum, which is a particular band of wavelengths from the electromagnetic spectrum. Common spectra include the different colors of visible light, X-rays, ultraviolet waves, microwaves, and radio waves. Each beam of light is a unique mixture of wavelengths that combine across the spectrum to make the color we see. The light wavelengths are created or absorbed inside atoms, and each wavelength signature matches a specific element. Even white light from the Sun, which seems like an uninterrupted continuum of wavelengths, has gaps in some wavelengths. The gaps correspond to elements present in the Earth's atmosphere that act as filters for specific wavelengths. These missing wavelengths were famously observed by Joseph von Fraunhofer (1787–1826) in the early 1800s, but it took decades before scientists were able to relate the missing wavelengths to atmospheric filtering. Spectroscopy shows that the Sun is mostly made of hydrogen and helium. Applying this process to light from distant stars, scientists can calculate the abundance of elements in a specific star and in the visible universe as a whole. Also, this spectroscopic information can be used as an interstellar speedometer.
Redshift
This animation demonstrates how the Doppler effect is heard as a car moves: the waves in front of the car are compressed together, making the pitch higher, while the waves behind the car are stretched and the pitch gets lower.
The Doppler effect is the same process that changes the pitch of the sound of an approaching car or ambulance from high to low as it passes. When an object emits waves, such as light or sound, while moving toward an observer, the wavelengths get compressed. In sound, this results in a shift to a higher pitch. When an object moves away from an observer, the wavelengths are extended, producing a lower pitched sound. The Doppler effect is used on light emitted from stars and galaxies to determine their speed and direction of travel. Scientists, including Vesto Slipher (1875–1969) and Edwin Hubble (1889–1953), examined galaxies both near and far and found that almost all galaxies outside of our galaxy are moving away from each other, and from us. Because the light wavelengths of receding objects are extended, visible light is shifted toward the red end of the spectrum, called a redshift. In addition, Hubble noticed that galaxies farther away from Earth showed greater redshift, and thus are traveling away from us faster. The only way to reconcile this information is to deduce that the universe is still expanding. Hubble's observation forms the basis of the big-bang theory.
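The redshift-velocity-distance reasoning above can be made concrete with a little arithmetic. The following minimal sketch (in Python, added here for illustration and not part of the original textbook) assumes the low-redshift approximation v ≈ c·z and an illustrative Hubble constant of about 70 km/s per megaparsec; the spectral-line values and the constant are examples, not measurements from this chapter.

C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec (illustrative value)

def redshift(observed_nm, rest_nm):
    # Fractional stretch of a spectral line toward the red end of the spectrum.
    return (observed_nm - rest_nm) / rest_nm

def recession_velocity_km_s(z):
    # Low-redshift approximation: v is roughly c times z (valid only for small z).
    return C_KM_S * z

def hubble_distance_mpc(v_km_s):
    # Hubble's law, d = v / H0, giving distance in megaparsecs.
    return v_km_s / H0

# Example: a hydrogen line with rest wavelength 656.3 nm observed at 662.9 nm.
z = redshift(662.9, 656.3)         # about 0.01
v = recession_velocity_km_s(z)     # about 3,000 km/s
d = hubble_distance_mpc(v)         # about 43 Mpc
print(round(z, 4), round(v), round(d))

Run in reverse, the same relationship hints at the age of the universe: with H0 near 70 km/s per Mpc, the time 1/H0 works out to roughly 14 billion years, consistent with the 13.8-billion-year figure quoted above.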
Cosmic Microwave Background Radiation
Heat map, showing slight variations in background heat, which is related to cosmic background radiation.
Another strong indication of the big bang is cosmic microwave background radiation. Cosmic radiation was accidentally discovered by Arno Penzias (1933–) and Robert Woodrow Wilson (1936–) when they were trying to eliminate background noise from a communication satellite. They discovered very faint traces of energy or heat that are omnipresent across the universe. This energy was left behind from the big bang, like an echo.
8.1.2 Stellar Evolution
Origin of the elements on the periodic table, showing the important role the star life cycle plays.
Astronomers think the big bang created the lighter elements: mostly hydrogen, with smaller amounts of helium, lithium, and beryllium. Another process must be responsible for creating the other 90 heavier elements. The current model of stellar evolution explains the origins of these heavier elements.
Birth of a star
Section of the Eagle Nebula known as "The Pillars of Creation."
Stars start their lives as elements floating in cold, spinning clouds of gas and dust known as nebulas. Gravitational attraction, or perhaps a nearby stellar explosion, causes the elements to condense and spin into a disk shape. In the center of this disk, a new star is born under the force of gravity. The spinning whirlpool concentrates material in the center, and the increasing gravitational forces collect even more mass. Eventually, the immensely concentrated mass of material reaches a critical point of such intense heat and pressure that it initiates fusion.
Fusion
General diagram showing the series of fusion steps that occur in the Sun.
Fusion is not a chemical reaction. Fusion is a nuclear reaction in which two or more nuclei, the centers of atoms, are forced together and combine, creating a new, larger atom. This reaction gives off a tremendous amount of energy, usually as light and solar radiation. An element such as hydrogen combines or fuses with other hydrogen atoms in the core of a star to become a new element, in this case helium. Another product of this process is energy, such as solar radiation that leaves the Sun and comes to the Earth as light and heat. Fusion is a steady and predictable process, which is why we call this the main phase of a star's life. During its main phase, a star turns hydrogen into helium. Since most stars contain plentiful amounts of hydrogen, the main phase may last billions of years, during which their size and energy output remain relatively steady.
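To see where fusion's "tremendous amount of energy" comes from, compare the mass going in with the mass coming out. The short Python sketch below (an illustration added here, not from the textbook) uses approximate standard atomic masses and E = mc² to estimate the energy released when, in net effect, four hydrogen atoms become one helium atom.

U_TO_KG = 1.660539e-27     # kilograms per atomic mass unit
C_M_S = 2.99792458e8       # speed of light in m/s
U_TO_MEV = 931.494         # energy equivalent of one atomic mass unit, in MeV

mass_hydrogen_u = 1.007825   # approximate mass of a hydrogen atom, in atomic mass units
mass_helium_u = 4.002602     # approximate mass of a helium-4 atom, in atomic mass units

mass_lost_u = 4 * mass_hydrogen_u - mass_helium_u   # about 0.0287 u, roughly 0.7% of the input
energy_joules = mass_lost_u * U_TO_KG * C_M_S**2    # about 4.3e-12 J per helium atom made
energy_mev = mass_lost_u * U_TO_MEV                 # about 26.7 MeV

print(round(mass_lost_u, 4), energy_joules, round(energy_mev, 1))

Only a fraction of a percent of the mass disappears as energy in each reaction, but multiplied across the enormous number of reactions in a star's core, that is what keeps the main phase shining for billions of years.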
Two main paths of a star's life cycle, depending on mass.
The giant phase in a star's life occurs when the star runs out of hydrogen for fusion. If a star is large enough, it has sufficient heat and pressure to start fusing helium into heavier elements. This style of fusion is more energetic, and the higher energy and temperature expand the star to a larger size and brightness. This giant phase is predicted to happen to our Sun in another few billion years, growing the radius of the Sun to Earth's orbit, which will render life impossible. The mass of a star during its main phase is the primary factor in determining how it will evolve. If the star has enough mass and reaches a point at which the primary fusion element, such as helium, is exhausted, fusion continues using new, heavier elements. This occurs over and over in very large stars, forming progressively heavier elements like carbon and oxygen. Eventually, fusion reaches its limit as it forms iron and nickel. This progression explains the abundance of iron and nickel in rocky objects, like Earth, within the solar system. At this point, any further fusion absorbs energy instead of giving it off, which is the beginning of the end of the star's life.
Death of a Star
Hubble space telescope image of the Crab Nebula, the remnants of a supernova that occurred in 1054 C.E.
The death of a star can range from spectacular to other-worldly (see figure). Stars like the Sun form a planetary nebula, which comes from the collapse of the star’s outer layers in an event like the implosion of a building. In the tug-of-war between gravity’s inward pull and fusion’s outward push, gravity instantly takes over when fusion ends, with the outer gasses puffing away to form a nebula. More massive stars do this as well but with a more energetic collapse, which starts another type of energy release mixed with element creation known as a supernova. In a supernova, the collapse of the core suddenly halts, creating a massive outward-propagating shock wave. A supernova is the most energetic explosion in the universe short of the big bang. The energy release is so significant the ensuing fusion can make every element up through uranium.
A black hole and its shadow have been captured in an image for the first time in 2019, a historic feat by an international network of radio telescopes called the Event Horizon Telescope (Source: NASA)
The death of the star can result in the creation of white dwarfs, neutron stars, or black holes. Following their deaths, stars like the Sun turn into white dwarfs.
White dwarfs are hot star embers, formed by packing most of a dying star’s mass into a small and dense object about the size of Earth. Larger stars may explode in a supernova that packs their mass even tighter to become neutron stars. Neutron stars are so dense that protons combine with electrons to form neutrons. The largest stars collapse their mass even further, becoming objects so dense that light cannot escape their gravitational grasp. These are the infamous black holes and the details of the physics of what occurs in them are still up for debate.
8.2 Origin of the Solar System: The Nebular Hypothesis
Small protoplanetary discs in the Orion Nebula.
Our solar system formed at the same time as our Sun, as described in the nebular hypothesis. The nebular hypothesis is the idea that a spinning cloud of dust made of mostly light elements, called a nebula, flattened into a protoplanetary disk and became a solar system consisting of a star with orbiting planets. The spinning nebula collected the vast majority of material in its center, which is why the Sun accounts for over 99% of the mass in our solar system.
8.2.1 Planet Arrangement and Segregation
This disk is asymmetric, possibly because of a large gas giant planet orbiting relatively far from the star.
As our solar system formed, the nebular cloud of dispersed particles developed distinct temperature zones. Temperatures were very high close to the center, only allowing condensation of metals and silicate minerals with high melting points. Farther from the Sun, the temperatures were lower, allowing the condensation of lighter gaseous molecules such as methane, ammonia, carbon dioxide, and water. This temperature differentiation resulted in the inner four planets of the solar system becoming rocky and the outer four planets becoming gas giants.
Image by the ALMA telescope of HL Tauri and its protoplanetary disk, showing grooves formed as planets absorb material in the disk.
Both rocky and gaseous planets have a similar growth model. Particles of dust floating in the disc were attracted to each other by static charges and, eventually, gravity. As the clumps of dust became bigger, they interacted with each other—colliding, sticking, and forming proto-planets. The planets continued to grow over the course of many thousands or millions of years, as material from the protoplanetary disc was added. Both rocky and gaseous planets started with a solid core. Rocky planets built more rock on that core, while gas planets added gas and ice. Ice giants formed later and on the furthest edges of the disc, accumulating less gas and more ice. That is why the gas-giant planets Jupiter and Saturn are composed of mostly hydrogen and helium gas, more than 90%. The ice giants Uranus and Neptune are composed of mostly methane ices and only about 20% hydrogen and helium gases.
This artist's impression of the water snowline around the young star V883 Orionis, as detected with ALMA.
The planetary composition of the gas giants is clearly different from that of the rocky planets. Their size is also dramatically different, for two reasons. First, the original planetary nebula contained more gases and ices than metals and rocks: there was abundant hydrogen, carbon, oxygen, and nitrogen, and less silicon and iron, giving the outer planets more building material. Second, the stronger gravitational pull of these giant planets allowed them to collect large quantities of hydrogen and helium, which could not be collected by the weaker gravity of the smaller planets.
A polished fragment of the iron-rich Toluca Meteorite, with an octahedral Widmanstätten pattern.
Jupiter's massive gravity further shaped the solar system and the growth of the inner rocky planets. As the nebula started to coalesce into planets, Jupiter's gravity accelerated the movement of nearby materials, generating destructive collisions rather than constructively gluing material together. These collisions created the asteroid belt, an unfinished planet, located between Mars and Jupiter. This asteroid belt is the source of most meteorites that currently impact the Earth. The study of asteroids and meteorites helps geologists determine the age of Earth and the composition of its core, mantle, and crust. Jupiter's gravity may also explain Mars' smaller mass, with the larger planet consuming material as it migrated from the inner to the outer edge of the solar system.
Pluto and planet definition
Eight largest objects discovered past Neptune.
The outermost part of the solar system is known as the Kuiper belt, which is a scattering of rocky and icy bodies. Beyond that is the Oort cloud, a zone filled with small and dispersed ice traces. These two locations are where most comets form and continue to orbit, and objects found here have relatively irregular orbits compared to the rest of the solar system. Pluto, formerly the ninth planet, is located in this region of space. The XXVIth General Assembly of the International Astronomical Union (IAU) stripped Pluto of planetary status in 2006 because scientists discovered an object more massive than Pluto, which they named Eris. The IAU decided against including Eris as a planet and therefore excluded Pluto as well. The IAU narrowed the definition of a planet to three criteria: 1) enough mass for gravitational forces to make it rounded, 2) not massive enough to create fusion, and 3) large enough to be in a cleared orbit, free of other planetesimals that should have been incorporated at the time the planet formed. Pluto passed the first two parts of the definition, but not the third. Pluto and Eris are currently classified as dwarf planets.
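The three-part IAU definition is essentially a small decision procedure, which can be written out explicitly. The sketch below only illustrates the logic described in this paragraph; it is not an official IAU tool, and the dictionary keys are hypothetical names invented here.

def classify_body(body):
    # The three IAU criteria from the paragraph above, as boolean checks (illustrative only).
    rounded = body["massive_enough_to_be_rounded"]
    not_a_star = not body["massive_enough_for_fusion"]
    cleared_orbit = body["has_cleared_its_orbit"]

    if rounded and not_a_star and cleared_orbit:
        return "planet"
    if rounded and not_a_star:
        return "dwarf planet"          # passes the first two tests but not the third
    return "small solar system body"

pluto = {"massive_enough_to_be_rounded": True,
         "massive_enough_for_fusion": False,
         "has_cleared_its_orbit": False}

print(classify_body(pluto))            # -> dwarf planet, as for Eris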
8.3 Hadean Eon
Geologic Time Scale with ages shown
Geoscientists use the geological time scale to assign relative age names to events and rocks, separating major events in Earth’s history based on significant changes as recorded in rocks and fossils. This section summarizes the most notable events of each major time interval. For a breakdown on how these time intervals are chosen and organized, see chapter 7.
The Hadean Eon, named after the Greek god and ruler of the underworld Hades, is the oldest eon and dates from 4.5–4.0 billion years ago.
Artist's impression of the Earth in the Hadean.
This time represents Earth's earliest history, during which the planet was characterized by a partially molten surface, volcanism, and asteroid impacts. Several mechanisms made the newly forming Earth incredibly hot: gravitational compression, radioactive decay, and asteroid impacts. Most of this initial heat still exists inside the Earth. The Hadean was originally defined as spanning from the birth of the planet to 4.0 billion years ago, preceding the existence of most preserved rocks and life forms. However, geologists have dated minerals at 4.4 billion years, with evidence that liquid water was present. There is possibly even evidence of life existing over 4.0 billion years ago. However, the most reliable record for early life, the microfossil record, starts at 3.5 billion years ago.
8.3.1 Origin of Earth’s Crust
The global map of the depth of the Moho, or thickness of the crust.
As Earth cooled from its molten state, minerals started to crystallize and settle, resulting in a separation of minerals based on density and the creation of the crust, mantle, and core. The earliest Earth was chiefly molten material and would have been rounded by gravitational forces so that it resembled a ball of lava floating in space. As the outer part of the Earth slowly cooled, the high melting-point minerals (see Bowen's Reaction Series in Chapter 4) formed solid slabs of early crust. These slabs were probably unstable and easily reabsorbed into the liquid magma until the Earth cooled enough to allow numerous larger fragments to form a thin primitive crust. Scientists generally assume this crust was oceanic and mafic in composition, and littered with impacts, much like the Moon's current crust. There is still some debate over when plate tectonics started, which would have led to the formation of continental and felsic crust. Regardless of this, as Earth cooled and solidified, less dense felsic minerals floated to the surface of the Earth to form the crust, while the denser mafic and ultramafic materials sank to form the mantle, and the highest-density iron and nickel sank into the core. This differentiated the Earth from a homogeneous planet into a heterogeneous one with layers of felsic crust, mafic crust, ultramafic mantle, and an iron and nickel core.
8.3.2 Origin of the Moon
Dark side of the Moon.
Several unique features of Earth's Moon have prompted scientists to develop the current hypothesis about its formation. The Earth and Moon are tidally locked, meaning that as the Moon orbits, one side always faces the Earth and the opposite side is not visible to us. Also, and most importantly, the chemical compositions of the Earth and Moon show nearly identical isotope ratios and volatile content. Apollo missions returned from the Moon with rocks that allowed scientists to conduct very precise comparisons between Moon and Earth rocks. Other bodies in the solar system and meteorites do not share the same degree of similarity and show much higher variability. If the Moon and Earth formed together, this would explain why they are so chemically similar.
Artist's concept of the giant impact from a Mars-sized object that could have formed the Moon.
Many ideas have been proposed for the origin of the Moon: the Moon could have been captured from another part of the solar system, it could have formed in place together with the Earth, or it could have been ripped out of the early Earth. None of the proposed explanations can account for all the evidence. The currently prevailing hypothesis is the giant-impact hypothesis. It proposes that a body about half of Earth's size must have shared at least parts of Earth's orbit and collided with it, resulting in a violent mixing and scattering of material from both objects. Both bodies would be composed of a combination of materials, with more of the lower-density splatter coalescing into the Moon. This may explain why the Earth has a higher density and thicker core than the Moon.
Computer simulation of the evolution of the Moon (2 minutes).
8.3.3 Origin of Earth’s Water
Water vapor leaves comet 67P/Churyumov–Gerasimenko.
Explanations for the origin of Earth's water include volcanic outgassing, comets, and meteorites. The volcanic outgassing hypothesis for the origin of Earth's water is that it originated from inside the planet and emerged via tectonic processes as vapor associated with volcanic eruptions. Since all volcanic eruptions contain some water vapor, at times more than 1% of the volume, these alone could have created Earth's surface water. Another likely source of water was from space. Comets are a mixture of dust and ice, with some or most of that ice being frozen water. Seemingly dry meteorites can contain small but measurable amounts of water, usually trapped in their mineral structures. During heavy bombardment periods later in Earth's history, its cooled surface was pummeled by comets and meteorites, which could be why so much water exists above ground. There isn't a definitive answer as to which process is the source of ocean water. Earth's water isotopically matches water found in meteorites much better than that of comets. However, it is hard to know whether Earth processes could have changed the water's isotopic signature over the last 4-plus billion years. It is possible that all three sources contributed to the origin of Earth's water.
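The "isotopic match" argument compares deuterium-to-hydrogen (D/H) ratios among possible water sources. The sketch below is only an illustration: the numbers are rounded, approximate values reported in the literature (ocean water near 1.6e-4, water in carbonaceous chondrite meteorites near 1.4e-4, and comet 67P near 5.3e-4), not data taken from this textbook.

# Approximate D/H ratios (order-of-magnitude literature values, for illustration only).
D_H = {
    "Earth's oceans": 1.6e-4,
    "carbonaceous chondrites": 1.4e-4,   # water-bearing meteorites
    "comet 67P (Rosetta)": 5.3e-4,
}

earth = D_H["Earth's oceans"]
for source, ratio in D_H.items():
    print(f"{source}: D/H ~ {ratio:.1e} ({ratio / earth:.1f}x Earth's oceans)")
# Meteoritic water falls close to the ocean value; comet 67P is roughly 3x higher,
# which is why meteorites currently look like the better isotopic match.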
8.4 Archean Eon
Artist's impression of the Archean.
The Archean Eon, which lasted from 4.0–2.5 billion years ago, is named after the Greek word for beginning. This eon represents the beginning of the rock record. Although there is current evidence that rocks and minerals existed during the Hadean Eon, the Archean has a much more robust rock and fossil record.
8.4.1 Late Heavy Bombardment
2015 image from NASA's New Horizons probe of Pluto. Tombaugh Regio (the heart-shaped plain, lower right) has been inferred to be younger than the Late Heavy Bombardment and the surrounding surface due to its lack of impact craters.
Objects were chaotically flying around at the start of the solar system, building the planets and moons. There is evidence that after the planets formed, about 4.1–3.8 billion years ago, a second large spike of asteroid and comet impacts struck the Earth and Moon in an event called the late heavy bombardment. Meteorites and comets in stable or semi-stable orbits became unstable and started impacting objects throughout the solar system. In addition, this event is called the lunar cataclysm because most of the Moon's craters are from this event. During the late heavy bombardment, the Earth, Moon, and all planets in the solar system were pummeled by material from the asteroid and Kuiper belts. Evidence of this bombardment was found within samples collected from the Moon.
Simulation of before, during, and after the late heavy bombardment.
It is universally accepted that the solar system experienced extensive asteroid and comet bombardment at its start; however, some other process must have caused the second increase in impacts hundreds of millions of years later. A leading theory blames gravitational resonance between Jupiter and Saturn for disturbing orbits within the asteroid and Kuiper belts, based on a similar process observed in the Eta Corvi star system.
8.4.2 Origin of the Continents
The layers of the Earth. Physical layers include the lithosphere and asthenosphere; chemical layers are the crust, mantle, and core.
For plate tectonics to work as it does currently, continents must exist. However, the easiest way to create continental material is via assimilation and differentiation of existing continents (see Chapter 4). This chicken-and-egg quandary over how continents were made in the first place is not easily answered because of the great age of continental material and how much evidence has been lost during tectonics and erosion. While the timing and specific processes are still debated, volcanic action must have brought the first continental material to the Earth's surface during the Hadean, 4.4 billion years ago. This model does not solve the problem of continent formation, since magmatic differentiation seems to need thicker crust. Nevertheless, the continents formed by some incremental process during the early history of Earth. The best idea is that density differences allowed lighter felsic materials to float upward and heavier ultramafic materials and metallic iron to sink. These density differences led to the layering of the Earth, the layers that are now detected by seismic studies. Early protocontinents accumulated felsic materials as developing plate-tectonic processes brought lighter material from the mantle to the surface.
Subduction of an oceanic plate beneath another oceanic plate, forming a trench and an island arc. Several island arcs might combine and eventually evolve into a continent.
The first solid evidence of modern plate tectonics is found at the end of the Archean, indicating that at least some continental lithosphere must have been in place. This evidence does not necessarily mark the starting point of plate tectonics; remnants of earlier tectonic activity could have been erased by the rock cycle.
Geologic provinces of Earth. Cratons are pink and orange.
The stable interiors of the current continents are called cratons and were mostly formed in the Archean Eon. A craton has two main parts: the shield, which is crystalline basement rock near the surface, and the platform made of sedimentary rocks covering the shield. Most cratons have remained relatively unchanged with most tectonic activity having occurred around cratons instead of within them. Whether they were created by plate tectonics or another process, Archean continents gave rise to the Proterozoic continents that now dominate our planet.
The continent of Zealandia.
The general guideline as to what constitutes a continent and differentiates oceanic from continental crust is under some debate. At passive margins, continental crust grades into oceanic crust, making a distinction difficult. Even island-arc and hot-spot material can seem more closely related to continental crust than oceanic. Continents usually have a craton in the middle with felsic igneous rocks. There is evidence that submerged masses like Zealandia, which includes present-day New Zealand, should be considered continents. Continental crust that does not contain a craton is called a continental fragment, such as the island of Madagascar off the east coast of Africa.
8.4.3 First Life on Earth
Fossils of microbial mats from Sweden.
Life most likely started during the late Hadean or early Archean Eons. The earliest evidence of life includes chemical signatures, microscopic filaments, and microbial mats. Carbon found in 4.1-billion-year-old zircon grains has a chemical signature suggesting an organic origin. Other evidence of early life includes 3.8–4.3-billion-year-old microscopic filaments from a hydrothermal vent deposit in Quebec, Canada. While the chemical and microscopic-filament evidence is not as robust as fossils, there is significant fossil evidence for life at 3.5 billion years ago. These first well-preserved fossils are photosynthetic microbial mats, called stromatolites, found in Australia.
Greenhouse gases were more common in Earth’s early atmosphere.
Although the origin of life on Earth is unknown, hypotheses include a chemical origin in the early atmosphere and ocean, deep-sea hydrothermal vents, and delivery to Earth by comets or other objects. One hypothesis is that life arose from the chemical environment of the Earth's early atmosphere and oceans, which was very different from today's. The oxygen-free atmosphere produced a reducing environment with abundant methane, carbon dioxide, sulfur, and nitrogen compounds, similar to the atmospheres of other bodies in the solar system. In the famous Miller-Urey experiment, researchers simulated early Earth's atmosphere and lightning within a sealed vessel. After igniting sparks within the vessel, they discovered the formation of amino acids, the fundamental building blocks of proteins. In 1977, when scientists discovered an isolated ecosystem around hydrothermal vents on a deep-sea mid-ocean ridge (see Chapter 4), it opened the door for another explanation of the origin of life. The hydrothermal vents host a unique ecosystem of organisms with chemosynthesis, rather than photosynthesis, as the foundation of the food chain. The ecosystem derives its energy from hot, chemical-rich water pouring out of vent chimneys. This suggests that life could have started on the deep ocean floor and derived energy via chemosynthesis from the heat of the Earth's interior. Scientists have since expanded the search for life to more unconventional places, like Jupiter's icy moon Europa.
Animation of the original Miller-Urey 1959 experiment that simulated the early atmosphere and created amino acids from simple elements and compounds.
Another possibility is that life or its building blocks came to Earth from space, carried aboard comets or other objects. Amino acids, for example, have been found within comets and meteorites. This intriguing possibility also implies a high likelihood of life existing elsewhere in the cosmos.
8.5 Proterozoic Eon
Diagram showing the main products and reactants in photosynthesis. The one product that is not shown is sugar, the chemical energy that goes into constructing the plant and that is stored in the plant for later use by the plant or by animals that consume it.
The Proterozoic Eon, meaning "earlier life," comes after the Archean Eon and ranges from 2.5 billion to 541 million years ago. During this time, most of the central parts of the continents had formed and plate tectonic processes had started. Photosynthesis by microbial organisms, such as single-celled cyanobacteria, had been slowly adding oxygen to the oceans. As cyanobacteria evolved into multicellular organisms, they completely transformed the oceans and later the atmosphere by adding massive amounts of free oxygen gas (O2), initiating what is called the Great Oxygenation Event (GOE). This drastic environmental change decimated the anaerobic bacteria, which could not survive in the presence of free oxygen. On the other hand, aerobic organisms could thrive in ways they could not earlier.
An oxygenated world also changed the chemistry of the planet in significant ways. For example, iron remained in solution in the non-oxygenated environment of the earlier Archean Eon. In chemistry, this is known as a reducing environment. Once the environment was oxygenated, iron combined with free oxygen to form solid precipitates of iron oxide, such as the mineral hematite or magnetite. These precipitates accumulated into large mineral deposits with red chert known as banded-iron formations, which are dated at about 2 billion years.
Alternating bands of iron-rich and silica-rich mud, formed as oxygen combined with dissolved iron.
The formation of iron oxide minerals and red chert (see figure) in the oceans lasted a long time and prevented oxygen levels from increasing significantly, since precipitation took the oxygen out of the water and deposited it into the rock strata. As oxygen continued to be produced and mineral precipitation leveled off, dissolved oxygen gas eventually saturated the oceans and started bubbling out into the atmosphere. Oxygenation of the atmosphere is the single biggest event that distinguishes the Archean and Proterozoic environments. In addition to changing mineral and ocean chemistry, the GOE is also credited with triggering Earth's first glaciation event around 2.1 billion years ago, the Huronian Glaciation. Free oxygen reacted with methane in the atmosphere to produce carbon dioxide. Carbon dioxide and methane are called greenhouse gases because they trap heat within the Earth's atmosphere, like the insulated glass of a greenhouse. Methane is a more effective insulator than carbon dioxide, so as the proportion of carbon dioxide in the atmosphere increased, the greenhouse effect decreased, and the planet cooled.
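In balanced form, the net methane-destroying reaction described above can be written as the standard oxidation equation (added here for clarity; it is not spelled out in the textbook):

$$\mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}$$

Each methane molecule removed is replaced by one carbon dioxide molecule, trading a stronger greenhouse gas for a weaker one, which is the cooling effect the paragraph describes.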
8.5.1 Rodinia
One possible reconstruction of Rodinia 1.1 billion years ago. Source: John Goodge, modified from Dalziel (1997).
By the Proterozoic Eon, lithospheric plates had formed and were moving according to plate tectonic forces similar to those operating today. As the moving plates collided, the ocean basins closed to form a supercontinent called Rodinia. The supercontinent formed about 1 billion years ago and broke up about 750 to 600 million years ago, at the end of the Proterozoic. One of the resulting fragments was a continental mass called Laurentia that would later become North America. Geologists have reconstructed Rodinia by matching and aligning ancient mountain chains, assembling the pieces like a jigsaw puzzle, and using paleomagnetics to orient to magnetic north.
The disagreements over these complex reconstructions are exemplified by geologists proposing at least six different models for the breakup of Rodinia to create Australia, Antarctica, parts of China, the Tarim craton north of the Himalaya, Siberia, or the Kalahari craton of southern Africa. This breakup created many shallow-water, biologically favorable environments that fostered the evolutionary breakthroughs marking the start of the next eon, the Phanerozoic.
8.5.2 Life Evolves
Modern cyanobacteria (as stromatolites) in Shark Bay, Australia.
Early life in the Archean and earlier is poorly documented in the fossil record. Based on chemical evidence and evolutionary theory, scientists propose this life would have been single-celled photosynthetic organisms, such as the cyanobacteria that created stromatolites. Cyanobacteria produced free oxygen in the atmosphere through photosynthesis. Cyanobacteria, archaea, and bacteria are prokaryotes—primitive organisms made of single cells that lack cell nuclei and other organelles.
Fossil stromatolites in Saratoga Springs, New York.
A large evolutionary step occurred during the Proterozoic Eon with the appearance of eukaryotes around 2.1 to 1.6 billion years ago. Eukaryotic cells are more complex, having nuclei and organelles. The nuclear DNA is capable of more complex replication and regulation than that of prokaryotic cells. The organelles include mitochondria for producing energy and chloroplasts for photosynthesis. The eukaryote branch in the tree of life gave rise to fungi, plants, and animals.
Another important event in Earth’s biological history occurred about 1.2 billion years ago when eukaryotes invented sexual reproduction. Sharing genetic material from two reproducing individuals, male and female, greatly increased genetic variability in their offspring. This genetic mixing accelerated evolutionary change, contributing to more complexity among individual organisms and within ecosystems (see Chapter 7).
Proterozoic land surfaces were barren of plants and animals, and geologic processes actively shaped the environment differently because land surfaces were not protected by leafy and woody vegetation. For example, rain and rivers would have caused erosion at much higher rates on land surfaces devoid of plants. This resulted in thick accumulations of pure quartz sandstone from the Proterozoic Eon, such as the extensive quartzite formations in the core of the Uinta Mountains in Utah.
Dickinsonia, a typical Ediacaran fossil.
Fauna during the Ediacaran Period, 635.5 to 541 million years ago, are known as the Ediacaran fauna and offer a first glimpse at the diversity of ecosystems that evolved near the end of the Proterozoic. These soft-bodied organisms were among the first multicellular life forms and were probably similar to jellyfish or worms. Ediacaran fauna did not have hard parts like shells and were not well preserved in the rock record. However, studies suggest they were widespread in the Earth's oceans. Scientists still debate how many species were evolutionary dead-ends that became extinct and how many were ancestors of modern groupings. The transition from soft-bodied Ediacaran life to life forms with hard body parts occurred at the end of the Proterozoic and the beginning of the Phanerozoic Eons. This evolutionary explosion of biological diversity made a dramatic difference in scientists' ability to understand the history of life on Earth.
8.6 Phanerozoic Eon: Paleozoic Era
The trilobites had a hard exoskeleton and were an early arthropod, the same group that includes modern insects, crustaceans, and arachnids.
The Phanerozoic Eon is the most recent eon, 541 million years ago to today, and means "visible life" because the Phanerozoic rock record is marked by an abundance of fossils. Phanerozoic organisms had hard body parts like claws, scales, shells, and bones that were more easily preserved as fossils. Rocks from the older Precambrian time are less commonly found and rarely include fossils because these organisms had soft body parts. Phanerozoic rocks are younger, more common, and contain the majority of known fossils, so the study of rocks from this eon yields much greater detail. The Phanerozoic is subdivided into three eras, from oldest to youngest: the Paleozoic ("ancient life"), Mesozoic ("middle life"), and Cenozoic ("recent life"). The remaining three sections of this chapter cover these three important eras.
Trilobites, by Heinrich Harder, 1916.
Life in the early Paleozoic Era was dominated by marine organisms, but by the middle of the era plants and animals evolved to live and reproduce on land. Fish evolved jaws, and fins evolved into jointed limbs. The development of lungs allowed animals to emerge from the sea and become the first air-breathing tetrapods (four-legged animals), such as amphibians. From amphibians evolved reptiles with the amniotic egg. From reptiles evolved the early ancestors of birds and mammals, whose scales became feathers and fur. Near the end of the Paleozoic Era, the Carboniferous Period had some of the most extensive forests in Earth's history. Their fossilized remains became the coal that powered the industrial revolution.
8.6.1 Paleozoic Tectonics and Paleogeography
Laurentia, which makes up the North American craton.
During the Paleozoic Era, sea levels rose and fell four times. With each sea-level rise, the majority of North America was covered by a shallow tropical ocean. Evidence of these submersions includes abundant marine sedimentary rocks, such as limestone with fossil corals and ooids. Extensive sea-level falls are documented by widespread unconformities. Today, the midcontinent has extensive marine sedimentary rocks from the Paleozoic, and western North America has thick layers of marine limestone on block-faulted mountain ranges such as Mt. Timpanogos near Provo, Utah.
A reconstruction of Pangaea, showing approximate positions of modern continents.
The assembly of the supercontinent Pangea, sometimes spelled Pangaea, was completed by the late Paleozoic Era. The name Pangea was originally coined by Alfred Wegener and means "all land." Pangea formed when all of the major continents were grouped together as one landmass by a series of tectonic events, including subduction, island-arc accretion, continental collisions, and ocean-basin closures. In North America, these tectonic events occurred on the east coast and are known as the Taconic, Acadian, Caledonian, and Alleghanian orogenies. The Appalachian Mountains are the erosional remnants of these mountain-building events in North America. Surrounding Pangea was a global ocean basin known as Panthalassa. Continued plate movement extended the ocean into Pangea, forming a large bay called the Tethys Sea that eventually divided the land mass into two smaller supercontinents, Laurasia and Gondwana. Laurasia consisted of Laurentia and Eurasia, and Gondwana consisted of the remaining continents of South America, Africa, India, Australia, and Antarctica.
Animation of plate movement the last 3.3 billion years. Pangea occurs at the 4:40 mark.
While the east coast of North America was tectonically active during the Paleozoic Era, the west coast remained mostly inactive as a passive margin during the early Paleozoic. The western edge of the North American continent was near the present-day Nevada-Utah border and was an expansive, shallow continental shelf near the paleoequator. However, by the Devonian Period, the Antler orogeny started on the west coast and lasted until the Pennsylvanian Period. The Antler orogeny involved a volcanic island arc that was accreted onto western North America, with the subduction direction away from North America. This created a mountain range on the west coast of North America called the Antler highlands and was the first stage of building the land in the west that would eventually make up most of California, Oregon, and Washington. By the late Paleozoic, the Sonoma orogeny began on the west coast and was another collision of an island arc. The Sonoma orogeny marks the change in subduction direction to be toward North America, with a volcanic arc along the entire west coast of North America by the late Paleozoic to early Mesozoic Eras.
By the end of the Paleozoic Era, the east coast of North America had a very high mountain range due to continental collision and the creation of Pangea. The west coast of North America had smaller and isolated volcanic highlands associated with island arc accretion. During the MesozoicEra, the size of the mountains on either side of North America would flip, with the west coast being a more tectonically active plate boundary and the east coast changing into a passive margin after the breakup of Pangea.
8.6.2 Paleozoic Evolution
Anomalocaris reconstruction by the MUSE science museum in Italy.
The beginning of the Paleozoic Era is marked by the first appearance of hard body parts like shells, spikes, teeth, and scales, and by the appearance in the rock record of most animal phyla known today. That is, most basic animal body plans appeared in the rock record during the Cambrian Period. This sudden appearance of biological diversity is called the Cambrian Explosion. Scientists debate whether this sudden appearance reflects a rapid evolutionary diversification as a result of a warmer climate following the late Proterozoic glacial environments, better preservation and fossilization of hard parts, or artifacts of a more complete and recent rock record. For example, fauna may have been diverse during the Ediacaran Period, setting the stage for the Cambrian Explosion, but they lacked hard body parts and would have left few fossils behind. Regardless, the Cambrian Period, 541–485 million years ago, marked the appearance of most animal phyla.
Original plate from Walcott's 1912 description of Opabinia, with labels: fp = frontal appendage, e = eye, ths = thoracic somites, i = intestine, ab = abdominal segment.
One of the best fossil sites for the Cambrian Explosion was discovered in 1909 by Charles Walcott (1850–1927) in the Burgess Shale in western Canada. The Burgess Shale is a Lagerstätte, a site of exceptional fossil preservation that includes impressions of soft body parts. This discovery allowed scientists to study Cambrian animals in immense detail because soft body parts are not normally preserved and fossilized. Other Lagerstätte sites of similar age in China and Utah have allowed scientists to form a detailed picture of Cambrian biodiversity. The biggest mystery surrounds animals that do not fit existing lineages and are unique to that time. This includes many famous fossilized creatures: the first compound-eyed trilobites; Wiwaxia, a creature covered in spiny plates; Hallucigenia, a walking worm with spikes; Opabinia, a five-eyed arthropod with a grappling claw; and Anomalocaris, the alpha predator of its time, complete with grasping appendages and a circular mouth with sharp plates. Most notably appearing during the Cambrian is an important ancestor to humans: a segmented worm called Pikaia is thought to be the earliest ancestor of the Chordata phylum, which includes vertebrates, animals with backbones.
A modern coral reef.
By the end of the Cambrian, mollusks, brachiopods, nautiloids, gastropods, graptolites, echinoderms, and trilobites covered the sea floor. Although most animal phyla appeared by the Cambrian, biodiversity at the family, genus, and species level was low until the Ordovician Period. During the Great Ordovician Biodiversification Event, vertebrates and invertebrates (animals without backbones) became more diverse and complex at the family, genus, and species level. The cause of this rapid speciation event is still debated, but some likely causes are a combination of warm temperatures, expansive continental shelves near the equator, and more volcanism along the mid-ocean ridges. Some have shown evidence that an asteroid breakup event and consequent heavy meteorite impacts correlate with this diversification event. The additional volcanism added nutrients to ocean water, helping support a robust ecosystem. Many life forms and ecosystems that would be recognizable in current times appeared at this time. Mollusks, corals, and arthropods in particular multiplied to dominate the oceans.
Guadalupe Mountains National Park is made of a giant fossil reef complex.
One important evolutionary advancement during the Ordovician Period was reef-building organisms, mostly colonial coral. Corals took advantage of the ocean chemistry, using calcite to build large structures that resembled modern reefs like the Great Barrier Reef off the coast of Australia. These reefs housed thriving ecosystems of organisms that swam around, hid in, and crawled over them. Reefs are important to paleontologists because of their preservation potential, massive size, and in-place ecosystems. Few other fossils offer more diversity and complexity than reef assemblages.
According to evidence from glacial deposits, a small ice age caused sea levels to drop and led to a major mass extinction by the end of the Ordovician. This is the earliest of five mass extinction events documented in the fossil record. During this mass extinction, an unusually large number of species abruptly disappear from the fossil record (see video).
3-minute video describing mass extinctions and how they are defined.
The armor-plated fish (placoderm) Bothriolepis panderi from the Devonian of Russia.
Life bounced back during the Silurian Period. The period's major evolutionary event was the development of jaws from the forward pair of gill arches in bony fishes and sharks. Hinged jaws allowed fish to exploit new food sources and ecological niches. This period also included the start of armored fishes, known as the placoderms. In addition to fish and jaws, Silurian rocks provide the first evidence of terrestrial or land-dwelling plants and animals. The first vascular plant, Cooksonia, had woody tissues, pores for gas exchange, and veins for water and food transport. Insects, spiders, scorpions, and crustaceans began to inhabit moist, freshwater terrestrial environments.
Several different types of fish and amphibians that led to walking on land.
The Devonian Period is called the Age of Fishes due to the rise of plated, jawed, and lobe-finned fishes. The lobe-finned fishes, which were related to the modern lungfish and coelacanth, are important for their eventual evolution into tetrapods, four-limbed vertebrate animals that can walk on land. The first lobe-finned land-walking fish, named Tiktaalik, appeared about 385 million years ago and serves as a transitional fossil between fish and early tetrapods. Though Tiktaalik was clearly a fish, it had some tetrapod structures as well. Several fossils from the Devonian are more tetrapod-like than fish-like, but these weren't fully terrestrial. The first fully terrestrial tetrapods arrived in the Mississippian (early Carboniferous) Period, by which time tetrapods had evolved into two main groups, amphibians and amniotes, from a common tetrapod ancestor. The amphibians were able to breathe air and live on land but still needed water to nurture their soft eggs. The first reptile (an amniote) could live and reproduce entirely on land with hard-shelled eggs that wouldn't dry out.
Land plants had also evolved into the first trees and forests. Toward the end of the Devonian, another mass extinction event occurred. This extinction, while severe, is the least temporally defined, with wide variations in the timing of the event or events. Reef building organisms were the hardest hit, leading to dramatic changes in marine ecosystems.
A reconstruction of the giant arthropod (insects and their relatives) Arthropleura.
The next time period, called the Carboniferous (North American geologists have subdivided this into the Mississippian and Pennsylvanian periods), saw the highest levels of oxygen ever known, with forests (e.g., ferns, club mosses) and swamps dominating the landscape. This helped cause the largest arthropods ever, like the millipede Arthropleura, at 2.5 meters (about 8 feet) long! It also saw the rise of a new group of animals, the reptiles. The evolutionary advantage that reptiles have over amphibians is the amniote egg (an egg with a protective shell), which allows them to reproduce in non-aquatic environments. This widened the terrestrial reach of reptiles compared to amphibians. This booming life, especially plant life, created cooling temperatures as carbon dioxide was removed from the atmosphere. By the middle Carboniferous, these cooler temperatures led to an ice age (called the Karoo Glaciation) and less-productive forests. The reptiles fared much better than the amphibians, leading to their diversification. This glacial event lasted into the early Permian.
Reconstruction of Dimetrodon.
By the Permian, with Pangea assembled, the supercontinent led to a drier climate and even more diversification and domination by the reptiles. The groups that developed in this warm climate eventually radiated into dinosaurs. Another group, known as the synapsids, eventually evolved into mammals. Synapsids, including the famous sail-backed Dimetrodon, are commonly confused with dinosaurs. Pelycosaurs (Pennsylvanian to early Permian synapsids like Dimetrodon) are the first group of synapsids to exhibit the beginnings of mammalian characteristics, such as well-differentiated dentition: incisors, highly developed canines in the lower and upper jaws, and cheek teeth (premolars and molars). Starting in the late Permian, a second group of synapsids, called the therapsids (or mammal-like reptiles), evolved and became the ancestors of mammals.
Permian Mass Extinction
Map of global flood basalts. Note the largest is the Siberian Traps.
The end of the Paleozoic Era is marked by the largest mass extinction in Earth history. The Paleozoic Era had two smaller mass extinctions, but these were not as large as the Permian Mass Extinction, also known as the Permian-Triassic Extinction Event. It is estimated that up to 96% of marine species and 70% of land-dwelling (terrestrial) vertebrates went extinct. Many famous organisms, like sea scorpions and trilobites, were never seen again in the fossil record. What caused such a widespread extinction event? The exact cause is still debated, though the leading idea relates to extensive volcanism associated with the Siberian Traps, which are one of the largest deposits of flood basalts known on Earth, dating to the time of the extinction event. The eruption size is estimated at over 3 million cubic kilometers, approximately 4,000,000 times larger than the famous 1980 Mt. St. Helens eruption in Washington. The unusually large volcanic eruption would have contributed a large amount of toxic gases, aerosols, and greenhouse gases to the atmosphere. Further, some evidence suggests that the volcanism burned vast coal deposits, releasing methane (a greenhouse gas) into the atmosphere. As discussed in Chapter 15, greenhouse gases cause the climate to warm. This extensive addition of greenhouse gases from the Siberian Traps may have caused a runaway greenhouse effect that rapidly changed the climate, acidified the oceans, disrupted food chains, disrupted carbon cycling, and caused the largest mass extinction.
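The "4,000,000 times larger" comparison is simple arithmetic, reproduced in the sketch below. The ~0.75 km³ figure used for the 1980 Mt. St. Helens eruption is an assumed round number chosen to match the stated ratio; published estimates for that eruption are on the order of a quarter to one cubic kilometer of erupted material.

siberian_traps_km3 = 3_000_000   # "over 3 million cubic kilometers" of erupted basalt
st_helens_km3 = 0.75             # assumed volume for the 1980 eruption (order of magnitude)

print(siberian_traps_km3 / st_helens_km3)   # 4,000,000 -- the ratio quoted above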
8.7 Phanerozoic Eon: Mesozoic Era
Perhaps the greatest fossil ever found: a velociraptor attacked a protoceratops, and both were fossilized mid-sequence.
Following the Permian Mass Extinction, the Mesozoic Era ("middle life") lasted from 252 million years ago to 66 million years ago. As Pangea started to break apart, mammals, birds, and flowering plants developed. The Mesozoic is probably best known as the age of reptiles, most notably the dinosaurs.
8.7.1 Mesozoic Tectonics and Paleogeography
Animation showing Pangea breaking up.
Pangea started breaking up (in a region that would become eastern Canada and the United States) around 210 million years ago, in the Late Triassic. Clear evidence for this includes the age of the sediments in the Newark Supergroup rift basins and the Palisades sill of the eastern part of North America, and the age of the Atlantic ocean floor. Due to sea-floor spreading, the oldest rocks on the Atlantic's floor are along the coast of northern Africa and the east coast of North America, while the youngest are along the mid-ocean ridge.
Age of oceanic lithosphere, in millions of years. Notice the differences in the Atlantic Ocean along the coasts of the continents.
This age pattern shows how the Atlantic Ocean opened as the young Mid-Atlantic Ridge began to create new seafloor, and it indicates that this central part of the Atlantic was the first to form. The southern Atlantic opened next, with South America separating from central and southern Africa. Last (happening after the Mesozoic ended) was the northernmost Atlantic, with Greenland and Scandinavia parting ways. The breaking points of each rifted plate margin eventually turned into the passive plate boundaries of the east coast of the Americas today.
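The seafloor-age pattern also lets you estimate how fast the Atlantic has opened. The back-of-envelope sketch below uses rounded assumptions (a central Atlantic roughly 5,000 km wide whose oldest seafloor is roughly 180 million years old); the numbers are illustrative, not measurements quoted in this chapter.

atlantic_width_km = 5000      # assumed present width of the central Atlantic
oldest_seafloor_myr = 180     # assumed age of the oldest central Atlantic seafloor

rate_km_per_myr = atlantic_width_km / oldest_seafloor_myr   # about 28 km per million years
rate_cm_per_yr = rate_km_per_myr * 1e5 / 1e6                # about 2.8 cm per year

print(round(rate_cm_per_yr, 1))   # a few centimeters per year, roughly fingernail-growth speed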
Video of Pangea breaking apart and plates moving to their present locations. By Tanya Atwater.
Sketch of the major features of the Sevier Orogeny.
In western North America, an active plate margin had started with subduction, controlling most of the tectonics of that region in the Mesozoic. Another possible island-arc collision created the Sonoman Orogeny in Nevada during the latest Paleozoic to the Triassic. In the Jurassic, another island-arc collision caused the Nevadan Orogeny, a large Andean-style volcanic arc and thrust belt. The Sevier Orogeny followed in the Cretaceous, which was mainly a volcanic arc to the west and a thin-skinned fold and thrust belt to the east, meaning stacks of shallow faults and folds built up the topography. Many of the structures in the Rocky Mountains today date from this orogeny.
The Cretaceous Interior Seaway in the mid-Cretaceous.
Tectonics had an influence on one more important geographic feature in North America: the Cretaceous Western Interior Foreland Basin, which flooded during high sea levels, forming the Cretaceous Interior Seaway. Subducting from the west was the Farallon Plate, an oceanic plate connected to the Pacific Plate (seen today as remnants such as the Juan de Fuca Plate, off the coast of the Pacific Northwest). Subduction was shallow at this time because a very young, hot, and less dense portion of the Farallon plate was subducted. This shallow subduction caused a downwarping in the central part of North America. Shallow subduction, increasing rates of seafloor spreading and subduction, high temperatures, and melted ice all contributed to the high sea levels. These factors allowed a shallow epicontinental seaway that extended from the Gulf of Mexico to the Arctic Ocean to divide North America into two separate land masses, Laramidia to the west and Appalachia to the east, for 25 million years. Many of the coal deposits in Utah and Wyoming formed from swamps along the shores of this seaway. By the end of the Cretaceous, cooling temperatures caused the seaway to regress.
8.7.2 Mesozoic Evolution
A Mesozoic scene from the late Jurassic.
The Mesozoic Era was dominated by reptiles, and more specifically, the dinosaurs. The Triassic saw devastated ecosystems that took over 30 million years to fully recover after the Permian Mass Extinction. The first appearance of many modern groups of animals that would later flourish occurred at this time. This includes frogs (amphibians), turtles (reptiles), marine ichthyosaurs and plesiosaurs (marine reptiles), mammals, and the archosaurs. The archosaurs ("ruling reptiles") include ancestral groups that went extinct at the end of the Triassic, as well as the flying pterosaurs, crocodilians, and the dinosaurs. Archosaurs, like the placental mammals after them, occupied all major environments: terrestrial (dinosaurs), the air (pterosaurs), aquatic (crocodilians), and even fully marine habitats (marine crocodiles). The pterosaurs, the first vertebrate group to take flight, like the dinosaurs and mammals, started small in the Triassic.
A drawing of the early plesiosaur Agustasaurus from the Triassic of Nevada.
At the end of the Triassic, another mass extinction event occurred, the fourth major mass extinction in the geologic record, perhaps caused by the Central Atlantic Magmatic Province flood basalts. The end-Triassic extinction wiped out certain lineages and helped spur the evolution of survivors such as the mammals, pterosaurs (flying reptiles), ichthyosaurs/plesiosaurs/mosasaurs (marine reptiles), and dinosaurs.
Reconstruction of the small (<5″) Megazostrodon, one of the first animals considered to be a true mammal.
Mammals, as previously mentioned, got their start from a reptilian synapsid ancestor, possibly in the late Paleozoic. Mammals stayed small, in mainly nocturnal niches, with insects being their largest prey. The development of warm-blooded circulation and fur may have been a response to this lifestyle.
Closed structure of an ornithischian hip, which is similar to a bird’s.
In the Jurassic, species that were previously common flourished due to a warmer and more tropical climate. The dinosaurs were relatively small animals in the Triassic Period of the Mesozoic but became truly massive in the Jurassic. Dinosaurs are split into two groups based on their hip structure, i.e., the orientation of the pubis and ischium bones relative to each other: the “reptile-hipped” saurischians and the “bird-hipped” ornithischians. This division has recently been brought into question by a new idea for dinosaur lineage.
Open structure of a saurischian hip, which is similar to a lizard’s.
Most of the dinosaurs of the Triassic were saurischians, and all of them were bipedal. The major adaptive advantage dinosaurs had was a change in the hip and ankle bones that tucked the legs under the body for improved locomotion, as opposed to the semi-erect gait of crocodiles or the sprawling posture of other reptiles. In the Jurassic, limbs (or a lack thereof) were also important to another group of reptiles, leading to the evolution of Eophis, the oldest snake.
Therizinosaurs, like Beipiaosaurus (shown in this restoration), are known for their enormous hand claws.
There is a paucity of dinosaur fossils from the Early and Middle Jurassic, but by the Late Jurassic dinosaurs were dominating the planet. The saurischians diversified into the giant herbivorous (plant-eating) long-necked sauropods, weighing up to 100 tons, and the bipedal carnivorous theropods, with the possible exception of the therizinosaurs. All of the ornithischians (e.g., Stegosaurus, Iguanodon, Triceratops, Ankylosaurus, Pachycephalosaurus) were herbivorous, with a strong tendency toward a “turtle-like” beak at the tips of their mouths.
Iconic “Berlin specimen” Archaeopteryx lithographica fossil from Germany.
The pterosaurs grew and diversified in the Jurassic, and another notable aerial group developed and thrived in the Jurassic: the birds. When Archaeopteryx, a seeming dinosaur-bird hybrid, was found in the Solnhofen Lagerstätte of Germany, it started the conversation on the origin of birds. The idea that birds evolved from dinosaurs arose very early in the history of evolutionary research, only a few years after Darwin’s On the Origin of Species, and drew on a remarkable Archaeopteryx fossil as a transitional animal between dinosaurs and birds. Small meat-eating theropod dinosaurs were likely the branch that became birds, based on their many shared features. A significant debate still exists over how and when powered flight evolved: some have proposed a ground-up, running-start model, while others favor a tree-leaping gliding model, or even a combination in which flapping aided climbing.
Reconstructed skeleton of Argentinosaurus, from the Naturmuseum Senckenberg in Germany.
The Cretaceous saw further diversification, specialization, and domination by the dinosaurs and other fauna. One of the biggest changes on land was the transition to an angiosperm-dominated flora. Angiosperms, which are plants with flowers and seeds, originated in the Cretaceous, converting many plains to grasslands by the end of the Mesozoic. By the end of the period, they had replaced gymnosperms (evergreen trees) and ferns as the dominant plants in the world’s forests. Haplodiploid eusocial insects (bees and ants) descended from Jurassic wasp-like ancestors that co-evolved with the flowering plants during this time period. The breakup of Pangea shaped not only our modern world’s geography but also the biodiversity of the time. Throughout the Mesozoic, animals on the isolated, now-separated island continents (formerly parts of Pangea) took strange evolutionary turns, including the giant titanosaurian sauropods (Argentinosaurus) and theropods (Giganotosaurus) of South America.
K-T Extinction
Graph of the rate of extinctions. Note the large spike at the end of the Cretaceous (labeled as K).
Similar to the end of the Paleozoic Era, the Mesozoic Era ended with the K-Pg mass extinction (previously known as the K-T extinction) 66 million years ago. This extinction event was likely caused by a large bolide (an extraterrestrial impactor such as an asteroid, meteoroid, or comet) that collided with Earth. Ninety percent of plankton species, 75% of plant species, and all of the non-avian dinosaurs went extinct at this time.
Artist’s depiction of an impact event.
One of the strongest pieces of evidence comes from the element iridium. Quite rare on Earth but more common in meteorites, it has been found around the world in higher concentrations in a particular layer of rock that formed at the time of the K-T boundary. Soon other scientists started to find evidence to back up the impact claim: melted rock spherules and a special type of “shocked” quartz called stishovite, found only at impact sites, turned up in many places around the world. The huge impact would have created a strong thermal pulse responsible for global forest fires and strong acid rains (recorded in a corresponding abundance of ferns, the first plants to colonize after a forest fire), thrown enough debris into the air to significantly cool temperatures afterward, and generated a roughly 2-km-high tsunami inferred from deposits found from Texas to Alabama.
The land expression of the Chicxulub crater. The other side of the crater is within the Gulf of México.
Still, with all this evidence, one large piece remained missing: the crater where the bolide struck. It was not until 1991 that the crater was confirmed using petroleum-company geophysical data. Even though it is the third-largest confirmed crater on Earth, at roughly 180 km wide, the Chicxulub Crater was hard to find because it is partially underwater and partially obscured by the dense forest canopy of the Yucatán Peninsula. Coring of the peak ring at the center of the impact structure recovered granite, indicating the impact was so powerful that it lifted basement rock from deep in the crust several miles toward the surface. In 2010, an international team of scientists reviewed 20 years of research and blamed the impact for the extinction.
Geology of India, with Deccan Traps-related rocks shown in purple.
With all of this information, it seems like the case would be closed. However, other events at this time could have partially aided the demise of so many organisms. For example, sea levels were slowly falling at the time of the K-T event, which is tied to marine extinctions, though any study of gradual versus sudden changes in the fossil record is complicated by the incomplete nature of the fossil record. Another big event at this time was the Deccan Traps flood basalt volcanism in India. At over 1.3 million cubic kilometers of material, it was certainly a large source of material hazardous to ecosystems at the time, and it has been suggested as at least partially responsible for the extinction. Some have found the impact and eruptions too much of a coincidence and have even linked the two together.
8.8 Phanerozoic Eon: Cenozoic Era
Paraceratherium, seen in this reconstruction, was a massive (15-20 ton, 15-foot-tall) ancestor of rhinos.
The Cenozoic, meaning “new life,” is known as the age of mammals because it is in this era that mammals became a dominant and large life form, including human ancestors. Birds, too, flourished in the open niches left by the dinosaurs’ demise. Most of the Cenozoic has been relatively warm, the main exception being the ice age that started about 2.58 million years ago and (despite recent warming) continues today. Tectonic shifts in the west caused volcanism and eventually changed the long-standing subduction zone into a transform boundary.
8.8.1 Cenozoic Tectonics and Paleogeography
Animation of the last 38 million years of movement in western North America. Note that after the ridge is subducted, the convergent boundary turns into a transform boundary (with divergence inland).
Shallow subduction during the Laramide Orogeny.
In the Cenozoic, the plates of the Earth moved into more familiar places, with the biggest change being the closing of the Tethys Sea through collisions such as the Alps, Zagros, and Himalaya, the last of which started about 57 million years ago and continues today. Perhaps the most significant tectonic change in the Cenozoic of North America was the conversion of the west coast of California from a convergent subduction zone into a transform boundary. Subduction off the coast of the western United States, which had occurred throughout the Mesozoic, continued into the Cenozoic. After the Sevier Orogeny in the late Mesozoic, a subsequent orogeny, the Laramide Orogeny, occurred in the early Cenozoic. The Laramide was thick-skinned, unlike the Sevier Orogeny: it involved deeper crustal rocks and produced bulges that would become mountain ranges like the Rockies, Black Hills, Wind River Range, Uinta Mountains, and the San Rafael Swell. Instead of descending directly into the mantle, the subducting plate shallowed out and moved eastward beneath the continental plate, affecting the overlying continent hundreds of miles east of the continental margin and building high mountains. This occurred because the subducting plate was so young and close to the spreading center that its density was low, and steep subduction was hindered.
Map of the San Andreas fault, showing relative motion.
As the mid-ocean ridge itself started to subduct, the relative motion changed. Subduction produced relative convergence between the subducting Farallon plate and the North American plate. On the other side of the mid-ocean ridge from the Farallon plate was the Pacific plate, which was moving away from the North American plate. Thus, as the subduction zone consumed the mid-ocean ridge, the relative movement became transform instead of convergent, and the boundary went on to become the San Andreas Fault System. As the San Andreas grew, east-west extensional forces spread over the western United States, creating the Basin and Range Province. The transform fault has shifted position over the last 18 million years, twisting the mountains around Los Angeles, and new faults in the southeastern California deserts may become a future San Andreas-style fault. During this switch from subduction to transform motion, the nearly horizontal Farallon slab began to sink into the mantle. The sinking slab allowed asthenospheric material to rise around it, causing magmatism in an event called the Oligocene ignimbrite flare-up, one of the most significant periods of volcanism ever, including the largest single confirmed eruption, the 5,000-cubic-kilometer Fish Canyon Tuff.
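To make the switch from convergent to transform motion concrete, here is a small worked example with relative plate velocities; the velocity values and the simple geometry (east = +x, north = +y, a margin trending roughly north–south) are illustrative assumptions, not measured rates.

\[
\vec{v}_{\text{Farallon/NA}} \approx (+5,\ 0)\ \text{cm/yr} \qquad \text{(motion toward the margin: convergence, hence subduction)}
\]
\[
\vec{v}_{\text{Pacific/NA}} \approx (-2,\ +4)\ \text{cm/yr} \qquad \text{(motion mostly parallel to the margin)}
\]

While the Farallon plate faces North America across the trench, the margin-normal component dominates and the boundary is a subduction zone. Once the ridge is consumed and the Pacific plate comes into contact with North America, the margin-parallel component dominates, so the same boundary behaves as a transform fault, which is what the San Andreas system records.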
8.8.2 Cenozoic Evolution
Family tree of hominids (Hominidae).
There are five groups of early mammals in the fossil record, distinguished primarily by their fossil teeth, the hardest parts of vertebrate skeletons. For the purpose of this text, the most important group is the eupantotheres, which diverged into the two main groups of mammals, the marsupials (like Sinodelphys) and the placentals or eutherians (like Eomaia), in the Cretaceous; both groups then diversified in the Cenozoic. The marsupials dominated on the isolated island continents of South America and Australia, and many went extinct in South America with the introduction of placental mammals. Some well-known mammal groups have been studied closely and have interesting evolutionary stories in the Cenozoic. For example, horses started small, with four toes, and ended up larger, with just one toe. Cetaceans (marine mammals like whales and dolphins) started on land as small bear-like creatures (mesonychids) in the early Cenozoic and gradually took to the water. However, no evolutionary story has been studied more than human evolution. Hominids, the name for human-like primates, arose in eastern Africa several million years ago.
Lucy skeleton from the Cleveland Natural History Museum, showing real fossil (brown) and reconstructed skeleton (white).
The first critical event in this story is an environmental change from jungle to more of a savanna, probably caused by changes in Indian Ocean circulation. While bipedalism is known to have evolved before this shift, it is generally believed that our bipedal ancestors (like Australopithecus) had an advantage because they could cover open ground more easily than their non-bipedal evolutionary cousins. There is also a growing body of evidence, including the famous “Lucy” fossil of an australopithecine, that our early ancestors lived in trees. Arboreal life usually demands high intelligence to navigate a three-dimensional world. It is from this lineage that humans evolved, using endurance running as a means to acquire more resources and possibly even to hunt. This can explain many uniquely human features, from our long legs and strong Achilles tendons to our lack of lower-gut protection and our efficiency at running over a wide range of speeds.
The hypothesized movement of the genus Homo. Years mark the best guesses for the timing of each migration.
With the hands freed, the next big step was a large brain. A switch to more meat eating, cooking with fire, tool use, and even the construction of society itself have all been offered to explain this increase in brain size. Regardless of how it happened, this increased cognitive power allowed humans to dominate as their ancestors moved out of Africa and explored the world, ultimately entering the Americas through land bridges like the Bering Land Bridge. The details of this worldwide migration and the different branches of the hominid evolutionary tree are very complex and are best reserved for their own course.
Anthropocene and Extinction
Graph showing the abundance of large mammals and the introduction of humans.
Humans have had an influence on the Earth, its ecosystems, and its climate. Yet human activity cannot explain all of the changes that have occurred in the recent past. The start of the Quaternary Period, the last and current period of the Cenozoic, is marked by the start of our current ice age 2.58 million years ago. During this time period, ice sheets advanced and retreated, most likely due to Milankovitch cycles (see Chapter 15). Also at this time, various cold-adapted megafauna emerged (like giant sloths, saber-toothed cats, and woolly mammoths), and most of them went extinct as the Earth warmed after the most recent glacial maximum. A long-standing debate concerns the cause of these and other extinctions. Is climate warming to blame, or were they caused by humans? Certainly, we know of recent human-caused extinctions of animals like the dodo and the passenger pigeon. Can we connect modern extinctions to extinctions in the recent past? If so, there are several ideas as to how this happened. Possibly the oldest and most widely accepted is the hunting/overkill hypothesis: humans hunted large herbivores for food, the carnivores that depended on those herbivores then could not find food, and human arrival times in new regions have been shown to be tied to increased extinction rates in many cases.
Bingham Canyon Mine, Utah. This open-pit mine is the largest man-made removal of rock in the world.
Modern human impact on the environment and the Earth as a whole is unquestioned. In fact, many scientists now suggest that the rise of human civilization ended and/or replaced the Holocene Epoch and defines a new geologic time interval: the Anthropocene. Evidence for this change includes extinctions; increased tritium (hydrogen with two neutrons) due to nuclear testing; rising pollutants like carbon dioxide; more than 200 never-before-seen mineral species that have occurred only in this epoch; materials such as plastics and metals, which will be long-lasting “fossils” in the geologic record; and the large amounts of earthen material humans have moved. The biggest scientific debate on this topic is the starting point. Some say that humans’ invention of agriculture, around 12,000 years ago, would be recognized in geologic strata and should mark the start. Others point to the start of the Industrial Revolution and the subsequent addition of vast amounts of carbon dioxide to the atmosphere. Either way, the idea is that alien geologists visiting Earth in the distant future would easily recognize the impact of humans on the Earth as the beginning of a new geologic period.
Summary
The changes that have occurred since the inception of Earth are vast and significant. From the oxygenation of the atmosphere and the progression of life forms, through the assembly and breakup of several supercontinents, to the extinction of more life forms than exist today, a general understanding of these changes puts present-day change into a more rounded perspective.
|
This period also included the start of armored fishes, known as the placoderms. In addition to fish and jaws, Silurian rocks provide the first evidence of terrestrial or land-dwelling plants and animals. The first vascular plant, Cooksonia, had woody tissues, pores for gas exchange, and veins for water and food transport. Insects, spiders, scorpions, and crustaceans began to inhabit moist, freshwater terrestrial environments.
Several different types of fish and amphibians that led to walking on land.
The Devonian Period is called the Age of Fishes due to the rise of plated, jawed, and lobe-finned fishes. The lobe-finned fishes, which were related to the modern lungfish and coelacanth, are important for their eventual evolution into tetrapods, four-limbed vertebrate animals that can walk on land. The first lobe-finned land-walking fish, named Tiktaalik, appeared about 385 million years ago and serves as a transitional fossil between fish and early tetrapods. Though Tiktaalik was clearly a fish, it had some tetrapod structures as well. Several fossils from the Devonian are more tetrapod-like than fish-like, but these were not fully terrestrial. The first fully terrestrial tetrapods arrived in the Mississippian (early Carboniferous) Period, by which time tetrapods had evolved into two main groups, amphibians and amniotes, from a common tetrapod ancestor. The amphibians were able to breathe air and live on land but still needed water to nurture their soft eggs. The first reptile (an amniote) could live and reproduce entirely on land, with hard-shelled eggs that would not dry out.
Land plants had also evolved into the first trees and forests. Toward the end of the Devonian, another mass extinction event occurred. This extinction, while severe, is the least temporally defined, with wide variations in the timing of the event or events.
|
yes
|
Paleobotany
|
Was the Silurian period the birth of the first land plants?
|
no_statement
|
the "silurian" "period" was not the "birth" of the first "land" "plants".. "land" "plants" did not emerge during the "silurian" "period".
|
https://pressbooks-dev.oer.hawaii.edu/biology/chapter/bryophytes/
|
Bryophytes – Biology
|
Bryophytes
Learning Objectives
Describe the distinguishing traits of liverworts, hornworts, and mosses
Chart the development of land adaptations in the bryophytes
Describe the events in the bryophyte lifecycle
Bryophytes are the group of plants that are the closest extant relative of early terrestrial plants. The first bryophytes (liverworts) most likely appeared in the Ordovician period, about 450 million years ago. Because of the lack of lignin and other resistant structures, the likelihood of bryophytes forming fossils is rather small. Some spores protected by sporopollenin have survived and are attributed to early bryophytes. By the Silurian period, however, vascular plants had spread through the continents. This compelling fact is used as evidence that non-vascular plants must have preceded the Silurian period.
More than 25,000 species of bryophytes thrive in mostly damp habitats, although some live in deserts. They constitute the major flora of inhospitable environments like the tundra, where their small size and tolerance to desiccation offer distinct advantages. They generally lack lignin and do not have actual tracheids (xylem cells specialized for water conduction). Rather, water and nutrients circulate inside specialized conducting cells. Although the term non-tracheophyte is more accurate, bryophytes are commonly called nonvascular plants.
In a bryophyte, all the conspicuous vegetative organs—including the photosynthetic leaf-like structures, the thallus, stem, and the rhizoid that anchors the plant to its substrate—belong to the haploid organism or gametophyte. The sporophyte is barely noticeable. The gametes formed by bryophytes swim with a flagellum, as do gametes in a few of the tracheophytes. The sporangium—the multicellular sexual reproductive structure—is present in bryophytes and absent in the majority of algae. The bryophyte embryo also remains attached to the parent plant, which protects and nourishes it. This is a characteristic of land plants.
The bryophytes are divided into three phyla: the liverworts or Hepaticophyta, the hornworts or Anthocerotophyta, and the mosses or true Bryophyta.
Liverworts
Liverworts (Hepaticophyta) are viewed as the plants most closely related to the ancestor that moved to land. Liverworts have colonized every terrestrial habitat on Earth and diversified to more than 7000 existing species ([link]). Some gametophytes form lobate green structures, as seen in [link]. The shape is similar to the lobes of the liver, and hence provides the origin of the name given to the phylum.
This 1904 drawing shows the variety of forms of Hepaticophyta.
A liverwort, Lunularia cruciata, displays its lobate, flat thallus. The organism in the photograph is in the gametophyte stage.
Openings that allow the movement of gases may be observed in liverworts. However, these are not stomata, because they do not actively open and close. The plant takes up water over its entire surface and has no cuticle to prevent desiccation. [link] represents the lifecycle of a liverwort. The cycle starts with the release of haploid spores from the sporangium that developed on the sporophyte. Spores disseminated by wind or water germinate into flattened thalli attached to the substrate by thin, single-celled filaments. Male and female gametangia develop on separate, individual plants. Once released, male gametes swim with the aid of their flagella to the female gametangium (the archegonium), and fertilization ensues. The zygote grows into a small sporophyte still attached to the parent gametophyte. It will give rise, by meiosis, to the next generation of spores. Liverwort plants can also reproduce asexually, by the breaking of branches or the spreading of leaf fragments called gemmae. In this latter type of reproduction, the gemmae—small, intact, complete pieces of plant that are produced in a cup on the surface of the thallus (shown in [link])—are splashed out of the cup by raindrops. The gemmae then land nearby and develop into gametophytes.
The life cycle of a typical liverwort is shown. (credit: modification of work by Mariana Ruiz Villareal)
Hornworts
The hornworts (Anthocerotophyta) belong to the broad bryophyte group. They have colonized a variety of habitats on land, although they are never far from a source of moisture. The short, blue-green gametophyte is the dominant phase of the lifecycle of a hornwort. The narrow, pipe-like sporophyte is the defining characteristic of the group. The sporophytes emerge from the parent gametophyte and continue to grow throughout the life of the plant ([link]).
Hornworts grow a tall and slender sporophyte. (credit: modification of work by Jason Hollinger)
Stomata appear in the hornworts and are abundant on the sporophyte. Photosynthetic cells in the thallus contain a single chloroplast. Meristem cells at the base of the plant keep dividing and adding to its height. Many hornworts establish symbiotic relationships with cyanobacteria that fix nitrogen from the environment.
The lifecycle of hornworts ([link]) follows the general pattern of alternation of generations. The gametophytes grow as flat thalli on the soil with embedded gametangia. Flagellated sperm swim to the archegonia and fertilize eggs. The zygote develops into a long and slender sporophyte that eventually splits open, releasing spores. Thin cells called pseudoelaters surround the spores and help propel them further in the environment. Unlike the elaters observed in horsetails, the hornwort pseudoelaters are single-celled structures. The haploid spores germinate and give rise to the next generation of gametophyte.
The alternation of generation in hornworts is shown. (credit: modification of work by “Smith609”/Wikimedia Commons based on original work by Mariana Ruiz Villareal)
Mosses
More than 10,000 species of mosses have been catalogued. Their habitats vary from the tundra, where they are the main vegetation, to the understory of tropical forests. In the tundra, the mosses’ shallow rhizoids allow them to fasten to a substrate without penetrating the frozen soil. Mosses slow down erosion, store moisture and soil nutrients, and provide shelter for small animals as well as food for larger herbivores, such as the musk ox. Mosses are very sensitive to air pollution and are used to monitor air quality. They are also sensitive to copper salts, so these salts are a common ingredient of compounds marketed to eliminate mosses from lawns.
Mosses form diminutive gametophytes, which are the dominant phase of the lifecycle. Green, flat structures—resembling true leaves, but lacking vascular tissue—are attached in a spiral to a central stalk. The plants absorb water and nutrients directly through these leaf-like structures. Some mosses have small branches. Some primitive traits of green algae, such as flagellated sperm, are still present in mosses that are dependent on water for reproduction. Other features of mosses are clearly adaptations to dry land. For example, stomata are present on the stems of the sporophyte, and a primitive vascular system runs up the sporophyte’s stalk. Additionally, mosses are anchored to the substrate—whether it is soil, rock, or roof tiles—by multicellular rhizoids. These structures are precursors of roots. They originate from the base of the gametophyte, but are not the major route for the absorption of water and minerals. The lack of a true root system explains why it is so easy to rip moss mats from a tree trunk.
The moss lifecycle follows the pattern of alternation of generations as shown in [link]. The most familiar structure is the haploid gametophyte, which germinates from a haploid spore and forms first a protonema—usually, a tangle of single-celled filaments that hug the ground. Cells akin to an apical meristem actively divide and give rise to a gametophore, consisting of a photosynthetic stem and foliage-like structures. Rhizoids form at the base of the gametophore. Gametangia of both sexes develop on separate gametophores. The male organ (the antheridium) produces many sperm, whereas the archegonium (the female organ) forms a single egg. At fertilization, the sperm swims down the neck to the venter and unites with the egg inside the archegonium. The zygote, protected by the archegonium, divides and grows into a sporophyte, still attached by its foot to the gametophyte.
Art Connection
This illustration shows the life cycle of mosses. (credit: modification of work by Mariana Ruiz Villareal)
Which of the following statements about the moss life cycle is false?
The mature gametophyte is haploid.
The sporophyte produces haploid spores.
The calyptra buds to form a mature gametophyte.
The zygote is housed in the venter.
The slender seta (plural, setae), as seen in [link], contains tubular cells that transfer nutrients from the base of the sporophyte (the foot) to the sporangium or capsule.
This photograph shows the long slender stems, called setae, connected to capsules of the moss Thamnobryum alopecurum. (credit: modification of work by Hermann Schachner)
A structure called a peristome increases the spread of spores after the tip of the capsule falls off at dispersal. The concentric tissue around the mouth of the capsule is made of triangular, close-fitting units, a little like “teeth”; these open and close depending on moisture levels, and periodically release spores.
Section Summary
Seedless nonvascular plants are small, having the gametophyte as the dominant stage of the lifecycle. Without a vascular system and roots, they absorb water and nutrients on all their exposed surfaces. Collectively known as bryophytes, the three main groups include the liverworts, the hornworts, and the mosses. Liverworts are the most primitive plants and are closely related to the first land plants. Hornworts developed stomata and possess a single chloroplast per cell. Mosses have simple conductive cells and are attached to the substrate by rhizoids. They colonize harsh habitats and can regain moisture after drying out. The moss sporangium is a complex structure that allows release of spores away from the parent plant.
Art Connections
[link] Which of the following statements about the moss life cycle is false?
|
Bryophytes
Learning Objectives
Describe the distinguishing traits of liverworts, hornworts, and mosses
Chart the development of land adaptations in the bryophytes
Describe the events in the bryophyte lifecycle
Bryophytes are the group of plants that are the closest extant relative of early terrestrial plants. The first bryophytes (liverworts) most likely appeared in the Ordovician period, about 450 million years ago. Because of the lack of lignin and other resistant structures, the likelihood of bryophytes forming fossils is rather small. Some spores protected by sporopollenin have survived and are attributed to early bryophytes. By the Silurian period, however, vascular plants had spread through the continents. This compelling fact is used as evidence that non-vascular plants must have preceded the Silurian period.
More than 25,000 species of bryophytes thrive in mostly damp habitats, although some live in deserts. They constitute the major flora of inhospitable environments like the tundra, where their small size and tolerance to desiccation offer distinct advantages. They generally lack lignin and do not have actual tracheids (xylem cells specialized for water conduction). Rather, water and nutrients circulate inside specialized conducting cells. Although the term non-tracheophyte is more accurate, bryophytes are commonly called nonvascular plants.
In a bryophyte, all the conspicuous vegetative organs—including the photosynthetic leaf-like structures, the thallus, stem, and the rhizoid that anchors the plant to its substrate—belong to the haploid organism or gametophyte. The sporophyte is barely noticeable. The gametes formed by bryophytes swim with a flagellum, as do gametes in a few of the tracheophytes. The sporangium—the multicellular sexual reproductive structure—is present in bryophytes and absent in the majority of algae.
|
no
|