Creationism
Is the complexity of the human eye evidence of design?
yes_statement
the "complexity" of the "human" "eye" is "evidence" of "design".. the intricate structure of the "human" "eye" points to intelligent "design".
https://www.space.com/1826-seti-intelligent-design.html
SETI and Intelligent Design | Space
SETI and Intelligent Design If you're an inveterate tube-o-phile, you may remember the episode of "Cheers" in which Cliff, the postman who's stayed by neither snow, nor rain, nor gloom of night from his appointed rounds of beer, exclaims to Norm that he's found a potato that looks like Richard Nixon's head. This could be an astonishing attempt by taters to express their political views, but Norm is unimpressed. Finding evidence of complexity (the Nixon physiognomy) in a natural setting (the spud), and inferring some deliberate, magical mechanism behind it all, would be a leap from the doubtful to the divine, and in this case, Norm feels, unwarranted. Cliff, however, would have some sympathizers among the proponents of Intelligent Design (ID), whose efforts to influence school science curricula continue to swill large quantities of newspaper ink. As just about everyone is aware, these folks use similar logic to infer a "designer" behind such biological constructions as DNA or the human eye. The apparent complexity of the product is offered as proof of deliberate blueprinting by an unknown creator--conscious action, presumably from outside the universe itself. What many readers will not know is that SETI research has been offered up in support of Intelligent Design. The way this happens is as follows. When ID advocates posit that DNA--which is a complicated, molecular blueprint--is solid evidence for a designer, most scientists are unconvinced. They counter that the structure of this biological building block is the result of self-organization via evolution, and not a proof of deliberate engineering. DNA, the researchers will protest, is no more a consciously constructed system than Jupiter's Great Red Spot. Organized complexity, in other words, is not enough to infer design. But the adherents of Intelligent Design protest the protest. 
They point to SETI and say, "upon receiving a complex radio signal from space, SETI researchers will claim it as proof that intelligent life resides in the neighborhood of a distant star. Thus, isn't their search completely analogous to our own line of reasoning--a clear case of complexity implying intelligence and deliberate design?" And SETI, they would note, enjoys widespread scientific acceptance. If we as SETI researchers admit this is so, it sounds as if we're guilty of promoting a logical double standard. If the ID folks aren't allowed to claim intelligent design when pointing to DNA, how can we hope to claim intelligent design on the basis of a complex radio signal? It's true that SETI is well regarded by the scientific community, but is that simply because we don't suggest that the voice behind the microphone could be God? Simple Signals In fact, the signals actually sought by today's SETI searches are not complex, as the ID advocates assume. We're not looking for intricately coded messages, mathematical series, or even the aliens' version of "I Love Lucy." Our instruments are largely insensitive to the modulation--or message--that might be conveyed by an extraterrestrial broadcast. A SETI radio signal of the type we could actually find would be a persistent, narrow-band whistle. Such a simple phenomenon appears to lack just about any degree of structure, although if it originates on a planet, we should see periodic Doppler effects as the world bearing the transmitter rotates and orbits. And yet we still advertise that, were we to find such a signal, we could reasonably conclude that there was intelligence behind it. It sounds as if this strengthens the argument made by the ID proponents. Our sought-after signal is hardly complex, and yet we're still going to say that we've found extraterrestrials. If we can get away with that, why can't they? Well, it's because the credibility of the evidence is not predicated on its complexity. 
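The rotational Doppler drift mentioned above is easy to estimate from first principles. The sketch below is not from the article; the Earth-like radius, sidereal day, and 1.42 GHz hydrogen-line carrier are illustrative assumptions. It computes the peak frequency offset for a transmitter on a rotating planet's equator, using the non-relativistic relation Δf = f·v/c.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(f_hz, radius_m, period_s):
    """Peak Doppler offset for a transmitter on a rotating body's equator."""
    v = 2 * math.pi * radius_m / period_s  # equatorial rotation speed, m/s
    return f_hz * v / C

# Illustrative numbers: an Earth-like planet (radius 6371 km, sidereal day
# ~86,164 s) transmitting at 1.42 GHz, a band SETI searches often favor.
shift = doppler_shift_hz(1.42e9, 6.371e6, 86_164.0)
print(f"peak rotational Doppler offset: {shift:.0f} Hz")
```

For these assumed parameters the drift is on the order of a couple of kilohertz around the carrier, which is why a search for a "persistent, narrow-band whistle" must track a slowly sweeping tone rather than a fixed frequency.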
If SETI were to announce that we're not alone because it had detected a signal, it would be on the basis of artificiality. An endless, sinusoidal signal - a dead simple tone - is not complex; it's artificial. Such a tone just doesn't seem to be generated by natural astrophysical processes. In addition, and unlike other radio emissions produced by the cosmos, such a signal is devoid of the appendages and inefficiencies nature always seems to add - for example, DNA's junk and redundancy. Consider pulsars - stellar objects that flash light and radio waves into space with impressive regularity. Pulsars were briefly tagged with the moniker LGM (Little Green Men) upon their discovery in 1967. Of course, these little men didn't have much to say. Regular pulses don't convey any information--no more than the ticking of a clock. But the real kicker is something else: inefficiency. Pulsars flash over the entire spectrum. No matter where you tune your radio telescope, the pulsar can be heard. That's bad design, because if the pulses were intended to convey some sort of message, it would be enormously more efficient (in terms of energy costs) to confine the signal to a very narrow band. Even the most efficient natural radio emitters, interstellar clouds of gas known as masers, are profligate. Their steady signals splash over hundreds of times more radio band than the type of transmissions sought by SETI. Imagine bright reflections of the Sun flashing off Lake Victoria, as seen from a great distance. These would be similar to pulsar signals: highly regular (once every 24 hours), and visible in preferred directions, but occupying a wide chunk of the optical spectrum. It's not a very good hailing-signal or communications device. Lightning bolts are another example. They produce pulses of both light and radio, but the broadcast extends over just about the whole electromagnetic spectrum. That sort of bad engineering is easily recognized and laid at nature's door. 
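The narrow-band-versus-broadband distinction the article leans on can be illustrated with a toy spectral test. This is a sketch only, nothing like a real SETI pipeline: it packs comparable power into one tone versus spreading it across the band, then compares how much of the spectral power lands in the single strongest FFT bin.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 4096, 8192.0                 # samples, sample rate in Hz
t = np.arange(n) / fs

# Broadband "natural" emission: white noise smeared over the whole band
natural = rng.normal(size=n)
# "Artificial" signal: comparable power packed into a single 1 kHz tone
tone = np.sqrt(2) * np.sin(2 * np.pi * 1000.0 * t)

def peak_fraction(x):
    """Fraction of total spectral power in the single strongest FFT bin."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return p.max() / p.sum()

print(f"noise peak fraction: {peak_fraction(natural):.3f}")
print(f"tone  peak fraction: {peak_fraction(tone):.3f}")
```

The tone concentrates essentially all of its power in one bin, while the noise spreads it thinly across thousands; it is this concentration, not any message content, that flags a candidate as artificial.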
Nature, for its part, seems unoffended. Junk, redundancy, and inefficiency characterize astrophysical signals. It seems they characterize cells and sea lions, too. These biological constructions have lots of superfluous and redundant parts, and are a long way from being optimally built or operated. They also resemble lots of other things that may be either contemporaries or historical precedents. So that's one point: the signals SETI seeks are really not like other examples drawn from the bestiary of complex astrophysical phenomena. That speaks to their artificiality. The Importance of Setting There's another hallmark of artificiality we consider in SETI, and it's context. Where is the signal found? Our searches often concentrate on nearby Sun-like star systems - the very type of astronomical locale we believe most likely to harbor Earth-size planets awash in liquid water. That's where we hope to find a signal. The physics of solar systems is that of hot plasmas (stars), cool hydrocarbon gases (big planets), and cold rock (small planets). These do not produce, so far as we can either theorize or observe, monochromatic radio signals belched into space with powers of ten billion watts or more--the type of signal we look for in SETI experiments. It's hard to imagine how they would do this, and observations confirm that it just doesn't seem to be their thing. Context is important, crucially important. Imagine that we should espy a giant, green square in one of these neighboring solar systems. That would surely meet our criteria for artificiality. But a square is not overly complex. Only in the context of finding it in someone's solar system does its minimum complexity become indicative of intelligence. In archaeology, context is the basis of many discoveries that are imputed to the deliberate workings of intelligence. 
If I find a rock chipped in such a way as to give it a sharp edge, and the discovery is made in a cave, I am seduced into ascribing this to tool use by distant, fetid and furry ancestors. It is the context of the cave that makes this assumption far more likely than an alternative scenario in which I assume that the random grinding and splitting of rock has resulted in this useful geometry. In short, the champions of Intelligent Design make two mistakes when they claim that the SETI enterprise is logically similar to their own: First, they assume that we are looking for messages, and judging our discovery on the basis of message content, whether understood or not. In fact, we're on the lookout for very simple signals. That's mostly a technical misunderstanding. But their second assumption, derived from the first, that complexity would imply intelligence, is also wrong. We seek artificiality, which is an organized and optimized signal coming from an astronomical environment from which neither it nor anything like it is either expected or observed: Very modest complexity, found out of context. This is clearly nothing like looking at DNA's chemical makeup and deducing the work of a supernatural biochemist. Seth Shostak is an astronomer at the SETI (Search for Extraterrestrial Intelligence) Institute in Mountain View, California, who places a high priority on communicating science to the public. In addition to his many academic papers, Seth has published hundreds of popular science articles, and not just for Space.com; he makes regular contributions to NBC News MACH, for example. 
Seth has also co-authored a college textbook on astrobiology and written three popular science books on SETI, including "Confessions of an Alien Hunter" (National Geographic, 2009). In addition, Seth hosts the SETI Institute's weekly radio show, "Big Picture Science."
no
https://www.icr.org/article/made-his-image-amazing-design-human
Made in His Image: The Amazing Design of the Human Body | The ...
And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living being. (Genesis 2:7) Several crucial things are needed to construct a skyscraper: building materials, a detailed plan, and a mind to design and put it all together. After months of planning and working, the building takes shape and, once finished, looks stunning. It’s firmly grounded to withstand wind and unpredictable weather. It’s supported with precisely arranged metal bars, and it even has aesthetic features. The design is not only admirable but quite apparent. Would someone walk by a skyscraper and think it wasn’t designed? The more we learn about nature, the more we discover just how intricately designed it is—with features far more elaborate than those of the skyscraper. Biological organisms have blueprint-like information (e.g., DNA and epigenetic information) and building materials (proteins and others), and the only valid remaining question is whether or not a mind planned and put it all together. But why question the existence of a mind? Whenever we can verify the origination of plans and specifications, they are always the result of purposeful design, and only minds possess the ability to design. And only beings possess minds. Take the human body. Its profound engineering outshines virtually everything else we see. Even the best scientists and engineers can’t come close to replicating its beauty, performance, and complexity. As we study the human body, it becomes apparent that it was the result of an exceptionally intelligent and creative Mind. This fall, ICR will release Made in His Image, a new four-episode DVD series that follows in the footsteps of Unlocking the Mysteries of Genesis and explores the most complex and miraculous creations in the universe—us! 
We’ll take viewers on a journey through the development of a baby’s exceptional cardiovascular system that is vital to life in the womb, delve into the breathtaking features of the human eye and hand, and explore Christ’s marvelous engineering displayed through the abilities of the human body in motion. The Miracle of Birth The first episode looks at seven aspects of design that form a principle called all-or-nothing unity. This principle states that unless all of the critical parts are in the right place at the right time and in the right amount, none of the expected function is attained. Consultant physician Dr. Joseph Kezele and Dr. Randy Guliuzza describe how the birth of a child perfectly exhibits all-or-nothing unity and the wonderful hand of our all-powerful Creator at work. One example is the multiple temporary structures that allow a child to survive in a watery world for nine months and then suddenly transition into a normal breathing environment after birth. A substitute lung to get oxygen from the mother, shunts that divert most blood around the developing baby’s lungs, and blood vessels that connect the baby to the placenta—all of these must work together to enable a baby to thrive in the womb. Within the first 30 minutes after birth, all the temporary vessels, shunts, and openings normally stop functioning, and they permanently close within the next one to two days. The Marvel of Eyes The origin of the eye’s incredibly complex components has consistently been difficult to explain via a natural process of numerous, successive, slight modifications due to genetic mutations. The Lord Jesus certainly packed a huge amount of functionality into something beautiful, and our eyes not only allow us to see the world but help us to reflect the emotions of our inner spirit. They are a hallmark of God’s purpose and elegance. The visual system develops in the womb with built-in plans and specifications. 
Tissues form the eye in a precise choreography of carefully timed steps. At the same time, nerves are constructed to bring data from the eyes to the brain. After a child is born, his eyes take in data. Light photons hit the back of his retina, which converts patterns of light into a flow of electrical signals. These data are sent down the optic nerve to the brain for interpretation into information. However, the brain cannot interpret the data until memories are formed for future reference. Both the processes of retrieving memories and associating them to data patterns are essential to complete the function of sight. As simple as they seem, even eyelids display complex design. They protect eyes from dust and debris and keep the eyes lubricated. Tears not only help clean your eyes, they also contain antimicrobial agents and other compounds that produce a euphoric effect and help you feel better after crying emotional tears. Uniquely Human Hands Human hands are definitely unique, enabling us to perform in ways unmatched by animals. Not only is their physical structure different from comparable hands in the animal kingdom, they are controlled by an unusually large neurological command center in the brain that gives us uniquely human abilities. The human hand can perform a variety of grips and movements exhibiting an astonishing amount of flexibility and control. Our hands allow us to grip heavy objects like a hammer or a bowling ball as well as light and fragile objects like a potato chip. The unique ridges and swirls on our fingertips also act as data collectors. Human hands can also perform rapid movements due to the brain’s profound ability to develop a “forward plan” that anticipates movements several steps ahead. This allows pianists to play up to 30 successive notes about 40 milliseconds apart. 
Beauty in Motion The human body is the ultimate example of the marriage of design and function, enabling athletic abilities that showcase just how perfectly God engineered us. It takes all parts functioning together as one well-balanced system to achieve amazing athletic feats like ballet or skiing or throwing a baseball. To help us balance, we have an interconnected control system in our ears known as the vestibular system—a great example of a biologically complex system with multiple parts working together for a single purpose. Our maculae give us straight-line movement data, while semicircular canals give us circular-motion data. Our brains interpret the data from the maculae and semicircular canals to give us a sense of our position and direction of movement. Our nervous system carries instructions from the brain to adjust the rest of the body according to the sensory data—enabling a gymnast to vault and stick her landing after a flip, twist, and turn. Athletes are good at their sports because they practice. The brain stores repetitive muscle movements as skills to be recalled and used at will. The more an athlete repeats a purposeful motion, the more precise and easily recalled the skill will be in his brain. Everywhere we look—up into the vast and awesome grandeur of the universe or down into the incredible beauty of the inner workings of our bodies—we see great design and purpose, pointing us to the Creator. In spite of the many other wonders we witness, you and I are the most efficient, complex, and astonishing work that our omnipotent and omniscient Designer ever made. So God created man in His own image; in the image of God He created him; male and female He created them. (Genesis 1:27) The Made in His Image DVD series showcases God’s incredible design in the human body and demonstrates that there’s so much more to us than just well-engineered physical anatomy. 
As you witness the wonders of God’s work in the human body, we pray that you’ll be reminded of His deep desire to have His creation reconciled to Himself. We hope you experience the sense of reverence, worship, and knowledge of the Lord Jesus that He intended when first He created you in His own image.
yes
https://www.britannica.com/science/evolution-scientific-theory/Intelligent-design-and-its-critics
Evolution - Intelligent Design, Criticism, Theory | Britannica
William Paley’s Natural Theology, the book by which he has become best known to posterity, is a sustained argument explaining the obvious design of humans and their parts, as well as the design of all sorts of organisms, in themselves and in their relations to one another and to their environment. Paley’s keystone claim is that “there cannot be design without a designer; contrivance, without a contriver; order, without choice;…means suitable to an end, and executing their office in accomplishing that end, without the end ever having been contemplated.” His book has chapters dedicated to the complex design of the human eye; to the human frame, which, he argues, displays a precise mechanical arrangement of bones, cartilage, and joints; to the circulation of the blood and the disposition of blood vessels; to the comparative anatomy of humans and animals; to the digestive system, kidneys, urethra, and bladder; to the wings of birds and the fins of fish; and much more. For more than 300 pages, Paley conveys extensive and accurate biological knowledge in such detail and precision as was available in 1802, the year of the book’s publication. After his meticulous description of each biological object or process, Paley draws again and again the same conclusion—only an omniscient and omnipotent deity could account for these marvels and for the enormous diversity of inventions that they entail. On the example of the human eye he wrote: I know no better method of introducing so large a subject, than that of comparing…an eye, for example, with a telescope. As far as the examination of the instrument goes, there is precisely the same proof that the eye was made for vision, as there is that the telescope was made for assisting it. 
They are made upon the same principles; both being adjusted to the laws by which the transmission and refraction of rays of light are regulated.…For instance, these laws require, in order to produce the same effect, that the rays of light, in passing from water into the eye, should be refracted by a more convex surface than when it passes out of air into the eye. Accordingly we find that the eye of a fish, in that part of it called the crystalline lens, is much rounder than the eye of terrestrial animals. What plainer manifestation of design can there be than this difference? What could a mathematical instrument maker have done more to show his knowledge of [t]his principle, his application of that knowledge, his suiting of his means to his end…to testify counsel, choice, consideration, purpose? It would be absurd to suppose, he argued, that by mere chance the eye should have consisted, first, of a series of transparent lenses—very different, by the by, even in their substance, from the opaque materials of which the rest of the body is, in general at least, composed, and with which the whole of its surface, this single portion of it excepted, is covered: secondly, of a black cloth or canvas—the only membrane in the body which is black—spread out behind these lenses, so as to receive the image formed by pencils of light transmitted through them; and placed at the precise geometrical distance at which, and at which alone, a distinct image could be formed, namely, at the concourse of the refracted rays: thirdly, of a large nerve communicating between this membrane and the brain; without which, the action of light upon the membrane, however modified by the organ, would be lost to the purposes of sensation. The strength of the argument against chance derived, according to Paley, from a notion that he named relation and that later authors would term irreducible complexity. 
Paley wrote: When several different parts contribute to one effect, or, which is the same thing, when an effect is produced by the joint action of different instruments, the fitness of such parts or instruments to one another for the purpose of producing, by their united action, the effect, is what I call relation; and wherever this is observed in the works of nature or of man, it appears to me to carry along with it decisive evidence of understanding, intention, art…all depending upon the motions within, all upon the system of intermediate actions. Natural Theology was part of the canon at Cambridge for half a century after Paley’s death. It thus was read by Darwin, who was an undergraduate student there between 1827 and 1831, with profit and “much delight.” Darwin was mindful of Paley’s relation argument when in the Origin of Species he stated: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case.…We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind.” In the 1990s several authors revived the argument from design. The proposition, once again, was that living beings manifest “intelligent design”—they are so diverse and complicated that they can be explained not as the outcome of natural processes but only as products of an “intelligent designer.” Some authors clearly equated this entity with the omnipotent God of Christianity and other monotheistic religions. Others, because they wished to see the theory of intelligent design taught in schools as an alternate to the theory of evolution, avoided all explicit reference to God in order to maintain the separation between religion and state. The call for an intelligent designer is predicated on the existence of irreducible complexity in organisms. 
In Michael Behe’s book Darwin’s Black Box: The Biochemical Challenge to Evolution (1996), an irreducibly complex system is defined as being “composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.” Contemporary intelligent-design proponents have argued that irreducibly complex systems cannot be the outcome of evolution. According to Behe, “Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have anything to act on.” In other words, unless all parts of the eye come simultaneously into existence, the eye cannot function; it does not benefit a precursor organism to have just a retina, or a lens, if the other parts are lacking. The human eye, they conclude, could not have evolved one small step at a time, in the piecemeal manner by which natural selection works. The theory of intelligent design has encountered many critics, not only among evolutionary scientists but also among theologians and religious authors. Evolutionists point out that organs and other components of living beings are not irreducibly complex—they do not come about suddenly, or in one fell swoop. The human eye did not appear suddenly in all its present complexity. Its formation required the integration of many genetic units, each improving the performance of preexisting, functionally less-perfect eyes. About 700 million years ago, the ancestors of today’s vertebrates already had organs sensitive to light. Mere perception of light—and, later, various levels of vision ability—were beneficial to these organisms living in environments pervaded by sunlight. 
As is discussed more fully below in the section Diversity and extinction, different kinds of eyes have independently evolved at least 40 times in animals, which exhibit a full range, from very uncomplicated modifications that allow individual cells or simple animals to perceive the direction of light to the sophisticated vertebrate eye, passing through all sorts of organs intermediate in complexity. Evolutionists have shown that the examples of irreducibly complex systems cited by intelligent-design theorists—such as the biochemical mechanism of blood clotting (see coagulation) or the molecular rotary motor, called the flagellum, by which bacterial cells move—are not irreducible at all; rather, less-complex versions of the same systems can be found in today’s organisms. Evolutionists have pointed out as well that imperfections and defects pervade the living world. In the human eye, for example, the visual nerve fibres in the eye converge on an area of the retina to form the optic nerve and thus create a blind spot; squids and octopuses do not have this defect. Defective design seems incompatible with an omnipotent intelligent designer. Anticipating this criticism, Paley responded that “apparent blemishes…ought to be referred to some cause, though we be ignorant of it.” Modern intelligent-design theorists have made similar assertions; according to Behe, “The argument from imperfection overlooks the possibility that the designer might have multiple motives, with engineering excellence oftentimes relegated to a secondary role.” This statement, evolutionists have responded, may have theological validity, but it destroys intelligent design as a scientific hypothesis, because it provides it with an empirically impenetrable shield against predictions of how “intelligent” or “perfect” a design will be. Science tests its hypotheses by observing whether predictions derived from them are the case in the observable world. 
A hypothesis that cannot be tested empirically—that is, by observation or experiment—is not scientific. The implication of this line of reasoning for U.S. public schools has been recognized not only by scientists but also by nonscientists, including politicians and policy makers. The liberal U.S. senator Edward Kennedy wrote in 2002 that “intelligent design is not a genuine scientific theory and, therefore, has no place in the curriculum of our nation’s public school science classes.” Scientists, moreover, have pointed out that not only imperfections but also dysfunctions, blunders, oddities, and cruelties prevail in the world of life. For this reason theologians and religious authors have criticized the theory of intelligent design, because it leads to conclusions about the nature of the designer at odds with the omniscience, omnipotence, and omnibenevolence that they, like Paley, identify as the attributes of the Creator. One example of a “blunder” is the human jaw, which for its size has too many teeth; the third molars, or wisdom teeth, often become impacted and need to be removed. Whereas many people would find it awkward, to say the least, to attribute to God a design that a capable human engineer would not even wish to claim, evolution gives a good account of this imperfection. As brain size increased over time in human ancestors, the concurrent remodeling of the skull entailed a reduction of the jaw so that the head of the fetus would continue to fit through the birth canal of the adult female. Evolution responds to an organism’s needs not by optimal design but by tinkering, as it were—by slowly modifying existing structures through natural selection. Despite the modifications to the human jaw, the woman’s birth canal remains much too narrow for easy passage of the fetal head, and many thousands of babies die during delivery as a result. 
Science makes this understandable as a consequence of the evolutionary enlargement of the human brain; females of other animals do not experience this difficulty. The world of life abounds in “cruel” behaviours. Numerous predators eat their prey alive; parasites destroy their living hosts from within; in many species of spiders and insects, the females devour their mates. Religious scholars in the past had struggled with such dysfunction and cruelty because they were difficult to explain by God’s design. Evolution, in one respect, came to their rescue. A contemporary Protestant theologian called Darwin the “disguised friend,” and a Roman Catholic theologian wrote of “Darwin’s gift to theology.” Both were acknowledging the irony that the theory of evolution, which at first had seemed to remove the need for God in the world, now was convincingly removing the need to explain the world’s imperfections as outcomes of God’s design.
Evolutionists point out that organs and other components of living beings are not irreducibly complex—they do not come about suddenly, or in one fell swoop. The human eye did not appear suddenly in all its present complexity. Its formation required the integration of many genetic units, each improving the performance of preexisting, functionally less-perfect eyes. About 700 million years ago, the ancestors of today’s vertebrates already had organs sensitive to light. Mere perception of light—and, later, various levels of vision ability—were beneficial to these organisms living in environments pervaded by sunlight. As is discussed more fully below in the section Diversity and extinction, different kinds of eyes have independently evolved at least 40 times in animals, which exhibit a full range, from very uncomplicated modifications that allow individual cells or simple animals to perceive the direction of light to the sophisticated vertebrate eye, passing through all sorts of organs intermediate in complexity. Evolutionists have shown that the examples of irreducibly complex systems cited by intelligent-design theorists—such as the biochemical mechanism of blood clotting (see coagulation) or the molecular rotary motor, called the flagellum, by which bacterial cells move—are not irreducible at all; rather, less-complex versions of the same systems can be found in today’s organisms. Evolutionists have pointed out as well that imperfections and defects pervade the living world. In the human eye, for example, the visual nerve fibres in the eye converge on an area of the retina to form the optic nerve and thus create a blind spot; squids and octopuses do not have this defect. Defective design seems incompatible with an omnipotent intelligent designer. 
Anticipating this criticism, Paley responded that “apparent blemishes…ought to be referred to some cause, though we be ignorant of it.” Modern intelligent-design theorists have made similar assertions; according to Behe, “The argument from imperfection overlooks the possibility that the designer might have multiple motives, with engineering excellence oftentimes relegated to a secondary role.”
no
Creationism
Is the complexity of the human eye evidence of design?
yes_statement
the "complexity" of the "human" "eye" is "evidence" of "design".. the intricate structure of the "human" "eye" points to intelligent "design".
https://thehumanevolutionblog.com/2015/01/12/the-poor-design-of-the-human-eye/
The Poor Design of the Human Eye – The Human Evolution Blog
Research, Writing, and Musings from Prof. Nathan H. Lents The Poor Design of the Human Eye The human eye is a well-tread example of how evolution can produce a clunky design even when the result is a well-performing anatomical product. The human eye is indeed a marvel, but if it were to be designed from scratch, it’s hard to imagine it would look anything like it does. Inside the human eye is the long legacy of how light-sensing slowly and incrementally developed in the animal lineage. [Update: This article is now included as a section in my new book, Human Errors, go check it out!] Not long ago, creationists often pointed to the human eye as an example of so-called irreducible complexity. Their claim was that the eye is so sophisticated, and with so many interconnected parts, that evolution could not have produced it through incrementation. Because the human eye does not function, even slightly, unless all of the parts are in place and working, there is no conceivable prior step of less complexity that the current form of the eye could have evolved from. So goes the complaint. This bizarre objection misunderstands both how evolution works and how the eye works. It’s true that the eye won’t function if you remove any one part, but evolution doesn’t work by adding individual fully-formed parts to a pre-existing structure. The entire eye evolves as a unit. There have been incremental advances throughout the entire structure of the eye, one at a time. Fortunately, we have many very good examples of earlier versions of the vertebrate eye, both from extant (living) organisms with more primitive eyes, and from the fossil record. In fact, the eye is now one of the anatomical structures about which we have the most complete understanding of its gradual evolution. For this reason, creationists have largely abandoned the argument of irreducible complexity of the eye, retreating to more obscure examples such as the bacterial flagellum. 
Before I discuss the puzzling physical design of the eye, let’s start off by making one thing clear: the human eye is fraught with functional problems as well. Many people reading this are doing so only with the aid of modern technology. In the US and Europe, 30-40% of the population have myopia (near-sightedness) and require assistance from glasses or contact lenses. Otherwise, their eyes cannot focus light properly and cannot resolve objects that are more than a few feet away. The rate of myopia increases to more than 70% in Asian countries. The defect in the myopic eye is not caused by injury or overuse: it is simply too long. Images focus sharply before they reach the back of the eye and then fall out of focus again as they finally land on the retina. It’s bad design, plain and simple. Of course, the opposite problem, far-sightedness, exists as well and comes in two forms: hyperopia and presbyopia. Hyperopic eyes are built too short and the light fails to focus before hitting the retina, another example of poor construction. Presbyopia, on the other hand, is age-related far-sightedness caused by the progressive loss of the flexibility of the lens and/or failure of the ciliary muscles to pull on the lens and focus light properly. Presbyopia literally means “old man sight” and begins to set in around age 40. By 60 years of age, virtually everyone suffers from difficulty resolving close objects. At 37 years old, I have already noticed that I hold books and newspapers further and further from my face as time goes on. The time for bifocals is nigh. Add to this: glaucoma, cataracts, and retinal detachment (just to name a few), and a pattern begins to emerge. For the “most highly evolved creatures” on the planet, our eyes are rather lacking. The vast majority of people will suffer significant loss of visual function in their lifetimes, and for many people, it starts even before puberty. I got glasses after my first eye exam when I was in the second grade. 
Who knows how long I had actually needed them? My vision isn’t just a little blurry; it’s terrible. My lenses are -4.25 diopters, which means my vision is somewhere around 20/400. Had I been born before, say, 1600, I would probably have gone through life unable to do anything that required me to see further than arm’s length. In prehistory, I would have been worthless as a hunter. Or a gatherer, for that matter. Compare this to the excellent vision of most birds, especially birds of prey, such as eagles and condors. Their visual acuity at great distances puts even the best human eyes to shame. Many birds can also see a broader range of wavelengths than we can, including ultraviolet light. In fact, migrating birds detect the earth’s magnetic poles with their eyes. It’s not clear if they are consciously aware of this perception, but it seems likely to me that they are, considering this information is conveyed by the same nerves that relay vision. This would mean that some birds can actually see the earth’s magnetic field. Many birds also have an additional translucent eyelid that allows them to look directly into the sun, at length, without damaging their retinas. The superiority of the bird eye shows that whatever designed the human eye, be it nature or a deity, is capable of producing eyes that are much better than the human eye. The question of why nature didn’t provide humans with better eyes is easily answered by evolutionary theory: it wasn’t strongly selected for. Alternatively, why an intelligent designer would deny his favorite creatures the excellent vision that he provided lowly birds is quite a mystery. There is more to be said about the shortcomings of our eyes. Our night-vision is, at best, only so-so, and for some it is very poor. Compare this to cats, whose night-vision is legendary. So sensitive are cats’ eyes that they can detect a single photon of light in an absolutely dark environment. 
For reference, in a small brightly lit room, there are about one hundred billion photons at any one moment in time. Even with what light we can see, our acuity and resolution in dim light is far worse than that of cats, dogs, birds, and many other animals. You might be able to see more colors than dogs can, but they can see at night more clearly than you can. Speaking of color vision, not all humans have that, either. Somewhere around 6% of the men in the world have some form of color blindness. (It’s not nearly as common in females because the screwed-up genes that lead to color blindness are almost always on the X chromosome. Because they have two X chromosomes, females have a backup in case they inherit one bum copy). With a world population of around 7 billion, that means at least a quarter of a billion people cannot appreciate the same palette of colors that the rest of us can. That’s a lot of color blindness. Now on to the physical design of the eye. One of the all-time most famous examples of quirky designs in nature is the vertebrate retina. The photoreceptor cells of the retina appear to be placed backward, with the wiring facing the light and the photoreceptor facing inward. A photoreceptor cell looks something like a microphone: the “hot” end has the sound receiver, and the other end terminates with the cable that carries the signal off to the amplifier. The human retina, located in the back of the eyeball, is designed such that all of the little “microphones” are facing the wrong way. The side with the cable faces forwards! This is not an optimal design for obvious reasons. The photons of light must travel around the bulk of the photoreceptor cell in order to hit the receiver tucked in the back. It’s as if you were speaking into the wrong end of a microphone. It can still work, provided that you turn the sensitivity of the microphone way up and you speak loudly. 
Furthermore, light must travel through a thin layer of tissue and blood supply before reaching the photoreceptors. To date, there are no working hypotheses about why the vertebrate retina is wired in backwards. It seems to have been a random development that then “stuck” because a correction of that magnitude would be very difficult to pull off with random mutations. Interestingly, the retina of cephalopods – octopi and squid – is not inverted. The cephalopod eye and the vertebrate eye, while strikingly similar, evolved completely independently of one another. Nature “invented” the camera-like eye at least twice, once in vertebrates and once in cephalopods. (Insects, arachnids, and crustaceans have an entirely different type of eye.) During the evolution of the cephalopod eye, the retina took shape in a more logical way, with the photoreceptors facing outward toward the light. Vertebrates were not so lucky. To be sure, evolution has done an impressive job of building an excellent eye despite the backwards contour of the retina. Perhaps capitalizing on the odd design, the vertebrate retina is able to provide oxygen and nutrients directly to the most metabolically active part of the light-sensing cells – the photoreceptors themselves. However, there is no evidence that the backwards design is necessary or even advantageous for oxygen delivery, especially given that the cephalopod eye does not show any sign of inadequate oxygen delivery. While the blood vessels of the vertebrate retina have clearly made the best of a bad situation, all available evidence supports the notion that the inverted vertebrate retina is inferior to the more logical design of the cephalopods. In fact, most ophthalmologists agree that the backwards retina is what causes retinal detachment to be more common in vertebrates than it is in cephalopods. There is one more design quirk in the human eye that merits mention. 
Right smack in the middle of the retina, there is a structure called the optic disc where the axons carrying the signals from the millions of photoreceptor cells all converge to form the optic nerve. The disc is located on the surface of the retina, occupying a small circular spot in which no photoreceptor cells can fit. This creates a blind spot in each eye. We don’t notice these blind spots because having two eyes compensates and our brain fills in the picture for us, but they are definitely there. You can find simple demonstrations of this on the internet by searching for “optic disc blind spot.” The optic disc is a necessary structure insofar as the retinal axons must all converge at some point. An intelligent design would be to place it deeper in the back of the eye, tucked underneath the retina, rather than smack on top of it. The backwards placement of the retina makes the blind spot somewhat unavoidable and all vertebrates have it. Cephalopods do not, however, because their right-side-out retina allows the optic nerve to exit behind an unbroken retina. Nevertheless, there are many conceivable “fixes” for the blind spot in the human retina, backwards though it is. Thus, the optic disc and accompanying blind spot are an example of poor design in all vertebrates, including humans. In sum, the human eye, wondrous though it is, has a few rather glaring defects in its design. These flaws are easily understood through the twists and turns of evolutionary progress. However, such rather obvious shortcomings are not easy to explain under the guise of intelligent design. The Human Evolution Blog is maintained by Professor Nathan Lents of John Jay College, The City University of New York. 
All content on this site is owned by Nathan Lents and may not be reproduced without permission.
Research, Writing, and Musings from Prof. Nathan H. Lents The Poor Design of the Human Eye The human eye is a well-tread example of how evolution can produce a clunky design even when the result is a well-performing anatomical product. The human eye is indeed a marvel, but if it were to be designed from scratch, it’s hard to imagine it would look anything like it does. Inside the human eye is the long legacy of how light-sensing slowly and incrementally developed in the animal lineage. [Update: This article is now included as a section in my new book, Human Errors, go check it out!] Not long ago, creationists often pointed to the human eye as an example of so-called irreducible complexity. Their claim was that the eye is so sophisticated, and with so many interconnected parts, that evolution could not have produced it through incrementation. Because the human eye does not function, even slightly, unless all of the parts are in place and working, there is no conceivable prior step of less complexity that the current form of the eye could have evolved from. So goes the complaint. This bizarre objection misunderstands both how evolution works and how the eye works. It’s true that the eye won’t function if you remove any one part, but evolution doesn’t work by adding individual fully-formed parts to a pre-existing structure. The entire eye evolves as a unit. There have been incremental advances throughout the entire structure of the eye, one at a time. Fortunately, we have many very good examples of earlier versions of the vertebrate eye, both from extant (living) organisms with more primitive eyes, and from the fossil record. In fact, the eye is now one of the anatomical structures about which we have the most complete understanding of its gradual evolution. For this reason, creationists have largely abandoned the argument of irreducible complexity of the eye, retreating to more obscure examples such as the bacterial flagellum. 
Before I discuss the puzzling physical design of the eye, let’s start off by making one thing clear: the human eye is fraught with functional problems as well. Many people reading this are doing so only with the aid of modern technology.
no
Creationism
Is the complexity of the human eye evidence of design?
yes_statement
the "complexity" of the "human" "eye" is "evidence" of "design".. the intricate structure of the "human" "eye" points to intelligent "design".
https://www.gotquestions.org/intelligent-design.html
What is the Intelligent Design Theory? | GotQuestions.org
The Intelligent Design Theory says that intelligent causes are necessary to explain the complex, information-rich structures of biology and that these causes are empirically detectable. Certain biological features defy the standard Darwinian random-chance explanation, because they appear to have been designed. Since design logically necessitates an intelligent designer, the appearance of design is cited as evidence for a designer. There are three primary arguments in the Intelligent Design Theory: 1) irreducible complexity, 2) specified complexity, and 3) the anthropic principle. One of the arguments for Intelligent Design, irreducible complexity, is defined as “a single system which is composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.” Simply put, life is composed of intertwined parts that rely on each other in order to be useful. Random mutation may account for the development of a new part, but it cannot account for the concurrent development of multiple parts necessary for a functioning system. For example, the human eye is obviously a very useful system. Without the eyeball, the optic nerve, and the visual cortex, a randomly mutated incomplete eye would actually be counterproductive to the survival of a species and would therefore be eliminated through the process of natural selection. An eye is not a useful system unless all its parts are present and functioning properly at the same time. Another argument for Intelligent Design, specified complexity, is the concept that, since specified complex patterns can be found in organisms, some form of guidance must have accounted for their origin. The specified complexity argument states that it is impossible for complex patterns to be developed through random processes. 
For example, a room filled with 100 monkeys and 100 computers may eventually produce a few words, or maybe even a sentence, but it would never produce a Shakespearean play. And how much more complex is biological life than a Shakespearean play? The anthropic principle of Intelligent Design states that the world and universe are “fine-tuned” to allow for life on Earth. If the ratio of elements in the atmosphere of the earth was altered slightly, many species would quickly cease to exist. If the earth were significantly closer to or further away from the sun, many species would cease to exist. The existence and development of life on Earth requires so many variables to be perfectly in tune that it would be impossible for all the variables to come into being through random, uncoordinated events. While the Intelligent Design Theory does not presume to identify the source of intelligence (whether it be God or UFOs or something else), the vast majority of Intelligent Design theorists are theists. They see the appearance of design which pervades the biological world as evidence for the existence of God. There are, however, a few atheists who cannot deny the strong evidence for design but are not willing to acknowledge a Creator God. They tend to interpret the data as evidence that earth was seeded by some sort of master race of extraterrestrial creatures (aliens). Of course, their interpretation does not address the origin of the aliens, either, so they are back to the original argument with no credible answer. Intelligent Design Theory is not always exactly the same as biblical creationism. There are various interpretations of what Intelligent Design refers to. Biblical creationists conclude that the Genesis account of creation is reliable and correct, and so life on Earth was designed by an intelligent agent: God. They see the facets of Intelligent Design as evidence from the natural realm that supports this conclusion. 
Other Intelligent Design theorists begin with the natural realm and reach the conclusion that life on Earth was designed by an intelligent agent, without specifying who that agent might be. In and of itself, Intelligent Design does not specify who the Designer or designers actually are. As such, Intelligent Design is compatible with biblical creationism, but it is not an inherently religious position.
For example, the human eye is obviously a very useful system. Without the eyeball, the optic nerve, and the visual cortex, a randomly mutated incomplete eye would actually be counterproductive to the survival of a species and would therefore be eliminated through the process of natural selection. An eye is not a useful system unless all its parts are present and functioning properly at the same time. Another argument for Intelligent Design, specified complexity, is the concept that, since specified complex patterns can be found in organisms, some form of guidance must have accounted for their origin. The specified complexity argument states that it is impossible for complex patterns to be developed through random processes. For example, a room filled with 100 monkeys and 100 computers may eventually produce a few words, or maybe even a sentence, but it would never produce a Shakespearean play. And how much more complex is biological life than a Shakespearean play? The anthropic principle of Intelligent Design states that the world and universe are “fine-tuned” to allow for life on Earth. If the ratio of elements in the atmosphere of the earth was altered slightly, many species would quickly cease to exist. If the earth were significantly closer to or further away from the sun, many species would cease to exist. The existence and development of life on Earth requires so many variables to be perfectly in tune that it would be impossible for all the variables to come into being through random, uncoordinated events. While the Intelligent Design Theory does not presume to identify the source of intelligence (whether it be God or UFOs or something else), the vast majority of Intelligent Design theorists are theists. They see the appearance of design which pervades the biological world as evidence for the existence of God. There are, however, a few atheists who cannot deny the strong evidence for design but are not willing to acknowledge a Creator God. 
They tend to interpret the data as evidence that earth was seeded by some sort of master race of extraterrestrial creatures (aliens). Of course, their interpretation does not address the origin of the aliens, either, so they are back to the original argument with no credible answer. Intelligent Design Theory is not always exactly the same as biblical creationism.
yes
Creationism
Is the complexity of the human eye evidence of design?
no_statement
the "complexity" of the "human" "eye" does not necessarily indicate "design".. the intricate nature of the "human" "eye" can be explained through natural processes.
https://www.scientificamerican.com/article/evolution-of-the-eye/
Evolution of the Eye - Scientific American
Evolution of the Eye Credit: Photograph by Dan Saelinger. Specimen courtesy of the Eye-Bank For Sight Restoration, New York (www.eyedonation.org) In Brief The eyes of vertebrate animals are so complex that creationists have long argued that they could not have formed by natural selection. Soft tissues rarely fossilize. But by comparing eye structures and embryological development of the eye in vertebrate species, scientists have gained crucial insights into the organ’s origin. These findings suggest that our camera-style eye has surprisingly ancient roots and that prior to acquiring the elements necessary to operate as a visual organ it functioned to detect light for modulating our long-ago ancestors’ circadian rhythms. The human eye is an exquisitely complicated organ. It acts like a camera to collect and focus light and convert it into an electrical signal that the brain translates into images. But instead of photographic film, it has a highly specialized retina that detects light and processes the signals using dozens of different kinds of neurons. So intricate is the eye that its origin has long been a cause célèbre among creationists and intelligent design proponents, who hold it up as a prime example of what they term irreducible complexity—a system that cannot function in the absence of any of its components and that therefore cannot have evolved naturally from a more primitive form. Indeed, Charles Darwin himself acknowledged in On the Origin of Species—the 1859 book detailing his theory of evolution by natural selection—that it might seem absurd to think the eye formed by natural selection. He nonetheless firmly believed that the eye did evolve in that way, despite a lack of evidence for intermediate forms at the time. Direct evidence has continued to be hard to come by. Whereas scholars who study the evolution of the skeleton can readily document its metamorphosis in the fossil record, soft-tissue structures rarely fossilize. 
And even when they do, the fossils do not preserve nearly enough detail to establish how the structures evolved. Still, biologists have recently made significant advances in tracing the origin of the eye—by studying how it forms in developing embryos and by comparing eye structure and genes across species to reconstruct when key traits arose. The results indicate that our kind of eye—the type common across vertebrates—took shape in less than 100 million years, evolving from a simple light sensor for circadian (daily) and seasonal rhythms around 600 million years ago to an optically and neurologically sophisticated organ by 500 million years ago. More than 150 years after Darwin published his groundbreaking theory, these findings put the nail in the coffin of irreducible complexity and beautifully support Darwin’s idea. They also explain why the eye, far from being a perfectly engineered piece of machinery, exhibits a number of major flaws—these flaws are the scars of evolution. Natural selection does not, as some might think, result in perfection. It tinkers with the material available to it, sometimes to odd effect. To understand how our eye originated, one needs to know something about events that occurred in deep time. We humans have an unbroken line of ancestors stretching back nearly four billion years to the beginning of life on earth. Around a billion years ago simple multicellular animals diverged into two groups: one had a radially symmetrical body plan (a top side and bottom side but no front or back), and the other—which gave rise to most of the organisms we think of as animals—was bilaterally symmetrical, with left and right sides that are mirror images of one another and a head end. The bilateria themselves then diverged around 600 million years ago into two important groups: one that gave rise to the vast majority of today’s spineless creatures, or invertebrates, and one whose descendants include our own vertebrate lineage. 
Soon after these two lineages parted ways, an amazing diversity of animal body plans proliferated—the so-called Cambrian explosion that famously left its mark in the fossil record of around 540 million to 490 million years ago. This burst of evolution laid the groundwork for the emergence of our complex eye. Compound vs. Camera The fossil record shows that during the Cambrian explosion two fundamentally different styles of eye arose. The first seems to have been a compound eye of the kind seen today in all adult insects, spiders and crustaceans—part of an invertebrate group collectively known as the arthropods. In this type of eye, an array of identical imaging units, each of which constitutes a lens or reflector, beams light to a handful of light-sensitive elements called photoreceptors. Compound eyes are very effective for small animals in offering a wide-angle view and moderate spatial resolution in a small volume. In the Cambrian, such visual ability may have given trilobites and other ancient arthropods a survival advantage over their visually impaired contemporaries. Compound eyes are impractical for large animals, however, because the eye size required for high-resolution vision would be overly large. Hence, as body size increased, so, too, did the selective pressures favoring the evolution of another type of eye: the camera variety. In camera-style eyes, the photoreceptors all share a single light-focusing lens, and they are arranged as a sheet (the retina) that lines the inner surface of the wall of the eye. Squid and octopuses have a camera-style eye that superficially resembles our own, but their photoreceptors are the same kind found in insect eyes. Vertebrates possess a different style of photoreceptor, which in jawed vertebrates (including ourselves) comes in two varieties: cones for daylight vision and rods for nighttime vision. Several years ago Edward N. Pugh, Jr., then at the University of Pennsylvania, and Shaun P. 
Collin, then at the University of Queensland in Australia, and I teamed up to try to figure out how these different types of photoreceptors could have evolved. What we found went beyond answering that question to provide a compelling scenario for the origin of the vertebrate eye.

Deep Roots

Like other biologists before us, Pugh, Collin and I observed that many of the hallmark features of the vertebrate eye are the same across all living representatives of a major branch of the vertebrate tree: that of the jawed vertebrates. This pattern suggests that jawed vertebrates inherited the traits from a common ancestor and that our eye had already evolved by around 420 million years ago, when the first jawed vertebrates (which probably resembled modern-day cartilaginous fish such as sharks) patrolled the seas. We reasoned that our camera-style eye and its photoreceptors must therefore have still deeper roots, so we turned our attention to the more primitive jawless vertebrates, with which we share a common ancestor from roughly 500 million years ago.

We wanted to examine the anatomy of such an animal in detail and thus decided to focus on one of the few modern-day animals in this group: the lamprey, an eel-like fish with a funnel-shaped mouth built for sucking rather than biting. It turns out that this fish, too, has a camera-style eye complete with a lens, an iris and eye muscles. The lamprey’s retina even has a three-layered structure like ours, and its photoreceptor cells closely resemble our cones, although it has apparently not evolved the more sensitive rods. Furthermore, the genes that govern many aspects of light detection, neural processing and eye development are the same ones that direct these processes in jawed vertebrates. These striking similarities to the eye of jawed vertebrates are far too numerous to have arisen independently.
Instead an eye essentially identical to our own must have been present in the common ancestor of the jawless and jawed vertebrates 500 million years ago. At this point, my colleagues and I could not help but wonder whether we could trace the origin of the eye and its photoreceptors back even further. Unfortunately, there are no living representatives of lineages that split off from our line in the preceding 50 million years, the next logical slice of time to study. But we found clues in the eye of an enigmatic beast called the hagfish. Like their close relatives the lampreys, hagfish are eel-shaped, jawless fish. They typically live on the ocean floor, where they feed on crustaceans and fallen carcasses of other marine creatures. When threatened, they exude an extremely viscous slime, hence the nickname “slime eels.” Although hagfish are vertebrates, their eye departs profoundly from the vertebrate norm. The hagfish eye lacks a cornea, iris, lens and all of the usual supporting muscles. Its retina contains just two layers of cells rather than three. Furthermore, each eye is buried deep underneath a translucent patch of skin. Observations of hagfish behavior suggest that the animals are virtually blind, locating carrion with their keen sense of smell. The hagfish shares a common ancestor with the lamprey, and this ancestor presumably had a camera-style eye like the lamprey’s. The hagfish eye must therefore have degenerated from that more advanced form. That it still exists in this diminished state is telling. We know from blind cavefish, for instance, that the eye can undergo massive degeneration and can even be lost altogether in as little as 10,000 years. Yet the hagfish eye, such as it is, has hung on for hundreds of millions of years. This persistence suggests that even though the animal cannot use its eye to see in the dim ocean depths, the organ is somehow important for survival. The discovery also has other implications. 
The hagfish eye may have ended up in its rudimentary state by way of a failure of development, so its current structure may be representative of the architecture of an earlier evolutionary stage. The operation of the hagfish eye could thus throw light on how the proto-eye functioned before evolving into a visual organ. Hints about the role the hagfish eye might play came from taking a closer look at the animal’s retina. In the standard three-layered vertebrate retina, the cells in the middle layer, known as bipolar cells, process information from the photoreceptors and communicate the results to the output neurons, whose signals travel to the brain for interpretation. The two-layered hagfish retina, however, lacks the intervening bipolar cells, which means that the photoreceptors connect directly to the output neurons. In this regard, the wiring of the hagfish retina closely resembles that of the so-called pineal gland, a small, hormone-secreting body in the vertebrate brain. The pineal gland modulates circadian rhythms, and in nonmammalian vertebrates it contains photoreceptor cells that connect directly to output neurons with no intermediary cells; in mammals those cells have lost their ability to detect light. Based in part on this parallel to the pineal gland, my collaborators and I proposed in 2007 that the hagfish eye is not involved in vision but instead provides input to the part of the animal’s brain that regulates crucial circadian rhythms, as well as seasonal activities such as feeding and breeding. Perhaps, then, the ancestral eye of proto-vertebrates living between 550 million and 500 million years ago first served as a nonvisual organ and only later evolved the neural processing power and optical and motor components needed for spatial vision. Studies of the embryological development of the vertebrate eye support this notion. When a lamprey is in the larval stage, it lives in a streambed and, like the hagfish, is blind. 
At that point in its young life, its eye resembles the hagfish eye in being structurally simple and buried below the skin. When the larva undergoes metamorphosis, its rudimentary eye grows substantially and develops a three-layered retina; a lens, cornea and supporting muscles all form. The organ then erupts at the surface as a camera-style vertebrate eye. Because many aspects of the development of an individual mirror events that occurred during the evolution of its ancestors, we can, with caution, use the developing lamprey eye to inform our reconstruction of how the eye evolved.

During embryological development the mammalian eye, too, exhibits telltale clues to its evolutionary origin. Benjamin E. Reese and his collaborators at the University of California, Santa Barbara, have found that the circuitry of the mammalian retina starts out rather like that of the hagfish, with the photoreceptors connecting directly to the output neurons. Then, over a period of several weeks, the bipolar cells mature and insert themselves between the photoreceptors and the output neurons. This sequence is exactly the developmental pattern one would expect to see if the vertebrate retina evolved from a two-layered circadian organ by adding processing power and imaging components. It therefore seems entirely plausible that this early, simple stage of development represents a holdover from a period in evolution before the invention of bipolar cell circuitry in the retina and before the invention of the lens, cornea and supporting muscles.

Rise of the Receptors

While we were studying the development of the three layers of the retina, another question related to the eye’s evolution occurred to us. Photoreceptor cells across the animal kingdom fall into two distinct classes: rhabdomeric and ciliary. Until recently, many scientists thought that invertebrates used the rhabdomeric class, whereas vertebrates used the ciliary class, but in fact, the situation is more complicated.
In the vast majority of organisms, ciliary photoreceptors are responsible for sensing light for nonvisual purposes—to regulate circadian rhythms, for example. Rhabdomeric receptors, in contrast, sense light for the express purpose of enabling vision. Both the compound eyes of arthropods and the camera-style eyes of mollusks such as the octopus, which evolved independently of the camera-style eyes of vertebrates, employ rhabdomeric photoreceptors. The vertebrate eye, however, uses the ciliary class of photoreceptors to sense light for vision. In 2003 Detlev Arendt of the European Molecular Biology Laboratory in Heidelberg, Germany, reported evidence that our eye still retains the descendants of rhabdomeric photoreceptors, which have been greatly modified to form the output neurons that send information from the retina to the brain. This discovery means that our retina contains the descendants of both classes of photoreceptors: the ciliary class, which has always comprised photoreceptors, and the rhabdomeric class, transformed into output neurons. Pressing an existing structure into use for a new purpose is exactly how evolution works, and so the discovery that the ciliary and rhabdomeric photoreceptors play different roles in our eye than in the eye of invertebrates adds still more weight to the evidence that the vertebrate eye was constructed by natural processes. We wondered, though, what kinds of environmental pressures might have pushed those cells to take on those new roles. To try to understand why the ciliary photoreceptors triumphed as the light sensors of the vertebrate retina, whereas the rhabdomeric class evolved into projection neurons, I analyzed the properties of their respective light-sensing pigments, or rhodopsins, so named for the opsin protein molecule they contain. 
In 2004 Yoshinori Shichida of Kyoto University in Japan and his colleagues had shown that early in the evolution of vertebrate visual pigments, a change had occurred that made the light-activated form of the pigment more stable and hence more active. I proposed that this change also blocked the route for reconversion of the activated rhodopsin back to its inactive form, which for rhabdomeric rhodopsins uses the absorption of a second photon of light; thus, of necessity, a biochemical pathway was needed to reset the molecule in readiness to signal light again. Once these two elements were in place, I hypothesized, the ciliary photoreceptors would have had a distinct advantage over rhabdomeric photoreceptors in environments such as the deep ocean, where light levels are very low. As a result, some early chordates (ancestors of the vertebrates) may have been able to colonize ecological niches inaccessible to animals that relied on rhabdomeric photoreceptors—not because the improved ciliary opsin conferred better vision (the other essential components of the camera-style eye had yet to evolve) but because it provided an improved way of sensing the light that enables circadian and seasonal clocks to keep time. For these ancient chordates dwelling in darker realms, the less sensitive rhabdomeric photoreceptors they had in addition to the ciliary ones would have been virtually useless and so would have been free to take on a new role: as neurons that transmit signals to the brain. (At that point, they no longer needed opsin, and natural selection would have eliminated it from these cells.)

An Eye Is Born

Now that my colleagues and I had an idea of how the components of the vertebrate retina originated, we wanted to figure out how the eye evolved from a light-sensing but nonvisual organ into an image-forming one by around 500 million years ago. Here again we found clues in developing embryos.
Early in development, the neural structure that gives rise to the eye bulges out on either side to form two sacs, or vesicles. Each of these vesicles then folds in on itself to form a C-shaped retina that lines the interior of the eye. Evolution probably proceeded in much the same way. We postulate that a proto-eye of this kind—with a C-shaped, two-layered retina composed of ciliary photoreceptors on the exterior and output neurons derived from rhabdomeric photoreceptors on the interior—had evolved in an ancestor of vertebrates between 550 million and 500 million years ago, serving to drive its internal clock and perhaps help it to detect shadows and orient its body properly. In the next stage of embryological development, as the retina is folding inward against itself, the lens forms, originating as a thickening of the embryo’s outer surface, or ectoderm, that bulges into the curved empty space formed by the C-shaped retina. This protrusion eventually separates from the rest of the ectoderm to become a free-floating element. It seems likely that a broadly similar sequence of changes occurred during evolution. We do not know exactly when this modification happened, but in 1994 researchers at Lund University in Sweden showed that the optical components of the eye could have easily evolved within a million years. If so, the image-forming eye may have arisen from the nonvisual proto-eye in a geologic instant. With the advent of the lens to capture light and focus images, the eye’s information-gathering capability increased dramatically. This augmentation would have created selective pressures favoring the emergence of improved signal processing in the retina beyond what the simple connection of photoreceptors to output neurons afforded. 
Evolution met this need by modifying the cell maturation process so that some developing cells, instead of forming ciliary photoreceptors, become retinal bipolar cells that insert themselves between the photoreceptor layer and the output neuron layer. This is why the retina’s bipolar cells so closely resemble rod and cone cells, although they lack rhodopsin and receive input not from light but instead from the chemical (called a neurotransmitter) released by the photoreceptors.

Although camera-style eyes provide a wide field of view (typically of around 180 degrees), in practice our brain can sample only a fraction of the available information at any given time because of the limited number of nerve fibers linking our eye to our brain. The earliest camera-style eyes no doubt faced an even more severe limitation, because they presumably had even fewer nerve fibers. Thus, there would have been considerable selective pressure for the evolution of muscles to move the eye. Such muscles must have been present by 500 million years ago because the arrangement of these muscles in the lamprey, whose lineage dates back that far, is almost identical to that of jawed vertebrates, including humans.

For all the ingenious features evolution built into the vertebrate eye, there are a number of decidedly inelegant traits. For instance, the retina is inside out, so light has to pass through the whole thickness of the retina—through the intervening nerve fibers and cell bodies that scatter the light and degrade image quality—before reaching the light-sensitive photoreceptors. Blood vessels also line the inner surface of the retina, casting unwanted shadows onto the photoreceptor layer. The retina has a blind spot where the nerve fibers that run across its surface congregate before tunneling out through the retina to emerge behind it as the optic nerve. The list goes on and on.
These defects are by no means inevitable features of a camera-style eye because octopuses and squid independently evolved camera-style eyes that do not suffer these deficiencies. Indeed, if engineers were to build an eye with the flaws of our own, they would probably be fired. Considering the vertebrate eye in an evolutionary framework reveals these seemingly absurd shortcomings as consequences of an ancient sequence of steps, each of which provided benefit to our long-ago vertebrate ancestors even before they could see. The design of our eye is not intelligent—but it makes perfect sense when viewed in the bright light of evolution.

ABOUT THE AUTHOR(S)

Trevor D. Lamb is an investigator in the department of neuroscience at the John Curtin School of Medical Research and in the ARC Centre of Excellence in Vision Science at the Australian National University in Canberra. His research focuses on the rod and cone photoreceptors of the vertebrate retina.

Scientific American is part of Springer Nature, which owns or has commercial relations with thousands of scientific publications (many of them can be found at www.springernature.com/us). Scientific American maintains a strict policy of editorial independence in reporting developments in science to our readers.
So intricate is the eye that its origin has long been a cause célèbre among creationists and intelligent design proponents, who hold it up as a prime example of what they term irreducible complexity—a system that cannot function in the absence of any of its components and that therefore cannot have evolved naturally from a more primitive form. Indeed, Charles Darwin himself acknowledged in On the Origin of Species—the 1859 book detailing his theory of evolution by natural selection—that it might seem absurd to think the eye formed by natural selection. He nonetheless firmly believed that the eye did evolve in that way, despite a lack of evidence for intermediate forms at the time. Direct evidence has continued to be hard to come by. Whereas scholars who study the evolution of the skeleton can readily document its metamorphosis in the fossil record, soft-tissue structures rarely fossilize.
Source: Creationism (Stanford Encyclopedia of Philosophy), https://plato.stanford.edu/entries/creationism/
Creationism

At a broad level, a Creationist is someone who believes in a god who is absolute creator of heaven and earth, out of nothing, by an act of free will. Such a deity is generally thought to be “transcendent,” meaning beyond human experience, and constantly involved (‘immanent’) in the creation, ready to intervene as necessary, and without whose constant concern the creation would cease or disappear. Christians, Jews, and Muslims are all Creationists in this sense. Generally they are known as ‘theists,’ distinguishing them from ‘deists,’ that is people who believe that there is a designer who might or might not have created the material on which he (or she or it) is working and who does not interfere once the designing act is finished.

The focus of this discussion is on a narrower sense of Creationism, the sense that one usually finds in popular writings (especially in America today, but expanding world-wide rapidly). Here, Creationism means the taking of the Bible, particularly the early chapters of Genesis, as literally true guides to the history of the universe and to the history of life, including us humans, down here on earth (Numbers 1992). Creationism in this more restricted sense entails a number of beliefs. These include, first, that a short time has elapsed since the beginning of everything. ‘Young Earth Creationists’ think that Archbishop Ussher’s seventeenth-century calculation of about 6000 years is a good estimate. Second, that there are six days of creation – there is debate on the meaning of ‘day’ in this context, with some insisting on a literal twenty-four hours, and others more flexible. Third, that there was a miraculous creation of all life including Homo sapiens — with scope for debate about whether Adam and Eve came together or if Eve came afterwards to keep Adam company. Fourth, that there was a world-wide flood some time after the initial creation, through which only a limited number of humans and animals survived.
Fifth, that there were other events such as the Tower of Babel and the turning of Lot’s wife into a pillar of salt. Creationists (in this narrow sense) have variously been known as Fundamentalists or biblical literalists, and sometimes – especially when they are pushing the scientific grounds for their beliefs – as Scientific Creationists. Today’s Creationists are often marked by enthusiasm for something that is known as Intelligent Design. Because the relationship between Creationism in the sense of literalism and Intelligent Design is somewhat complex, examination of this relationship will be left until later and, until stated otherwise, the following discussion focusses on literalists. (Because ‘literalist’ is the common term, we continue to use it. More accurately, such people are better known as ‘inerrantists,’ implying that the stress is less on the actual words and more on the interpretation, especially given the extent to which they interpret the Bible, especially when it comes to prophecies.) With significant provisos to be noted below, Creationists are strongly opposed to a world created by evolution, particularly to a world as described by Charles Darwin in his Origin of Species. Creationists (certainly traditional Creationists) oppose the fact (pattern) of evolution, namely that all organisms living and dead are the end products of a natural process of development from a few forms, perhaps ultimately from inorganic materials (‘common descent’). Often this is known as ‘macroevolution’ as opposed to ‘microevolution,’ meaning apes to humans rather than merely one species of finch to another. 
Creationists also oppose claims about the total adequacy of the Darwinian version of the theory of evolution, namely that population pressures lead to a struggle for existence; that organisms differ in random ways brought on by errors in the material of heredity (‘mutations’ in the ‘genes’); that the struggle and variation leads to a natural form of selection, with some surviving and reproducing and others failing; and that the end consequence of all of this is evolution, in the direction of well-adapted organisms.

1. History of Creationism

Creationists present themselves as the true bearers and present-day representatives of authentic, traditional Christianity, but historically speaking this is simply not true (Ruse 1988 (ed.), 2005; Numbers 1992; McMullin 1985). The Bible has a major place in the life of any Christian, but it is not the case that the Bible taken literally has always had a major place in the lives or theology of Christians. For most, indeed, it has not (Turner 2002). One should remember, though, that most literalists are better known as inerrantists, because they often differ on the meaning of a literal reading! Tradition, the teachings and authority of the church, has always had a central status for Catholics, and natural religion – approaching God through reason and argument – has long had an honored place for both Catholics and Protestants. Catholics, especially dating back to Saint Augustine around 400 AD, and even to earlier thinkers like Origen, have always recognized that at times the Bible needs to be taken metaphorically or allegorically. Augustine was particularly sensitive to this need, because for many years as a young man he was a Manichean and hence denied the authenticity and relevance of the Old Testament for salvation. When he became a Christian he knew full well the problems of Genesis and hence was eager to keep his fellow believers from getting ensnared in the traps of literalism.
It was not until the Protestant Reformation that the Bible started to take on its unique central position, as the great Reformers – especially Luther and Calvin – stressed the need to go by scripture alone and not by what they took to be the overly rich traditions of the Catholic Church. But even they were doubtful about totally literalistic readings. For Luther, justification by faith was the keystone of his theology, and yet the Epistle of Saint James seems to put greater stress on the need for good works. He referred to the gospel as ‘right strawy stuff.’ (Wengert 2013). Calvin likewise spoke of the need for God to accommodate His writings to the untutored public – especially the ancient Jews – and hence of the dangers of taking the Bible too literally in an uncritical sense. The radical branch of the Reformation under Zwingli always put primacy on God’s speaking directly to us through the heart, and to this day one finds modern-day representatives like the Quakers uncomfortable with too-biblically centered an approach to religion. It was after the religious revivals of the eighteenth and early nineteenth century in Britain and America – revivals that led to such sects as the Methodists – that a more full-blooded literalism became a major part of the religious scene. In America particularly literalism took hold, and especially after the Civil War, it took root in the evangelical sects – especially Baptists – of the South (Numbers 1998; Noll 2002). It became part of the defining culture of the South, having (as we shall see below) as much a role in opposing ideas and influences of the leaders and policy makers of the North as anything rooted in deeply thought-through theology. Note the important qualification, ‘leaders and policy makers’ of the North. 
Many – especially working and lower-middle-class people – living in the large cities of the North felt deeply threatened by the moves to industrialism, the weakening of traditional beliefs, and the large influx of immigrants from Europe. They provided very fertile material for the literalist preachers. (See the extended discussions of these happenings in Ruse 2013.) Thanks to a number of factors, Creationism started to grow dramatically in the early part of the twentieth century. First, there were the first systematic attempts to work out a position that would take account of modern science as well as just a literal reading of Genesis. Particularly important in this respect were the Seventh-day Adventists, especially the Canadian-born George McCready Price, who had theological reasons for wanting literalism, not the least being the belief that the Seventh Day – the day of rest – is literally twenty-four hours in length. Also important for the Adventists and for fellow travelers, that is people who think that Armageddon is on its way, is the balancing and complementary early phenomenon of a world-wide flood. (This, as we shall see, was to become a major theme in twentieth-century Cold War times.) Second, there was the released energy of evangelicals (referring generically to Protestants whose faith was tied to the Bible, taken rather literally) as they succeeded in their attempts to prohibit liquor in the United States. Flushed from one victory, they looked for other fields to conquer. Third there was the spread of public education, and more children being exposed to evolutionary ideas, bringing on a Creationist reaction. Fourth, there were new evangelical currents afloat, especially the tracts the Fundamentals – a series of evangelical publications, conceived in 1909 by California businessman Lyman Stewart, the founder of Union Oil and a devout Presbyterian – that gave the literalist movement its name. 
And fifth, there was the identification of evolution – Darwinism particularly – with the militaristic aspects of Social Darwinism, especially the Social Darwinism supposedly embraced by the Germans in the First World War (Larson 1997; Ruse 2018a). This battle between evolutionists and ‘Fundamentalists’ came to a head in the mid 1920s in Dayton, Tennessee, when a young schoolteacher, John Thomas Scopes, was prosecuted for teaching evolution in class, in defiance of a state law prohibiting such teaching. Prosecuted by three-times presidential candidate William Jennings Bryan and defended by noted agnostic lawyer Clarence Darrow, the ‘Scopes Monkey Trial’ caught the attention of the world, especially thanks to the inflammatory reporting of Baltimore Sun journalist H. L. Mencken. Matters descended to the farcical when, denied the opportunity to introduce his own science witnesses, Darrow put on the stand the prosecutor Bryan. In the end, Scopes was found guilty and fined $100. This conviction was overturned on a technicality on appeal, but there were no more prosecutions, even though the Tennessee law remained on the books for another forty years. In the 1950s, the Scopes trial became the basis of a famous play and then movie, Inherit the Wind. This portrays the Bryan figure as a bigot, wedded to a crude picture of life’s past. In fact, Bryan in some respects was an odd figure to be defending the Tennessee law. He thought that the days of Creation are long periods of time, and he had little sympathy for eschatological speculations about Armageddon and so forth. It is quite possible that, humans apart, he accepted some form of evolution. His objections to Darwinism were more social than theological. He hated what he thought were the militaristic implications that many drew from the struggle for existence at the center of Darwin’s thinking. The First World War, with many justifying violence in the name of evolutionary biology, confirmed his suspicions.
(The Pulitzer Prize-winning Summer for the Gods (Larson 1997) is definitive on the Scopes trial. It is generally agreed that Inherit the Wind is using history as a vehicle to explore and condemn McCarthy-like attacks on uncomfortably new or dissenting-type figures in American society.)

2. Creation Science

After the Scopes Trial, it is generally agreed, the Creationism movement peaked and then declined quite dramatically and quickly. Yet, it (and related anti-evolution activity) did have its lasting effects. Text-book manufacturers increasingly took evolution – Darwinism especially – out of their books, so that schoolchildren got less and less exposure to the ideas anyway. Whatever battles the evolutionists may have thought they had won in the court of popular opinion, in the trenches of the classroom they were losing the war badly. Things started to move again in the late 1950s. It was then that, thanks to Sputnik, the Russians so effectively demonstrated their superiority in rocketry (with its implications for the arms race of the Cold War), and America realized with a shudder how ineffective was its science training of its young. Characteristically, the country did something immediate and effective about this, namely pouring money into the production of new science texts. In this way, with class adoption, the Federal Government could have a strong impact and yet get around the problem that education tends to be under the tight control of individual states. The new biology texts gave full scope to evolution – to Darwinism – and with this the Creationism controversy again flared right up. Children were learning these dreadful doctrines in schools, and something had to be done (Ruse (ed.) 1988; Gilkey 1985). Fortunately for the literalists, help was at hand. A biblical scholar, John C. Whitcomb, and a hydraulic engineer, Henry M.
Morris, combined to write what was to be the new Bible of the movement, Genesis Flood: The Biblical Record and its Scientific Implications (1961). Following in the tradition of earlier writers, especially those from Seventh-day Adventism, they argued that every bit of the Biblical story of creation given in the early chapters of Genesis is supported fully by the best of modern science. Six days of twenty-four hours, organisms arriving miraculously, humans last, and sometime thereafter a massive world-wide flood that wiped most organisms off the face of the earth – or rather, dumped their carcasses in the mud as the waters receded. At the same time, Whitcomb and Morris argued that the case for evolution fails dismally. They introduced (or revived) a number of arguments that have become standard parts of the Creationist repertoire against evolution. Let us look at a number of these arguments, together with the counter-arguments that evolutionists make in response. First, the Creationists argue that at best evolution is only a theory and not a fact, and that theories should never be taken as gospel (if one might be permitted a metaphor). They claim that the very language of evolutionists themselves shows that their ideas are on shaky ground. To this charge evolutionists respond that it confuses two senses of the word ‘theory.’ Sometimes we use it to mean a body of scientific laws, as in ‘Einstein’s theory of relativity.’ Sometimes we use it to mean an ‘iffy hypothesis,’ as in ‘I have a theory about Kennedy’s assassination.’ These are two very different senses. There is nothing iffy about the Copernican heliocentric theory. It is true. It is a fact. Evolutionists argue that the same is the case with evolution. When talking about the theory of evolution, one is talking about a body of laws.
In particular, if one is following the ideas of Charles Darwin, one is arguing that population pressures lead to a struggle for existence, that this then entails a natural selection of favored forms, and that evolution through shared descent is the end result. This is a body of general statements about life, since the 1930s given in a formal version using mathematics, with deductive inferences between steps. In other words, we have a body of laws, and hence a theory in the first sense just given. There is no implication here that the theory is iffy, that is, a theory in the second sense just given. We are not necessarily talking about something inherently unreliable. Of course, there are going to be additions and revisions, for instance the possibility of much greater hybridization than someone like Darwin realized, but that is another matter (Quammen 2018). Second, Creationists like Whitcomb and Morris claim that the central mechanism of modern evolutionary thought, Darwin’s natural selection, is bogus. They argue that it is not a genuine claim about the real world but merely a truism, what philosophers call a tautology – something true by the meaning of the words, like ‘bachelors are unmarried.’ In the case of natural selection, the Creationists point out that an alternative name for the mechanism is ‘the survival of the fittest.’ But, they ask, who are the fittest? They reply: those that survive! Hence, natural selection reduces to the tautology that those that survive are those that survive. Not a real claim of science at all. To which evolutionists respond that this is a sleight of hand, showing ignorance of what is genuinely at stake. Natural selection is truly real, for it talks about some organisms actually surviving and reproducing in life’s struggles and others failing to do so. Some of our would-be ancestors lived and had babies and others did not. There was differential reproduction. This is certainly not a mere truism.
It could be that everyone had the same number of children. It could also be that there is no difference overall between the successful and the unsuccessful. This too is denied by natural selection. To say that something is the fitter or fittest is to say that it has certain characteristics (what biologists call adaptations) that other organisms do not have, and that on average one expects the fitter to succeed. But there is no guarantee that this must be so or that it will always happen. An earthquake could wipe out everyone, fit and unfit. Before discussing the third argument Creationists level against evolution, it is worth pausing over this second one. Most if not all professional evolutionists argue that sometimes natural selection is not a significant causal factor. In this sense, it is false that selection is by definition always the reason for lasting change. The fittest do not always win. It cannot be a tautology. In the 1930s, the American population geneticist Sewall Wright devised his hypothesis of “genetic drift,” arguing that sometimes mere chance can lead to effects overriding the forces of selection (Wright 1931, 1932). Although, at first, this was embraced enthusiastically (Dobzhansky 1937), it soon became clear that at the gross physical (phenotypic) level it is at most minor (Coyne, Barton, and Turelli 1997). However, at the level of the gene (genotype), it is still thought very important. Indeed, it is a powerful tool in discovering the exact dates of key evolutionary events, especially those involving speciation (Ayala 2009). Moreover, as we shall see in a moment, somewhat paradoxically, as Creationism has evolved (!), increasingly the virtues of selection working at the microevolutionary level have become apparent. Thus one can explain the diversity of life on earth – it evolved after the animals left the Ark, which carried only generic kinds.
For all its supposed faults, there is a better discussion of natural selection at the Creationist museum in Kentucky than in the Field Museum in Chicago, 300 miles north. The bar on macroevolution remains absolute. Third, Creationists point out that modern evolutionary theory asserts that the raw building blocks of evolution, the genetic mutations, are random. But this means that there are minimal chances of evolution producing something that works as well and efficiently as an organism, with all of the functioning parts in place. A monkey typing letters does so randomly. It could never in a million years (in a billion, billion, billion… years) type the works of Shakespeare. The Creationists say that the same is true of evolution and organisms, given the randomness of mutation. To which evolutionists reply that this may all be well and true of the monkey, but in the case of evolution things are rather different. If a mutation works, then it is kept and then built upon, until the next good mutation comes along. This improves considerably the odds of evolution producing organisms, even though the appearance of mutation is random. Suppose you take just one phrase from Shakespeare: ‘Friends, Romans, countrymen, lend me your ears.’ If you had to get every letter right straight off, you would be into a huge time-span: twenty-six (the number of letters, more if you include capitals and gaps and punctuation) to the power of the number of spaces. But if you are allowed to keep the ‘F’ as soon as you get it, and then go on to try for an ‘r’, you are no longer going back to square one each time, and suddenly the task becomes much more manageable. (Dawkins 1986 has a good discussion of these issues.) Incidentally, add evolutionists, one must take care in speaking of mutation as ‘random.’ There is no implication that mutation is uncaused or something else rather peculiar. Rather, what is meant is that mutations do not occur according to need. Suppose a new disease appears.
Evolutionary theory does not guarantee that a new, life-protecting mutation will occur to order. Fourth in the litany of Creationist complaints, there is a perennial favourite based on paleontology. Creationists agree that the fossil record is sequential, fish to primates moving upwards, but argue that this is the result of the sorting effect of the Flood. Primates are above dinosaurs, for example, because primates are more agile and moved further up the mountain before being caught and drowning. They also argue, however, that the fossil record ought to be continuous if evolution occurred, but in real life there are many gaps between different forms – jumps from one kind of organism to another. Apes to humans would be a case in point. This spells Creation, not evolution. To which the response comes that, on the one hand, one expects such gaps. Fossilization is an uncommon occurrence – most dead bodies get eaten straight away or just rot – and the wonder is that we have what we do have. On the other hand, argue evolutionists, the record is not that gappy. There are lots of good sequences – lines of fossils with little difference between adjacent forms – from the amphibians to the mammals for example, or (in more detail) the evolution of the horse from the many-toed Eohippus to the modern horse on a single toe. Moreover, in refutation of Creationism, we do not find fossils out of order as you might expect after a flood. For all that Creationists sometimes claim otherwise, humans are never found down with the dinosaurs. Those brutes of old expired long before we appeared on the scene and the fossil record confirms this. Fifth, Creationists argue that physics disproves evolution. The second law of thermodynamics claims that things always run down – entropy increases, to use the technical language. Energy gets used and converted eventually into heat, and cannot be of further service. But organisms clearly keep going and seem to defy the law.
This would be impossible simply given evolution. The second law rules out the blind evolution (meaning change without direct divine guidance) of organisms from the initial simple blobs up to the complex higher organisms like humans. There must therefore have been a non-natural, miraculous intervention to produce functioning life. To which argument the response of evolutionists is that the second law does indeed say that things are running down, but it does not deny that isolated pockets of the universe might reverse the trend for a short while by using energy from elsewhere. And this is what happens on planet Earth. We use the energy from the sun to keep evolving for a while. Eventually the sun will go out and life will become extinct. The second law will win eventually, but not just yet. Sixth, and let us make this the final Creationist objection, it is said that humans simply cannot be explained by blind law (that is, unguided law), especially not by blind evolutionary laws. They must have been created. To which the response is that it is mere arbitrary supposition to believe that humans are that exceptional. In fact, today the fossil record for humans is strong – we evolved over the past four million years from small creatures of half our height, who had small brains and who walked upright, though not as well as we do. There is lots of fossil proof of these beings (known as Australopithecus afarensis). More recently, there have been other pertinent discoveries, showing that (as one would expect) there is not only evolution towards modern humans, but branching and sidelines, for instance the so-called “hobbit,” Homo floresiensis (Falk 2012). Perhaps it is true that we humans are special, in that (as Christians claim) we uniquely have immortal souls, but this is a religious claim. It is not a claim of science, and hence evolution should not be faulted for not explaining souls. There is of course a lot more to be found out about human evolution, but this is the nature of science.
No branch of science has all of the answers. The real question is whether the branch of science keeps the answers coming in, and evolutionists claim that this is certainly true of their branch of science. 3. Understanding Creationism in its Cultural Context Before moving on historically, it is worthwhile to stop for a moment and consider aspects of Creationism in what one might term its cultural context. First, since Creationism is a populist movement, driven as much by social factors – a sense of alienation from the modern world – as by anything else, one would expect to find that cultural changes in society would be reflected in Creationist beliefs. This is indeed so. Take, above all, the question of racial issues and relationships. In the middle of the nineteenth century in the South, biblical literalism was very popular because it was thought to justify slavery (Noll 2002). Even though one can read the Christian message as being strongly against slavery – the Sermon on the Mount hardly recommends making people into the property of others – the Bible elsewhere seems to endorse slavery. Remember, when the escaped slave came to Saint Paul, the apostle told him to return to his master and to obey him. Remnants of this kind of thinking persisted in Creationist circles well into the twentieth century. Price, for instance, was quite convinced that blacks are degenerate whites. By the time of Genesis Flood, however, the civil rights movement was in full flower, and Whitcomb and Morris trod very carefully. They explained in detail that the Bible gives no justification for treating blacks as inferior. The story of the son and grandson of Noah being banished to a dark-skinned future was not part of their reading of the Holy Scriptures. Literalism may be the unvarnished word of God, but literalism is as open to interpretation as scholarly readings of Plato or Aristotle.
That is why many refer to Creationists as “inerrantists” rather than “literalists.” Second, as noted above, both for internal and external reasons, Creationists realized that they needed to tread carefully in outright opposition to evolution of all kinds. Could it really be that Noah’s Ark carried all of the animals that we find on earth today? It would be much easier if the Ark carried only the basic “kinds” of Creation, and then after the Flood the animals dispersed and diversified. We find in fact, then, that although Creationists were (and are) adamantly opposed to unified common descent and to the idea of natural change being adequate for all the forms we see today, from early on they were accepting huge amounts of what can only truly be called evolution! This said, Creationists were convinced that this change occurs much more rapidly than most conventional evolutionists would allow. Although it took some time to formulate, gradually we see emerging the strategy of (what we have seen as) distinguishing between what is called “microevolution” and “macroevolution.” Supposedly, microevolution is the sort of thing that brought diversification to Darwin’s finches, and many Creationists – notwithstanding the fact that it is supposedly a tautology – are even prepared to put this down to natural selection. Macroevolution is what makes reptiles reptiles, and mammals mammals. This cannot be a natural process but required miracles during the days of Creation. Although the paleontologist Stephen Jay Gould was a lifelong opponent of Creationism (see below), forever committed to common descent, and convinced that all changes must be natural, Creationists seized with glee on his claims that microevolution must be selection-fueled and macroevolution might require other causal forces (Gould 1980). Third, and perhaps most significant of all, never think that Creationism is purely an epistemological matter – a matter of facts and their understanding.
Moral claims have always been absolutely fundamental. Nearly all Creationists (in the Christian camp) are what is known theologically as premillennialists, believing that Jesus will come soon and reign over the world before the Last Judgement. They are opposed to postmillennialists who think that Jesus will come later, and amillennialists who are inclined to think that perhaps we are already living in a Jesus-dominated era. Postmillennialists put a premium on our getting things ready for Jesus – hence, we should engage in social action and the like. Premillennialists think there is nothing we ourselves can do to better the world, so we had best get ourselves and others in a state ready for Jesus. This means individual behavior and conversion of others. For premillennialists therefore, and this includes almost all Creationists, the great moral drives are to things like family sanctity (which today encompasses anti-abortion), sexual orthodoxy (especially anti-homosexuality), biblically sanctioned punishments (very pro-capital punishment), strong support for Israel (because of claims in Revelation about the Jews returning to Israel before End Times), and so forth. It is absolutely vital to see how this moral agenda is an integral part of American Creationism, as much as Floods and Arks. (Ruse 2005 discusses these matters in much detail.) 4. Arkansas Genesis Flood enjoyed massive popularity among the faithful, and led to a thriving Creation Science Movement, where Morris particularly and his coworkers and believers – notably Duane T. Gish, author of Evolution: The Fossils Say No! – pushed the literalist line. Particularly effective was their challenging of evolutionists to debate, where they would employ every rhetorical trick in the book, reducing the scientists to fury and impotence, with bold statements (provocatively made most often by Gish) about the supposed nature of the universe (Gilkey 1985; Ruse (ed.) 1988). 
This all culminated eventually in a court case in Arkansas. By the end of the 1970s, Creationists were passing around draft bills, intended for state legislatures, that would allow – insist on – the teaching of Creationism in state-supported public schools. In the biology classes of such schools, that is. By this time it was realized that, thanks to Supreme Court rulings on the First Amendment to the Constitution (that which prohibits the establishment of state religion), it was not possible to exclude the teaching of evolution from such schools. The trick was to get Creationism – something that prima facie rides straight through the separation of church and state – into such schools. Creation Science was designed to do precisely this. The claim is that, although the science happens to parallel Genesis, as a matter of scientific fact it stands alone as good science. Hence, these draft bills proposed what was called ‘balanced treatment’: if one were to teach the ‘evolution model,’ then one had also to teach the ‘Creation Science model.’ Sauce for the evolutionist goose is also sauce for the Creationist gander. In 1981, these drafts found a taker in Arkansas, where such a bill was passed and signed into law – as it happens, by a legislature and governor that thought little of what they were doing until the consequences were drawn to their attention. (William Clinton was governor from 1978 to 1980, and again from 1982 to his winning of the presidency in 1992. The law was passed during the interregnum.) At once the American Civil Liberties Union sprang into action, bringing suit on grounds of the law’s unconstitutionality. The theologian Langdon Gilkey, the geneticist Francisco Ayala, the paleontologist Stephen Jay Gould, and, as the philosophical representative, Michael Ruse appeared as expert witnesses, arguing that Creationism has no place in state-supported biology classes.
The state’s position was not exactly helped when one of its witnesses (Norman Geisler, a theologian) admitted under cross-examination to his belief in UFOs as a satanic manifestation. Hardly surprisingly, evolution won. The judge ruled firmly that Creation Science is not science but religion, and as such has no place in public classrooms. The judge (William Overton) ruled that the ‘essential characteristics’ of what makes something scientific are: it is guided by natural law; it has to be explanatory by reference to natural law; it is testable against the empirical world; its conclusions are tentative, that is, not necessarily the final word; and it is falsifiable. In the judge’s opinion, Creation Science fails on all counts, and that apparently was an end to matters. In 1987 the whole matter was decided decisively in the same way, by the Supreme Court, in a similar case involving Louisiana. (See Ruse (ed.) 1988 for an edited collection that reproduces many of the pertinent documents, including state bills, as well as witness testimony and the judge’s ruling.) Of course, in real life nothing is ever that simple, and Arkansas was certainly not the end of matters. One of the key issues in the trial was less theological or scientific than philosophical. (Paradoxically, the ACLU had significant doubts about using a philosophical witness and only decided at the last minute to bring Michael Ruse to the stand. As it happens, as the judge’s ruling shows, the philosophical testimony proved crucial.) Look again at the fifth of the judge’s criteria for what makes for good or genuine science. The Creationists had started to refer to the ideas of the eminent Austrian-born, British-residing philosopher Karl Popper (1959). As is well known, Popper claimed that for something to be genuinely scientific it has to be falsifiable. By this, Popper meant that genuine science puts itself up to check against the real world.
If the predictions of the science hold true, then it lives to fight another day. If the predictions fail, then the science must be rejected – or at least revised. Popper (1974) himself expressed doubts about whether evolutionary theory is genuinely falsifiable and he rather inclined to think that it is less a description of reality than a heuristic to further study, what he called a ‘metaphysical research programme’ (Ruse 1977). The Creationists seized on this and argued that they had the best authority to reject evolution, or at least to judge it no more of a science than Creationism. (To his credit, Popper revised his thinking on Darwinian evolutionary theory and grew to see and admit that it was a genuine scientific theory; see Popper 1978). Part of the testimony in Arkansas was designed to refute this argument, and it was shown that in fact evolution does indeed make falsifiable claims. As we have already seen, natural selection is no tautology. If one could show that organisms did not exhibit differential reproduction – to take the example given above, that all proto-humans had the same number of offspring – then selection theory would certainly be false. Likewise, if one could show that human and dinosaur remains truly did occur in the same time strata of the fossil record, one would have powerful proof against the thinking of modern evolutionists. This argument succeeded in court – the judge accepted that evolutionary thinking is falsifiable. Conversely, he accepted that Creation Science is never truly open to check. On-the-spot, ad hoc hypotheses proliferate as soon as any of its claims are challenged. It is not falsifiable and hence not genuine science. However, after the case a number of prominent philosophers (most notably the American Larry Laudan) objected strongly to the very idea of using falsifiability as a ‘criterion of demarcation’ between science and non-science. 
They argued that in fact there is no hard and fast rule for distinguishing science from other forms of human activity, and that hence in this sense the Creationists might have a point (Ruse (ed.) 1988). Not that people like Laudan were themselves Creationists. They thought Creationism false. Their objection was rather to trying to find some way of making evolution, and not Creationism, into a genuine science. Defenders of the anti-Creationism strategy taken in Arkansas argued, with both reason and law on their side, that the United States Constitution does not bar the teaching of false science. It bars the teaching of non-science, especially non-science which is religion by another name. Hence, if the objections of people like Laudan were taken seriously, the Creationists might have a case to make for the balanced treatment of evolution and Creationism. Popperian falsifiability may be a somewhat rough and ready way of separating science and religion, but it is good enough for the job at hand, and in law that is what counts. 5. The Naturalism Debate Evolutionists were successful in court. Nevertheless, Laudan and fellow thinkers inspired the Creationists to new efforts, and since the Arkansas court case the philosophical dimension to the evolution/Creationism controversy has been much increased. In particular, philosophical arguments are central to the thinking of the leader of today’s creationists, Berkeley law professor Phillip Johnson, whose reputation was made with the anti-evolutionary tract Darwin on Trial (1991). (Johnson’s influence and importance are recognized by all, and he has become leader emeritus. As we shall see, the task of leadership then passed to younger people, especially the biochemist Michael Behe and the philosopher-mathematician William Dembski.)
In some respects, Johnson just repeated the arguments of the Creation Scientists (those given in an earlier section) – gaps in the fossil record and so forth – but at the same time he stressed that the Creation/evolution debate is not just one of science versus religion or good science versus bad science, but rather of conflicting philosophical positions. The implication was that one philosophy is much like another – or rather, that one person’s philosophy is another person’s poison – and that it is all a matter of personal opinion. Behind this one sees the lawyer’s mind at work: if it is all a matter of philosophy, then there is nothing in the United States Constitution which bars the teaching of Creationism in schools. (For better or for worse, one sees the heavy hand of Thomas Kuhn here, and his claim in The Structure of Scientific Revolutions that the change from one paradigm to another is akin to a political revolution, not ultimately fueled by logic but more by extra-scientific factors, like emotions and simple preferences. In the Arkansas trial, Kuhn was mentioned by the prosecutors as often as was Popper.) Crucial to Johnson’s position are a number of fine distinctions. He distinguishes between what he calls “methodological naturalism” and “metaphysical naturalism”. The former is the scientific stance of trying to explain by laws and by refusing to introduce miracles. A methodological naturalist would insist on explaining all phenomena, however strange, in natural terms. Elijah setting fire to the water-drenched sacrifice, for instance, would be explained in terms of lightning striking or some such thing. The latter is the philosophical stance that insists that there is nothing beyond the natural – no God, no supernatural, no nothing. ‘Naturalism is a metaphysical doctrine, which means simply that it states a particular view of what is ultimately real and unreal.
According to naturalism, what is ultimately real is nature, which consists of the fundamental particles that make up what we call matter and energy, together with the natural laws that govern how those particles behave. Nature itself is ultimately all there is, at least as far as we are concerned’ (Johnson 1995, 37–38). Then there is someone that Johnson calls a ‘theistic realist.’ This is someone who believes in a God, and that this God can and does intervene in the natural world. ‘God always has the option of working through regular secondary mechanisms, and we observe such mechanisms frequently. On the other hand, many important questions – including the origin of genetic information and human consciousness – may not be explicable in terms of unintelligent causes, just as a computer or a book cannot be explained that way’ (p. 209). Johnson thinks of himself as a theistic realist, and hence as such in opposition to metaphysical naturalism. Methodological naturalism, which he links with evolutionism, would seem to be distinct from metaphysical naturalism, but it is Johnson’s claim that the former slides into the latter. Hence, the evolutionist is the methodological naturalist, is the metaphysical naturalist, is the opponent of the theistic realist – and as far as Johnson is concerned, the genuine theistic realist is one who takes a pretty literalistic reading of the Bible. So ultimately, it is all less a matter of science and more a matter of attitudes and philosophy. Evolution and Creationism are different world pictures, and it is conceptually, socially, pedagogically, and (with good luck, in the future) legally wrong to treat them differently. More than this, it is incorporated into Johnson’s argument that Creationism (a.k.a. Theistic Realism) is the only genuine form of Christianity. But does any of this really follow? The evolutionist would claim not. The key notion in Johnson’s attack is clearly methodological naturalism.
Metaphysical naturalism, having been defined as something which precludes theism, has been set up as a philosophy with a religion-like status. It necessarily perpetuates the conflict between religion and science. But as Johnson himself notes, many people think that they can be methodological naturalists and theists. Methodological naturalism is not a religion equivalent. Is this possible, at least in a consistent way with intellectual integrity? It is Johnson’s claim that it is not, for he wants the religion/science war to be absolute, with no prisoners taken and no compromises made. 6. Can an Evolutionist Be a Christian? To sort out this debate, let us agree (as is surely the case) that if you are a methodological naturalist, today you are going to accept evolution – and, conversely, that if you are an evolutionist, you are going to think that methodological naturalism supports your cause. Today, methodological naturalism and evolution are a package deal. Take one, and you take the other. Reject one, and you reject the other. Clearly then, if your theism is one which gets its knowledge of God’s actions and purposes from a literal reading of the Bible, you have got a conflict. You cannot accept Genesis literally and evolution. That is a fact. In other words, there can be no accommodation between Creationism and evolution. However, what if you think that theologically speaking there is much to be said for a nice shade of grey? What if you think that much of the Bible, although true, should be interpreted in a metaphorical manner? What if you think you can be an evolutionist, and yet take in the essential heart of the Bible? What price consistency and methodological naturalism then? The answer depends on what you take to be the “essential heart” of the Bible. At a minimum we can say that, to the Christian, this heart speaks of our sinful nature, of God’s sacrifice, and of the prospect of ultimate salvation. It speaks of the world as a meaningful creation of God (however caused) and of a foreground drama which takes place within this world.
One refers particularly to the original sin, Jesus’ life and death, and his resurrection and anything which comes after it. And clearly at once we are plunged into the first of the big problems, namely that of miracles – those of Jesus himself (the turning of water into wine at the marriage at Cana), his return to life on the third day, and (especially if you are a Catholic) such ongoing miracles as transubstantiation and those associated, in response to prayer, with the intervention of saints. There are a number of options here for the would-be methodological naturalist. You might simply say that such miracles occurred, that they did involve violations of law, but that they are outside your science. They are simply exceptions to the rule. End of argument. A little abrupt, but not flatly inconsistent with calling yourself a theist. You say normally God works through law but, for our salvation, miracles outside law were necessary. Or you might say that miracles occur but that they are compatible with science, or at least not incompatible. Jesus was in a trance and the cure for cancer after the prayers to Saint Bernadette was according to rare, unknown, but genuine laws. This position is less abrupt, although you might worry whether this strategy is truly Christian, in letter or in spirit. It seems a little bit of a cheat to say that the Jesus taken down from the cross was truly not dead, and the marriage at Cana starts to sound like outright fraud. Of course, you can start stripping away at more and more miracles, downgrading them to regular occurrences blown up and magnified by the Apostles, but in the end this rather defeats the whole purpose. The third option is simply to refuse to get into the battle at all. You argue that the law/miracle dichotomy is a false one. Miracles are just not the sorts of things which conflict with or confirm natural laws. Traditional Christians have always argued this in some respects. Take the Catholic doctrine of transubstantiation. 
The turning of the bread and the wine into the body and blood of Christ is simply not something open to empirical check. You cannot disconfirm religion or prove science by doing an analysis of the host. Likewise even with the resurrection of Jesus. After the Crucifixion, his mortal body was irrelevant. The point was that the disciples felt Jesus in their hearts, and were thus emboldened to go forth and preach the gospel. Something real happened to them, but it was not a physical reality – nor, for instance, was Paul’s conversion a physical event, even though it changed his life and those of countless after him. Today’s miracles also are really more a matter of the spirit than the flesh. Does one simply go to Lourdes in hope of a lucky lottery ticket to health or for the comfort that one knows one will get, even if there is no physical cure? In the words of the philosophers, it is a category mistake to put miracles and laws in the same set. (Hume (1748, 1779) is the starting place for these discussions. Although somewhat dated, Flew and MacIntyre (1955) is still invaluable. Paradoxically, both of these then-atheist authors came to see the light and returned to the Christianity of their childhoods! Apparently, the “words of the philosophers” are not definitive.) What has Johnson to say to all of this? Frustratingly, the answer is: “remarkably little”! In main part this stems from a refusal to spell out exactly what is meant by “theism”. What Johnson does say is more in the way of sneer or dismissal than argument. Persons who are sufficiently motivated to do so can find ways to resist the easy pathway from M[ethodological] N[aturalism] to atheism, agnosticism or deism. For example, perhaps God actively directs the evolutionary process but (for some inscrutable reason) does so in a way that is empirically imperceptible. No one can disprove that sort of possibility, but not many people seem to regard it as intellectually impressive either. 
That they seem to rely on “faith” – in the sense of belief without evidence – is why theists are a marginalized minority in the academic world and always on the defensive. Usually they protect their reputation for good judgment by restricting their theism to private life and assuming for professional purposes a position that is indistinguishable from naturalism. (Johnson 1995, 211) He adds: Makeshift compromises between supernaturalism in religion and naturalism in science may satisfy individuals, but they have little standing in the intellectual world because they are recognized as a forced accommodation of conflicting lines of thought (p. 212). At this point, the evolutionist will probably throw up his or her hands in despair. Where did the idea of ‘makeshift compromise’ come from except from Johnson’s imagination? In actual fact, many significant theologians of our age think that, with respect to miracles, science and religion have no conflict (Barth 1949; Gilkey 1985). They would add that faith without difficulty and opposition is not true faith, either. “As the Danish philosopher Søren Kierkegaard … taught us, too much objective certainty deadens the very soul of faith. Genuine piety is possible only in the face of radical uncertainty” (Haught 1995, 59). Such thinkers, often conservative theologically, are inspired by Martin Buber to find God in the center of personal relationships, I-Thou, rather than in science, I-It. For them there is something degrading in the thought of Jesus as a miracle man, a sort of fugitive from the Ed Sullivan Show. What happened with the five thousand? Some hokey-pokey over a few loaves and fishes? Or did Jesus fill the multitude’s heart with love, so there was a spontaneous outpouring of generosity and sharing, as everyone in the crowd was fed by the food brought by a few? These theologians would agree fully with the first part of Johnson’s characterization of “theism”. Things were very different thanks to Jesus’ presence and actions.
What they deny, here or elsewhere, is the need to search for exceptions to law. Johnson’s Creationism and evolution/naturalism are indeed in conflict. But Johnson’s Creationism is not all that there is to religion, to Christianity in particular. There are those who call themselves theists, who think that one can be a methodological naturalist, where today this would imply evolution (Ruse 2010). Johnson has not argued against them. 7. Intelligent Design Let us move on now from the more philosophical sorts of issues. Building on the more critical approach of Johnson, who is taken to have cleared the foundations as it were, there is a group of people who are trying to offer an alternative to evolution. These are the enthusiasts for so-called ‘Intelligent Design.’ Supporters of this position think that Darwinism is ineffective, at least inasmuch as it claims to make superfluous or unnecessary a direct appeal to a designer of some sort. These are people who think that a full understanding of the organic world demands the invocation of some force beyond nature, a force which is purposeful or at least purpose creating. Often the phrase which is used is “organized complexity,” a term much used by the German Romantic philosopher Friedrich Schelling, and which by invoking the intentional predicate “organized” rather gives the game away (Richards 2003). For the moment, continue to defer questions about the relationship between Intelligent Design Theory and more traditional forms of Creationism. There are two parts to this approach: one empirical and one philosophical. Let us take them in turn, beginning with the man who has most fully articulated the empirical case for a designer, the already-mentioned Lehigh University biochemist Michael Behe.
Focusing on something which he calls ‘irreducible complexity,’ Behe writes: By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. (Behe 1996, 39) Behe adds, surely truly, that any irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have anything to act on (p. 39). Now turn to the world of biology, and in particular turn to the micro-world of the cell and of mechanisms that we find at that level. Take bacteria which use a flagellum, driven by a kind of rotary motor, to move around. Every part is incredibly complex, and so are the various parts, combined. The external filament of the flagellum (called ‘flagellin’), for instance, is a single protein that makes a kind of paddle surface contacting the liquid during swimming. Near the surface of the cell, just as needed is a thickening, so that the filament can be connected to the rotor drive. This naturally requires a connector, known as a ‘hook protein.’ There is no motor in the filament, so that has to be somewhere else. ‘Experiments have demonstrated that it is located at the base of the flagellum, where electron microscopy shows several ring structures occur’ (p. 70). 
All of this, Behe claims, is far too complex to have come into being in a gradual fashion. Only a one-step process will do, and this one-step process must involve some sort of designing cause. Behe is careful not to identify this designer with the Christian God, but the implication is that it is a force from outside the normal course of nature. Irreducible complexity spells design. 8. Is Complexity Irreducible? Irreducible complexity is supposedly something which could not have come through unbroken law (meaning law that has no special divine guidance), and especially not through the agency of natural selection. Critics claim that Behe shows a misunderstanding of the very nature and workings of natural selection. No one is denying that in natural processes there may well be parts which, if removed, would lead at once to the non-functioning of the systems in which they occur. The point, however, is not whether the parts now in place could be removed without collapse, but whether they could have been put in place by natural selection. Consider an arched bridge, made from cut stone, without cement, held in place only by the force of the stones against each other. If you tried to build the bridge from scratch, upwards and then inwards, you would fail – the stones would keep falling to the ground, as indeed the whole bridge now would collapse were you to remove the center keystone or any surrounding it. Rather, what you must do is first build a supporting structure (possibly an earthen embankment), on which you will lay the stones of the bridge, until they are all in place. At that point you can remove the supporting structure, for it is no longer needed, and in fact is in the way. Likewise, one can imagine a biochemical sequential process with several stages, on the parts of which other processes piggyback as it were.
Then the hitherto non-sequential parasitic processes link up and start functioning independently, the original sequence finally being removed by natural selection as redundant or inconveniently draining of resources. Of course, this is all pretend. But Darwinian evolutionists have hardly ignored the matter of complex processes. Indeed, it is discussed in detail by Darwin in the Origin, where he refers to that most puzzling of all adaptations, the eye. At the biochemical level, today’s Darwinians have many examples of the most complex of processes that have been put in place by selection. Take that staple of the body’s biochemistry, the process where energy from food is converted into a form which can be used by the cells. Rightly does a standard textbook refer to this vital organic system, the so-called ‘Krebs cycle,’ as something which ‘undergoes a very complicated series of reactions’ (Hollum 1987, 408). This process, which occurs in the cell parts known as mitochondria, involves the production of ATP (adenosine triphosphate): a complex molecule which is energy rich and which is degraded by the body as needed (say in muscle action) into another less rich molecule ADP (adenosine diphosphate). The Krebs cycle remakes ATP from other energy sources – an adult human male needs to make nearly 200 Kg a day — and by any measure, the cycle is enormously involved and intricate. For a start, nearly a dozen enzymes (substances which facilitate chemical processes) are required, as one sub-process leads on to another. Yet the cycle did not come out of nowhere. It was cobbled together out of other cellular processes which do other things. It was a ‘bricolage’, that is to say it was something put together in a haphazard fashion. Each one of the bits and pieces of the cycle exists for other purposes and has been coopted for the new end. 
The scientists who have made this connection could not have made a stronger case against Behe’s irreducible complexity than if they had had him in mind from the first. In fact, they set up the problem virtually in Behe’s terms: ‘The Krebs cycle has been frequently quoted as a key problem in the evolution of living cells, hard to explain by Darwin’s natural selection: How could natural selection explain the building of a complicated structure in toto, when the intermediate stages have no obvious fitness functionality?’ (Meléndez-Hevia et al. 1996, 302). (Readers who want to dig more deeply into some of the technical issues should start with the entry on fitness.) What these workers do not offer is a Behe-type answer. First, they brush away a false lead. Could it be that we have something like the evolution of the mammalian eye, where primitive extant eyes in other organisms suggest that selection can and does work on proto models (as it were), refining features that serve the same function, if less efficiently than more sophisticated models? Probably not, for there is no evidence of anything like this. But then we are put on a more promising track. In the Krebs cycle problem the intermediary stages were also useful, but for different purposes, and, therefore, its complete design was a very clear case of opportunism. The building of the eye was really a creative process in order to make a new thing specifically, but the Krebs cycle was built through the process that Jacob (1977) called ‘evolution by molecular tinkering,’ stating that evolution does not produce novelties from scratch: It works on what already exists. The most novel result of our analysis is seeing how, with minimal new material, evolution created the most important pathway of metabolism, achieving the best chemically possible design. In this case, a chemical engineer who was looking for the best design of the process could not have found a better design than the cycle which works in living cells. (p.
302) Rounding off the response to Behe, let us note that, if his arguments are well-taken, then in some respects we are into a bigger set of problems than otherwise! His position seems simply not viable given what we know of the nature of mutation and the stability of biological systems over time. When exactly is the intelligent designer supposed to strike and to do its work? In his major work, Darwin’s Black Box, Behe suggests that everything might have been done long ago and then left to its own devices. ‘The irreducibly complex biochemical systems that I have discussed… did not have to be produced recently. It is entirely possible, based simply on an examination of the systems themselves, that they were designed billions of years ago and that they have been passed down to the present by the normal processes of cellular reproduction’ (Behe 1996, 227–8). This is not a satisfactory response. We cannot ignore the history of the genes in the interval between their origin (when they would not have been needed) and today, when they are in full use. In the words of Brown University biologist Kenneth Miller: ‘As any student of biology will tell you, because those genes are not expressed, natural selection would not be able to weed out genetic mistakes. Mutations would accumulate in these genes at breathtaking rates, rendering them hopelessly changed and inoperative hundreds of millions of years before Behe says that they will be needed.’ There is much experimental evidence showing that this is the case. Behe’s idea of a designer doing everything back then and then leaving matters to their natural fate is ‘pure and simple fantasy’ (Miller 1999, 162–3). What is the alternative strategy that Behe must take? Presumably that the designer is at work all of the time, producing mechanisms as and when needed. So, if we are lucky, we might expect to see some produced in our lifetime.
Indeed, there must be a sense of disappointment among biologists that no such creative acts have so far been reported. More than this, as we turn from science towards theology, there are even greater disappointments. Most obviously, what about bad mutations (in the sense of mutations that lead to consequences very non-helpful to their possessors)? If the designer is needed and available for complex engineering problems, why could not the designer take some time on the simple matters, specifically those simple matters which if unfixed lead to absolutely horrendous problems? Some of the worst genetic diseases are caused by one little alteration in one little part of the DNA. If the designer is able and willing to do the very complex because it is very good, why does it not do the very simple because the alternative is very bad? Behe speaks of this as being part of the problem of evil, which is true, but not very helpful. Given that the opportunity and ability to do good was so obvious and yet not taken, we need to know the reason why. (A comprehensive collection, edited by an Intelligent Design Theorist and an avid Darwinian evolutionist, contains arguments from both sides, by biologists and philosophers; see Dembski and Ruse (eds.) 2004.) 9. The Explanatory Filter Behe is in need of help. This supposedly comes from a conceptual argument in favor of Intelligent Design due to the already-mentioned philosopher-mathematician William Dembski (1998a, b). Let us first look at his argument, and then see how it helps Behe. Dembski’s aim is two-fold. First, to give us the criteria by which we distinguish something that we would label ‘designed’ rather than otherwise. Second, to put this into context, and show how we distinguish design from something produced naturally by law or something we would put down to chance. As far as inferring design is concerned, there are three notions of importance: contingency, complexity, and specification.
Design has to be something which is contingent, that is, not simply necessitated by the laws of nature. The example that Dembski uses is the message from outer space received in the movie Contact. The series of dots and dashes, zeros and ones, could not be deduced from the laws of physics. But do they show evidence of design? Suppose we can interpret the series in a binary fashion, and the initial yield is the number group, 2, 3, 5. As it happens, these are the beginning of the prime-number series, but with so small a yield no one is going to get very excited. It could just be chance. So no one is going to insist on design yet. But suppose now you keep going on the series, and it turns out that it yields in exact and precise order the prime numbers up to 101. Now you will start to think that something is up, because the situation seems just too complex to be mere chance. It is highly improbable. ‘Complexity as I am describing it here is a form of probability….’ (Dembski 2000, 27). But although you are probably happy now to conclude (on the basis of the prime-number sequence) that there are extraterrestrials out there, in fact there is another thing needed. ‘If I flip a coin 1000 times, I will participate in a highly complex (that is, highly improbable) event…. This sequence of coin tosses will not, however, trigger a design inference. Though complex, this sequence will not exhibit a suitable pattern.’ Here, we have a contrast with the prime-number sequence from 2 to 101. ‘Not only is this sequence complex, but it also embodies a suitable pattern. The SETI researcher who in the movie Contact discovered this sequence put it this way: “This isn’t noise, this has structure”’ (pp. 27–8). What is going on here? You recognize in design something which is not just arbitrary or chance or which is given status only after the experiment or discovery, but rather something that was or could be in some way specified, insisted upon, before you set out.
You know or could work out the sequence of prime numbers at any time before or after the contact from space. The random sequence of penny tosses will come only after the event. ‘The key concept is that of “independence”. I define a specification as a match between an event and an independently given pattern. Events that are both highly complex and specified (that is, that match an independently given pattern) indicate design.’ Dembski is now in a position to move on to the second part of his argument where we actually detect design. Here we have what he calls an ‘Explanatory Filter’ (Dembski 1998a, b). We have a particular phenomenon. The question is, what caused it? Is it something which might not have happened, given the laws of nature? Is it contingent? Or was it necessitated? The moon goes endlessly round the earth. We know that it does this because of (the updated version of) Newton’s laws. End of discussion. No design here. However, now we have some rather strange new phenomenon, the causal origin of which is a puzzle. Suppose we have a mutation, where although we can quantify over large numbers we cannot predict at an individual level. There is no immediate subsumption beneath law, and therefore there is no reason to think that at this level it was necessary. Let us say, as supposedly happened in the extended royal family of Europe, there was a mutation to a gene responsible for hemophilia. Is it complex? Obviously not, for it leads to breakdown rather than otherwise. Hence it is appropriate to talk now of chance. There is no design. The hemophilia mutation was just an accident. Suppose now that we do have complexity. A rather intricate mineral pattern in the rocks might qualify here. Suppose we have veins of precious metals set in other materials, the whole being intricate and varied – certainly not a pattern you could simply deduce from the laws of physics or chemistry or geology or whatever. 
Nor would one think of it as being a breakdown mess, as one might a bad mutation. Is this now design? Almost certainly not, for there is no way that one might pre-specify such a pattern. It is all a bit ad hoc, and not something which comes across as the result of conscious intention. And then finally there are phenomena which are complex and specified. One presumes that the microscopic biological apparatuses and processes discussed by Behe would qualify here. They are not products of necessity, and they are complex, for they are irreducibly complex. They are design-like, for they do what is needed for the organism in which they are to be found. That is to say they are of pre-specified form. And so, having survived the explanatory filter, they are properly considered the product of real design. Now, with the conceptual argument laid out in full, we are in a position to turn back to Behe and to see how Dembski’s explanatory filter is supposed to let Behe’s god off the hook with respect to the problem of evil. Given the explanatory filter, a bad mutation would surely get caught by the filter half-way down. It would be siphoned off to the side as chance, if not indeed simply put down as necessity. It certainly would not pass the specification test. This would mean that a dreadful genetic disease would not be the fault of the designer, whereas successful complex mechanisms would be to the designer’s credit. Dembski stresses that these are mutually exclusive alternatives. ‘To attribute an event to design is to say that it cannot plausibly be referred to either law or chance. In characterizing design as the set-theoretic complement of the disjunction law-or-chance, one therefore guarantees that these three modes of explanation will be mutually exclusive and exhaustive’ (Dembski 1998b, 98). 10. Mutually Exclusive? The key assumption being made by Dembski is that design and law and chance are mutually exclusive. This is the very essence of the explanatory filter.
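The three-stage routing Dembski describes (necessity first, then complexity, then specification) can be sketched as a toy decision procedure. To be clear, everything below is my own illustrative assumption, not Dembski's formalism: the probability cutoff is arbitrary, the function names are invented, and the probabilities are stand-ins. The point is only to make concrete the claim that every event is routed to exactly one of law, chance, or design.

```python
# A toy sketch of Dembski's Explanatory Filter as described above.
# All names, thresholds, and predicates are illustrative inventions.

from fractions import Fraction

def primes_up_to(n):
    """An independently given pattern: the primes up to n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def explanatory_filter(event, necessitated, probability, specified):
    """Route an event to exactly one of Dembski's three categories.

    necessitated -- True if the event follows from law alone
    probability  -- how likely the event is under chance
    specified    -- True if it matches an independently given pattern
    """
    if necessitated:
        return "law"        # e.g. the moon orbiting the earth
    if probability > Fraction(1, 10**30):  # arbitrary complexity cutoff
        return "chance"     # not complex enough: e.g. a single bad mutation
    if not specified:
        return "chance"     # complex but patternless: e.g. 1000 coin tosses
    return "design"         # complex AND specified

# The Contact signal: the primes from 2 to 101, specifiable in advance.
signal = primes_up_to(101)
p_signal = Fraction(1, 2) ** 1000  # stand-in: vanishingly improbable
verdict = explanatory_filter(signal, necessitated=False,
                             probability=p_signal,
                             specified=(signal == primes_up_to(101)))
print(verdict)  # design

# 1000 random-looking coin tosses: equally improbable, but unspecified.
print(explanatory_filter("THHT...", necessitated=False,
                         probability=Fraction(1, 2) ** 1000,
                         specified=False))  # chance
```

Notice that the mutual exclusivity Dembski insists on is simply built in by construction: the function returns exactly one label, and nothing in it allows an event to be, say, both lawful and designed at once.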
But in real life does one want to make this assumption? Suppose that something is put down to chance. Does this mean that law is ruled out? Surely not! If one argues that a Mendelian mutation is chance, what one means is that with respect to that particular theory it is chance, but one may well believe that the mutation came about by normal regular causes and that if these were all known, then it would no longer be chance at all but necessity. The point is that chance in this case is a confession of ignorance, not, as one might well think the case in the quantum world, an assertion about the way that things are. That is, claims about chance are not ontological assertions, as presumably claims about designers must be. More than this, one might well argue that the designer always works through law. This may be deism and hence not true Christianity – some Christians would insist that God does sometimes intervene in the Creation. But truly Christian or not, a deity who always works through law is certainly not inconsistent with the hypothesis of a designing intelligence. The designer may prefer to have things put in motion in such a way that his/her/its intentions unfurl and reveal themselves as time goes by. The pattern in a piece of cloth made by machine is as much an object of design as the pattern from cloth produced by a hand loom. In other words, in a sense that would conform to the normal usage of the terms, one might want to say of something that it is produced by laws, is chance with respect to our knowledge or theory, and fits into an overall context of design by the great orderer or creator of things. In short, Dembski’s filter does not let Behe’s designer off the hook. If the designer can make – and rightfully takes credit for – the very complex and good, then the designer could prevent – and by its failure is properly criticized for – the very simple and awful. The problems in theology are as grim as are those in science.
(The intelligent design theorists have provided work for many philosophers eager to refute them. Pennock 1988 and Sober 2000 are good places to start. See the entry on teleological notions in biology.) 11. Intelligent Design and Traditional Creationism Let us now try to tackle the somewhat complex issue of the relationship between Intelligent Design Theory and traditional Creationism, as discussed earlier in this essay. In significant respects, they are clearly not the same. Most Intelligent Design Theorists believe in a long earth history (even the scientific estimation of a universe of about 14 billion years in age) and most accept overall common descent. In a recent book, The Edge of Evolution, Michael Behe has made this point very clear indeed. However, there are major overlaps, sufficient to encourage some critics (myself included) to refer to Intelligent Design Theory as ‘Creationism-lite’ (Ruse 2017, 114). First, politically, the Creationists are more than willing at the moment to let the ID theorists do the blocking. Openly they support the ID movement, believing in taking one step at a time. If ID is successful, then will be the time to ask for more. A major source of funding and emotional support for the ID movement is the Discovery Institute, a privately-supported think tank in Seattle. One of its prominent members is the University of Chicago-educated philosopher Paul Nelson, who is a young-earth creationist and a strong believer in the eschatological significance of Israel. Second, do note that both Creationists and ID enthusiasts are committed to some form of non-naturalist account of origins. The ties of course are stronger. ID enthusiasts pretend to be neutral about the Intelligent Designer, but they clearly do not think that he or she is natural. No one pretends that the earth and its denizens are a lab experiment being run by a grad student on Andromeda.
In fact, in their own correspondence and works written for followers, they make it very clear that the Designer is the Christian God of the Gospels. They are always quoting the first chapter of John – “In the beginning was the Word, and the Word was with God, and the Word was God.” So in both cases we have an evangelical Christian motive setting the agenda on origins. Some ID enthusiasts are quite strong literalists. Johnson, for instance, thinks that Genesis Chapter Six might be right about there being giants in early times – a point made much of in Genesis Flood. (Forrest and Gross 2004 do a superb job of ferreting out much of the unstated biblical foundations of Intelligent Design Theory.) Third, there is the moral factor. There is a very strong streak of anti-postmillennialism in the writings of ID theorists. They share the moral concerns of the Creationists – anti-abortion, anti-homosexuality, pro-capital punishment, pro-Israel (for eschatological reasons) and so forth. Phillip Johnson feels very strongly that the tendency to cross-dress, including apparently women who wear jeans, is a sign of the degenerate state of our society (Johnson 2002). In short, while there are certainly important differences between the position of most literalists and most ID supporters, the strong overlap should not be ignored or downplayed. 12. Recent Developments Creationism in the sense used in this discussion is still very much a live phenomenon in American culture today – and in other parts of the world, like the Canadian West, to which it has been exported. Popularity does not imply truth. Scientifically Creationism is worthless, philosophically it is confused, and theologically it is blinkered beyond repair. The same is true of its offspring, Intelligent Design Theory. But do not underestimate its social and political power.
As we move through the second decade of the new millennium, thanks to Johnson and his fellows, there are ongoing pressures to introduce non-evolutionary ideas into science curricula, especially into the science curricula of publicly funded schools in the United States of America. In 2004, in Dover, Pennsylvania, there was an attempt by the school board to introduce Intelligent Design Theory into the biology classrooms of the publicly funded schools. As it happens, this was rejected strongly by the federal judge trying the case – a man who was appointed by President George W. Bush no less – and the costs of the case will surely deter others from rushing to follow the example of this board (who, incidentally, were then promptly dropped by the voters). (A lively account of this trial is by Lauri Lebo, The Devil in Dover: An Insider’s Story of Dogma v. Darwin in Small-Town America. Philosopher Robert Pennock argued that IDT is not genuine science, and, taking somewhat of a post-modernist stance, philosopher Steven Fuller argued that it is as good science as any other. Pennock and Ruse (eds.) 2008 is an updated version of Ruse (ed.) 1988, and includes full discussion of Dover as well as Arkansas.) The battle is not yet over and things could get a lot worse before they get better, if indeed they will get better. Already, there are members of the United States Supreme Court who have made it clear that they would receive sympathetically calls to push evolution from its preeminent place in science teaching, and with its turn to the right it would be foolish to assume that, if a case came its way, Creationism or ID theory would be rejected as unsuitable for public school classroom use. If additions are made, with present appointments, we could find that – nearly a century after the Scopes Trial, when the Fundamentalists were perceived as figures of fun – Creationism in one form or another finally takes its place in the classroom.
Unfortunately, at the moment, those opposed to Creationism are spending more of their energies quarreling among themselves than fighting the opposition. There is a crop of “new atheists”, including the biologist and popular writer Richard Dawkins (2006) and the philosopher Daniel Dennett (2005), who are not only against religion but also against those – including non-believers – who do not share their hostility. At least since the time of the Arkansas trial, many fighting Creationism (including Gould 1999, 2002; Ruse 2001) have argued that true religion and science do not conflict. Hence, evolutionists (including non-believers) should make common cause with liberal Christians, who share their hatred of dogmatic Christian fundamentalism. Prominent among those so arguing are the author of this piece, as well as Eugenie Scott of the National Center for Science Education. They argue that in their hostility to religion, the new atheists get close to making their own views quasi-religious – certainly they argue that Darwinism is incompatible with religion – and hence ripe for the Creationists’ complaint that if Creationism is not to be taught in schools (because it violates the U.S. Constitution’s separation of Church and State), then neither should evolution be so taught. It is to be hoped that this quarrel will soon subside. We conclude by noting four recent developments in the Creationism debate. First, a number of well-known philosophers have started to make encouraging sounds about Intelligent Design Theory. Calvinist philosopher Alvin Plantinga has long been a critic of naturalism and now (in a work based on his 2005 Gifford Lectures at the University of St Andrews in Scotland) he extends this critique to Darwinian evolutionary theory, arguing that the evidence in its favor is scanty (Plantinga 2011).
He hedges somewhat on alternatives, but gives a very sympathetic reading of the thinking of Michael Behe and clearly finds much in such a position that meshes nicely with his own theological concerns. Coming from a very different perspective, as he is openly atheistic, Thomas Nagel (2008) likewise finds much in modern biology that worries and disappoints him—he makes special reference to what seems to him a total inability to give a naturalistic explanation of the origin of life—and although obviously he does not want to endorse Intelligent Design Theory, given the supposition that it is God who is doing the designing, he nevertheless argues that Intelligent Design Theory should be taught as an alternative in state-supported schools in the USA. Recently, in a full-length work, Mind and Cosmos, he has continued this attack, arguing (2011, 7) that "the idea that we have in our possession the basic tools needed to understand [the world] is no more credible now than it was in Aristotle's day", thereby implying that the work of Copernicus, Galileo, Newton, Einstein, and Darwin has not led to any new tools needed to understand the world. Backing Nagel, at least in his visceral dislike of Darwinism, is another prominent American philosopher, Jerry Fodor, whose recent co-authored book is titled What Darwin Got Wrong. Even if (with reason) Fodor might argue that he is no Creationist, his position is grist for their mill; a more thoughtful critique of Darwinism might allay this worry. It is difficult to know how seriously one is expected to take these criticisms. Let it be said that one would have a great deal more respect for the arguments and conclusions put forward if they had been informed by contemporary writings on evolutionary theory – for instance, the brilliant and painstaking work of the husband-and-wife team of Peter and Rosemary Grant (2007), who have spent decades studying the evolution and speciation of finches in the Galapagos Archipelago.
Or the groundbreaking work of people like Francisco Ayala (2009) as they study the molecular factors involved in ongoing development and change. Not to mention the seminal studies of Brian Hall (1999) and Sean Carroll (2005) on the ways in which individual development can be reflected in long-term changes (so-called "evo-devo"). Though Richard Dawkins can put people off when he holds forth on matters philosophical or theological, that is no good reason simply to dismiss his scientific claims without argument, as Plantinga often does. Likewise, it is true indeed that no one has yet been able to spell out the full story of the origin of life, but this doesn't justify Nagel's failure to mention that a huge amount is now known about life's origins, most especially about the crucial role played by the ribonucleic acid RNA (rather than the more familiar DNA) (Ruse and Travis (eds.) 2009). Until the criticisms put forward by Nagel, Plantinga, Fodor, and the others start to take modern science seriously, we might justifiably continue to take them less than seriously. One observation to make about these criticisms is that they are put forward by philosophers in the analytic tradition, which in its early days involved some opposition to Darwinism (Cunningham 1996). This goes back to Bertrand Russell and Ludwig Wittgenstein, neither of whom had much time for the theory; in Russell's case at least, this dislike was mirrored by a strong dislike of American Pragmatism, a school of thought that did take Darwin very seriously (Ruse 2009). In the cases of both Russell and Wittgenstein this opposition was based primarily on a mistaken identification of the thinking of Charles Darwin with that of Herbert Spencer. It was the latter who was much given to seeing evolutionary processes as justifying extraneous claims about the necessity of struggle and so forth, views that both Russell and Wittgenstein regarded with as little enthusiasm as did William Jennings Bryan.
Russell learnt his dislike of evolution as applied to philosophy from his teacher Henry Sidgwick; Wittgenstein (like other European-born philosophers such as Karl Popper) absorbed his from the general culture of his youth. Significantly, those philosophers of the English-speaking tradition of the twentieth century who have had kind words for Darwin – W.V.O. Quine, Richard Rorty, and Thomas Kuhn, to name three – have all been sympathetic to Pragmatism in one form or another (Ruse 2018b, 2018c; see also the entry on pragmatism). Second, among new or revived discussions of Creationism and its various aspects, one question often asked is why evolution in particular raises such ire in evangelicals and related religionists. The Bible states that the sun stopped for Joshua, and yet no one today worries about the theological implications of the Copernican Revolution. Michael Ruse, particularly, has argued strongly that the main reason for the conflict is that evolutionists – Darwinian evolutionists – often turn their secular science into a religion, with moral imperatives and much more (Ruse forthcoming). The argument is that, as opposed to the Christian notion of Providence, where we are entirely in the hands of God, such evolutionists are progressionists, thinking that change is in our hands and can be for the better. In theological terms, Creationists tend to be premillennialists, believing that Jesus will return and rule for a thousand years and that all we can do is get ready for this, for instance through converting others; evolutionists, by contrast, are postmillennialists, thinking (metaphorically) that paradise is to be made down here by us, before there is any appropriate talk of a Second Coming. Thomas Henry Huxley, his grandson Julian S. Huxley, and today's most eminent Darwinian evolutionist, Edward O. Wilson of Harvard, have been or are open in their secular religion building.
Against the Creationist urge to save souls, they want to improve science education (THH), support public mega-works (JSH), and promote biodiversity (EOW). There is a strong odor of this secular religion building around the New Atheists, like Richard Dawkins (2006), despite their denials. As might be expected, this thesis has not gone down well with many evolutionists, and conversely has been welcomed by Creationists, who have long made this claim. It should be noted that the arguments do not, and are not intended to, give comfort to Creationism as such. The claim by Ruse and others (for instance Miller 1999 and Pennock 1998) is that there is a good, scientific theory of evolution, based on Darwin's mechanism of natural selection. They are not offering a thesis about the science itself, but something closer to a sociological one, an attempt to understand the tension. If the thesis proves true, then evolutionists themselves are in a better position to defend themselves and their science. Third among developments in Creationist thinking, especially since the failure at Dover, we find something of a shift in strategy by religious critics of Darwinism. Now it is the moral issues that are brought to the fore. For instance, Richard Weikart (2004) claims that "no matter how crooked the road was from Darwin to Hitler, clearly Darwinism and eugenics smoothed the path for Nazi ideology, especially from the Nazi stress on expansion, war, racial struggle, and racial extermination." In a similar vein, in the 2008 film Expelled—a work very favorable to Intelligent Design Theory—the link is drawn explicitly. Philosopher David Berlinski is blunt: "if you open Mein Kampf and read it, especially if you can read it in German, the correspondence between Darwinian ideas and Nazi ideas just leaps from the page." In other words, if you are into Darwin, you are into National Socialism. As always, as soon as one starts to look at things a little more closely, the story becomes more complex (Richards 2013).
Let us agree that something had to lead to Hitler and that, given the racism that infects huge amounts of nineteenth-century thinking about humankind—including Darwin's Descent of Man—one should not give evolutionary theory a knee-jerk absolution. In fact, some early twentieth-century writers on war and strife, clearly inspired in some fashion by Darwin, give one great pause for reflection. Listen to the sometime member of the German High Command, General Friedrich von Bernhardi, for whom Darwinism endorses war, and war is that which is morally good or acceptable. "Struggle is therefore a universal law of Nature, and the instinct of self-preservation which leads to struggle is acknowledged to be a natural condition of existence. 'Man is a fighter'" (von Bernhardi 1912, 13). And "might gives the right to occupy or to conquer. Might is at once the supreme right, and the dispute as to what is right is decided by the arbitration of war. War gives a biologically just decision, since its decisions rest on the very nature of things" (ibid., p. 15). Hence "It may be that a growing people cannot win colonies from uncivilized races, and yet the State wishes to retain the surplus population which the mother-country can no longer feed. Then the only course left is to acquire the necessary territory by war." Yet when one turns to Hitler himself, one soon sees that any similarities are superficial. One doubts very much that the (to be generous) ill-educated Führer had ever read Darwin, and his concerns are not those of the old English evolutionist: All great cultures of the past perished only because the originally creative race died out from blood poisoning. The ultimate cause of such a decline was their forgetting that all culture depends on men and not conversely; hence that to preserve a certain culture the man who creates it must be preserved. This preservation is bound up with the rigid law of necessity and the right to victory of the best and stronger in this world.
Those who want to live, let them fight, and those who do not want to fight in this world of eternal struggle do not deserve to live. (Hitler 1925, 1, chapter 11) "Blood poisoning"! The worry here is about the Jews and their supposed ill-effects on pure races. The Jews do not get a mention in the Descent of Man (1871), and although Darwin supposes that white races tend to wipe out others, it is not from any mental or physical superiority, but because we can tolerate their diseases but they cannot tolerate ours! And this is all because whites have had a bigger pool of variants to draw on than have others. A final brief (fourth) comment is that the struggle against Creationism and its various offspring is rapidly becoming a world-wide one. Leading historian of the Creationism movement Ronald Numbers (2006) is particularly concerned about this fact. Not only do we find Creationism on the rise in countries like the Netherlands (where, with its large conservative Protestant population, such a rise is not altogether unexpected), but we find enthusiasm in non-Christian cultures, especially those where Islam is a major factor. The exact reasons for such a rise have as yet barely been explored, but Numbers is surely right in thinking that theology probably plays but a minor role, and that more sociological factors—dislike of the hegemony of the West and of the role that science and technology play in such dominance—are probably very significant. The fact is that, for whatever reason, if anything Creationism is on the rise. And with that somber point, this is perhaps a good place to draw this discussion to a close. If this essay persuades even one person to take up the fight against so awful an outcome, then it will have served its purpose.

Bibliography

Ayala, F. J., 2009. "Molecular evolution", in Ruse and Travis (eds.), Evolution: The First Four Billion Years, 132–151.
(Readers who want to dig more deeply into some of the technical issues should start with the entry on fitness.) What these workers do not offer is a Behe-type answer. First, they brush away a false lead. Could it be that we have something like the evolution of the mammalian eye, where primitive existent eyes in other organisms suggest that selection can and does work on proto models (as it were), refining features that serve the same function, if not as efficiently as more sophisticated models? Probably not, for there is no evidence of anything like this. But then we are put on a more promising track. In the Krebs cycle problem the intermediary stages were also useful, but for different purposes, and, therefore, its complete design was a very clear case of opportunism. The building of the eye was really a creative process in order to make a new thing specifically, but the Krebs cycle was built through the process that Jacob (1977) called 'evolution by molecular tinkering,' stating that evolution does not produce novelties from scratch: It works on what already exists. The most novel result of our analysis is seeing how, with minimal new material, evolution created the most important pathway of metabolism, achieving the best chemically possible design. In this case, a chemical engineer who was looking for the best design of the process could not have found a better design than the cycle which works in living cells. (p. 302) Rounding off the response to Behe, let us note that, if his arguments are well-taken, then in some respects we are into a bigger set of problems than otherwise! His position seems simply not viable given what we know of the nature of mutation and the stability of biological systems over time. When exactly is the intelligent designer supposed to strike and do its work? In his major work, Darwin's Black Box, Behe suggests that everything might have been done long ago and then left to its own devices.
‘The irreducibly complex biochemical systems that I have discussed… did not have to be produced recently.
no
Spelaeology
Is the largest cave system located in Kentucky, USA?
yes_statement
the "largest" "cave" "system" is "located" in kentucky, usa.. kentucky, usa is home to the "largest" "cave" "system".
https://nypost.com/2022/09/10/kentuckys-mammoth-cave-already-worlds-largest-grows-by-6-miles/
Kentucky's Mammoth Cave, already world's largest, grows by 6 miles
This one’s a grower. Mammoth Cave National Park in south-central Kentucky, home to the world’s longest cave system, has extended its fame. The National Park Service announced that explorers have mapped out an additional 6 miles throughout the underground network of passageways, bringing its total length to a whopping 426 miles. The discovery came during the park’s 50th-anniversary celebration of the Mammoth Cave and Flint Ridge Connection, the 1972 expedition that earned the system the title of the longest cave in the world. “The additional 6 miles of the cave is spread out in various sections throughout the cave system and were mapped and documented through hours of survey work completed by our partner, the Cave Research Foundation,” said Park Superintendent Barclay Trimble. “It is very fitting that we can now announce the new miles during our anniversary events celebrating years of great work accomplished by the CRF.” On Monday, a Tennessee helicopter pilot was found dead at the national park after his aircraft crashed.
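As a quick aside for metric readers, the 426-mile figure reported above is easy to convert. The small Python helper below is purely illustrative (it is not from the article); the constant is the exact international definition of the mile.

```python
# Illustrative helper: convert cave lengths reported in miles to
# kilometres. 1 mile = 1.609344 km (exact, by definition).
MILES_TO_KM = 1.609344

def miles_to_km(miles: float) -> float:
    """Express a length given in miles in kilometres."""
    return miles * MILES_TO_KM

# The article's updated total: 426 miles after 6 newly mapped miles.
print(f"{miles_to_km(426):.0f} km")  # prints "686 km"
```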
yes
Spelaeology
Is the largest cave system located in Kentucky, USA?
yes_statement
the "largest" "cave" "system" is "located" in kentucky, usa.. kentucky, usa is home to the "largest" "cave" "system".
https://www.smithsonianmag.com/travel/biggest-longest-five-amazing-caves-visit-180952892/
From the Biggest to the Longest, Five Amazing Caves To Visit ...
To enter Son Doong Cave in Vietnam, visitors must descend over 260 feet. (Photo: Ryan Deboodt) Last weekend, cave explorers, scientists and geologists gathered at Hidden Earth, the United Kingdom's annual caving conference, to answer an important question: What's the largest cave in the world? Until then, the largest known cave chamber was thought to be Borneo's Sarawak Chamber, which is so large that it can fit multiple full-size airplanes inside. But after extensive laser scanning, cave explorers had a different cave to nominate: the Miao Room cavern, a chamber beneath China's Ziyun Getu He Chuandong National Park, accessible only by an underground stream. Precise laser measurements proved that Miao Room is the largest known cave chamber in the world by volume, measuring 380.7 million cubic feet (the Sarawak Chamber still has a larger surface area than the Miao Room chamber, however). Unfortunately for non-professional cave explorers, the Miao Room cavern, located within the Gebihe cave system, isn't open to public visitors. But if you want to experience the amazing world beneath the Earth's surface, consider a trip to one of these five amazing caves instead.

Son Doong Cave, Vietnam

Photo captions (images by Ryan Deboodt): Son Doong opened to tours in 2013; its entrance was discovered in 1990 by a local man, Ho Khanh; British explorers led the first expedition inside in 2009; visitors must descend over 260 feet to enter; a 40-story skyscraper could fit inside parts of the cave; algae often grows on its limestone formations, and the larger plants support animal life such as monkeys and flying foxes; the cave, formed mostly from limestone, houses huge limestone formations, a large underground river, and a virgin jungle growing more than 650 feet below the surface.

In 1991, Ho Khanh, a man living in the jungles of Vietnam, discovered the entrance to a cave, but the descent into the opening was steep—dropping more than 200 feet—and Khanh was unable to enter. His discovery drew the attention of explorers, who made it their mission to enter the cave. In 2009, spelunkers from the British Cave Research Association (BCRA) led the first expedition into the Son Doong Cave. What they found was one of the largest caves known to man: Son Doong measures over 5.5 miles in length, and some of its caverns are large enough to hold a 40-story skyscraper. Son Doong is also home to a virgin jungle growing more than 600 feet beneath the Earth's surface, in a portion of the cave where the roof has collapsed, allowing natural sunlight to filter down. Plants both small and large can thrive in the cave jungle—trees there can grow nearly 100 feet tall. The jungle's larger plants provide a home to animals not normally found below the Earth, like monkeys. A river also flows within the Son Doong Cave—in English, Son Doong Cave means "Mountain River Cave." In 2013, Oxalis Adventures became the first (and only) licensed company to run tours into the caves. For $3,000, tourists were granted a six-day trip deep into the cave's interior. In 2015, Oxalis plans to offer eight trips a month into the cave, led by BCRA experts who were all part of the original expedition into the cave. The River Styx is just one of Mammoth Cave's semi-subterranean waterways.
On the surface, Mammoth Cave National Park in central Kentucky encompasses around 80 square miles, but underneath lies a twisting labyrinth of limestone caves, creating a network that earns the title of the longest cave system in the world. 365 miles of the cave have been explored to date, but no one knows how far the cave system actually extends, as new caverns and recesses are continuously being discovered. The first human to enter Mammoth Cave descended into its winding passages over 4,000 years ago. Today, the cave is a massive tourist attraction, with more than 390,000 visitors passing through its limestone halls each year. One of the cave's most remarkable features is the abundance of stalactite formations, which number in the thousands and were created from years of water seeping through the cave's limestone ceiling. The Mulu Caves, located in Gunung Mulu National Park on the island of Borneo, are home to the world's largest cave chamber by surface area, as well as one of the largest cave passages on Earth. The Sarawak Chamber, which measures 1.66 million square feet, is nearly 2,000 feet long and over 260 feet high—so large that it could hold 40 Boeing 747 airplanes. Deer Chamber, one of the largest cave passages on Earth, is so big that it could fit five cathedrals the size of Saint Paul's in London inside its cavernous walls. Thousands of bats live within the Mulu Caves, and exit every day around sunset in search of food, offering tourists a magnificent display of their exodus. Initial exploration of the Sistema Sac Actun began from the Gran Cenote, which is located about three miles from the Mexican village of Tulum.
Located just miles from the Mexican village of Tulum, the Sistema Sac Actun is the second-longest known underwater cave system in the world; over 130 miles of it have been explored so far. The underwater cave is usually accessed through the Gran Cenote, a hugely popular destination for snorkelers and scuba divers alike. The cenote, or sinkhole, is one of hundreds that dot the expansive cave system. It is open all day to visitors, who can swim in its waters for a small fee.

New Athos Cave, Georgia

Photo captions: a visitor path winds through New Athos Cave, one of the largest cave systems in the world; stalactite and stalagmite formations line the cave.

The country of Georgia is home to the world's deepest cave, the Krubera cave, which plummets 7,208 feet into the Earth. Unfortunately, the bathophobia-inducing locale isn't open to visitors, so those looking for an alternative will need to travel to the New Athos Cave (also called the Novy Afon Cave), located inside Georgia's Iverian Mountain. In 1975, the town of New Athos decided to construct a railway within the cave for the purpose of luring tourists. The idea worked, and today, the New Athos Cave is one of Georgia's most popular attractions. The largest chamber in the cave is over 850 feet long and 160 feet high, and the cave itself is thought to be one of the largest in the world, though much of it remains unexplored.
yes
Spelaeology
Is the largest cave system located in Kentucky, USA?
yes_statement
the "largest" "cave" "system" is "located" in kentucky, usa.. kentucky, usa is home to the "largest" "cave" "system".
https://www.usgs.gov/geology-and-ecology-of-national-parks/geology-mammoth-cave-national-park
Geology of Mammoth Cave National Park | U.S. Geological Survey
Mammoth Cave National Park is a designated UNESCO World Heritage Site as well as an International Biosphere Reserve located in central Kentucky. The park was established on July 1, 1941 and encompasses 52,830 acres of wilderness. Mammoth Cave National Park is known for housing the world’s longest cave system, with over 405 miles (651 km) mapped. The vast cavern system was formed by water slowly dissolving Mississippian-aged carbonate rocks, creating sinkholes, tunnels, and underground rivers. This national park is also host to a diverse group of animal and plant species due to the microclimates created by varying light and temperature conditions.

Geologic History

Mammoth Cave National Park has one of the most famous karst topographies in the entire world. Karst terrain is created by dissolution of soluble rocks like limestone or dolomite and is characterized by the presence of cave systems, sinkholes, springs, and disappearing streams (surface water that suddenly flows underground). The park is located within the Central Kentucky Karst, a limestone belt that extends from southern Indiana through Kentucky into Tennessee. It is part of the Chester Upland and the Pennyroyal Plateau and is dissected by the Green River, which controls the cave development. The sedimentary rocks that form the park were produced from sediments deposited about 330 million years ago, when this area was submerged by an ancient ocean, as great inland-sea or near-shore deposits. The erosional force of the Green River has helped to carve the limestone into the unique and extraordinary karst topography of this region over millions of years.

Stratigraphy

Stratigraphy is the study of rock layers and their relationship to one another, helping researchers understand how the environment has changed over time. Rock layers are split into large sediment packages called formations, grouped based on common characteristics.
Formations are sometimes split into smaller members, narrowed down to a more specific group of characteristics. The four rock formations, from oldest to youngest, found in Mammoth Cave are the St. Louis Limestone, the Ste. Genevieve Formation, the Girkin Formation, and the Big Clifty Formation. The largest caves are formed in the first three limestone units, while the Big Clifty, predominately shale and sandstone, forms a caprock of more chemically resistant rock. The St. Louis Limestone is the oldest unit exposed, the first unit deposited that we can still see in surface and cave exposures. This unit was deposited during the Mississippian, about 330 million years ago, and is composed of fine- to medium-grained limestone, dolomite, sandstone, siltstone, and greenish-gray shale. These sediments are typically found in marine environments. The fossilized remains of corals, bryozoans, brachiopods, shark teeth, gastropods, and crinoids have been found in this area and support the interpretation that this part of what is now Kentucky was an ancient sea. Other important characteristics of this formation are horizontal beds and flat nodules of chert (a specific type of hard, very fine-grained sedimentary rock composed of silica), which protrude from the cave walls because they weather more slowly than the rest of the limestone, and gypsum (a soft, hydrated calcium sulfate mineral). The Ste. Genevieve Formation overlies the St. Louis Limestone and consists of interbedded limestone and dolomite; no gypsum is found in this unit. The Ste. Genevieve Formation contains fossils similar to those in the St. Louis Limestone. The Girkin Formation lies above the Ste. Genevieve Formation and is predominantly made up of fine- to coarse-grained crystalline limestone. When these rocks were deposited, the area was still under water. An interesting feature found in this formation is the presence of oolites (small concentric spheres of calcium carbonate that form in shallow marine waters).
This formation is separated by local interbedding of shale and sandstone, forming a lower oolitic portion and an upper fossiliferous portion. This upper fossiliferous section includes the remains of many marine organisms such as corals, brachiopods, crinoids, and echinoids. The Big Clifty Formation, the uppermost stratum, is the oldest of the cap rocks. These brown and gray sedimentary rocks are mostly sandstone, siltstone, and fissile shale (shale that splits into thin sheets). These resistant cap rocks are critical to cave preservation: if too much water entered the cave system, the passages and caverns we marvel at today might have eroded away.

Photo caption: Jessie Carson and Burak Lacinier gaze at the broad vista and rocky breakdowns of Thanksgiving Hall in Mammoth Cave, near the cave's Frozen Niagara area. Image taken in 2007; the image is a panoramic composite.

Cave Formation

The largest of the caves are found in the St. Louis Limestone, Ste. Genevieve, and Girkin Formations. Groundwater began interacting with the Girkin Limestone about 10 million years ago and began to carve the caves. The upper levels of the cave system were fully formed by 3.2 million years ago, based on radiometric dating of quartz pebbles. Water is a powerful force that can carve through rock, but it works very slowly. To form this massive cave system, water percolated through cracks and pores in the cap rocks until it reached the soluble limestone (calcium carbonate). Rainwater and groundwater are slightly acidic (naturally containing carbonic acid formed by the reaction of water and carbon dioxide), and so chemically dissolve the rock over very long periods of time, while also physically weathering it with the erosional power of flowing water. The large size of this cave system is attributed to the amount of time it has been forming and to the size of the Green River drainage basin.
At various points in the geologic past, parts of the cave system have filled with sediment carried in from the Green River, in some cases filling up previous passageways. Speleothems Mammoth Cave has many subterranean cave formations called speleothems, which form from the precipitation of minerals that were once dissolved in the groundwater percolating through the limestone. Stalactites, stalagmites, evaporites, helictites, gypsum formations, and travertine dams are just a few examples. Evaporites are deposits of minerals that remain after water has evaporated. They often form in areas with a constant drip or slow flow, or areas with intermittent water. For example, evaporite dams can often form along the edges of pools, where the water ebbs periodically and dissolved minerals can precipitate out and accumulate.
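The carbonic-acid dissolution described above is conventionally summarized by two reactions (standard karst chemistry; the equations themselves are not spelled out in the source):

```latex
% Rainwater absorbs carbon dioxide to form weak carbonic acid:
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}
% Carbonic acid then dissolves calcite (limestone) into soluble calcium bicarbonate ions:
\mathrm{CaCO_3 + H_2CO_3 \rightleftharpoons Ca^{2+} + 2\,HCO_3^{-}}
```

When the water later loses CO2 or evaporates, the second reaction runs in reverse and calcium carbonate precipitates, which is how the speleothems described below form.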
Geology of Mammoth Cave National Park Mammoth Cave National Park is a designated UNESCO World Heritage Site as well as an International Biosphere Reserve located in central Kentucky. The park was established on July 1, 1941, and encompasses 52,830 acres of wilderness. Mammoth Cave National Park is known for housing the world’s longest cave system, with over 405 miles (651 km) mapped. The vast cavern system was formed by water slowly dissolving Mississippian-aged carbonate rocks, creating sinkholes, tunnels, and underground rivers. This national park is also host to a diverse group of animal and plant species due to the microclimates created by varying light and temperature conditions. Geologic History Mammoth Cave National Park has one of the most famous karst topographies in the entire world. Karst terrain is created by dissolution of soluble rocks like limestone or dolomite and is characterized by the presence of cave systems, sinkholes, springs, and disappearing streams (surface water that suddenly flows underground). The park is located within the Central Kentucky Karst, a limestone belt that extends from southern Indiana through Kentucky into Tennessee. It is part of the Chester Upland and the Pennyroyal Plateau and is dissected by the Green River, which controls the cave development. The sedimentary rocks that form the park were produced from sediments deposited about 330 million years ago, when this area was submerged beneath great inland seas, as marine or near-shore deposits. The erosional force of the Green River has helped to carve the limestone into the unique and extraordinary karst topography of this region over millions of years. Stratigraphy Stratigraphy is the study of rock layers and their relationship to one another, helping researchers understand how the environment has changed over time. Rock layers are split into large sediment packages called formations, grouped based on common characteristics. 
Formations are sometimes split into smaller members, narrowed down to a more specific group of characteristics.
yes
Spelaeology
Is the largest cave system located in Kentucky, USA?
yes_statement
the "largest" "cave" "system" is "located" in kentucky, usa.. kentucky, usa is home to the "largest" "cave" "system".
https://franks-travelbox.com/en/nordamerika/usa/mammoth-cave-national-park-in-kentucky-usa/
Mammoth Cave National Park in Kentucky, USA | Franks Travelbox
Mammoth Cave National Park in Kentucky, USA Mammoth Cave National Park is home to the spectacular Mammoth Cave, whose gigantic underground halls can be explored on 500 km of trails. Fresh-air fanatics enjoy the idyllic landscape around Mammoth Cave. Mammoth Cave National Park in central Kentucky in the United States is located around the core area of Mammoth Cave, which is the largest cave system in the world, with more than 600 kilometres of explored passages. Over millions of years, slightly acidic rainwater created gigantic passages, shafts and chambers in the limestone layer, which is up to 400 m thick. Today, these passages are accessible to visitors through 500 km of trails and extend over 5 levels. The park has been a UNESCO World Heritage Site since 1981 and was declared a biosphere reserve in 1990. The first people explored the cave more than 4,500 years ago, as evidenced by mummy finds and various utensils of human origin, such as clothing, torches, ropes and pottery. The oldest and best preserved mummy, called Lost John, is estimated to be 2,300 years old. Rediscovered by European settlers at the end of the 18th century, the cave was used in the following years mainly as a source of saltpetre for the production of gunpowder, derived from the droppings of countless bats. Tour of the Mammoth Cave If you want to avoid the tourist crowds and are not claustrophobic, you can take a 6-hour tour, partly on all fours, through muddy tunnels and narrow passages of this fascinating underground world. Besides the gigantic underground spaces and the bizarre limestone formations, there are also some stalactites and gypsum crystal formations to marvel at. As is usual in caves, eyeless animals live here, such as the blind cavefish, along with small creatures like worms, beetles, crickets and crayfish. Of the bats that used to be so numerous, only a few colonies remain. 
Adventure underground You are not allowed to explore the caves on your own, but you can find out about guided tours at the visitor centre. Depending on the season, there are 3-5 different hikes every day, which are accompanied by rangers. The "Frozen Niagara Tour" takes you past spectacular rock formations, offers a glimpse into one of the seemingly bottomless shafts, the so-called domes, and leads up to a metres-high waterfall-like limestone structure - hence the name. The "Historic Tour" leads to the old saltpetre mining facilities and an underground river. The "Half Day Tour" is the longest and most strenuous tour (6 hours), but it offers the most comprehensive insight into the diverse cave system of Mammoth Cave. The tour takes you through extensive passages and giant chambers as well as through narrow loopholes and low crawlways. The hardships are rewarded with chambers full of stalactites and gypsum flowers, metres-deep domes and gigantic stone halls. Tip: At 13°C, the temperature in Mammoth Cave is relatively warm for a cave system. Still, be sure to bring a jumper or jacket and sturdy shoes for a visit even on hot days. Advantage: the constant 13°C holds in winter too, when the caves are not so well visited! Daylight walks Fresh-air fanatics can take a canoe or paddle-boat trip on the Green River above the caves, from where deer, raccoons and plenty of birds can always be seen on the banks and numerous species of fish underwater (fishing allowed!), or take the hiking trails of the above-ground karst landscape through funnel-shaped sinkholes, peaceful swamps, bubbling springs and splashing waterfalls. Due to the small size of the park, there are only a few hiking trails that start from the visitor centre and are easy to walk in one day. The trail to First Creek Lake and the Echo River Trail are particularly good for bird watching. The two short nature trails, Cave Island and Cedar Sink Trail, are also extremely informative. 
The vegetation escaped clear-cutting thanks to the establishment of the national park and has since recovered to such an extent that almost the entire park area is covered with mixed forest. Attention campers: Some animals, especially tree squirrels and chipmunks, are extremely trusting and have little respect for food that does not belong to them. Raccoons and skunks also roam the area at night in search of food and may not even stop short of entering tents.
Mammoth Cave National Park in Kentucky, USA Mammoth Cave National Park is home to the spectacular Mammoth Cave, whose gigantic underground halls can be explored on 500 km of trails. Fresh-air fanatics enjoy the idyllic landscape around Mammoth Cave. Mammoth Cave National Park in central Kentucky in the United States is located around the core area of Mammoth Cave, which is the largest cave system in the world, with more than 600 kilometres of explored passages. Over millions of years, slightly acidic rainwater created gigantic passages, shafts and chambers in the limestone layer, which is up to 400 m thick. Today, these passages are accessible to visitors through 500 km of trails and extend over 5 levels. The park has been a UNESCO World Heritage Site since 1981 and was declared a biosphere reserve in 1990. The first people explored the cave more than 4,500 years ago, as evidenced by mummy finds and various utensils of human origin, such as clothing, torches, ropes and pottery. The oldest and best preserved mummy, called Lost John, is estimated to be 2,300 years old. Rediscovered by European settlers at the end of the 18th century, the cave was used in the following years mainly as a source of saltpetre for the production of gunpowder, derived from the droppings of countless bats. Tour of the Mammoth Cave If you want to avoid the tourist crowds and are not claustrophobic, you can take a 6-hour tour, partly on all fours, through muddy tunnels and narrow passages of this fascinating underground world. Besides the gigantic underground spaces and the bizarre limestone formations, there are also some stalactites and gypsum crystal formations to marvel at. As is usual in caves, eyeless animals live here, such as the blind cavefish, along with small creatures like worms, beetles, crickets and crayfish.
yes
Spelaeology
Is the largest cave system located in Kentucky, USA?
yes_statement
the "largest" "cave" "system" is "located" in kentucky, usa.. kentucky, usa is home to the "largest" "cave" "system".
https://a-z-animals.com/blog/how-big-is-kentucky-see-its-size-in-miles-acres-and-how-it-compares-to-other-states/
How Big Is Kentucky? See Its Size in Miles, Acres, and How it ...
How Big Is Kentucky? See Its Size in Miles, Acres, and How it Compares to Other States Kentucky is located in the south-central United States along the western side of the Appalachian Mountains. Here is a look at the total area of the Bluegrass State in terms of square miles, square kilometers, and acres, along with comparisons to the sizes of other states. Length and Width of Kentucky At its longest point, Kentucky measures 425 miles from west to east. That is equivalent to the distance from Indianapolis, Indiana to Atlanta, Georgia. While Kentucky’s west-to-east measurement is substantial, it is much shorter measured north-to-south. The state extends 182 miles at its widest point. That is roughly the same distance from Louisville to Nashville, Tennessee. Square Miles and Kilometers The University of Louisville lists Kentucky’s total area at 39,732 square miles. That equates to 102,907 square kilometers. Forests cover about 50% of Kentucky’s land. The state also features more miles of streams and rivers than any other state except for Alaska. Acres Kentucky’s land mass equates to 25,428,480 acres. One square mile is equivalent to 640 acres. A football field (minus the end zones) is a little over one acre in size. Other States Kentucky is in the bottom third of states according to land mass. The Bluegrass State ranks 37th out of the 50 U.S. states in terms of size. Alaska is far and away the nation’s largest state at 665,384 square miles. Nearly 17 Kentuckys could fit into Alaska! Texas is the largest of the contiguous 48 states at 268,597 square miles. That is nearly seven times the size of Kentucky. 
California, the third largest state, is a little more than four times the size of Kentucky. States with sizes comparable to Kentucky include Virginia (about 3,000 square miles larger), Tennessee (approximately 2,400 square miles larger), and Indiana (around 3,300 square miles smaller). Kentucky accounts for just a little over one percent of U.S. land. Rhode Island occupies only 0.04 percent of the nation’s land, while almost 17.5 percent of United States land is in Alaska. State Borders Kentucky shares borders with seven states. It is bordered by Illinois, Indiana, and Ohio to the north, West Virginia to the northeast, Virginia to the east, Tennessee to the south, and Missouri to the west. Rivers define most of the state’s borders. Kentucky’s borders with Tennessee and Virginia are the only two that do not follow along a river. Kentucky Bend There is quite an oddity in the borders between Kentucky, Missouri, and Tennessee. A small section of Kentucky measuring about 27 square miles lies in an oxbow loop of the Mississippi River. This region is known as the Kentucky Bend. It is also sometimes called Madrid Bend, New Madrid Bend, Bessie Bend, and Bubbleland. Kentucky Bend is completely separated from the rest of the state, making it one of the oddest state borders in the nation. Kentucky Bend is completely encircled by Missouri and Tennessee. The Mississippi River flows around the east, north, and west of Kentucky Bend, representing the border with Missouri. The land border to the south is with Tennessee. That means this small exclave, which is officially part of Kentucky, does not touch the rest of the state of Kentucky at all. Kentucky Bend is likely the result of mistakes made by nineteenth-century surveyors. The few residents living in Kentucky Bend have a Tennessee mailing address. The closest populated area is New Madrid, Missouri. Yet, the residents of Kentucky Bend are counted among the citizenry of Kentucky. 
It is one of the most interesting and confusing state borders in the U.S. Population Kentucky is home to 4,512,310 residents, according to the United States Census Bureau’s 2022 estimate. That makes Kentucky the 26th most populous state in the nation. Louisiana has a very similar population, with about 78,000 more residents than Kentucky. California is the nation’s most populous state, with a 2022 estimate of 39,029,342 residents. There are more than 8.5 times as many Californians as Kentuckians. The least populous state is Wyoming, with only 581,381 residents. Kentucky has a population nearly eight times the size of Wyoming’s. Fun Facts About the Bluegrass State Kentucky became the 15th state admitted to the Union on June 1, 1792. It was the first state on the western frontier to achieve statehood. The highest point in Kentucky is Black Mountain in Harlan County, with an elevation of 4,145 feet above sea level. The lowest point in the state is the Mississippi River in Fulton County, which sits 257 feet above sea level. Presidential History Abraham Lincoln was born on February 12, 1809, near Hodgenville. His birthplace is now a national historic park. The president of the Confederacy was also born in Kentucky. Jefferson Davis was born in Fairview on June 3, 1808. The birthplaces of these two opposing Civil War executives were only about 100 miles apart. Zachary Taylor is the other president with Kentucky ties. Though he was born in Virginia on November 24, 1784, Taylor’s family moved to Kentucky when he was a young boy. They relocated to an area on the Ohio River that would later become the city of Louisville. The state is also home to the first town in the United States to be named after George Washington, the nation’s first president. Washington, Kentucky, was named in 1780. 
Notable Kentuckians Kentucky was the birthplace of numerous country music stars, the most famous being Loretta Lynn, who sang about growing up in Butcher Hollow in her iconic song, “Coal Miner’s Daughter.” Other well-known country artists born in Kentucky include Crystal Gayle, Wynonna Judd, and Billy Ray Cyrus. The McCoy family of the infamous Hatfield-McCoy feud lived in Pike County. The McCoys feuded with the Hatfields, who hailed from West Virginia, for nearly 30 years. National Park Kentucky is home to one national park. Mammoth Cave National Park in south-central Kentucky is home to the largest cave system in the world. Over two million people visit the park each year to explore the cave. Visitors can also fish, canoe, bicycle, ride horses, and hike in the 52,000-acre park. Mammoth Cave National Park invites visitors to explore the largest cave system on Earth. Kentucky Food Foodies will certainly appreciate the numerous culinary creations that were born in Kentucky. The cheeseburger, the Old Fashioned cocktail, Bibb lettuce, the Hot Brown sandwich, and chewing gum all originated in Kentucky. However, the most famous food to emanate from the Bluegrass State has to be the creation of Colonel Sanders. The famous Kentucky icon opened a café in Corbin in 1940 that gave rise to the Kentucky Fried Chicken franchise. About the author: A freelance writer in Cincinnati, OH, Mike is passionate about the natural world. He, his wife, and their two sons love the outdoors, especially camping and exploring US National Parks. A former pastor, he also writes faith-based content to encourage and inspire. And, for reasons inexplicable, Mike allows Cincinnati sports teams to break his heart every year.
Presidential History Abraham Lincoln was born on February 12, 1809, near Hodgenville. His birthplace is now a national historic park. The president of the Confederacy was also born in Kentucky. Jefferson Davis was born in Fairview on June 3, 1808. The birthplaces of these two opposing Civil War executives were only about 100 miles apart. Zachary Taylor is the other president with Kentucky ties. Though he was born in Virginia on November 24, 1784, Taylor’s family moved to Kentucky when he was a young boy. They relocated to an area on the Ohio River that would later become the city of Louisville. The state is also home to the first town in the United States to be named after George Washington, the nation’s first president. Washington, Kentucky, was named in 1780. Notable Kentuckians Kentucky was the birthplace of numerous country music stars, the most famous being Loretta Lynn, who sang about growing up in Butcher Hollow in her iconic song, “Coal Miner’s Daughter.” Other well-known country artists born in Kentucky include Crystal Gayle, Wynonna Judd, and Billy Ray Cyrus. The McCoy family of the infamous Hatfield-McCoy feud lived in Pike County. The McCoys feuded with the Hatfields, who hailed from West Virginia, for nearly 30 years. National Park Kentucky is home to one national park. Mammoth Cave National Park in south-central Kentucky is home to the largest cave system in the world. Over two million people visit the park each year to explore the cave. Visitors can also fish, canoe, bicycle, ride horses, and hike in the 52,000-acre park. Mammoth Cave National Park invites visitors to explore the largest cave system on Earth. Kentucky Food Foodies will certainly appreciate the numerous culinary creations that were born in Kentucky.
yes
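The area figures in the Kentucky article above are internally consistent, which a quick conversion sketch can confirm (the conversion factors are the standard ones; the 39,732-square-mile total is the University of Louisville value the article quotes):

```python
# Cross-check the Kentucky area figures quoted in the article.
SQ_MI_TO_ACRES = 640        # exact: 1 square mile = 640 acres
SQ_MI_TO_SQ_KM = 2.589988   # 1 square mile is about 2.589988 square kilometers

area_sq_mi = 39_732         # Kentucky's total area, per the article

acres = area_sq_mi * SQ_MI_TO_ACRES
sq_km = area_sq_mi * SQ_MI_TO_SQ_KM

print(acres)         # 25428480 -- matches the article's 25,428,480 acres exactly
print(round(sq_km))  # 102905 -- the article's 102,907 km2 reflects slightly different rounding
```

Under these factors the acreage works out exactly, while the square-kilometer figure lands within a couple of km² of the article's number, a difference attributable to rounding in the source.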
Spelaeology
Is the largest cave system located in Kentucky, USA?
yes_statement
the "largest" "cave" "system" is "located" in kentucky, usa.. kentucky, usa is home to the "largest" "cave" "system".
https://whc.unesco.org/en/list/150/
Mammoth Cave National Park - UNESCO World Heritage Centre
Share on social media UNESCO social media Mammoth Cave National Park Mammoth Cave National Park Mammoth Cave National Park, located in the state of Kentucky, has the world's largest network of natural caves and underground passageways, which are characteristic examples of limestone formations. The park and its underground network of more than 560 surveyed km of passageways are home to a varied flora and fauna, including a number of endangered species. Outstanding Universal Value Mammoth Cave is the most extensive cave system in the world, with over 285 miles (458 km) of surveyed cave passageways within the property (and at least another 80 miles [128 km] outside the property). The park illustrates a number of stages of the Earth's evolutionary history and contains ongoing geological processes and unique wildlife. It is renowned for its size and vast network of extremely large horizontal passages and vertical shafts. Nearly every type of cave formation is known within the site, the product of karst topography. The flora and fauna of Mammoth Cave is the richest cave-dwelling wildlife known, with more than 130 species within the cave system. Criterion (vii): Mammoth Cave is the longest cave system in the world. The long passages with huge chambers, vertical shafts, stalagmites and stalactites, splendid forms of beautiful gypsum flowers, delicate gypsum needles, rare mirabilite flowers and other natural features of the cave system are all superlative examples of their type. No other known cave system in the world offers a greater variety of sulfate minerals. Criterion (viii): Mammoth Cave exhibits 100 million years of cave-forming action and presents nearly every type of cave formation known. Geological processes involved in their formation continue. Today, this huge and complex network of cave passages provides a clear, complete and accessible record of the world’s geomorphic and climatic changes. 
Outside the cave, the karst topography is superb, with fascinating landscapes and all of the classic features of a karst drainage system: vast recharge area, complex network of underground conduits, sinkholes, cracks, fissures, and underground rivers and springs. Criterion (x): The flora and fauna of the cave is the richest cavernicolous wildlife known, numbering over 130 species, of which 14 species of troglobites and troglophiles are known only to exist here. Integrity With nearly 500 km of surveyed cave passageways within the property and over 21,000 hectares above ground, the property is large enough to offer a high level of protection to the outstanding universal value for which it was inscribed. A portion of the site has development (roads, visitor facilities, park operational and administrative infrastructure), but most of the area remains undeveloped in a natural zone. As a national park, protection of the property’s integrity takes first priority in management decisions. Mammoth Cave and its karst terrain face threats and challenges, most of which are from external sources. Because large portions of the Mammoth Cave watershed lie outside park boundaries, activities conducted in these privately-owned areas greatly influence water quality and quantity within the park. Water quality is influenced by sewage and waste disposal, farming and forestry practices, oil/gas wells, railroads and highways. Water quantity is influenced by flood-control dams on the Green and Nolin Rivers, and a small lock and dam immediately downstream of the park. 
The integrity of Mammoth Cave has been strengthened as a result of five significant measures that have been taken since Mammoth Cave National Park was inscribed in 1981: an updated General Management Plan in 1983; the establishment of the Mammoth Cave Area International Biosphere Reserve in 1990 and subsequent expansion in 1996; a regional sewage system, installed in the early 1990s, which serves both the park and three adjacent communities; the establishment of the Mammoth Cave International Center for Science and Learning in 2004; and the discovery and mapping of 140 additional miles (225 km) of cave passageways over the past 31 years. The regional sewer system has greatly increased protection of the park’s sensitive cave system by servicing most of the areas that drain into the Mammoth Cave. The 1996 expansion of the Mammoth Cave Area Biosphere Reserve to 367,993 hectares has also played an important role in securing the property’s integrity and maintaining water quality. The Biosphere Reserve now includes all or portions of six counties near Mammoth Cave National Park, encompassing the ecologically sensitive hydrological recharge area for Mammoth Cave National Park as well as a large interaction zone. This has helped address common concerns regarding water quality, has provided an impetus for protection and has reinforced the World Heritage property values inside the park in combination with the connected ecologically sensitive areas outside of the park. In 2004, the Mammoth Cave International Center for Science and Learning was established through a partnership between Mammoth Cave National Park and Western Kentucky University. Part of a national network of learning centers located within national parks, it facilitates the use of parks for scientific inquiry, supports science-informed decision making, and promotes science literacy and resource stewardship. 
The learning center has contributed to “sister park” agreements with other World Heritage sites (China and Slovenia) that protect cave and karst resources. Fine particles of air pollution often cause haze in the park, affecting how well and how far visitors can see vistas and landmarks. Air pollutants of concern can have serious effects on park air quality, human health, wildlife, vegetation, upland ponds, streams, soils, and visibility. Protection and management requirements Designated by the U.S. Congress in 1941 as a national park, Mammoth Cave National Park is managed under the authority of the Organic Act of August 25, 1916 which established the United States National Park Service. In addition, the park has specific enabling legislation which provides broad congressional direction regarding the primary purposes of the park. Numerous other federal laws bring additional layers of protection to the park and its resources. Day to day management is directed by the Park Superintendent. Management goals and objectives for the property have been developed through a General Management Plan, which has been supplemented in recent years with more site-specific planning exercises as well as numerous plans for specific issues and resources. In addition, the National Park Service has established Management Policies which provide broader direction for all National Park Service units, including Mammoth Cave. Approximately 600,000 people visit the property each year, and 400,000 of those tour Mammoth Cave. Access to the cave is strictly controlled and visitation is confined to 10 miles of developed passageway. On the surface of the park, some trail use activities produce soil erosion and equine waste. Invasive species crowding out native plants is another area of great concern. 
Protection of the site from current and potential threats will require continued monitoring of resource conditions, such as through the NPS Inventory and Monitoring program, which has developed nine “vital signs” for the park, including five cave vital signs (aquatic biota, bats, crickets, meteorology, woodrats), forest vegetation communities, invasive species early detection, ozone/foliar injury, and water quality. Continued collaboration at the landscape scale, such as through the Biosphere Reserve, is also essential to long term protection of the site.
Share on social media UNESCO social media Mammoth Cave National Park Mammoth Cave National Park Mammoth Cave National Park, located in the state of Kentucky, has the world's largest network of natural caves and underground passageways, which are characteristic examples of limestone formations. The park and its underground network of more than 560 surveyed km of passageways are home to a varied flora and fauna, including a number of endangered species. Outstanding Universal Value Mammoth Cave is the most extensive cave system in the world, with over 285 miles (458 km) of surveyed cave passageways within the property (and at least another 80 miles [128 km] outside the property). The park illustrates a number of stages of the Earth's evolutionary history and contains ongoing geological processes and unique wildlife. It is renowned for its size and vast network of extremely large horizontal passages and vertical shafts. Nearly every type of cave formation is known within the site, the product of karst topography. The flora and fauna of Mammoth Cave is the richest cave-dwelling wildlife known, with more than 130 species within the cave system. Criterion (vii): Mammoth Cave is the longest cave system in the world. The long passages with huge chambers, vertical shafts, stalagmites and stalactites, splendid forms of beautiful gypsum flowers, delicate gypsum needles, rare mirabilite flowers and other natural features of the cave system are all superlative examples of their type. No other known cave system in the world offers a greater variety of sulfate minerals. Criterion (viii): Mammoth Cave exhibits 100 million years of cave-forming action and presents nearly every type of cave formation known. Geological processes involved in their formation continue. Today, this huge and complex network of cave passages provides a clear, complete and accessible record of the world’s geomorphic and climatic changes.
yes
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://www.psychologytoday.com/us/blog/body-sense/202205/the-fiction-mind-body-separation
The Fiction of a “Mind-Body” Separation | Psychology Today
The Fiction of a “Mind-Body” Separation Thoughts, sensations, emotions, and movements are part of the same psychobiology. Key points The distinction commonly made between “mind” and “body” is a complete fiction, totally divorced from any basis in our cellular psychobiology. All thought is a biological process, totally dependent on the embodied conditions that sustain life and health. The pulsations and secretions of the living body support the creation of thinking about and feeling ourselves. Embodied self-awareness (ESA) refers to the ways in which we pay attention to what is happening inside of our own bodies. ESA has two basic forms. We can be aware of the felt experiences from within the body or we can be aware of our thoughts. These two forms of self-awareness arise from different networks of the cells and tissues of our living body. We can feel ourselves or we can think about ourselves, but usually not both at the same time, so we are fooled into believing that mind and body are separate. "Mind" usually means our thoughts and mental images. "Mind" can also refer to the brain or what goes on “in the head” as opposed to in the body. But isn’t the head part of the body? The head and brain are attached to the neck by nerves and bones and muscles and fascia that link the head and its functional and sensory organs with the other parts of the body. The brain in the head is linked to the rest of the body via the nervous system and by the hormones and neurotransmitters in the blood, which travel throughout the whole body. The central nervous system is in the head, and those nerve cells connect to the brain stem and spinal cord to form links to the peripheral nervous system in the parts of the body that are not in the head. Medications and foods we process in the digestive system, for example, can affect “mental” health by being transferred via the gut to the blood and brain. If nerves are severed in any part of the body, we can’t feel that part or move it anymore. 
On the other hand, if the parts of the central nervous system that receive signals from a specific part of the peripheral nervous system are damaged – such as from a stroke or head injury – we lose feeling and movement in those linked peripheral locations in the body. So even at a basic physiological level, the so-called “mind” and so-called “body” are fictional: they don’t have any basis in biological reality. The brain initiates secretions of hormones into the blood. Hormones regulate many different types of body functions like digestion, urination, body temperature, metabolism, and reproductive and sexual function. The bloodstream, because of its microscopic capillaries, touches all the cells of the body. Changes in these peripheral cells send neurochemical messages to the brain via the blood and peripheral nervous system that “tell” the brain to secrete one or another hormone that the body needs. This brain-to-blood linkage occurs at the base of the brain in the hypothalamus and pituitary just below it. Using those blood and neural signals from the body, these brain organs initiate the creation of hormones for metabolic energy required to do any activity (cortisol). They support interpersonal warmth and closeness (oxytocin). They produce hormones for sexual activity (penile and clitoral erections and sexual arousal, genital lubrication, and orgasm) and reproduction (sperm and ova released in the gonadal glandular system and milk released in the lactating glands of the breasts). These brain centers also create the hormonal precursors to release metabolic resources in the thyroid and other glandular systems for temperature regulation and cellular growth. The hormones that circulate around the body in the blood also perfuse brain cells and act as neurotransmitters that affect brain function. When our sexual encounter has ended, for example, this signals the hypothalamus to slow or stop the production of particular sex hormones. 
Similarly, when we finish our exercise session or stop working, the body-to-brain connection turns off or slows down the production of cortisol. Coming back to the fiction of a separate “mind” and “body,” there are many more connections that make the head just another part of the body. The nostril passages and mouth in the head are connected to the trachea and lungs. The mouth connects via the throat and esophagus to the entire digestive system and all of its organs (like the stomach, intestines, liver, pancreas, and gallbladder), as well as to the kidneys, bladder, and urinary tract. Vocalizations from the mouth are created as much in the brain’s language centers as in the diaphragm, chest, and throat. The tongue is a head muscle that connects to the muscles of the neck, which in turn connect to other muscles that attach to bones in the chest and shoulders. The eyes and ears in the head, via the entire nervous and neuro-muscular systems, affect head-turning and body movements toward or away from sights and sounds. All these connections between the head and other body regions make it difficult to defend the fiction of a distinction between “mind” and “body.” The concept of Embodied Self-Awareness better represents the idea that both our thoughts and our feelings are brought into awareness via multi-cellular pathways that extend throughout the whole body. By some (devious?) route in the evolution of our species, a form of consciousness was created in which thinking seems as if it comes from someplace inside the head. This fiction is further perpetuated because when we are thinking, we lose awareness of the felt sense of our bodies. But the fact is that all our thoughts are founded upon an embodied experience, created within the entire network of the cells, tissues, and structures of a living human body. And conversely, muscle pain, for example, seems as if it is coming from a particular part of the body (the arm, leg, fingers, or wherever). 
That pain awareness, however, is similarly created by a distributed network of cells in the peripheral nervous system that link to brain centers for bringing the pain into conscious awareness, along with the information that the pain seems “located” at the periphery. This is similar to how thoughts seem to be "located" in the head. Our conscious experience of ourselves, therefore, tricks us into believing in the fiction of "mind" and "body." It’s like believing that the very convincing portrayals of Sherlock Holmes in films and serials make him a real person rather than a fiction created originally by the real Sir Arthur Conan Doyle and enhanced by talented screenwriters and directors. Eliminating the "mind-body" fiction opens new worlds for embodied approaches to health care, psychotherapy, lifestyle, and a host of emerging embodied treatment approaches for healing emotional, physical, and other traumatic wounds.
no
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://www3.nd.edu/~jspeaks/courses/2006-7/20208/descartes-mind-body.html
Descartes on the separateness of mind and body
1 The method of doubt In the selection from the Meditations on First Philosophy that we read, Descartes argues that the mind is something distinct from any body. But the Meditations begins with a discussion of a topic seemingly far removed from the nature of the mind: the question of whether we can be certain of the truth of any of our opinions. What we have to see is how this question is related to questions about the relationship between mind and body. Descartes begins (1.5) by noting that his opinions up to this point have been based on his senses, but that we cannot be certain that our senses do not deceive us. This is in part because we cannot be certain that what we think of as our sensations of the world are not a dream: “How often have I dreamt that I was in these familiar circumstances, that I was dressed, and occupied this place by the fire, when I was lying undressed in bed? At the present moment ...I look upon this paper with eyes wide awake; ...but I cannot forget that, at other times I have been deceived in sleep by similar illusions; and, attentively considering those cases, I perceive so clearly that there exist no certain marks by which the state of waking can ever be distinguished from sleep, that I feel greatly astonished ...” (1.7) So, Descartes argues, there seems to be some sense in which I am less than certain about the existence of the bodies I seem to be perceiving. It seems to me that there is a computer monitor in front of me right now; but, because “there exist no certain marks by which the state of waking can ever be distinguished from sleep” I cannot be certain that I am not dreaming of a computer monitor rather than seeing one. 
Descartes uses the figure of an ‘evil demon’ to make much the same point: “I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colors, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity; I will consider myself as without hands, eyes, flesh, blood, or any of the senses, and as falsely believing that I am possessed of these ...” (1.12) The point of this, for our purposes, is not whether it is plausible or reasonable to believe that we are constantly being deceived by an evil demon; we can assume that this is not a reasonable thing to believe. Rather, the important point is that, by reflecting on scenarios like dreaming and being deceived by an evil demon, it seems possible to doubt whether any of the external, physical things which we seem to perceive really do exist. 2 What cannot be doubted At the beginning of the second Meditation, Descartes wonders whether there is anything whose existence cannot be doubted: “I suppose, accordingly, that all the things which I see are false (fictitious); I believe that none of those objects which my fallacious memory represents ever existed; I suppose that I possess no senses; I believe that body, figure, extension, motion, and place are merely fictions of my mind. What is there, then, that can be esteemed true? 
Perhaps this only, that there is absolutely nothing certain.” (2.2) But he quickly finds that this is not the case; even though he can doubt the existence of any external thing, he cannot doubt his own existence: “But I had the persuasion that there was absolutely nothing in the world, that there was no sky and no earth, neither minds nor bodies; was I not, therefore, at the same time, persuaded that I did not exist? Far from it; I assuredly existed, since I was persuaded. But there is I know not what being, who is possessed at once of the highest power and the deepest cunning, who is constantly employing all his ingenuity in deceiving me. Doubtless, then, I exist, since I am deceived; and, let him deceive me as he may, he can never bring it about that I am nothing, so long as I shall be conscious that I am something. So that it must, in fine, be maintained, all things being maturely and carefully considered, that this proposition (pronunciatum) I am, I exist, is necessarily true each time it is expressed by me, or conceived in my mind.” (2.3) The same line of reasoning appears to carry over to particular episodes of thinking. Just as an evil demon cannot deceive me about my own existence, he cannot deceive me about the fact that I am being deceived. 3 Why the mind cannot be identical to any body So far, we seem to have two results: that it is possible to doubt whether any external, physical things exist, but that it is not possible to doubt that oneself, or one’s own mental episodes, exist. At this point, you might ask: so what? What does this show about the relationship between the mind and the body? 
Descartes is most explicit about this in paragraph 9 of Meditation 6: “And, firstly, because I know that all which I clearly and distinctly conceive can be produced by God exactly as I conceive it, it is sufficient that I am able clearly and distinctly to conceive one thing apart from another, in order to be certain that the one is different from the other, seeing they may at least be made to exist separately, by the omnipotence of God; and it matters not by what power this separation is made, in order to be compelled to judge them different; and, therefore, merely because I know with certitude that I exist, and because, in the meantime, I do not observe that aught necessarily belongs to my nature or essence beyond my being a thinking thing, I rightly conclude that my essence consists only in my being a thinking thing [or a substance whose whole essence or nature is merely thinking]. And although I may, or rather, as I will shortly say, although I certainly do possess a body with which I am very closely conjoined; nevertheless, because, on the one hand, I have a clear and distinct idea of myself, in as far as I am only a thinking and unextended thing, and as, on the other hand, I possess a distinct idea of body, in as far as it is only an extended and unthinking thing, it is certain that I, that is, my mind [by which I am what I am], is entirely and truly distinct from my body, and may exist without it.” To see how this argument works, it helps to break it down into steps. First, Descartes says that if he can “clearly and distinctly” conceive some state of affairs, then God could create that state of affairs. So, if he can clearly and distinctly conceive some state of affairs, then that state of affairs is possible. The distinction between possible and impossible situations, and contingent and necessary truths. So, if Descartes is right, we can show that it is possible that x and y are distinct things by clearly and distinctly conceiving of them as distinct. 
What he wants to show is that it is possible that mind and body are distinct; so what he needs to show is that he can clearly and distinctly conceive of mind and body as distinct. But, in a sense, he has already shown this. In Meditation 1, Descartes doubted the existence of material bodies; so, he was conceiving of bodies not existing. But, in Meditation 2, he found that he could not doubt his own existence. So, in this method of doubt, he was conceiving of his mind as existing, but of bodies as not existing. So he was conceiving of his mind as distinct from his body. So, if the above is correct, it follows that it is possible that his mind is distinct from his body. But what we want to know is not whether it is possible for one’s mind to be distinct from one’s body; what we want to know is whether minds really are distinct from bodies. How can we get from one thesis to the other? The necessity of identity as bridging this gap in the argument. We can think of Descartes’ argument for the distinctness of mind and body as breaking down into steps as follows: 1. If I can clearly and distinctly conceive of such and such being the case, God could make such and such the case. 2. If God could make such and such the case, then such and such is possible. 3. If I can clearly and distinctly conceive of such and such being the case, then such and such is possible. (1,2) 4. I can clearly and distinctly conceive of the mind existing without the body. 5. I can clearly and distinctly conceive of a case where the mind ≠ the body. (4) 6. It is possible that the mind ≠ the body. (3,5) 7. If a = b, then necessarily a = b. C. The mind ≠ the body. Is this argument valid? Is it sound? Can you see how to run a parallel argument to show that particular mental events — like certain thoughts, or pains — are not identical to any material bodies, or physical events? 
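The numbered steps above can be compressed into a modal-logic sketch. The notation here is mine, not the lecture's: the diamond reads "possibly", the box reads "necessarily", and m and b stand for the mind and the body.

```latex
\begin{align*}
&\text{P1. If I can clearly and distinctly conceive that } \varphi,
  \text{ then } \Diamond\varphi. && \text{(steps 1--3)}\\
&\text{P2. I can clearly and distinctly conceive that } m \neq b.
  && \text{(steps 4--5)}\\
&\text{C1. Therefore } \Diamond(m \neq b). && \text{(P1, P2; step 6)}\\
&\text{P3. If } m = b, \text{ then } \Box(m = b).
  && \text{(necessity of identity; step 7)}\\
&\text{C2. Therefore } m \neq b. && \text{(C1, P3)}
\end{align*}
```

C2 follows because if m = b were true, P3 would make the identity necessary, which contradicts C1; on this reconstruction the load-bearing (and most disputed) premise is P1, the inference from conceivability to possibility.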
4 The nature of the mind/body distinction So we know that Descartes thinks that the mind is something other than the body; but what, exactly, does that mean? One way to answer this question is to get clearer on what Descartes thinks bodies are. Descartes often speaks of bodies as extended; part of what he means is expressed in the following passage: “By body I understand all that can be terminated by a certain figure; that can be comprised in a certain place, and so fill a certain space as therefrom to exclude every other body.” (2.5) One of the defining aspects of bodies is that they are extended in space: that they have certain dimensions. Should we conclude from this that Descartes thinks that minds do not exist in space — that they have no dimensions? Does this make sense? Does it follow that they are not located anywhere? 5 Descartes’ view of the relationship between mind and body So far, we’ve examined Descartes’ argument that the mind is not identical to any body. But this tells us what the relationship of mind to body is not; it does not tell us what it is. In one place, Descartes gives his view of the relationship of mind and body by an analogy: “Nature likewise teaches me by these sensations of pain, hunger, thirst, etc., that I am not only lodged in my body as a pilot in a vessel, but that I am besides so intimately conjoined, and as it were intermixed with it, that my mind and body compose a certain unity.” (6.13) What can we take from the idea that the relationship between mind and body is akin to the relationship between pilot and vessel? One thing a pilot does is control the vessel; by steering, pilots cause vessels to do things. So we would expect Descartes to think that minds sometimes cause bodies to do things. And this is what he thinks (see among other places, 6.12-13). Why this fits well with common sense. 
It seems, then, that we can sum up the main points of Descartes’ dualist view of the relationship between mind and body as follows: The mind is not identical to any body. Nor are particular mental events (particular episodes of thinking, feeling, etc.) identical to any bodies. Bodies are defined by Descartes as things which have extension. Since minds are not identical to any bodies, minds do not have extension. So minds do not exist in space. 6 Varieties of dualism We can separate out two parts of Descartes’ view, via the distinction between objects or substances on the one hand, and properties on the other. A way to get a handle on this distinction via the distinction between names and predicates. Corresponding to the distinction between substances and properties is a distinction between two kinds of dualism. The property dualist says that mental properties — like feeling a pain or thinking about food — are not identical to any physical property. The substance dualist says that there are mental substances — minds — which are not identical to any physical things. Descartes was both a substance dualist and a property dualist. In this first part of the course, where we focus on the mind-body problem, the most important part of his view is his property dualism. (Though Descartes himself spends most of his time talking about substance dualism.) In the second part of the course, when we discuss the nature of persons, we’ll return to substance dualism. One good question at this point is: what is the relationship between substance dualism and property dualism? If substance dualism is true, does it follow logically that property dualism must be true as well? How about the other way around? Another distinction between kinds of dualism is worth making here. We noted above that Descartes thought that minds could cause effects in bodies, and vice versa. 
So, despite thinking that minds and bodies are different sorts of things, Descartes thought that minds and bodies could interact. For this reason, his view is sometimes called interactionist dualism. But not all dualists think this. Some dualists are epiphenomenalists: they think that mental events are caused by physical events, but that mental events never have any physical effects. So the line of causation always goes from physical to mental, and never in the reverse direction. Can you think of any reason why someone would find this view attractive? Why might it be preferable to interactionism? Does the view have any disadvantages? Can the epiphenomenalist, for example, give any explanation of how mental features could have evolved, if they never have any effects in the physical world? A third variety of dualism is parallelism, which is the view that, although mental and physical events run ‘in parallel’, there are no causal connections between them. Why might one be attracted to this view? How could the correlations between mental and physical events be explained by a parallelist, if at all?
yes
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://pubmed.ncbi.nlm.nih.gov/11822639/
The physiology of mind-body interactions: the stress response and ...
Abstract There are key differences between mind-body medicine and alternative medicine. A central tenet of mind-body medicine is the recognition that the mind plays a key role in health and that any presumed separation of mind and body is false. Alternative medicine, however, does not focus on the role of thoughts and emotions in health and, therefore, is separate from mind-body medicine. Also, while there has been little scientific research on alternative medicine, the literature on mind-body medicine comprises more than 2000 peer-reviewed studies published in the past 25 years. The groundwork for understanding the physiology of mind-body interactions was established by pioneering studies in the 1930s by Walter Cannon, and in the 1950s by Walter Hess and by Hans Selye that led to an understanding of the fight-or-flight response. Later work by Holmes and Rahe documented measurable relationships between stressful life events and illness. Other research has shown clinical improvement in patients treated with a placebo for a variety of medical problems. The effectiveness of placebo treatment can be interpreted as compelling evidence that expectation and belief can affect physiological response. Recent studies using spectral analysis and topographic electroencephalographic (EEG) mapping of the relaxation response demonstrate that by changing mental activity we can demonstrate measurable changes in central nervous system activity. These, and other, studies demonstrate that mind-body interactions are real and can be measured.
no
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://iep.utm.edu/dualism-and-mind/
Dualism and Mind | Internet Encyclopedia of Philosophy
Dualism and Mind Dualists in the philosophy of mind emphasize the radical difference between mind and matter. They all deny that the mind is the same as the brain, and some deny that the mind is wholly a product of the brain. This article explores the various ways that dualists attempt to explain this radical difference between the mental and the physical world. A wide range of arguments for and against the various dualistic options are discussed. Substance dualists typically argue that the mind and the body are composed of different substances and that the mind is a thinking thing that lacks the usual attributes of physical objects: size, shape, location, solidity, motion, adherence to the laws of physics, and so on. Substance dualists fall into several camps depending upon how they think mind and body are related. Interactionists believe that minds and bodies causally affect one another. Occasionalists and parallelists, generally motivated by a concern to preserve the integrity of physical science, deny this, ultimately attributing all apparent interaction to God. Epiphenomenalists offer a compromise theory, asserting that bodily events can have mental events as effects while denying that the reverse is true, avoiding any threat to the scientific law of conservation of energy at the expense of the common sense notion that we act for reasons. Property dualists argue that mental states are irreducible attributes of brain states. For the property dualist, mental phenomena are non-physical properties of physical substances. Consciousness is perhaps the most widely recognized example of a non-physical property of physical substances. Still other dualists argue that mental states, dispositions and episodes are brain states, although the states cannot be conceptualized in exactly the same way without loss of meaning. 
Dualists commonly argue for the distinction of mind and matter by employing Leibniz’s Law of Identity, according to which two things are identical if, and only if, they simultaneously share exactly the same qualities. The dualist then attempts to identify attributes of mind that are lacked by matter (such as privacy or intentionality) or vice versa (such as having a certain temperature or electrical charge). Opponents typically argue that dualism is (a) inconsistent with known laws or truths of science (such as the aforementioned law of thermodynamics), (b) conceptually incoherent (because immaterial minds could not be individuated or because mind-body interaction is not humanly conceivable), or (c) reducible to absurdity (because it leads to solipsism, the epistemological belief that one’s self is the only existence that can be verified and known). 1. Dualism The most basic form of dualism is substance dualism, which requires that mind and body be composed of two ontologically distinct substances. The term “substance” may be variously understood, but for our initial purposes we may subscribe to the account of a substance, associated with D. M. Armstrong, as what is logically capable of independent existence. (Armstrong, 1968, p. 7). According to the dualist, the mind (or the soul) is comprised of a non-physical substance, while the body is constituted of the physical substance known as matter. According to most substance dualists, mind and body are capable of causally affecting each other. This form of substance dualism is known as interactionism. Two other forms of substance dualism are occasionalism and parallelism. These theories are largely relics of history. The occasionalist holds that mind and body do not interact. They may seem to when, for example, we hit our thumb with a hammer and a painful and distressing sensation occurs. Occasionalists, like Malebranche, assert that the sensation is not caused by the hammer and nerves, but instead by God. 
God uses the occasion of environmental happenings to create appropriate experiences. According to the parallelist, our mental and physical histories are coordinated so that mental events appear to cause physical events (and vice versa) by virtue of their temporal conjunction, but mind and body no more interact than two clocks that are synchronized so that the one chimes when hands of the other point out the new hour. Since this fantastic series of harmonies could not possibly be due to mere coincidence, a religious explanation is advanced. God does not intervene continuously in creation, as the occasionalist holds, but builds into creation a pre-established harmony that largely eliminates the need for future interference. Another form of dualism is property dualism. Property dualists claim that mental phenomena are non-physical properties of physical phenomena, but not properties of non-physical substances. Some forms of epiphenomenalism fall into this category. According to epiphenomenalism, bodily events or processes can generate mental events or processes, but mental phenomena do not cause bodily events or processes (or, on some accounts, anything at all, including other mental states). (McLaughlin, p. 277) Whether an epiphenomenalist thinks these mental epiphenomena are properties of the body or properties of a non-physical mental medium determines whether the epiphenomenalist is a property or substance dualist. Still other dualists hold not that mind and body are distinct ontologically, but our mentalistic vocabulary cannot be reduced to a physicalistic vocabulary. In this sort of dualism, mind and body are conceptually distinct, though the phenomena referred to by mentalistic and physicalistic terminology are coextensive. The following sections first discuss dualism as expounded by two of its primary defenders, Plato and Descartes. 
This is followed by additional arguments for and against dualism, with special emphasis on substance dualism, the historically most important and influential version of dualism. 2. Platonic Dualism in the Phaedo The primary source for Plato‘s views on the metaphysical status of the soul is the Phaedo, set on the final day of Socrates’ life before his self-administered execution. Plato (through the mouth of Socrates, his dramatic persona) likens the body to a prison in which the soul is confined. While imprisoned, the mind is compelled to investigate the truth by means of the body and is incapable (or severely hindered) of acquiring knowledge of the highest, eternal, unchanging, and non-perceptible objects of knowledge, the Forms. Forms are universals and represent the essences of sensible particulars. While encumbered by the body, the soul is forced to seek truth via the organs of perception, but this results in an inability to comprehend that which is most real. We perceive equal things, but not Equality itself. We perceive beautiful things but not Beauty itself. To achieve knowledge or insight into the pure essences of things, the soul must itself become pure through the practice of philosophy or, as Plato has Socrates provocatively put it in the dialogue, through practicing dying while still alive. The soul must struggle to disassociate itself from the body as far as possible and turn its attention toward the contemplation of intelligible but invisible things. Though perfect understanding of the Forms is likely to elude us in this life (if only because the needs of the body and its infirmities are a constant distraction), knowledge is available to pure souls before and after death, which is defined as the separation of the soul from the body. a. The Argument From Opposites Plato’s Phaedo contains several arguments in support of his contention that the soul can exist without the body. 
According to the first of the Phaedo‘s arguments, the Argument from Opposites, things that have an opposite come to be from their opposite. For example, if something comes to be taller, it must come to be taller from having been shorter; if something comes to be heavier, it must come to be so by first having been lighter. These processes can go in either direction. That is, things can become taller, but they also can become shorter; things can become sweeter, but also more bitter. In the Phaedo, Socrates notes that we awaken from having been asleep and go to sleep from having been awake. Similarly, since dying comes from living, living must come from dying. Thus, we must come to life again after we die. During the interim between death and rebirth the soul exists apart from the body and has the opportunity to glimpse the Forms unmingled with matter in their pure and undiluted fullness. Death liberates the soul, greatly increasing its apprehension of truth. As such, the philosophical soul is unafraid to die and indeed looks forward to death as to liberation. b. The Argument From Recollection A second argument from the Phaedo is the Argument from Recollection. Socrates argues that the soul must exist prior to birth because we can recollect things that could not have been learned in this life. For example, according to Socrates we realize that equal things can appear to be unequal or can be equal in some respects but not others. People can disagree about whether two sticks are equal. They may disagree about if they are equal in length, weight, color, or even whether they are equally “sticks.” The Form of Equality—Equality Itself—can never be or appear unequal. According to Socrates, we recognize that the sticks are unequal and that they are striving to be equal but are nevertheless deficient in terms of their equality. Now, if we can notice that the sticks are unequal, we must comprehend what Equality is. 
Just as I could not recognize that a portrait was a poor likeness of your grandfather unless I already knew what your grandfather looked like, I cannot recognize that the sticks are unequal by means of the senses, without an understanding of the Form of Equality. We begin to perceive at birth or shortly thereafter. Hence, the soul must have existed prior to birth. It existed before it acquired a body. (A similar argument is developed in Plato’s Meno (81a-86b).)

c. The Argument From Affinity

A third argument from the Phaedo is the Argument from Affinity. Socrates claims that things that are composite are more liable to be destroyed than things that are simple. The Forms are true unities and therefore least likely ever to be annihilated. Socrates then posits that invisible things such as Forms are not apt to be disintegrated, whereas visible things, which all consist of parts, are susceptible to decay and corruption. Since the body is visible and composite, it is subject to decomposition. The soul, on the other hand, is invisible. The soul also becomes like the Forms if it is steadfastly devoted to their consideration and purifies itself by having no more association with the body than necessary. Since the invisible things are the durable things, the soul, being invisible, must outlast the body. Further, the philosophical soul, which becomes Form-like, is immortal and survives the death of the body.

d. Criticisms of the Platonic Arguments

Some of these arguments are challenged even in the Phaedo itself by Socrates’ friends Simmias and Cebes, and the general consensus among modern philosophers is that the arguments fail to establish the immortality of the soul and its independence and separability from the body. (Traces of the Affinity argument in a more refined form will be observed in Descartes below.) The Argument from Opposites applies only to things that have an opposite and, as Aristotle notes, substances have no contraries.
Further, even if life comes from what is itself not alive, it does not follow that the living human comes from the union of a dead (i.e. separated) soul and a body. The principle that everything comes to be from its opposite via a two-directional process cannot hold up to critical scrutiny. Although one becomes older from having been younger, there is no corresponding reverse process leading the older to become younger. If aging is a uni-directional process, perhaps dying is as well. Cats and dogs come to be from cats and dogs, not from the opposites of these (if they have opposites). The Arguments from Recollection and Affinity, on the other hand, presuppose the existence of Forms and are therefore no more secure than the Forms themselves (as Socrates notes in the Phaedo at 76d-e). We turn now to Descartes’ highly influential defense of dualism in the early modern period.

3. Descartes’ Dualism

The most famous philosophical work of René Descartes is the Meditations on First Philosophy (1641). In the Sixth Meditation, Descartes calls the mind a thing that thinks and not an extended thing. He defines the body as an extended thing and not a thing that thinks (1980, p. 93). “But what then am I? A thing that thinks. What is that? A thing that doubts, understands, affirms, denies, wills, refuses, and which also imagines and senses.” (1980, p. 63). He expands on the notion of extension in the Fifth Meditation saying, “I enumerate the [extended] thing’s various parts. I ascribe to these parts certain sizes, shapes, positions, and movements from place to place; to these movements I ascribe various durations” (1980, p. 85). Bodies, but not minds, are describable by predicates denoting entirely quantifiable qualities and hence bodies are fit objects for scientific study. Having thus supplied us with the meanings of “mind” and “body,” Descartes proceeds to state his doctrine: “I am present to my body not merely in the way a seaman is present to his ship, but . . .
I am tightly joined and, so to speak, mingled together with it, so much so that I make up one single thing with it” (1980, p. 94). The place where this “joining” was believed by Descartes to be especially true was the pineal gland—the seat of the soul. “Although the soul is joined to the whole body, there is yet in the body a certain part in which it seems to exercise its functions more specifically than in all the others. . . I seem to find evidence that the part of the body in which the soul exercises its functions immediately is. . . solely the innermost part of the brain, namely, a certain very small gland.” (1952, p. 294). When we wish to “move the body in any manner, this volition causes the gland to impel the spirits towards the muscles which bring about this effect” (1952, p. 299). Conversely, the body is also able to influence the soul. Light reflected from the body of an animal and entering through our two eyes “form but one image on the gland, which, acting immediately on the soul, causes it to see the shape of the animal.” (1952, pp. 295-96). It is clear, then, that Descartes held to a form of interactionism, believing that mental events can sometimes cause bodily events and that bodily events can sometimes cause mental events. (This reading of Descartes-as-interactionist has recently been challenged. See Baker and Morris (1996). Also, Daniel Garber suggests that Descartes is a quasi-occasionalist, permitting minds to act on bodies, but invoking God to explain the actions of inanimate bodies on each other and phenomena where bodies act on minds, such as sensation. See Garber, 2001, ch. 10).

a. The Argument From Indivisibility

Descartes’ primary metaphysical justification of the distinction of mind and body is the Argument from Indivisibility. He writes, “there is a great difference between a mind and a body, because the body, by its very nature, is something divisible, whereas the mind is plainly indivisible. . .
insofar as I am only a thing that thinks, I cannot distinguish any parts in me. . . . Although the whole mind seems to be united to the whole body, nevertheless, were a foot or an arm or any other bodily part amputated, I know that nothing would be taken away from the mind. . .” (1980, p. 97). Descartes argues that the mind is indivisible because it lacks extension. The body, as an object that takes up space, can always be divided (at least conceptually), whereas the mind is simple and non-spatial. Since the mind and body have different attributes, they must not be the same thing, their “unity” notwithstanding. This Indivisibility Argument makes use of Leibniz’s Law of Identity: two things are the same if, and only if, they have all of the same properties at the same time. More formally, x is identical to y if, and only if, for any property p had by x at time t, y also has p at t, and vice versa. Descartes uses Leibniz’s Law to show that the mind and body are not identical because they do not have all of the same properties. An illustration (for present purposes a property can be considered anything that may be predicated of a subject): If the man with the martini is the mayor, it must be possible to predicate all and only the same properties of both “the man” and “the mayor,” including occupying (or having bodies that occupy) the same exact spatial location at the same time. Since divisibility may be predicated of bodies (and all of their parts, such as brains) and may not be predicated of minds, Leibniz’s Law suggests that minds cannot be identical to bodies or any of their parts or systems. Although it makes sense to speak of the left or right half of the brain, it makes no sense to speak of half of a desire, several pieces of a headache, part of joy, or two-thirds of a belief. What is true of mental states is held to be true of the mind that has the states as well.
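The structure of the argument just described can be set out schematically. The following is only a sketch in standard first-order notation; the symbols $m$ (the mind), $b$ (the body), and $D$ (the property of being divisible) are introduced here for illustration and are not Descartes' own:

```latex
% Leibniz's Law (indiscernibility of identicals):
\[ x = y \;\rightarrow\; \forall P\,\big(Px \leftrightarrow Py\big) \]
% Contrapositive, the form the Indivisibility Argument employs:
\[ \exists P\,\big(Px \wedge \neg Py\big) \;\rightarrow\; x \neq y \]
% Applied with P := D ("is divisible"):
\[ Db,\quad \neg Dm \;\;\vdash\;\; m \neq b \]
```

On this rendering the argument is formally valid; the philosophical work lies in defending the premise that divisibility cannot be predicated of minds.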
In the synopsis of the Meditations, Descartes writes, “we cannot conceive of half a soul, as we can in the case of any body, however small.” (1980, p. 52). The mind has many ideas, but they are all ideas of one indivisible mind.

b. Issues Raised by the Indivisibility Argument

John Locke argued that awareness is rendered discontinuous by intervals of sleep, anesthesia, or unconsciousness. (Bk.II, ch.I, sect.10). Is awareness then divisible? Locke suggests that the mind cannot exhibit temporal discontinuity and also have thought as its essence. But even if Descartes was wrong to consider the mind an essentially thinking thing, the concept of mind is not reduced to vacuity if some other, positive characteristic can be found by which to define it. But what might that be? (Without some such means of characterizing the mind it would be defined entirely negatively and we would have no idea what it is). Against Locke, dualists can argue in several ways. (1) That the mind has both conscious and unconscious thoughts and that Locke’s argument shows only that the mind is not always engaged in conscious reflection, though it may be perpetually busy at the unconscious level. Locke argues that such a maneuver creates grave difficulties for personal identity (Bk.II, ch.I, sect.11), however, and denies that thoughts can exist unperceived. (2) Dualists can argue that the soul always thinks, but that the memory fails to preserve those thoughts when asleep or under anesthesia. (3) Dualists can argue that the Lockean observation is not relevant to the Argument from Indivisibility because the discontinuity Locke identifies in consciousness is not a spatial discontinuity but a temporal one. The Argument from Indivisibility seeks to show that bodies but not minds are spatially divisible and that argument is not rebutted by pointing out that consciousness is temporally divisible.
(Indeed, if minds are temporally divisible and bodies are not, we have an argument for dualism of a different sort). David Hume, on the other hand, questioned of what the unity of consciousness might consist. The Indivisibility Argument suggests that the mind is a simple unity. Hume finds no reason to grant or assume that the diversity of our experiences (whether visual perception, pain or active thinking and mathematical apprehension) constitute a unity rather than a diversity. For Hume, all introspection reveals is the presence of various impressions and ideas, but does not reveal a subject in which those ideas inhere. Accordingly, if observation is to yield knowledge of the self, the self can consist in nothing but a bundle of perceptions. Even talk of a “bundle” is misleading if that suggests an empirically discoverable internal unity. Thus, Descartes’ commitment to a res cogitans or thing which thinks is unfounded and substance dualism is undermined. (For a contrary view on what constitutes the unity of the self, see Madell’s view that, “What unites all of my experiences…is simply that they all have the irreducible and unanalyzable property of ‘mineness,'” in Nagel, 1986, p. 34, n. 5). Immanuel Kant replied to Hume that we must suppose or posit the unity of the ego (which he called the “transcendental unity of apperception”) as a preliminary to all experience since without such a unity the manifold of sense-data (or “sensibility”) could not constitute, for example, the experience of seeing a clock. However, Kant agreed that we must not mistake the unity of apperception for the perception of unity—that is, the perception of a unitary thing or substance. Kant also argued that there is little reason to suppose that the mind or ego cannot be destroyed despite its unity since its powers may gradually attenuate to the point where they simply fade away. 
The mind need not be separated into non-physical granules to be destroyed since it can suffer a kind of death through loss of its powers. Awareness, perception, memory and the like admit of degrees. If the degree of consciousness decreases to zero, then the mind is effectively annihilated. Even if, as Plato and Descartes agree, the mind is not divisible, it does not follow that it survives (or could survive) separation from the body. Additionally, if the mind is neither physical nor identical to its inessential characteristics (1980, p. 53), it is impossible to distinguish one mind from another. Kant argues that two substances that are otherwise identical can be differentiated only by their spatial locations. If minds are not differentiated by their contents and have no spatial positions to distinguish them, there remains no basis for individuating their identities. (On numerically individuating non-physical substances, see Armstrong, 1968, pp. 27-29. For a general discussion of whether the self is a substance, see Shoemaker, 1963, ch. 2).

c. The Argument From Indubitability

Descartes’ other major argument for dualism in the Meditations derives from epistemological considerations. After taking up his celebrated method of doubt, which commits him to reject as false anything that is in the slightest degree uncertain, Descartes finds that the entirety of the physical world is uncertain. Perhaps, after all, it is nothing but an elaborate phantasm wrought by an all-powerful and infinitely clever, but deceitful, demon. Still, he cannot doubt his own existence, since he must exist to doubt. Because he thinks, he is. But he cannot be his body, since that identity is doubtful and possibly altogether false. Therefore, he is a non-bodily “thinking thing,” or mind.
As Richard Rorty puts it: “If we look in Descartes for a common factor which pains, dreams, memory-images, and veridical and hallucinatory perceptions share with concepts of (and judgments about) God, number, and the ultimate constituents of matter, we find no explicit doctrine. . . . The answer I would give to the question ‘What did Descartes find?’ is ‘Indubitability’” (1979, p. 54). In sum, I cannot doubt the existence of my mind, but I can doubt the existence of my body. Since what I cannot doubt cannot be identical to what I can doubt (by Leibniz’s Law), mind and body are not identical and dualism is established. This argument is also featured in Descartes’ Discourse on Method part four: “[S]eeing that I could pretend that I had no body and that there was no world nor any place where I was, but that I could not pretend, on that account, that I did not exist; and that, on the contrary, from the very fact that I thought about doubting the truth of other things, it followed very evidently and very certainly that I existed. . . . From this I knew that I was a substance the whole essence or nature of which was merely to think, and which, in order to exist, needed no place and depended on no material things. Thus this ‘I,’ that is, the soul through which I am what I am, is entirely distinct from the body. . .” (1980, p. 18). The Argument from Indubitability has been maligned in the philosophical literature from the very beginning. Most famously, Arnauld comments in the objections originally published with the Meditations that, “Just as a man errs in not believing that the equality of the square on its base to the squares on its sides belongs to the nature of that triangle, which he clearly and distinctly knows to be right angled, so why am I perhaps not in the wrong in thinking that nothing else belongs to my nature which I clearly and distinctly know to be something that thinks, except the fact that I am this thinking being?
Perhaps it also belongs to my essence to be something extended.” (1912, p. 84). Suppose that I cannot doubt whether a given figure is a triangle, but can doubt whether its interior angles add up to two right angles. It does not follow from this that the number of degrees in triangles may be more or less than 180. This is because the doubt concerning the number of degrees in a triangle is a property of me, not of triangles. Similarly, my doubt about my body is a property of me, not of my body; it belongs to whatever part of me it is that doubts, and that “whatever” may be something extended. The dualist can reply in two ways. First, he or she may argue that, while doubting the body is not a property of bodies, being doubtable is a property of bodies. Since bodies have the property of being doubtable, and minds do not, by Leibniz’s Law the diversity of the two is established. Second, the dualist may reply that it is always possible to doubt whether the figure before me is a triangle. As such, Arnauld’s supposedly parallel argument is not parallel at all. Similar objections are open against other, more recent rebuttals to Descartes’ argument. Consider, for example, the following parallel argument from Paul Churchland (1988, p. 32): I cannot doubt that Muhammad Ali was a famous heavyweight boxer but can doubt that Cassius Clay was a famous heavyweight boxer. Following Descartes, it ought to be that Ali is not Clay (though in fact Clay was a famous heavyweight and identical to Ali). By way of reply, surely it is possible for an evil demon to deceive me about whether Muhammad Ali was a famous heavyweight boxer. So, the dualist might insist, the case of mind is unique in its immunity from doubt. It is only with reference to our own mental states that we can be said to know incorrigibly.

d. The Real Distinction Argument

A third argument in the Meditations maintains that the mind and body must really be separate because Descartes can conceive of the one without the other. Since he can clearly and distinctly understand the body without the mind and vice versa, God could really have created them separately. But if the mind and body can exist independently, they must really be independent, for nothing can constitute a part of the essence of a thing that can be absent without the thing itself ceasing to be. If the essence of the mind is incorporeal, so must be the mind itself.

4. Other Leibniz’s Law Arguments for Dualism

a. Privacy and First Person Authority

As noted earlier, dualists have argued for their position by employing Leibniz’s Law in many ingenious ways. The general strategy is to identify some property or feature indisputably had by mental phenomena but not attributable in any meaningful way to bodily or nervous phenomena, or vice versa. For example, some have suggested that mental states are private in the sense that only those who possess them can know them directly. If I desire an apple, I know that I have this desire “introspectively.” Others can know of my desire only by means of my verbal or non-verbal behavior or, conceivably, by inspection of my brain. (The latter assumes a correlation, if not an identity, between nervous and mental states or events). My linguistic, bodily and neural activities are public in the sense that anyone suitably placed can observe them. Since mental states are private to their possessors, but brain states are not, mental states cannot be identical to brain states. (Rey pp. 55-56). A closely related argument emphasizes that my own mental states are knowable without inference; I know them “immediately.” (Harman, 1973, pp. 35-37). Others can know my mental states only by making inferences based on my verbal, non-verbal or neurophysiological activity.
You may infer that I believe it will rain from the fact that I am carrying an umbrella, but I do not infer that I believe it will rain from noticing that I am carrying an umbrella. I do not need to infer my mental states because I know them immediately. Since mental states are knowable without inference in the first person case, but are knowable (or at least plausibly assigned) only by inference in the third person case, we have an authority or incorrigibility with reference to our own mental states that no one else could have. Since beliefs about the physical world are always subject to revision (our inferences or theories could be mistaken), mental states are not physical states.

b. Intentionality

Some mental states exhibit intentionality. Intentional mental states include, but are not limited to, intendings, such as plans to buy milk at the store. They are states that are about, of, for, or towards things other than themselves. Desires, beliefs, loves, hates, perceptions and memories are common intentional states. For example, I may have a desire for an apple; I may have love for or towards my neighbor; I may have a belief about republicans or academics; or I may have memories of my grandfather. The dualist claims that brain states, however, cannot plausibly be ascribed intentionality. How can a pattern of neural firings be of or about or towards anything other than itself? As a purely physical event, an influx of sodium ions through the membrane of a neural cell creating a polarity differential between the inside and outside of the cell wall, and hence an electrical discharge, cannot be of Paris, about my grandfather, or for an apple. [Although Brentano goes further than most contemporary philosophers in regarding all mental phenomena as intentional, he argues that “the reference to something as an object is a distinguishing characteristic of all mental phenomena. No physical phenomena exhibits anything similar.” (Brentano, 1874/1973, p.
97, quoted in Rey, 1997, p. 23).] Thus, by Leibniz’s Law, if minds are capable of intentional states and bodies are not, minds and bodies must be distinct. (Taylor, pp. 11-12; Rey pp. 57-59).

c. Truth and Meaning

Another attempt to derive dualism by means of Leibniz’s Law observes that some mental states, especially beliefs, have truth-values. My belief that it will rain can be either true or false. But, the dualist may urge, as a purely physical event, an electrical or chemical discharge in the brain cannot be true or false. Indeed, it lacks not only truth, but also linguistic meaning. Since mental states such as beliefs possess truth-value and semantics, it seems incoherent to attribute these properties to bodily states. Thus, mental states are not bodily states. Presumably, then, the minds that have these states are also non-physical. (Churchland, 1988, p. 30; Taylor, 1983, p. 12).

d. Problems with Leibniz’s Law Arguments for Dualism

Although each of these arguments for dualism may be criticized individually, they are typically thought to share a common flaw: they assume that because some aspect of mental states, such as privacy, intentionality, truth, or meaning cannot be attributed to physical substances, they must be attributable to non-physical substances. But if we do not understand how such states and their properties can be generated by the central nervous system, we are no closer to understanding how they might be produced by minds. (Nagel, 1986, p. 29). The question is not, “How do brains generate mental states that can only be known directly by their possessors?” Rather, the relevant question is “How can any such thing as a substance, of whatever sort, do these things?” The mystery is as great when we posit a mind as the basis of these operations or capacities as when we attribute them to bodies.
Dualists cannot explain the mechanisms by which souls generate meaning, truth, intentionality or self-awareness. Thus, dualism creates no explanatory advantage. As such, we should use Ockham’s razor to shave off the spiritual substance, because we ought not to multiply entities beyond what is necessary to explain the phenomena. Descartes’ prodigious doubt notwithstanding, we have excellent reasons for thinking that bodies exist. If the only reasons for supposing that non-physical minds exist are the phenomena of intentionality, privacy and the like, then dualism unnecessarily complicates the metaphysics of personhood. On the other hand, dualists commonly argue that it makes no sense to attribute some characteristics of body to mind; that to do so is to commit what Gilbert Ryle called a “category mistake.” For example, it makes perfect sense to ask where the hypothalamus is, but not, in ordinary contexts, to ask where my beliefs are. We can ask how much the brain weighs, but not how much the mind weighs. We can ask how many miles per hour my body is moving, but not how many miles per hour my mind is moving. Minds are just not the sorts of things that can have size, shape, weight, location, motion, and the other attributes that Descartes ascribes to extended reality. We literally could not understand someone who informed us that the memories of his last holiday are two inches behind the bridge of his nose or that his perception of the color red is straight back from his left eye. If these claims are correct, then some Leibniz’s Law arguments for dualism are not obviously vulnerable to the critique above.

5. The Free Will and Moral Arguments

Another argument for dualism claims that dualism is required for free will. If dualism is false, then presumably materialism, the thesis that humans are entirely physical beings, is true. (We set aside consideration of idealism—the thesis that only minds and ideas exist).
If materialism were true, then every motion of bodies should be determined by the laws of physics, which govern the actions and reactions of everything in the universe. But a robust sense of freedom presupposes that we are free, not merely to do as we please, but that we are free to do otherwise than as we do. This, in turn, requires that the cause of our actions not be fixed by natural laws. Since, according to the dualist, the mind is non-physical, there is no need to suppose it bound by the physical laws that govern the body. So, a strong sense of free will is compatible with dualism but incompatible with materialism. Since freedom in just this sense is required for moral appraisal, the dualist can also argue that materialism, but not dualism, is incompatible with ethics. (Taylor, 1983, p. 11; cf. Rey, 1997, pp. 52-53). This, the dualist may claim, creates a strong presumption in favor of their metaphysics. This argument is sometimes countered by arguing that free will is actually compatible with materialism or that even if the dualistic account of the will is correct, it is irrelevant because no volition on the part of a non-physical substance could alter the course of nature anyway. As Bernard Williams puts it, “Descartes’ distinction between two realms, designed to insulate responsible human action from mechanical causation, insulated the world of mechanical causation, that is to say, the whole of the external world, from responsible human action. Man would be free only if there was nothing he could do.” (1966, p. 7). Moreover, behaviorist opponents argue that if dualism is true, moral appraisal is meaningless since it is impossible to determine another person’s volitions if they are intrinsically private and otherworldly.

6. Property Dualism

Property dualists claim that mental phenomena are non-physical properties of physical phenomena, but not properties of non-physical substances.
Property dualists are not committed to the existence of non-physical substances, but are committed to the irreducibility of mental phenomena to physical phenomena. An argument for property dualism, derived from Thomas Nagel and Saul Kripke, is as follows: We can assert that warmth is identical to mean kinetic molecular energy, despite appearances, by claiming that warmth is how molecular energy is perceived or manifested in consciousness. Minds detect molecular energy by experiencing warmth; warmth “fixes the reference” of heat. (“Heat” is a rigid designator of molecular motion; “the sensation of heat” is a non-rigid designator.) Similarly, color is identical to electromagnetic reflectance efficiencies, inasmuch as color is how electromagnetic wavelengths are processed by human consciousness. In these cases, the appearance can be distinguished from the reality. Heat is molecular motion, though it appears to us as warmth. Other beings, for example, Martians, might well apprehend molecular motion in another fashion. They would grasp the same objective reality, but by correlating it with different experiences. We move toward a more objective understanding of heat when we understand it as molecular energy rather than as warmth in our case, or as whatever it appears to them to be in theirs. Consciousness itself, however, cannot be reduced to brain activity along analogous lines because we should then need to say that consciousness is how brain activity is perceived in consciousness, leaving consciousness unreduced. Put differently, when it comes to consciousness, the appearance is the reality. Therefore, no reduction is possible. Nagel writes: Experience . . . does not seem to fit the pattern. The idea of moving from appearance to reality seems to make no sense here.
What is the analogue in this case to pursuing a more objective understanding of the same phenomena by abandoning the initial subjective viewpoint toward them in favor of another that is more objective but concerns the same thing? Certainly it appears unlikely that we will get closer to the real nature of human experience by leaving behind the particularity of our human point of view and striving for a description in terms accessible to beings that could not imagine what it was like to be us. (Nagel 1974; reprinted in Block et al., p. 523). Consciousness is thus sui generis (of its own kind), and successful reductions elsewhere should give us little confidence when it comes to experience. Some property dualists, such as Jaegwon Kim, liken “having a mind” to “a property, capacity, or characteristic that humans and some higher animals possess in contrast with things like pencils and rocks. . . . Mentality is a broad and complex property.” (Kim, 1996, p. 5). Kim continues: “[Some properties] are physical, like having a certain mass or temperature, being 1 meter long, and being heavier than. Some things—in particular, persons and certain biological organisms—can also instantiate mental properties, like being in pain and liking the taste of avocado.” (p. 6). Once we admit the existence of mental properties, we can inquire into the nature of the relationship between mental and physical properties. According to the supervenience thesis, there can be no mental differences without corresponding physical differences. If, for example, I feel a headache, there must be some change not only in my mental state, but also in my body (presumably, in my brain). If Mary is in pain, but Erin is not, then, according to the supervenience thesis, there must be a physical difference between Mary and Erin. For example, Mary’s c-fibers are firing and Erin’s are not.
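The supervenience thesis just stated can be put schematically. The following is a sketch only, with $P$ ranging over physical properties and $M$ over mental properties (notation introduced here for illustration):

```latex
% Supervenience: no mental difference without a physical difference.
\[ \forall x\,\forall y\,\Big( \forall P\,\big(Px \leftrightarrow Py\big)
   \;\rightarrow\; \forall M\,\big(Mx \leftrightarrow My\big) \Big) \]
% Equivalently, by contraposition: if x and y differ mentally,
% they must differ physically.
\[ \exists M\,\big(Mx \wedge \neg My\big) \;\rightarrow\;
   \exists P\,\big(Px \wedge \neg Py\big) \]
```

So if Mary is in pain and Erin is not, some physical property must distinguish them; the zombie and inverted-qualia scenarios are attempts to deny that this conditional holds of necessity.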
If this is true, it is possible to argue for a type of property dualism by arguing that some mental states or properties, especially the phenomenal aspects of consciousness, do not “supervene on” physical states or properties in regular, lawlike ways. (Kim, p. 169). Why deny supervenience? Because it seems entirely conceivable that there could exist a twin Earth where all of the physical properties that characterize the actual world are instantiated and are interrelated as they are here, but where the inhabitants are “zombies” without experience, or where the inhabitants have inverted qualia relative to their true-Earth counterparts. If it is possible to have mental differences without physical differences, then mental properties cannot be identical to or reducible to physical properties. They would exist as facts about the world over and above the purely physical facts. Put differently, it always makes sense to wonder “why we exist and not zombies.” (Chalmers, 1996, p. 110). (Kim, p. 169 and following; Kripke, 1980, throughout; Chalmers, 1996, throughout, but esp. chs. 3 & 4). Some have attempted to rebut this “conceivability argument” by noting that the fact that we can ostensibly imagine such a zombie world does not mean that it is possible. Without the genuine possibility of such a world, the argument that mental properties do not supervene on physical properties fails. A second rebuttal avers that absent qualia thought experiments (and inverted spectra thought experiments) only support property dualism if we can imagine these possibilities obtaining. Perhaps we only think we can conceive a zombie world; our attempts to do so may not actually achieve such a conception. To illustrate, suppose that Goldbach’s Conjecture is true. If it is, its truth is necessary.
If, then, someone thought that they imagined a proof that the thesis is false, they would be conceiving the falsity of what is in reality a necessary truth. This is implausible. What we should rather say in such a case is that the person was mistaken, and that what they imagined false was not Goldbach’s Conjecture after all, or that the “proof” that was imagined was in fact no proof, or that what they were really imagining was something like an excited mathematician shouting, “Eureka! So it’s false then!” Perhaps it is likewise when we “conceive” a zombie universe. We may be mistaken about what it is that we are actually “picturing” to ourselves.

Against this objection, however, one could argue that there are independent grounds for thinking that the truth-value of Goldbach’s Conjecture is necessary and no independent reasons for thinking that zombie worlds are impossible; therefore, the dualist deserves the benefit of the doubt. But perhaps the physicalist can come up with independent reasons for supposing that the dualist has failed to imagine what she claims. The physicalist can point, for example, to successful reductions in other areas of science. On the basis of these cases she can argue the implausibility of supposing that, uniquely, mental phenomena resist reduction to the causal properties of matter. That is, an inductive argument for reduction outweighs a conceivability argument against reduction. And in that case, the dualist must do more than merely insist that she has correctly imagined inverted spectra in isomorphic individuals. (For useful discussions of some of these issues, see Tye 1986 and Horgan 1987.)

7. Objections to Dualism Motivated by Scientific Considerations

The Ockham’s Razor argument creates a strong methodological presumption against dualism, suggesting that the mind-body split multiplies entities unnecessarily in much the way that a demon theory of disease complicates the metaphysics of medicine compared to a germ theory.
It is often alleged, more broadly, that dualism is unscientific and renders impossible any genuine science of mind or truly empirical psychology.

a. Arguments from Human Development

Those eager to defend the relevance of science to the study of mind, such as Paul Churchland, have argued that dualism is inconsistent with the facts of human evolution and fetal development. (1988, pp. 27-28; see also Lycan, 1996, p. 168). According to this view, we began as wholly physical beings. This is true of the species and the individual human. No one seriously supposes that newly fertilized ova are imbued with minds or that the original cell in the primordial sea was conscious. But from those entirely physical origins, nothing non-physical was later added. We can explain the evolution from the unicellular stage to present complexities by means of random mutations and natural selection in the species case and through the accretion of matter through nutritional intake in the individual case. But if we, as species or individuals, began as wholly physical beings and nothing nonphysical was later added, then we are still wholly physical creatures. Thus, dualism is false.

The above arguments are only as strong as our reasons for thinking that we began as wholly material beings and that nothing non-physical was later added. Some people, particularly the religious, will object that macro-evolution of a species is problematic or that God might well have infused the developing fetus with a soul at some point in the developmental process (traditionally at quickening). Most contemporary philosophers of mind put little value in these rejoinders.

b. The Conservation of Energy Argument

Others argue that dualism is scientifically unacceptable because it violates the well-established principle of the conservation of energy. Interactionists argue that mind and matter causally interact.
But if the spiritual realm is continually impinging on the universe and effecting changes, the total level of energy in the cosmos must be increasing or at least fluctuating. This is because it takes physical energy to do physical work. If the will alters states of affairs in the world (such as the state of my brain), then mental energy is somehow converted into physical energy. At the point of conversion, one would anticipate a physically inexplicable increase in the energy present within the system. If it also takes material energy to activate the mind, then “physical energy would have to vanish and reappear inside human brains.” (Lycan, 1996, 168).

Dualists basically have three ways of replying. First, they could deny the sacredness of the principle of the conservation of energy. This would be a desperate measure. The principle is too well established and its denial too ad hoc. Second, the dualist might offer that mind does contribute energy to our world, but that this addition is so slight, in relation to our means of detection, as to be negligible. This is really a re-statement of the first reply above, except that here the principle is valid in so far as it is capable of verification. Science can continue as usual, but it would be unreasonable to extend the law beyond our ability to confirm it experimentally. That would be to step from the empirical to the speculative—the very thing that the materialist objects to in dualism. The third option sidesteps the issue by appealing to another, perhaps equally valid, principle of physics. Keith Campbell (1970) writes:

The indeterminacy of quantum laws means that any one of a range of outcomes of atomic events in the brain is equally compatible with known physical laws. And differences on the quantum scale can accumulate into very great differences in overall brain condition. So there is some room for spiritual activity even within the limits set by physical law.
There could be, without violation of physical law, a general spiritual constraint upon what occurs inside the head. (p. 54)

Mind could act upon physical processes by “affecting their course but not breaking in upon them” (1970, p. 54). If this is true, the dualist could maintain the conservation principle but deny a fluctuation in energy because the mind serves to “guide” or control neural events by choosing one set of quantum outcomes rather than another. Further, it should be remembered that the conservation of energy is designed around material interaction; it is mute on how mind might interact with matter. After all, a Cartesian rationalist might insist, if God exists we surely wouldn’t say that He couldn’t do miracles just because that would violate the first law of thermodynamics, would we?

c. Problems of Interaction

The conservation of energy argument points to a more general complaint often made against dualism: that interaction between mental and physical substances would involve a causal impossibility. Since the mind is, on the Cartesian model, immaterial and unextended, it can have no size, shape, location, mass, motion or solidity. How then can minds act on bodies? What sort of mechanism could convey information of the sort bodily movement requires, between ontologically autonomous realms? To suppose that non-physical minds can move bodies is like supposing that imaginary locomotives can pull real boxcars. Put differently, if mind-body interaction is possible, every voluntary action is akin to the paranormal power of telekinesis, or “mind over matter.” If minds can, without spatial location, move bodies, why can my mind move immediately only one particular body and no others? Confronting the conundrum of interaction implicit in his theory, Descartes posited the existence of “animal spirits” somewhat subtler than bodies but thicker than minds.
Unfortunately, this expedient proved a dead-end, since it is as incomprehensible how the mind could initiate motion in the animal spirits as in matter itself. These problems involved in mind-body causality are commonly considered decisive refutations of interactionism. However, many interesting questions arise in this area. We want to ask: “How is mind-body interaction possible? Where does the interaction occur? What is the nature of the interface between mind and matter? How are volitions translated into states of affairs? Aren’t minds and bodies insufficiently alike for the one to effect changes in the other?” It is useful to be reminded, however, that to be bewildered by something is not in itself to present an argument against, or even evidence against, the possibility of that thing being a matter of fact. To ask “How is it possible that . . . ?” is merely to raise a topic for discussion. And if the dualist doesn’t know or cannot say how minds and bodies interact, what follows about dualism? Nothing much. It only follows that dualists do not know everything about metaphysics. But so what? Psychologists, physicists, sociologists, and economists don’t know everything about their respective disciplines. Why should the dualist be any different? In short, dualists can argue that they should not be put on the defensive by the request for clarification about the nature and possibility of interaction or by the criticism that they have no research strategy for producing this clarification. The objection that minds and bodies cannot interact can be the expression of two different sorts of view. On the one hand, the detractor may insist that it is physically impossible that minds act on bodies. If this means that minds, being non-physical, cannot physically act on bodies, the claim is true but trivial. 
If it means that mind-body interaction violates the laws of physics (such as the first law of thermodynamics, discussed above), the dualist can reply that minds clearly do act on bodies and so the violation is only apparent and not real. (After all, if we do things for reasons, our beliefs and desires cause some of our actions). If the materialist insists that we are able to act on our beliefs, desires and perceptions only because they are material and not spiritual, the dualist can turn the tables on his naturalistic opponents and ask how matter, regardless of its organization, can produce conscious thoughts, feelings and perceptions. How, the dualist might ask, by adding complexity to the structure of the brain, do we manage to leap beyond the quantitative into the realm of experience? The relationship between consciousness and brain processes leaves the materialist with a causal mystery perhaps as puzzling as that confronting the dualist. On the other hand, the materialist may argue that it is a conceptual truth that mind and matter cannot interact. This, however, requires that we embrace the rationalist thesis that causes can be known a priori. Many prefer to assert that causation is a matter for empirical investigation. We cannot, however, rule out mental causes based solely on the logic or grammar of the locutions “mind” and “matter.” Furthermore, in order to defeat interactionism by an appeal to causal impossibility, one must first refute the Humean equation of causal connection with regularity of sequence and constant conjunction. Otherwise, anything can be the cause of anything else. If volitions are constantly conjoined with bodily movements and regularly precede them, they are Humean causes. In short, if Hume is correct, we cannot refute dualism a priori by asserting that transactions between minds and bodies involve links where, by definition, none can occur. Some, such as Ducasse (1961, 88; cf. Dicker pp. 
217-224), argue that the interaction problem rests on a failure to distinguish between remote and proximate causes. While it makes sense to ask how depressing the accelerator causes the automobile to speed up, it makes no sense to ask how pressing the accelerator pedal causes the pedal to move. We can sensibly ask how to spell a word in sign language, but not how to move a finger. Proximate causes are “basic” and analysis of them is impossible. There is no “how” to basic actions, which are brute facts. Perhaps the mind’s influence on the pineal gland is basic and brute.

One final note: epiphenomenalism, like occasionalism and parallelism, is a dualistic theory of mind designed, in part, to avoid the difficulties involved in mental-physical causation (although occasionalism was also offered by Malebranche as an account of seemingly purely physical causation). According to epiphenomenalism, bodies are able to act on minds, but not the reverse. The causes of behavior are wholly physical. As such, we need not worry about how objects without mass or physical force can alter behavior. Nor need we be concerned with violations of the conservation of energy principle since there is little reason to suppose that physical energy is required to do non-physical work. If bodies effect modifications in the mental medium, that need not be thought to involve a siphoning of energy from the world to the psychic realm. On this view, the mind may be likened to the steam from a train engine; the steam does not affect the workings of the engine but is caused by it. Unfortunately, epiphenomenalism avoids the problem of interaction only at the expense of denying the common-sense view that our states of mind have some bearing on our conduct. For many, epiphenomenalism is therefore not a viable theory of mind. (For a defense of the common-sense claim that beliefs and attitudes and reasons cause behavior, see Donald Davidson.)

d. The Correlation and Dependence Arguments

The correlation and dependence argument against dualism begins by noting that there are clear correlations between certain mental events and neural events (say, between pain and a-fiber or c-fiber stimulation). Moreover, as demonstrated in such phenomena as memory loss due to head trauma or wasting disease, the mind and its capacities seem dependent upon neural function. The simplest and best explanation of this dependence and correlation is that mental states and events are neural states and events and that pain just is c-fiber stimulation. (This would be the argument employed by an identity theorist. A functionalist would argue that the best explanation for the dependence and correlation of mental and physical states is that, in humans, mental states are brain states functionally defined.)

Descartes himself anticipated an objection like this and argued that dependence does not strongly support identity. He illustrates by means of the following example: a virtuoso violinist cannot manifest his or her ability if given an instrument in deplorable or broken condition. The manifestation of the musician’s ability is thus dependent upon being able to use a well-tuned instrument in proper working order. But from the fact that the exhibition of the maestro’s skill is impossible without a functioning instrument, it hardly follows that being skilled at playing the violin amounts to no more than possessing such an instrument. Similarly, the interactionist can claim that the mind uses the brain to manifest its abilities in the public realm. If, like the violin, the brain is in a severely diseased or injured state, the mind cannot demonstrate its abilities; they of necessity remain private and unrevealed. However, for all we know, the mind still has its full range of abilities, but is hindered in its capacity to express them.
As for correlation, interactionism actually predicts that mental events are caused by brain events and vice versa, so the fact that perceptions are correlated with activity in the visual cortex does not support materialism over this form of dualism. Property dualists agree with the materialists that mental phenomena are dependent upon physical phenomena, since the former are (non-physical) attributes of the latter. Materialists are aware of these dualist replies and sometimes invoke Ockham’s razor and the importance of metaphysical simplicity in arguments to the best explanation. (See Churchland, 1988, p. 28). Other materialist responses will not be considered here.

8. The Problem of Other Minds

The problem of how we can know other minds has been used as follows to refute dualism. If the mind is not publicly observable, the existence of minds other than our own must be inferred from the behavior of the other person or organism. The reliability of this inference is deeply suspect, however, since we only know that certain mental states cause characteristic behavior from our own case. To extrapolate to the population as a whole from the direct inspection of a single example, our own case, is to make the weakest possible inductive generalization. Hence, if dualism is true, we cannot know that other people have minds at all. But common sense tells us that others do have minds. Since common sense can be trusted, dualism is false.

This problem of other minds, to which dualism leads so naturally, is often used to support rival theories such as behaviorism, the mind-brain identity theory, or functionalism (though functionalists sometimes claim that their theory is consistent with dualism). Since the mind, construed along Cartesian lines, leads to solipsism (that is, to the epistemological belief that one’s self is the only existence that can be verified and known), it is better to operationalize the mind and define mental states behaviorally, functionally, or physiologically.
If mental states are just behavioral states, brain states, or functional states, then we can verify that others have mental states on the basis of publicly observable phenomena, thereby avoiding skepticism about other selves. Materialist theories are far less vulnerable to the problem of other minds than dualist theories, though even here other versions of the problem stubbornly reappear. Deciding to define mental states behaviorally does not mean that mental states are behavioral, and it is controversial whether attempts to reduce mentality to behavioral, brain, or functional states have been successful. Moreover, the “Absent Qualia” argument claims that it is perfectly imaginable and consistent with everything that we know about physiology that, of two functionally or physiologically isomorphic beings, one might be conscious and the other not. Of two outwardly indistinguishable doppelgangers, one might have experience and the other none. Both would exhibit identical neural activity; both would insist that they can see the flowers in the meadow and deny that they are “blind”; both would be able to obey the request to go fetch a red flower; and yet only one would have experience. The other would be like an automaton. Consequently, it is sometimes argued, even a materialist cannot be wholly sure that other existing minds have experience of a qualitative (whence, “qualia”) sort. The problem for the materialist then becomes not the problem of other minds, but the problem of other qualia. The latter seems almost as severe an affront to common sense as the former. (For an interesting related discussion, see Churchland on eliminative materialism, 1988, pp. 43-49.)

9. Criticisms of the Mind as a Thinking Thing

We earlier observed that some philosophers, such as Hume, have objected that supposing that the mind is a thinking thing is not warranted since all we apprehend of the self by introspection is a collection of ideas but never the mind that purportedly has these ideas. All we are therefore left with is a stream of impressions and ideas but no persisting, substantial self to constitute personal identity. If there is no substratum of thought, then substance dualism is false. Kant, too, denied that the mind is a substance. Mind is simply the unifying factor that is the logical preliminary to experience.

The idea that the mind is not a thinking thing was revived in the twentieth century by philosophical behaviorists. According to Gilbert Ryle in his seminal 1949 work The Concept of Mind, “when we describe people as exercising qualities of mind, we are not referring to occult episodes of which their overt acts and utterances are effects; we are referring to those overt acts and utterances themselves.” (p. 25). Thus, “When a person is described by one or other of the intelligence epithets such as ‘shrewd’ or ‘silly’, ‘prudent’ or ‘imprudent’, the description imputes to him not the knowledge, or ignorance, of this or that truth, but the ability, or inability, to do certain sorts of things.” (p. 27). For the behaviorist, we say that the clown is clever because he can fall down deliberately yet make it look like an accident. We say the student is bright because she can tell us the correct answer to complex, involved equations. Mental events reduce to bodily events or statements about the body. As Ludwig Wittgenstein notes in his Blue Book: It is misleading then to talk of thinking as of a “mental activity.” We may say that thinking is essentially the activity of operating with signs.
This activity is performed by the hand, when we think by writing; by the mouth and larynx, when we think by speaking; and if we think by imagining signs or pictures, I can give you no agent that thinks. If then you say that in such cases the mind thinks, I would only draw your attention to the fact that you are using a metaphor. (1958, p. 6) John Wisdom (1934) explains: “‘I believe monkeys detest jaguars’ means ‘This body is in a state which is liable to result in the group of reactions which is associated with confident utterance of ‘Monkeys detest jaguars,’ namely keeping ‘favorite’ monkeys from jaguars and in general acting as if monkeys detested jaguars.'” (p. 56-7). Philosophical behaviorism as developed by followers of Wittgenstein was supported in part by the Private Language Argument. Anthony Kenny (1963) explains: Any word purporting to be the name of something observable only by introspection (i.e. a mental event)… would have to acquire its meaning by a purely private and uncheckable performance . . . If the names of the emotions acquire their meaning for each of us by a ceremony from which everyone else is excluded, then none of us can have any idea what anyone else means by the word. Nor can anyone know what he means by the word himself; for to know the meaning of a word is to know how to use it rightly; and where there can be no check on how a man uses a word there is no room to talk of “right” and “wrong” use (p. 13). Mentalistic terms do not have meaning by virtue of referring to occult phenomena, but by virtue of referring to something public in a certain way. To understand the meaning of words like “mind,” “idea,” “thought,” “love,” “fear,” “belief,” “dream,” and so forth, we must attend to how these words are actually learned in the first place. When we do this, the behaviorist is confident that the mind will be demystified. 
Although philosophical behaviorism has fallen out of fashion, its recommendations to attend to the importance of the body and language in attempting to understand the mind have remained enduring contributions. Although dualism faces serious challenges, we have seen that many of these difficulties can be identified in its philosophical rivals in slightly different forms.
Since he can clearly and distinctly understand the body without the mind and vice versa, God could really have created them separately. But if the mind and body can exist independently, they must really be independent, for nothing can constitute a part of the essence of a thing that can be absent without the thing itself ceasing to be. If the essence of the mind is incorporeal, so must be the mind itself.

4. Other Leibniz’s Law Arguments for Dualism

a. Privacy and First Person Authority

As noted earlier, dualists have argued for their position by employing Leibniz’s Law in many ingenious ways. The general strategy is to identify some property or feature indisputably had by mental phenomena but not attributable in any meaningful way to bodily or nervous phenomena, or vice versa. For example, some have suggested that mental states are private in the sense that only those who possess them can know them directly. If I desire an apple, I know that I have this desire “introspectively.” Others can know of my desire only by means of my verbal or non-verbal behavior or, conceivably, by inspection of my brain. (The latter assumes a correlation, if not an identity, between nervous and mental states or events). My linguistic, bodily and neural activities are public in the sense that anyone suitably placed can observe them. Since mental states are private to their possessors, but brain states are not, mental states cannot be identical to brain states. (Rey pp. 55-56).

A closely related argument emphasizes that my own mental states are knowable without inference; I know them “immediately.” (Harman, 1973, pp. 35-37). Others can know my mental states only by making inferences based on my verbal, non-verbal or neurophysiological activity. You may infer that I believe it will rain from the fact that I am carrying an umbrella, but I do not infer that I believe it will rain from noticing that I am carrying an umbrella.
I do not need to infer my mental states because I know them immediately.
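The Leibniz's Law strategy just described can be put schematically (the notation below is mine, not the article's): the law licenses an inference from identity to indiscernibility, and the dualist runs the contrapositive on a property such as privacy or immediate knowability.

```latex
% Leibniz's Law (indiscernibility of identicals):
x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)
% Contrapositive, as deployed by the dualist, where m is a mental
% state, b a brain state, and F a property such as "is knowable
% without inference by its possessor":
\exists F\,(Fm \wedge \neg Fb) \;\rightarrow\; m \neq b
```

On this schema, the privacy and first-person-authority arguments differ only in which witness property F is claimed to hold of mental states but fail of brain states.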
Body and Mind in Early China: An Integrated Humanities–Science Approach

Edward Slingerland, Department of Asian Studies, Asian Centre, 607-1871 West Mall, University of British Columbia, Vancouver, BC, Canada V6T 1Z2. E-mail: [email protected].

I have benefited greatly from audience feedback at the venues where I have presented aspects of this work, including the Collège de France, Princeton University, Ca'Foscari (Venice), the 2011 Association of Asian Studies Annual Meeting, and the 2010 Pacific American Philosophical Association Meeting. In particular, I would like to thank Paul Goldin, Anne Cheng, Martin Kern, Ben Elman, and Willard Peterson for comments that have helped me clarify my arguments. This research was supported by a Social Sciences and Humanities Research Council of Canada grant and the Canada Research Chairs program.

Cite: Edward Slingerland, Body and Mind in Early China: An Integrated Humanities–Science Approach, Journal of the American Academy of Religion, Volume 81, Issue 1, March 2013, Pages 6–55, https://doi.org/10.1093/jaarel/lfs094

Abstract

This article argues against the strong “holist” position that the early Chinese lacked any concept of mind–body dualism, and more broadly against a “neo-Orientalist” trend that portrays Chinese thought as radically different from Western thought. In the first half, it makes the case against strong mind–body holism by drawing upon traditional archeological and textual evidence. In the second, it turns to resources from the sciences, arguing that large-scale quantitative–qualitative analyses of early Chinese texts suggest that they embrace a quite vigorous form of mind–body dualism, and further that a huge body of evidence coming out of the cognitive sciences suggests that this is not at all surprising. In this section, the role that deep humanistic knowledge can, and should, play in scientific approaches to culture is also explored.
The article concludes by suggesting that a mutually informed, humanities–scientific approach to religious studies is the best way for our field to move forward.

AN ALMOST UNIVERSALLY ACCEPTED TRUISM among scholars of Chinese religion is that, while “Western” thought is dualistic in nature, early Chinese thought can be contrasted as profoundly “holistic.” This sentiment can be traced back to the earliest reception of Chinese thought in Europe, where second-hand accounts of Confucian thought penned by Jesuit priests caused thinkers such as G. W. Leibniz and Voltaire to see Chinese mind–body holism, or their supposed lack of distinction between the secular and religious, as precisely the medicine needed to jolt sick European thought out of its doldrums. One of the odd features of the modern Academy is the fact that, while the negative side of this sort of cultural essentialism—the denigration of China as psychologically and politically infantile by the likes of G. W. F. Hegel and Montesquieu—has been singled out and rejected as perniciously “Orientalist,” its normatively positive manifestation has continued to flourish. What I have come to think of as “Hegel with a happy face”—the idea that some essential Chinese holism can serve as a corrective to an equally essentialized Western thought—can be traced from the early European philosophes to scholars such as Lucien Lévy-Bruhl and Marcel Granet (Lévy-Bruhl 1922; Granet 1934) straight down to prominent contemporary scholars of Chinese thought such as Roger Ames, Henry Rosemont, Jr., and François Jullien (Jullien 2007; Rosemont and Ames 2009). The “radical holist” position embraced by these scholars has many components: the dualist binaries supposedly foreign to Chinese thought include transcendent–immanent (Needham 1974: 98), part–whole (Jullien 2007: 90), nature–culture (Sterckx 2002: 5), and individual–collective (Ames 2008: 29).
This article focuses on one particularly important binary, that of body and mind, characterizing the radical mind–body holist position as well as briefly reviewing some of the traditional humanistic evidence against it.3 I then turn to two new sources of evidence against radical holism, both borrowed from the sciences: a method for performing large-scale random sampling and multiple researcher coding as a check against our qualitative intuitions, and a body of empirical evidence from the cognitive sciences concerning the likelihood of some form of mind–body dualism being a human universal. My more limited goal in this article is to give scholars of religion a more accurate sense of how body and mind were conceived in early China, and to help us to move beyond culturally essentialistic stereotypes of China, positive or negative. I conclude that early Chinese thought is, in fact, characterized by an at least “weak” mind–body dualism—one in which mind and body are experienced as functionally and qualitatively distinct, although potentially overlapping at points—and moreover that such dualism is likely to be a human cognitive universal. At a very general level, this article aims to provide a concrete illustration of the benefits to religious studies of cooperation between the humanities and natural sciences (Slingerland 2008; Taves 2009; Slingerland and Collard 2012). On the one hand, I hope to show both how techniques borrowed from the sciences can be drawn upon as supplements to traditional humanistic methods, and how engaging with the literature from various branches of the cognitive sciences can allow scholars of religion to begin their interpretative projects from a more accurate hermeneutical starting point. 
On the other hand, I also discuss the manner in which religion scholars and other humanists can play an important role in helping cognitive scientists to think through their categories and get beyond often quite historically and culturally parochial models of human cognition.

THE MYTH OF STRONG MIND–BODY HOLISM IN EARLY CHINA

One common focus of claims about supposed mind–body holism in early China is the character xin 心, variously translated as “heart” (the original graph is clearly a depiction of the physical organ), “heart–mind,” or “mind.” It is relatively uncontroversial in the field that, depending upon the text and historical period, xin can refer to the physical organ itself or, more abstractly, to a locus of both the sort of higher cognition typically associated with mind in Western cultures and emotions or feelings, which tend to be associated more with body. A relatively weak form of the holist position—one that will be defended below—would hold that we do not find in early China the sort of distinction between an entirely disembodied mind, esprit, or Geist and an ontologically distinct body that characterizes certain philosophical positions in the West. Unfortunately, all too commonly defenses of this more cautious, accurate view—that Cartesian ontological dualism was unknown in early Chinese thought—quickly slide into cultural caricature: the actually rather odd position defended by Descartes is what “Western” thought always has been about, which means that, since the Chinese are not Cartesians, they must be somehow radically different, even a “different order of humanity” (Ames 1993a: 149). Such radical difference characterizes what I call the strong holist position, which holds that, for the early Chinese (or “the Chinese” or even the “East” more generally), there exists no qualitative distinction at all between anything we could call mind and the physical body or other organs of the body.
Roger Ames, for instance, claims that the early Chinese conceived of the person “holistically as a psychosomatic process,” and that the very idea of the body as a material substance was foreign to the Chinese: “the body is a ‘process’ rather than a ‘thing,’ something ‘done’ rather than something one ‘has’” (1993b: 168). François Jullien similarly explains that, because the Chinese saw what we would call body, soul, and mind as nothing more than points along a continuous, constantly transforming spectrum of energy, “no dualism is possible” (2007: 69); Chinese thought “eludes the great divide between body and soul … through which European culture has so powerfully shaped itself” (8). This holistic view of the xin has also penetrated other fields, where psychologists, anthropologists, and cognitive linguists have held up the Chinese concept of xin (or the Japanese kokoro) as evidence against mind–body dualism as a cognitive universal (Wierzbicka 2006; Yu 2007). Strong views about mind–body holism are also quite common—if not the default position—in contemporary Chinese scholarship: Zhang Zailin, for instance, observes that, in early Chinese thought, there is no dichotomy of mind versus physical body, but rather a holistic conception whereby mental processes are produced holistically by the body (Zhang 2008: 29; cf. Yang 1996; Tang 2007). Even scholars who might seem, at first glance, to be adopting a stance consistent with weak holism often end up embracing positions that only make sense if one takes holism in a strong sense. Mark Edward Lewis, for instance, observes that “the Chinese”—in contrast to “the Western tradition”—“accepted that the mind was part of the body, more refined and essentialized, but of the same substance” (2006: 20), and then goes on to describe the body in early China as an apparently arbitrarily chosen, culturally constructed “marker of supreme value” (20), and the bodily surface as a constantly “fluid and shifting … zone of exchange” (61). 
Considering the obvious and intuitive importance of the body as a locus for value and discrete individuality in most Western traditions, this suggests that the early Chinese inhabited an intellectual milieu in which the concepts of body, mind, and self-other boundaries were quite alien to our own. A similar assumption of radical alienness regarding mind and body informs Herbert Fingarette's famous claim that the Confucius of the Analects completely lacked anything like the concept of psychological interiority (Fingarette 1972; cf. 2008).4 Indeed, strong forms of the holist position typically link a complete absence of mind–body dualism in early China to a lack of inner life, individualism, or concept of a personal afterlife. Paolo Santangelo, for instance, claims that, in contrast to “Western cultures,” in China:

there is no clear separation between spirit and matter, or soul and body … the concept of “mind–heart” (xin) is different from the idea of an exclusively human soul, endowed with reason and able to make free decisions. … Here, too, there is no place for the idea of the individual that rose in Europe from the concept of the immortality of the soul. (Santangelo 2007: 292)

We see here the quite-common coordination of mind–body holism with freedom from other supposedly Western dichotomies: spirit–matter/reason–emotion/essence–appearance/transcendence–immanence. As I discuss below, there is a kernel of truth to all of these claims—otherwise they would not enjoy such continued endorsement by knowledgeable scholars—but we need to resist the tendency to slip from reasonable claim into caricature, or to mistake explicit philosophical positions for actual human cognition. First, however, I would like to consider a variety of reasons for being skeptical about the strong mind–body holist position.
Traditional Humanistic Evidence against Strong Holism

In a monograph in progress (Slingerland forthcoming), I review in some detail the historical and archeological evidence against the strong mind–body holist position; because of space constraints I confine myself here to merely a few observations, focusing on the Warring States (fifth-third century BCE) period that left us such a wealth of archeological and textual evidence.

Afterlife Beliefs

Our earliest written records from China are found on the so-called oracle bones, ox scapulae or turtle plastrons that were used in the Shang Dynasty (1600–1046 BCE)5 as a means for communicating with the spirit world. These queries and petitions were directed to a variety of supernatural powers, ranging from what appears to be a non-ancestral high god, Di 帝, down through various nature deities and the ancestors of the royal line. Although ritual practices were directed toward a variety of supernatural agents, sacrifices and petitions tended to focus on the spirits (shen 神) of the ancestors, who—though described as dwelling “above” (shang 上) or with Heaven—were also in constant interaction with the living. They were thought to descend to earth and be present in some numinous form at sacrifices and other important ceremonies, where they were feted with food and drink (from which they extracted only the invisible essences) in order to secure their blessing and continued support. “The spirits are all drunk!” declares a narrator in one ancient poem with great satisfaction, a sign that the ceremony could now safely be concluded.6 Scholars have suggested that Western Zhou bronze vessel inscriptions were written on the inside of the vessels because the texts were meant to be read by the spirits, not their living descendants, implying that the spirits were thought to be not only conscious but literate (Shaughnessy 1991; von Falkenhausen 1995).
There are continuing controversies in the literature concerning how, precisely, to understand the early Chinese conceptions of the afterlife—which in any case clearly varied both regionally and chronologically—once we enter the Warring States.7 In her detailed study of a Warring States tomb, Constance Cook argues that the material evidence of the tomb “firmly supports the idea of the detachment of an ethereal self from the corporeal body as an ancient and enduring Chinese belief” (2006: 17), and sees in this Warring States practice commonalities with later Han Dynasty accounts that undeniably concern disembodied spirit journeys. Cook's interpretation of this particular tomb is by no means universally accepted; other scholars, such as Wu Hung, have argued against seeing anything like Han spirit journey concepts at work in Warring States mortuary practices, interpreting Warring States tombs instead as “happy homes” meant to house the quasicorporeal spirit for eternity (Wu 1994). In either case, the fact remains that “the dead” (i.e., the disembodied minds/spirits of previously living persons) belong to a qualitatively different order of invisible, relatively intangible, powerful, and possibly dangerous beings. As Lothar von Falkenhausen has convincingly argued, early Chinese mortuary practices—at least by the late Warring States—reveal a view of the afterlife as “hermetically separate and independent from the world of the living” (2006: 300), with the spirits of the dead perceived as “categorically different from the living” (306). It is equally clear that early China was rife with vivid and widely distributed beliefs concerning elaborate spirit journeys and complex spiritual realms separate from, but modeled on, “our” world that predate the introduction of Buddhism to China.
As Guolong Lai concludes in his broad review of a variety of late Warring States and early Han tombs, “the late-Warring States and early-Han conceptions of the afterlife generally agree upon the notion of a soul that retains consciousness after death; they also accommodate ideas of some type of land of the dead, postmortem paradise, and the travel of the soul beyond its state of entombment” (2005: 42; cf. Poo 1990). Such dualism becomes even more explicit when we turn to textual accounts of the afterlife. Early transmitted textual sources, such as the Zuo Zhuan, make it clear that the deceased were thought to continue to exist in individual form, maintaining the same personalities and concerns that they possessed in life. A common theme is the appearance of a ghost or ancestor—often in a dream, but sometimes during waking life—complaining about the behavior of the living, making dire predictions about the future, seeking revenge for wrongs done to them during life, or extorting offerings from the living on the threat of supernatural punishment.8 A bamboo text found in a Warring States tomb from the late fourth century BCE appears to be a form that could be filled out by the living relatives of those who had died in battle, requesting that a certain deity named Wu Yi 武夷, apparently assigned by the Lord on High to care for and watch over the war dead, allow the spirit of the deceased to return to his family to receive food offerings.
This strongly suggests that, even by the Warring States, the dead were thought to be residing in some sort of afterworld, and to be capable of traveling between the two worlds under certain conditions.9 These afterlife beliefs—as well as the belief in other supernatural beings such as ancestral spirits, nature deities, or high gods—were not only widespread, but also fundamentally parasitic upon some sort of mind–body dualism: these beings were conceived of as human minds without bodies (or possessing only very tenuous and invisible bodies) who, nonetheless, were interacted with in a manner modeled upon ordinary social interactions because of their continued possession of minds and personal essence. It is also apparent that, under the proper conditions, these disembodied spirits were viewed as capable of being brought back to life. In a famous passage from the Zhuangzi (late fourth century BCE) that recalls Hamlet, Zhuangzi has a conversation with a human skull—a metonymic anchor for the soul that once possessed it—and poses the question: “If I could get the Arbiter of Fate to bring your body back to life, to make you some bones and flesh, to return you to your parents, your wife and children, your old home and friends—wouldn't you want that?” (Watson 1968: 193).10 The fact that this question was not viewed as merely hypothetical is suggested by an account in a late Warring States archeological text of a certain individual named Dan who is returned to life after being released by underworld officials (Harper 1994). The officials in Dan's case appear to have been paid off or otherwise propitiated by his living relatives, which suggests that the afterworldly bureaucracy was seen to be as corrupt as that of this world. The idea that one's physical body could be replaced or substituted makes it clear that one's personal identity or essence—what makes a person who they are—was understood as something located in the extrasomatic spirit.
Another passage from the Zhuangzi illustrates this very nicely. Confucius witnesses a set of piglets suddenly stop nursing at the body of their recently dead mother and run away; the reason for this reaction, he observes, is that:

they could no longer see themselves in her, they could no longer see her as one of their own kind (lei 類). That which they loved about their mother was not her body, but rather that which moved/commanded (shi 使) her body. When someone is killed in battle, he is buried without his battle paraphernalia; someone who has had his feet amputated has no reason to care about shoes. In both cases, the thing that is basic (ben 本) has been lost. (Watson 1968: 73)

We see here a clear conception of an incorporeal “essential” element to the self—an element that is the locus of personal agency and identity—that leaves the physical body upon death, leaving only an empty husk. This makes absolutely no sense except in the context of mind–body dualism: while the corporeal body dies and decomposes, the mind—that is, the locus of consciousness and personal identity—lives on in some incorporeal, or at best quasicorporeal, form.11 As Paul Goldin has observed, passages such as these make it clear that a view of the mind and body as “distinct entities” (2003: 228) is not at all unknown in early China, and in fact is necessary to begin to even make sense of beliefs in the afterlife, ghosts and spirits, and phenomena such as spirit possession.12 Discussing a passage from the Mozi that describes a spirit descending into the body of a medium and then using the medium's body to inflict punishment on a lax religious functionary, Goldin notes that “here we have, in the starkest possible terms, a ghost in the machine. … The author of this text apparently had no difficulty in conceiving of a dualistic universe populated by material bodies and immaterial spirits” (236–237).
By the time we reach the early Han (third century BCE), this body–soul dualism becomes, if anything, even more distinct: both the received textual record and unearthed archeological texts are filled with detailed accounts of religious techniques for freeing the mind or spirit from the physical body, pre- and postmortem spirit journeys, and complex geographies of the afterworld, which was variously conceived of as located under, above, or at the far extremity of the visible world.13 In texts such as the chronologically rather problematic Liezi 列子—most likely containing significant Warring States material, but assembled in the third century CE—we even begin to get hints of something approaching Cartesian substance dualism. In one very odd and interesting passage,14 we read of an automaton—apparently made out of leather and wood—so indistinguishable from a “real” person that it angers the king when it winks at one of his concubines during a performance. The king's anger is only assuaged when the automaton is taken apart to reveal that it is merely a machine—that is, nothing but material stuff, with no mind or soul present. Whatever the date of this passage, by the time of the Eastern Jin Dynasty (317–420 CE) commentator Zhang Zhan 張湛, the “automaton problem”15 it poses is inspiring surprisingly defensive critiques of materialism from what is essentially a substance dualism perspective. “Nowadays there are people who say that the human spirit (ling 靈) is something merely produced by a mechanism (jiguan 機關),” Zhang reports. “How could this be?” In language that echoes contemporary Creationist diatribes against evolutionary theory, he contrasts the supreme mystery (zhimiao 至妙) of the created (zao 造), natural world with the clumsiness of human technology, and concludes by declaring, “How could anyone possibly say that living things lack a spiritual master / controlling spirit (shenzhu 神主)!” (Yang 2007: 181).
We could not wish for a clearer expression of a sharp dichotomy between a mechanistic physical–material level of reality and the realm of the disembodied, creative, free, and intentional spirit.

Philosophical Accounts of Xin–Body Relations

One pillar of the strong holist position is the claim that the xin is simply one organ in the body, not in any way qualitatively different from other organs. The goal of Jane Geaney's On the Epistemology of the Senses in Early China (2002), for instance, is to “undermine the view that the heartmind [xin] is radically distinct from the senses.” “The heartmind is,” she declares, “if not a sense itself, then very closely related to them” (84)—i.e., although the xin has its own particular functions to play in the organic self, its functions are in no sense qualitatively different from those of the other organs. This claim, unfortunately, simply does not hold up to scrutiny. While certainly identified in some way as an organ in the body, the xin in Warring States discourse is clearly singled out as a very special type of organ, with qualitatively unique powers: it is the locus of intentions, rational thought, language use, categorization, and voluntary willing. Because of these qualitatively unique powers, it is often contrasted with the other bodily organs, and is in fact the only organ to be singled out and contrasted with the body as a whole. Anyone familiar with Warring States literature will recognize that the rhetorical contrast of xin and, for instance, xing 形 (“physical form, body”; one of the three basic words for the physical body) is quite common. It is worth noting this fact precisely because it seems so unremarkable. Passages that contrast the xin with the body are processed effortlessly and excite very little comment—either in traditional commentaries or contemporary secondary literature.
This effortlessness is itself a data point when it comes to innate cognitive universals: one only needs to consider the immediate bizarreness of a passage that contrasted the body with, say, the liver to get a sense of how deep our own intuitive mind–body dualism runs. As I will touch upon again below, the qualitative “otherness” of the xin typically passes unnoticed—both by traditional commentators and by modern readers—precisely because of our shared innate dualism, and for this reason, it is worth doing a bit of work to tease it out. One of the ways in which the xin is singled out in many Warring States texts is when it is identified as the natural “ruler” or “lord” (jun 君) of the rest of the body. As a passage from the recently discovered Five Types of Action puts it, “The ears, the eyes, nose, mouth, hands and feet—these six are all slaves to the mind. If the mind says, ‘yes,’ none of them dare say ‘no.’ If the mind says, ‘let it be so,’ none of them dare to disagree” (strips 45–46). Indeed, the permission of the xin is seen by some Warring States writers as necessary for the other organs to simply carry out their tasks. As a passage from the Lushi Chunqiu remarks, “It is the essential nature of the ear to desire [pleasant] sounds. However, if the mind is not pleased, the ear will not hear even if the five musical notes are right before it.” (5/4.1; Knoblock and Riegel 2000: 142–143). A similar passage in the Xunzi remarks, “If the xin is not exerted in the process of sensory perception, then black and white could be in front of one's eyes and one would not see them, and thunder drums could be pounded by one's side and one would not hear them” (21/1; Knoblock 1994: 100). The xin's authority to rule is not arbitrary: it is the ruler of the self because it possesses special, qualitatively unique powers. 
The Lushi Chunqiu passage just quoted attributes the sensory-blocking power of the xin to its unique ability to bracket simple desires and make broader normative judgments: “Those that [simply] desire are the ears, eyes, nose and mouth; that which is pleased or not pleased is the xin” (5/4.1; Knoblock and Riegel 2000: 143). This is because while the other organs of the self are drawn blindly to their sensory objects in a mechanistic fashion, the xin alone is able to think, reflect, and make free decisions. As Mencius 6:A:15 famously observes:

The organs of sight and hearing do not think (si 思), and therefore are dominated by things. When things interact with other things [i.e., unthinking senses], there is mechanical attraction, that's all. The organ of the mind, on the other hand, is capable of thought. If it thinks, it obtains its object; if it doesn't think it does not. (Van Norden 2008: 156)

One could not wish for a clearer expression of a central feature of folk mind–body dualism: a distinction between mental causation, which involves reflection and free will, and the sort of blind, mechanistic interaction characteristic of the physical world.16 The mind's function as the center of free will and reflection makes it, in turn, the center of moral responsibility. We see this theme summed up admirably in a passage from the Xunzi that both celebrates the freedom of the xin and the moral burden that comes with this power. The slavish, mechanical parts of the self cannot ultimately be held responsible for what they do, since they are merely following mechanistic causation or orders from above. On the other hand, the xin's power of self-determination means that it alone bears responsibility for the moral or immoral behavior of the body as a whole:

The mind is the lord of the body, and master of its spiritual brightness. It issues commands, it does not receive commands. On its own initiative it forbids or commands, rejects or adopts, begins or stops.
Therefore, although the mouth can be compelled to remain silent or speak, and the body can be compelled to crouch or stretch, the mind cannot be compelled to change its ideas. If it thinks something is right, it accepts it; if it thinks that something is wrong, it rejects it. (21/6; Knoblock 1994: 105, modified)

This is not to say that all Warring States thinkers share precisely the same view of the xin. Lee Yearley has argued convincingly that the Xunzian xin, for instance, is a rather autocratic, disembodied ruler, with preferences completely distinct from the first-order desires of the body and the power of absolute fiat over it. He contrasts this with Mencius' xin, which—though in charge—derives its moral direction from first-order desires, and must marshal the support of other parts of the self, such as the qi (Yearley 1980). Despite such differences of degree, however, the basic picture is the same: the xin alone of all the organs possesses the powers of thought and choice, and is the locus of a special sort of causality—free will—that is completely distinct from the sort of causation that governs the physical world and the other organs of the body. We should also note that there are, in fact, at least some passages in Warring States texts that explicitly deny a special role to the xin. The most well-known example is Mencius 6:A:7, where we read:

With regard to the mouth, all palates find the same things tasty; with regard to the ears, all find the same things pleasant to listen to; with regard to the eyes, all find the same things beautiful. Now, when it comes to the xin, is it somehow unique in lacking such common preferences? What is it, then, that minds share a preference for? I say that it is order and rightness.
(Van Norden 2008: 151)

This passage serves as a key piece of evidence for Geaney, who portrays it as a strong indication that the xin and other organs are not “radically different in nature” (2002: 101), and that, for the early Chinese, the xin “behaves like the senses and seems to be considered a sense function” (13), no different from the other organs. The basic point that is being missed here, though, is that the rhetorical structure of this passage makes it clear that Mencius, in claiming that the xin is like the other organs, is making an argument, not expressing an assumption—and making an argument that he clearly expects will be met with resistance or incredulity.17 The “taste” for rightness and order that Mencius attributes to the xin is understood metaphorically on the analogy of physical taste: were the xin really viewed as on equal footing with the mouth or the body, there would be no necessity for Mencius to posit such analogies. Anyone who doubts that passages such as 6:A:7 are fundamentally predicated on mind–body dualism should try substituting another organ for the xin: “Now, when it comes to the ear, is it somehow unique in lacking such common preferences?” sounds as ridiculous in classical Chinese as it does in English. Here again, processing fluency tells us much about the implicit background of cognitive universals that provides the very context of intelligibility within which philosophical argumentation can take place. Over a decade ago, Michael Puett explicitly identified the sort of conflation of argument and assumption described here as the key to many false stereotypes about early Chinese thought, such as the supposed holism between nature and culture, or the supposed lack of the concept of innovation (Puett 2001: 1–20). It is time for scholars of early China to take this point to heart, as it were.
THE HUMANITIES–SCIENCE INTERFACE: TWO NEW POINTS OF CONTACT

In this section, I would like to supplement the more traditional humanistic evidence presented above with two relatively new—at least for humanities scholars—sources of evidence. Both serve to strengthen the case that the early Chinese were mind–body dualists of a sort. They are also both intended as illustrations of the potential benefits of making use of techniques and specific findings about human cognition borrowed from the sciences, as well as how this interaction has to flow both ways.

Points of Contact, Part I: Textual Analysis

One issue with the evidence that I provide above—as with the evidence presented in almost any exchange between humanities scholars—is the problem of cherry picking: defenders of holism tend to highlight particular textual passages or details of the archeological record, opponents others. This is less of a problem when it comes to extreme, culturally essentialistic claims to the effect that the early Chinese completely lacked a given concept—whether mind–body dualism or any other—where a handful of clear counter-examples are sufficient for debunking purposes. More reasonable claims concerning cultural differences, however, are typically less totalizing, and focus on general trends or dominant patterns rather than claims of complete exclusion. A. C. Graham, for instance, notes that defensible generalizations about “Chinese thought”—for instance, that it is relatively uninterested in formal logic—are reports of general trends, always admitting exceptions for particular thinkers or historical periods (1989: 6–7; cf. Van Norden 2007: 10–15). I would argue that, when it comes to these sorts of more reasonable claims about general cultural trends, our traditional method of drawing upon textual evidence is undermined by the possibility of persistent bias.
The cultural significance of individual passages suggesting a more or less “holistic” stance toward mind and body is difficult to assess without a clear sense of how representative they are of the corpus as a whole, and such a sense cannot be accurately captured by traditional methods. As scholars of religion, we have a deep familiarity with the textual corpus relevant to the tradition(s) we study, and we all at least implicitly assume that the passages that we draw upon when we make generalizations about our traditions are in some way more revealing or more representative than those of our opponents. It remains the case, however, that intuitions are often misleading or intellectually self-serving. This problem of individual bias is a central concern in the various branches of the natural sciences, which have developed a variety of methodologies to minimize its influence. When it comes to the qualitative analysis of any sort of corpus—written texts, transcripts of interviews, videos of human or other animal behavior—these methodologies include large-scale random sampling of data, coding or analysis of these data by independent researchers, checks of intercoder reliability, and statistical analysis in order to evaluate the significance of any discerned trends.

Concepts of Xin in Early China: A Large-Scale Corpus Analysis

Inspired by these methods, I recently ran a study that attempted to supplement the exclusively qualitative, but therefore necessarily somewhat ad hoc, methods typically employed by scholars of religion with methods borrowed from the natural sciences that combine qualitative and quantitative analysis.
This study attempted to approach the question of the relative prevalence of mind–body dualism in early China by performing a keyword-focused random sampling of passages from the pre-Qin corpus of received texts, supplemented by the corpus of recently discovered Warring States archeological texts from Guodian.18 To get a sense of changes over time, these texts were classified into three rough periods: pre-Warring States (circa 1500 to 475 BCE), early Warring States (late fifth to mid-fourth century BCE), and late Warring States (mid-fourth century BCE–221 BCE).19 We extracted passages containing xin from an online database of the entire received pre-Qin corpus,20 as well as a database of a cache of recently discovered pre-Qin archeological texts.21 The result was 1,321 passages, automatically chunked into traditionally established textual units by the search engine. Then my three coders (graduate students of mine, who were technically blind to the hypothesis that I wanted to explore, although—being my students—no doubt at least dimly aware of the purpose of the study) and I randomly sampled sixty passages and inductively developed a set of twenty-nine codes to classify xin's usage (see Table 1). Next, the three coders applied these codes to 620 randomly sampled passages, presented in a randomized order. First, each passage was independently coded by two of the three coders. Passages for which both coders' decisions agreed on all twenty-nine codes were considered finalized at this point (310 passages, or about half). For the remaining passages, a third coder (i.e., the one not in the pair who initially coded that passage) independently coded these passages, and where their twenty-nine decisions corresponded exactly to those of one of the first two coders, these passages were again considered finalized (159 passages, or approximately half of the remaining passages).
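The multi-round finalization rule described above can be sketched in a few lines of code. This is a hypothetical reconstruction for illustration only, not the study's actual software; the coder names ("A", "B", "C") and code labels are invented.

```python
# Hypothetical sketch of the multi-round finalization rule: a passage is
# finalized when two independent coders agree on every code; otherwise a
# third coder's exact match with either earlier coder finalizes it; any
# remaining disagreement goes to arbitration.

def finalize(codings):
    """codings maps coder name -> frozenset of the codes that coder applied.

    Returns the finalized code set, or None if the passage must go on
    to arbitration (Round 3).
    """
    c1, c2, c3 = codings.get("A"), codings.get("B"), codings.get("C")
    # Round 1: two independent coders; perfect agreement finalizes.
    if c1 == c2:
        return c1
    # Round 2: a third coder; an exact match with either earlier coder finalizes.
    if c3 is not None and c3 in (c1, c2):
        return c3
    return None  # Round 3: arbitration by the lead researcher

# Invented example: coders A and B disagree, and C sides exactly with B.
passage = {"A": frozenset({"ORGAN"}),
           "B": frozenset({"COGNITION", "CONTRAST"}),
           "C": frozenset({"COGNITION", "CONTRAST"})}
print(sorted(finalize(passage)))  # ['COGNITION', 'CONTRAST']
```

Note that agreement here is all-or-nothing across the full code set, mirroring the study's demanding criterion of perfect agreement on all twenty-nine decisions.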
The remaining disagreements were arbitrated and finalized by me, with full access to the original coders' decisions and notes. Considering the rather high standards set for intercoder agreement—perfect agreement on twenty-nine separate decisions—intercoder reliability was quite good, with a correlation of 0.50 in Round 1 rising to 0.76 by the end of Round 2. In order to assure that my own coding in Round 3 did not distort the results, we also did a check and confirmed that all of the trends discussed below were still significant after Round 2: all effects retained their statistical significance and directions, and their magnitudes remained close to those reported below. Of the codes applied to the passages, two main categories bear directly on the analysis of our results that I would like to review here: (a) whether or not xin is contrasted with the body; and (b) whether it is used to refer to a bodily organ, a locus of feelings and emotions, or a locus of cognition in the deliberate, reflective sense usually connoted by mind. To begin with, we found that passages involving an explicit contrast between the xin and the body22 were quite common, constituting 4% of pre-Warring States passages (7 of 179) and roughly 10% of early (3 of 35) and late (42 of 406) Warring States passages. This increase in the frequency of contrasts over time was statistically significant, suggesting that mind–body disjunction was becoming a more prominent concern or theme. One question that came up when I presented our preliminary results to groups of psychologists was how this frequency of xin–body contrasts compared to contrasts between other organs and the body.
My initial response was that there were no examples of other organs being contrasted with the body—my intuition was that, although xin–body contrasts slip under the interpretative radar because they accord with our innate folk dualism, any mention of a liver–body or ear–body contrast would have come to my attention. In the spirit of quantitative demonstration, however, we put this to the test: to provide a baseline for comparison, we did a quick follow-up study looking for any contrasts between the body and four other commonly mentioned organs in Warring States texts, two external (mu 目 “eye” and er 耳 “ear”) and two internal (gan 肝 “liver” and fu 腹 “stomach”). Of the 864 passages containing occurrences of these terms in the received pre-Qin textual database, only 337 also contained one of the predominant “body” terms (xing 形, shen 身, ti 體) and thus were likely candidates for a contrast, and these 337 were coded by two coders working independently on mutually exclusive subsets. Only one contrast—a single passage where the stomach is contrasted with the body23—was found. This means that the odds of xin being contrasted with the body were about seventy-seven times greater than those of the other organs we examined: in other words, xin is essentially unique in being contrasted with the body. This finding alone renders completely untenable the claim that the xin is in no way qualitatively different from the other organs. A second trend in which we were interested was the extent to which xin was portrayed as primarily a physical organ, a locus of emotion, or a locus of “higher” cognition,24 and whether or not there were any patterns in such references that changed over time. What we found is that the frequency with which xin referred to body did not differ significantly between the three periods, but the rates of reference to xin as locus of cognition and emotion did.
Xin as locus of cognition was much more frequent in the early and late Warring States compared to the pre-Warring States period, although there was no statistically significant difference in frequency between the early and late Warring States. In contrast, xin as locus of emotion showed the reverse pattern: it was referred to significantly less often in the early and late periods than in the pre-Warring States period, while also not differing significantly between the early and late periods. The general pattern of our findings is illustrated in Figure 1.

Figure 1. Temporal trends in the rate at which xin refers to a physical organ, a locus of emotion, or a locus of cognition in the pre-, early, and late Warring States periods, with 95% confidence intervals—the margin of possible statistical error (from Slingerland and Chudek 2011a, used with permission).

Throughout all three periods, xin referred to a physical body organ at a consistently low rate (about 3%). During the pre-Warring States period, it referred about equally often to a locus of emotion or cognition. By the early Warring States period, it was being used to refer to the locus of cognition far more frequently (about 80% of the time) than emotions (about 10% of the time), and this pattern persisted into the late Warring States period. This change also corresponded to a rise in the frequency of explicit contrasts of xin with the physical body. Although the xin is often portrayed as the locus of emotion as well as other cognitive abilities in the pre-Warring States period (roughly 1500 BCE–450 BCE), this study suggests that, by the end of the Warring States (221 BCE), there is a clear trend whereby the xin is less and less associated with emotions and becomes increasingly portrayed as the unique locus of "higher" cognitive abilities: planning, goal maintenance, rational thought, categorization and language use, decision-making, and voluntary willing.
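The 95% confidence intervals shown in Figure 1 are standard binomial intervals around each period's observed proportion. The article does not specify which interval method was used, so the normal-approximation version below is an assumption, shown here with the contrast counts reported earlier in the text:

```python
from math import sqrt

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion
    (normal approximation; z = 1.96 for a 95% interval)."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# e.g., 42 xin-body contrasts among 406 late Warring States passages
lo, hi = proportion_ci(42, 406)  # roughly (0.074, 0.133)
```

For small counts (such as the 3 of 35 early Warring States contrasts), an exact or Wilson interval would be more appropriate than the normal approximation used in this sketch.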
This neatly maps onto a parallel trend in the translation of early Chinese texts: in pre-Warring States texts, xin is almost exclusively translated as "heart," whereas translations begin to switch to "heart–mind" (or simply vary among themselves between "heart" or "mind") by the early Warring States and then render xin almost exclusively as "mind" by the time we reach such late Warring States texts as the Zhuangzi or Xunzi. This trend, when noticed at all, has often been attributed to linguistic sloppiness on the part of the translators, but our study suggests that in fact the situation is quite the opposite, in that xin seems to gradually shed its associations with emotions—especially strong, "irrational" emotions25—and comes to be seen as a faculty whose abilities map fairly closely onto the folk notion conveyed by the English mind. Moreover, it alone of all the organs is singled out to be contrasted with the various terms used to refer to the physical body (xing 形, shen 身, ti 體). What is so interesting about this early Chinese case is that linguistic resources seem to militate against mind–body dualism: the term that came to refer to the seat of cognition was represented by a graph denoting the physical heart, a concrete organ embedded in the body and also the locus of desires and emotions. Nonetheless, over a period of several hundred years, texts employing classical Chinese still developed a strong form of mind–body dualism that strikingly mirrors modern Western folk conceptions, and that remained the default picture for the rest of its history.26 While identification of potential causation is necessarily speculative, we think that the best explanation for the trend that we documented in this study is that it represents a semantic shift driven by a need for increased conceptual precision that accompanied the vast expansion of literacy as we move into the late Warring States, and that was guided by intuitive folk dualism.
In other words, as more and more human beings began using classical Chinese as a means of communication, the semantic range of words like xin converged on a cognitive anchorpoint provided by intuitive folk dualism.

Methodological and Theoretical Issues

One motivation in reporting this study here is that its techniques can be easily adapted for use in accessing other historical materials—"data from dead minds" (Martin and Beck forthcoming)—in order to address, in a rigorous and quantifiable manner, a wide variety of questions that interest scholars of religion.27 The complete literary records of many cultures are now available in fully searchable, electronic databases, providing us with an incredibly powerful tool simply not available to any other generation of scholars. One of the next frontiers is automated or semiautomated coding. Fully automated coding involves using powerful search engines to scour large quantities of materials over time to look for specific patterns of usage specified by the researchers. This technique was employed in one high-profile study that involved querying the entire Google Books archive, which contains over five million books—4% of the books ever published (Michel et al. 2011). Despite a couple of intriguing results—particularly the idea of using proper name frequency patterns to document statistical signatures of active suppression or censorship—the results of this study probably strike most humanists as rather gimmicky, and there are problems with the source materials being queried (only books, and only books that have been entered in Google's database). Nevertheless, as a proof-of-concept demonstration, I would submit that anyone not impressed and excited by the potential for such techniques to enhance humanistic research simply has not thought about it carefully enough.
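At its simplest, a fully automated pass of the kind described above amounts to counting keyword occurrences across a time-binned corpus. A toy sketch; the miniature "corpus", its period labels, and most of its passages are invented for illustration (a real study would query a full-text database of received pre-Qin texts or an archive like Google Books):

```python
# Toy corpus of (period, passage) pairs, invented for illustration
corpus = [
    ("pre", "心 在 身 中"),
    ("early", "心 之 官 則 思"),
    ("late", "形 與 心 相 離"),
    ("late", "耳 目 之 官"),
]

def keyword_rate(corpus, keyword, period):
    """Fraction of a period's passages that contain the keyword."""
    passages = [text for p, text in corpus if p == period]
    if not passages:
        return 0.0
    return sum(1 for text in passages if keyword in text) / len(passages)
```

Comparing such rates across periods (with appropriate significance testing) is the automated analogue of the hand-coded frequency trends reported earlier.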
Another exciting approach is semiautomated coding, where—during a trial run or repeated iterations of trial runs—the qualitative judgments of a human coder can be tracked by an algorithm-generator or actively codified into "dictionaries," with the resulting patterns then able to be instantly and automatically applied to mind-bogglingly large quantities of data.28 This technique, still in its infancy, combines the best of qualitative and quantitative approaches, and is perhaps the most promising from the perspective of scholars of religion. Although the methods employed in this study are standard for scientific qualitative coding exercises, to anyone trained in interpreting texts for a living, several potential limitations immediately leap out.29 The formulation of the initial coding categories has an obvious role in shaping the results, coding decisions will be biased by individual coders' cultural models and individual assumptions about the texts, and the very idea of "hypothesis-blind" coding seems undermined by the signals sent by the chosen keyword and coding categories, as well as the coders' high degree of familiarity with my own presuppositions. Moreover, pre-Qin texts are notoriously difficult to understand: classical Chinese is a relatively uninflected language, and the inevitable ambiguities present in the original texts are often resolved in a very particular—but perhaps inaccurate—direction by the traditional commentaries and English translations that my coders were allowed to consult. There is, moreover, the problem of proper rhetorical framing mentioned above. The single most common issue that ended up having to be adjudicated by me in Round 3 concerned the rather abstract codes having to do with xin being implicitly or explicitly contrasted or identified with the body and other organs.
As we saw with Mencius 6:A:7, discussed above, even specialists in the field would seriously disagree about which codes to apply to that passage: Jane Geaney and many others would have coded it as "0.3 Xin Conceptually (Explicitly) Identified with Other Organs," whereas I have argued that one needs to add the code "0.2 Xin Grammatically/Rhetorically (Implicitly) Contrasted with Other Organs" to pick up the proper rhetorical framework. This is difficult—and debatable—stuff. Finally, I think it is fair to say that humanities scholars in general are suspicious of attempts to handle the complexity of textual interpretation by means of a process that results in graphs and charts and statistical margins of error: the statistical cleanliness masks a host of potential systemic complications. I anticipate that many of my colleagues will see our study as an instance of sciency-sounding smoke and mirrors being used to obscure the messiness of interpretation—an attempt to borrow the prestige of the "ethnoscience of the West" to push our own interpretative agenda. I obviously disagree. Despite the many reasons for being cautious in both applying and interpreting the results of such methods, I think that they can serve as a useful example of how techniques from the natural sciences—large-scale, team-based analysis, random sampling, statistical analysis—can be put to good use in the humanities. Humanists have always been empirically minded: scholarly claims are not taken seriously unless supported by textual or archeological evidence. This sort of evidence has, however, typically been gathered and presented in a highly biased and unsystematic manner. Scholars arguing for mind–body holism in early China, for instance, will cherry-pick a dozen or so passages from among hundreds or thousands on the topic to defend their claim.
François Jullien, to take a habitually egregious example, cites only a single substantive passage in support of his argument that the early Chinese concept of a holistic body–mind is quite alien to "our" dualism (Jullien 2007: chap. 4), and this passage is from a late Warring States text portraying the xin as a physical organ—a category that makes up 2% of the passages we coded from this period. Even careful scholars such as Geaney, who substantiates her claims with copious textual evidence, are constrained by the standards of our genre to limit themselves to a subset of available passages that have been chosen in anything but a disinterested manner. Of course, each partisan in any given debate works under the assumption that his or her chosen passages are somehow more representative or revealing than those of his or her opponents, but there has been a surprising lack of interest among humanists in adopting techniques to compensate for personal bias that have long been pillars of the scientific method.30 The sort of large-scale corpus sampling method employed in this study is expensive and, frankly, irritating to implement. As I quickly discovered upon embarking on this project, large-scale corpus coding projects share many of the liabilities of scientific inquiry in general: they are enormously time-consuming, expensive, full of administrative difficulties, and most of all boring. For a scholar used to working solo in the pristine silence of his or her office, managing a team of coders, with all of their personal dramas and idiosyncratic takes on the coding process, is surprisingly difficult. Methodological advances, automation, and hard-won lessons—coding sheets should be simple, coding schedules generous—can help to reduce the burden, but simple funding limitations (coders need to get paid, software needs to be purchased) will no doubt slow the adoption of these techniques.
Despite these limitations, the ability of large-scale corpus analyses to give us relatively objective overviews of huge quantities of historical materials—using the power of sheer numbers to compensate for inevitable individual biases—should not be dismissed by scholars of religion. As we note in our reply to Klein and Klein (2011), who see the problems inherent in interpreting early Chinese texts as potentially fatal to our project, our approach is not intended to sidestep the problems of textual interpretation, but rather to use the power of sheer quantity to help put qualitative disagreements into perspective:

Large-scale coding and statistical analysis allow the noise of randomly distributed interpretative differences to be distinguished from the signal of genuine historical patterns by exploiting large samples and statistical inference. These methods also quantify qualitative disagreements, providing measures of inter-coder reliability that specify just how much difference in interpretation exists. They provide a path out of endless cycles of disagreement by specifying precisely documented techniques for resolving disagreements, which can be replicated, systematically altered and statistically analyzed. (Slingerland and Chudek 2011b)

These techniques can also provide counterintuitive results that help us to better situate our qualitative intuitions as well as reveal unexpected patterns. For instance, I was very much surprised by the sharp reduction in xin as locus of emotion in the late Warring States: my intuition, I think shared by most in my field, was that xin maintained a strong emotional component throughout the Warring States. Our study results suggest that this intuition is wrong. Large-scale corpus analyses therefore can, and should, play an important role in supporting, supplementing, and—when necessary—correcting traditional approaches.
At the same time, as humanists become more familiar with the manner in which qualitative analysis is undertaken in the sciences, their deep familiarity with the problems inherent to cross-cultural comparison—and hermeneutics more generally—can and should begin to have an impact. It is significant that, in the initial version of their piece in Cognitive Science, Klein and Klein strongly contrasted the more objective, “unproblematic” coding issues faced in most psychological experiments with the interpretative challenges inherent to studying early Chinese texts. In fact, interpretation is very much front and center in most areas of the sciences—a point that has been made loudly and clearly in the “science studies” literature—which means that injecting a bit of humanistic hermeneutic Angst into the sciences would be extremely helpful, provided that it is done in a constructive manner. My experience with work in cognitive and social psychology suggests that most scientific researchers are much less concerned than they ought to be about potential complications that are screamingly obvious to anyone coming out of the humanities: problems of translation, differing cultural models, pervasive conceptual bias on the part of investigators, etc. This means that the quality of this sort of work would be vastly improved by input from humanities scholars, not merely as data-providers (glorified research assistants), but as theoretical and methodological advisors involved in the most preliminary steps of study design. One reason for the growing gap between the sciences and the humanities is that—arguably under the influence of epistemologically skeptical “Theory”—humanists too often see the interpretative and methodological problems inherent to scientific research as an excuse to entirely dismiss scientific inquiry as a useful source of knowledge about the world (Slingerland 2008: chaps. 
1–3), even though we presumably feel that there is something useful or informative about our own work. When presented with scientific studies informed by culturally or linguistically naïve assumptions, our response is too often to throw up our hands and completely dismiss the results, rather than to offer to work together to help overcome—to the extent that it is possible—the relevant naïveté. As someone trained in the humanities, I am familiar with the stereotype of scientists as culturally and linguistically illiterate, blissfully unaware of their own cultural assumptions and unjustifiably confident of the validity of their own categories of understanding. I have also certainly met my fair share of scientists who fit this characterization. As I have come to spend more and more time collaborating with scientists, however, I have also become familiar with their stereotype of the stubbornly obscurantist humanist, who wrinkles up her nose at their ridiculous "data," but who—when pressed for details or concrete suggestions for improvement—walks off with her nose in the air, muttering in French. Again, a cartoon, but again containing a modicum of truth. For all their faults, scientists are very keenly concerned with the accuracy of their results, and are very willing to listen to anyone who has concrete suggestions for how to improve this accuracy. It is time for us to begin talking. To anyone starting from a cognitive scientific standpoint, the idea that any Homo sapiens anywhere completely lacked any sense of mind–body dualism comes as a bit of a surprise. Cognitive scientists have been arguing for decades for the existence in human beings of a tendency to project intentionality onto other agents, and the world more broadly.
This tendency has come to be referred to by cognitive scientists as "Theory of Mind" (ToM),31 being "theory"-like because it goes beyond the available data to postulate the existence of unobservable, causal forces: mental forces such as thoughts, desires, or beliefs. It is apparent that, from a very early age, human beings conceive of intentionality as a distinct sort of causality, and distinguish it from both the kind of physical causation that characterizes folk physics and teleological, "vitalistic" causation. Infants and very young children suspend the contact requirement for interpersonal causality, and understand that agents—as opposed to objects—harbor goals and desires and experience emotions (Spelke et al. 1995). Intentionality is viewed by children as a special type of "internal" cause that can work at a distance, and that invites responses from affected agents (Premack and James-Premack 1995). Very young children also seem to expect agents to be self-propelled, as opposed to objects, which should only move when contacted by another object (Spelke et al. 1995; Rakison 2003). There is a massive, and rapidly growing, literature on ToM. Here I will merely note that this tendency appears to emerge quite early in development (e.g., Spelke et al. 1995; Bloom 2004; Phillips and Wellman 2005); has a largely automatic and perceptual component in addition to cognitive components emerging later in development (Scholl and Tremoulet 2000); is present cross-culturally in contemporary populations (Avis and Harris 1991; Barrett et al. 2005; Cohen 2007; Cohen et al. 2011); is vulnerable to selective and at least partial damage in conditions such as autism (Baron-Cohen 1995; Tager-Flusberg 2005); and would appear to be distributed in human populations in a spectrum ranging from autism (deficient ToM) to schizophrenia (excessive ToM) with a clear genetic basis (Crespi and Badcock 2008; Crespi et al. 2009).
As Paul Bloom (2004) has observed, this ToM or "intentional stance" (Dennett 1987) lies behind a disjunction in the humanly experienced world between mind-possessing, intentional agents and mindless things governed by mechanistic causality. Moreover, there is increasing evidence that something at least functionally analogous to ToM may cross the species barrier. Although there is a heated controversy over whether or not other great apes possess full-blown ToM—that is, the ability to model belief systems in other agents that differ from one's own—primates and other mammals clearly possess some elements of ToM, and recent studies have suggested that some sort of fundamental distinction between animate agents and inanimate objects may be deeply rooted in the vertebrate brain (Mascalzoni et al. 2010). The fundamental nature of this disjunction—its early onset in infant development, automaticity, and apparent universality—motivates Paul Bloom's argument that mind–body dualism is not an accidental philosophical legacy of Plato or Descartes, but rather a universal feature of embodied human "folk" cognition.

The Impact of Cognitive Science: Shifting Our Hermeneutical Starting Point

As I have argued in great detail elsewhere (Slingerland 2008), taking seriously scientific work on the nature of human cognition would have a salubrious constraining effect on the humanities by challenging some of our fundamental assumptions. Humanistic inquiry in the Western academy has, especially over the last half-century or so, been dominated by disembodied models of human cognition. Whether rationalistic and universalist or social constructivist and radically particularistic, these models have been based on the assumption that the basic architecture of human thought arises in a manner completely independent of our evolved, biological embodiment. Such a position is no longer empirically tenable.
The human mind is inextricably embodied, and like all embodied minds is the product of evolutionary processes. In the case of humans, these evolutionary processes occur in both biological (genetic) and cultural forms,32 but neither one has the effect of magically extracting us from the physical world in which we are embedded. As we all know, the manner in which a hermeneutic journey unfolds depends very much upon its point of departure. In both my broader field of Religious Studies and my more specialized field of early Chinese thought, the default point of departure has become the assumption of radical cultural difference that naturally falls out of a disembodied, culturally or linguistically constructed model of human cognition. As several scholars of Chinese thought have observed, the result has been a continuation of the kind of exoticization of China one finds in early European Orientalism, whereby China is transformed into a culturally monolithic, timeless, and eternal Other that can be juxtaposed with a similarly monolithic, static West (Zhang 1998; Saussy 2001; Billeter 2006). A representative example of this phenomenon is François Jullien's treatment of the same passage from the Zhuangzi that I discussed above—Confucius's observations concerning the nursing piglets. Being perfectly capable of reading classical Chinese, Jullien is forced to acknowledge that the passage suggests the presence of something "that puts the physical being to good use, something that Aristotle would no doubt have named 'the soul' [qui fait oeuvrer l'être physique à son service, nul doute qu'Aristote l'aurait nommée l'« âme »]" (Jullien 2007: 65). However, he then dismisses the importance of this entity because it is not explicitly named, and "in the absence of a substantial notion of the soul"—Jullien's asserted, but never genuinely demonstrated assumption—it can be nothing more than a vague capacity.
In fact, Zhuangzi does give this entity a variety of names: it is likely the shen 神, the guiding force in Zhuangzian wu-wei (Slingerland 2003: chap. 5), which Zhuangzi sometimes refers to as the “true ruler,” with essence but no form, or the “Heavenly Lord” (tianjun 天君). This, however, is not my point. We have here a case where two Sinologists who know the relevant texts quite well diametrically disagree on their proper interpretation, and these disagreements are very much a product of relative interpretative starting points. Jullien's interpretative starting point—the complete absence of anything like “our” notion of a soul, and (more deeply) radical conceptual difference produced by linguistic/cultural/historic difference33—leads him to dismiss as an aberration what might otherwise be seen as definitive evidence against his position. This is, of course, the nature of the hermeneutical beast: specific bits of evidence take on varying significances when embedded in incommensurable explanatory frameworks, or seen from the perspective of different “horizons” of understanding (Gadamer 2004). What I would like to suggest here is that, as scholars of religion, we need to change our horizon of understanding in light of our current best understanding of the mind coming out of the cognitive sciences. If it were, in fact, the case that we were disembodied consciousnesses, inscribed upon or constructed by language and culture all the way down, radical difference between, say, Greek-inspired thought and Chinese-inspired thought would be a reasonable starting assumption—the languages and social systems are quite different. However, the overwhelming weight of empirical evidence about human cognition strongly suggests that we are not, in fact, so deeply embedded in language and culture: we are embodied animals, with a conceptual world co-structured by genes and the physical–cultural environment (Slingerland 2008: chap. 3). 
Taking cognitive science, and a fully embodied picture of human beings, seriously transforms radical cultural–linguistic difference into something that needs to be decisively demonstrated, rather than merely assumed. Just as work on ToM should make us profoundly skeptical of claims that any people anywhere lacked a basic sense of mind–body dualism, work on basic-level cognitive categories, innate human essentialism, and folk physics (basic causality) similarly changes the burden of proof for scholars who would argue for other aspects of early Chinese holism—that, for instance, they lacked a concept of psychological interiority, of biological essences or teleology, the distinction between fact and appearance, or anything resembling "our" concepts of causation or time. Cognitive scientific evidence about human cognition changes the burden of proof for all of these claims on two scales. In the broader context, it is simply a priori unlikely that we would find such radical differences in such basic concepts among members of the same species—even a species as "hyper-cultural" as our own. In a narrower context, there is also a large, and constantly growing, body of specific experimental findings that argue against each particular claim (Slingerland 2008: chap. 3; De Jesus 2010). This combined burden is one that claims of radical incommensurability simply cannot bear.

Integrating Cognitive Science with Cultural Studies

Having argued that we scholars of religion tend to fetishize cultural difference to our professional detriment, I would like to close with a discussion of the benefits of focusing upon difference. Arguably one of the primary rationales for studying other cultures is that they are often founded upon distinct conceptions of the self, the self's relationship to society, the relationship between reason and emotion, etc., and that difference can provide space for reconsidering deep assumptions of one's own culture.
Some of the scholars who have been most active in promoting the uniqueness of early Chinese thought, such as Roger Ames or Henry Rosemont, Jr., are motivated by the conviction that Western economic rationalism and extreme individualism have led to social alienation and ecological disaster, and that the more "holistic" view of the self and society that we find in certain forms of Confucianism might present an alternative, more positive vision.34 Although I oppose these scholars' more extreme claims about radical cultural difference, this aspect of their projects represents an important contribution to our understanding of both early China and ourselves. In the remainder of this section, I explore the kernel of truth behind the myth of radical Chinese holism, as well as how a serious consideration of early Chinese mind–body concepts has much to offer contemporary conceptions of the self and models of ethical education. Paul Bloom, in arguing for universal mind–body folk dualism, has portrayed this dualism as Cartesian in nature (xii)—that is, as an ontological substance dualism. Even within cognitive science circles, this claim has not gone unchallenged, and some important recent work in cognitive science, combined with data from early China, allows us to add nuances to the basic schema outlined in Bloom (2004). Challenges to the idea that we are all Cartesian dualists have been advanced on at least three fronts: (1) whether or not it is the case that our division of the world and agents boils down to only two parts; (2) if that is the case, whether or not we distinguish sharply and cleanly between the two parts; and (3) whether or not any fundamental divisions in human cognition, if they exist at all, map onto the semantic ranges picked out by the English words mind and body.
I will focus upon each of these challenges in turn, exploring the manner in which knowledge of early China bears upon the debate, hoping thereby to illustrate how humanistic knowledge—deep, textured knowledge of other cultures—can and should inform work in the cognitive sciences.

Are Folk Views of the Self "Dualistic"?

Cartesian dualism posits a stark dichotomy between a single, indivisible consciousness-soul and a body, only the latter of which may be divisible into subcomponents. For scholars of early China, one of the most obvious problems with claiming universality for this schema is the fact that, at least by the time that we reach the Warring States, "the" soul is generally not conceived of as unitary, but made up of several components related to one another in a complex and probably somewhat inconsistent manner—the specific conceptions varying over time and by region, and not even showing rigid consistency within single texts. From the earliest texts, we have the body being contrasted with the "spirit" (shen 神), a more-or-less unitary entity that represents the personal essence of the deceased, leaves the body at death to take residence somewhere "up" above the visible world, and serves as the focus of sacrificial rituals or prognostications. Even in the early texts, however, and with increasing frequency as we move into the Warring States, the spirit is discussed alongside at least two other subsouls, the po 魄 and the hun 魂. The standard scholarly position has long been that these two souls were conceived of as separate and as having different fates after the death of the body. A classic article by Kenneth Brashier (1996) has called this neat dichotomy into question, demonstrating that, although there is considerable evidence for a hun-po dualism in the elite literati tradition, there were multiple other scholarly and popular conceptions in which hunpo was used as a compound, or the two terms were used interchangeably.
The only constant seems to be that, despite their varying degrees of entanglement with the "body complex" (149), terms like hun and "spirit" were all consistently linked to mental activity and the continuation of consciousness—as well as some degree of personal identity—beyond the death of the physical body. This sort of contrast between the body or "body complex" and a more rarefied spirit is a dualism of a sort, but significantly weaker than the ontological substance dualism we find in Descartes. Interestingly, similar challenges to Bloom have been presented by cognitive scientists familiar with cross-cultural data. Rebekah Richert and Paul Harris (2008), for instance, provide a variety of cross-cultural evidence suggesting the prevalence of a tripartite (body–mind–soul) model of the self, rather than simple mind–body dualism. As is the case with early Chinese conceptions, this tripartite schema can still be brought under the umbrella of folk dualism if we note that concepts such as that of "soul" or personal essence are fundamentally parasitic on the concept of mind: things without minds do not have souls. In this respect, the various soul-like concepts that we find in the world's religious traditions—as well as the fact that these souls themselves can have quite numerous subtypes—can be understood as cultural fine-tunings and subdivisions of a more fundamental and universal concept of mind. Nonetheless, too simplistic a picture of the self as consisting of two, and only two, components is clearly inadequate.

"Weak" or "Sloppy" Folk Dualism: Mind and Body Interpenetrate

There is a rather large and constantly growing literature on the "embodied" or mind–body integrated nature of Chinese thought.35 Since this is a well-trodden path, I keep this portion of my discussion brief.
To begin, with regard to xin–body relations, the early Chinese conception of xin is undeniably different from Cartesian esprit or Kantian Geist in that it refers to a concrete organ in the body, the seat of emotions and desires—or at least certain emotions and desires—as well as “reason” and language ability. As part of the body, the xin interacts with body and bodily energies (qi 氣) in multitudinous and complex ways, a fact that is highlighted both in the philosophical literature—see particularly Mencius 2:A:2—and the later medical literature.36 This means that, as Henry Rosemont, Jr., has noted, we do not find in early Chinese thought the sort of widespread and sharp “cognitive/affective split” (2001: 78) that characterizes much post-Enlightenment thought in the West. For many early Chinese thinkers, the xin is the locus not only of the sort of rational functions that thinkers such as Descartes or Immanuel Kant associate with the mind—abstract thought, free will, reflection—but also a panoply of normative emotions, such as compassion or moral disgust, that such thinkers would relegate to the “heteronomous” realm of the body. Even some of the Chinese thinkers who in fact posit a rather sharp divide between the xin and the emotions still embrace a relatively “holistic” model of the perfected sage, who has reshaped his emotions and desires to accord with the normative order.37 This is why many early Chinese thinkers also value embodied “know-how” or tacit knowledge over the sort of abstract, explicit theoretical knowledge that is prized in most of post-Enlightenment Western thought (Fingarette 1972; Billeter 1984; Eno 1990; Ivanhoe 1993/2000; Slingerland 2003). Another sense in which the early Chinese conceptions of mind and body could be considered “holistic” is that neither the mind nor the postmortem spirit is completely immaterial. The xin is, as noted above, very much a part of the body, and despite its special powers does not consist of a separate substance.
Ancestral spirits and other supernatural beings occupy a space somewhere between the visible human world and the very rarefied abode of heaven, and interact causally with the visible world in a variety of ways. The kernel of truth behind claims that the early Chinese had a radically “immanent” conception of the universe is that they appeared to have seen minds, souls, or spirits as not completely immaterial—that is, “made” out of a different stuff than the visible world—but rather as consisting of very rarefied stuff, on some sort of continuum with the material making up the visible world. Even a cursory examination of non-Chinese traditions, however, makes it clear that this kind of overlap or interpenetration of mind and body, or reason and emotion, is by no means unique to China or “the East.” To begin with, it is important to recognize that conceiving of the mind as exclusively a seat of amodal, algorithmic reason—completely detached from and ontologically distinct from the body and the material world—is by no means a hegemonic position even within the Western philosophical tradition. Aristotle, for instance, based his entire ethics upon virtues, which are essentially a type of “intelligent” emotional-somatic capacity, linked to the body and to a type of “skill” or implicit knowledge (Wiggins 1975/76). In the Aristotelian model of the self—one that dominated scholastic philosophy throughout the Middle Ages—such capacities occupy a third place in between abstract cognitive capacities and more gross bodily functions. Although the disembodied model of the mind came to assert a fairly broad hold on the Western philosophical mind during the European Enlightenment, there were prominent holdouts—including Leibniz and Spinoza—and the development of post-Enlightenment philosophy in the West has arguably been a story of attempts to move beyond Cartesianism and reintegrate the body and mind.
As Bryan Van Norden has observed, philosophical Cartesianism in fact only represents a small portion of the Western philosophical tradition, and is no longer seriously defended by most Western philosophers; the portrayal of “Western” philosophy as characterized by some kind of monolithic Cartesianism is thus an unfortunate example of a “methodologically dualist” approach that caricatures both “Eastern” and “Western” thought (2002: 167–168). Once we leave the realm of philosophy, it becomes clear from even a cursory survey of the literature on folk intuitions that strong Cartesianism is, in fact, a rather strange and counterintuitive view even for “us Westerners.” When reasoning about topics such as spirit possession or the afterlife, study participants in the Western world have intuitions about which capacities clearly go with “the mind” (abstract thoughts and personal identity); which clearly go with “the body” (physiological functions); and which are intermediate capacities, such as appetites and habits, that straddle body and mind (Cohen 2007; Cohen and Barrett 2008). In one recent study, Emma Cohen et al. (2011) found that, when asked to imagine having left their own body and entered a rock or a plant, subjects in both rural Brazil and Oxford viewed their capacities as more or less “body dependent.” For instance, they were quite likely to say that, even if they had entered a rock, they would still remember things, see things, or know things, but relatively unlikely to say that they would feel achy or sore or feel hungry. The sorts of capacities that we typically associate with mind tended to be seen as body-independent—easily migrating to the rock or the plant—while others remained tightly yoked to the physical body and many hovered somewhere in between.
In all of these studies, the sorts of capacities that often, but not always, migrate with the spirit or survive the death of the physical body map quite nicely onto the functions of the Chinese xin that are often cited as examples of radical “holistic” thinking. What is particularly interesting about this study is that the rural Brazilian subjects—most of them entirely without formal education—were more dualistic than the U.K. subjects. Cohen et al. speculate that this may be due to the U.K. subjects' exposure to Western biomedical and neurological education, with its message of an integrated mind–body system. That is, education in “Western” science—so typically associated with the supposedly monolithically Cartesian Western mind—may in fact serve to undermine innate folk dualism. Very similar results were obtained in a recent study by Maciek Chudek et al. (forthcoming), which found that, among rural Fijian subjects, mind–body dualism decreased in subjects who had more exposure to Western education. Another helpful set of illustrations (both figuratively and literally) of folk mind–body overlap is provided by K. Mitch Hodge in an important study that explicitly critiques Bloom's theory of “innate Cartesianism” (Hodge 2008). Examining examples of funerary rites, mythology, iconography, and religious doctrine drawn from a variety of world cultures, Hodge points out that the folk's dualism is clearly not one whereby mind and body are conceived of as entirely different, noninteracting substances. Inert bodies continue to contain traces of the minds that once inhabited them, which is why corpses present such a profound religious and emotional problem: they are objects—and, within a short period of time, threats to public health—that somehow seem different from ordinary objects.
Indeed, one could argue that the primary purpose of mortuary rituals is to break this connection in a workable fashion, allowing the corpse to be disposed of safely while either gradually detaching the mind-traces from it completely or transferring these traces to another, more durable object (a gravestone, ancestor tablet).38 In a similar fashion, minds never free themselves entirely from their mortal coil: the dead continue to be imagined as possessing ethereal bodies resembling those they “possessed” in life, as well as being subject to the sorts of physical limitations typically imposed by bodies.39 There are a myriad of parallels in early China to the sort of physical representations of the dead that Hodge documents, in which the deceased are visually represented as possessing very much the same form in the afterlife—although sometimes rather more attenuated or vague—as the one they possessed in life, and where the human and supernatural realms are portrayed as distinct but connected in some fashion.40 Artistic portrayals of this sort are extremely revealing precisely because they are not explicitly about worldviews—i.e., they are not consciously formulated theological or philosophical accounts—but rather their indirect expressions, and therefore arguably much better at revealing the contours of real-life cognition in a given culture. If we set side-by-side, for instance, a silk tomb painting from Zidanku (fourth century BCE) representing the deceased as a male figure riding on a dragon (Lai 2005) and any randomly chosen Renaissance painting depicting the soul of the dead as a rather buff and well-dressed Italian aristocrat, one would be hard-put to single out one of the two as more or less “holistic”: in both cases, the dead person is imagined as body-like in form but somehow less than material. 
The famous Changsha Mawangdui name banner of Lady Dai portrays the universe as consisting of distinct registers—most scholars see them as at least threefold, representing an immanent realm sandwiched by a heavenly realm above and underworld below (Wu 1992: 121–127)—populated by somewhat ethereal, but nonetheless body-like, figures. Compare this painting to, say, Paolo Veronese's The Battle of Lepanto (1572),41 depicting the famous battle in 1571 where a fleet of galleys from the Christian Holy League defeated the Ottoman fleet in a battle off Greece. In the painting, we see the two fleets locked in combat below, while in the clouds above a gathering of quite vigorous-looking saints, led by St. Justina, is pleading with the Virgin to grant victory to the Christian forces. They are apparently winning her over, because to the upper right we see a cherub beginning to rain flaming arrows down on the Turkish forces. Which of the vertical schemas depicted in these paintings is more disjunctive? I fail to see any principled reason for seeing either depicted universe as any more or less “immanent” or involving more or less interpenetration of otherworldly realms, than the other.

“We” Are Neither Greek Nor Cartesian

No Chinese philosopher presents a radically conflictual theory of a bi-, tri-, or multi-partite soul: no more do we find a stark dichotomy between soul and body conceived, as by Plato, as two distinct substances, the one invisible and destined for immortality, the other visible, the soul's prison. (2007: 75)

It is hard to take strong issue with any of these statements. Rosemont and Lloyd seem to feel, however, that “not Cartesian,” “not like Plato,” “not Western,” and “not like us” are synonymous phrases.
This is, in fact, a common—and crucial—rhetorical move in the neo-Orientalist literature: setting up a straw-man “West” (Cartesian, Greek, rationalistic), and then using the fact that most Chinese thinkers are not Cartesian or Greek or rationalistic to demonstrate a profound gulf between the West and East. The evidence that we have reviewed above suggests that Descartes' austere mind–body substance dualism is a rather counterintuitive philosophical position, alien to any person's everyday cognition. Cartesianism represents an intellectually rigorous working out of a rather “sloppy” folk intuition, but like many philosophical or theological concepts—e.g., a completely transcendent immaterial God, Calvinistic predestination, or Buddhist “no-self” doctrines (Barrett 1996; Slone 2004)—online human cognition seems somewhat impervious to its logic. Looked at in this light, the fact that the early Chinese were not Cartesian dualists is not much to write home about, and in no way entails that mind–body dualism of some sort was entirely alien to their thought. While the early Chinese did not posit a scalpel-sharp, perfectly clear divide between mind and body—or “higher” cognitive abilities residing in the “mind” as opposed to lower ones located in the body—they clearly saw xin and the various words for the physical body as two qualitatively distinct points of attraction on a spectrum, with some intermediate abilities or features potentially falling on one side of the line or the other depending upon the exact time period, the school of thought, or the pragmatic context (for instance, medical diagnosis and treatment vs. philosophical reflection on methods of self-cultivation). In this sense, they were no more or less dualistic than “we” are.

CONCLUSION: DOING COMPARATIVE RELIGION

Once the shift from radical cultural–linguistic constructivism to embodied commonality is made, the landscape of comparative religious studies begins to appear to us in a very different light.
Not only does the very project of comparison actually begin to make sense (Slingerland 2004), but perhaps the ambitions of the early pioneers of comparative religion also begin to seem a bit less ridiculous. Recognition of the cultural and intellectual limitations of scholars such as James Frazer and Edward Tylor has caused adjectives such as “Tylorian” to become terms of abuse—synonymous with theoretically and culturally naïve, colonialist, “hegemonic.”42 The result, arguably, has been to transform religious studies from a science of human cultural patterns to an endless process of interpretation (“turtles all the way down” [Geertz 1973: 29]) and accumulation of massive quantities of “thick description” with no analytic goal in mind—indeed, with the explicit assumption that any attempt to “explain” our material would be to betray it. This has brought the progressive research projects of the early pioneers of our field to a screeching halt (Slingerland and Bulbulia 2011), throwing the comparative baby out with the colonialist bathwater and ceding the task of exploring the origins and nature of human religious life to scholars coming from other fields who too often lack the linguistic and cultural backgrounds to do the job well. We scholars of religion need to get back in the explanation game. As Roger Ames, Henry Rosemont, Jr., and François Jullien have argued quite convincingly, early Chinese conceptions of the self—and we should acknowledge that there are many of them—do present us with models of mind–body, reason–emotion, and individual–society relations that, on the whole, provide edifying contrasts to the disembodied, hyperrationalist models that have dominated recent Western philosophical thinking. This has implications that go far beyond philosophy or religion, since these psychologically unrealistic models coming out of philosophy have had—and continue to have—deleterious impacts on legal, political, and educational policy (Slingerland 2011a, 2011b).
They also played a role in sending so-called first-generation cognitive science down some ultimately dead-end paths—an influence that the field as a whole has only recently recovered from. In these respects, engaging with early Chinese models of the self can clearly serve as an important, substantive corrective to recent philosophical–religious excesses and wrong turns. However, it is also imperative that the sort of conceptual variation that emerges from comparative religion be contextualized within a framework of basic human cognitive universals—indeed, it is this very framework that allows texts or thinkers from another era or cultural context to be comprehensible in the first place. It is important to recognize that a fully exoticized “Other” cannot engage us at all, and that the religious or philosophical challenge of texts such as those of early China can only be felt against a background of cognitive universality. As Jean-François Billeter has noted in his book-length critique of Jullien:

We can begin with the myth of the fundamental “otherness” [l'altérité foncière] of China … and we will then merely develop a vision of China that confirms the “otherness” posed at the outset. … When we begin with this a priori assumption of difference, we lose sight of the shared foundation; when we begin from the standpoint of a shared foundation, the differences will then naturally emerge on their own. (2006: 82; cf. Saussy 2001: 111–112; Cheng 2009)

Zhang Longxi further observes that an obsession with radical cultural difference is not only intellectually paralyzing—what can one really say about the incommensurably different?—but also creates a situation where it becomes difficult to see why anyone else outside of Chinese studies would care about what we do.
Speaking of his own field, he notes that, to the extent that study of Chinese literature remains constrained by cultural myths, it is likely to remain a kind of “cultural ghetto … closed and of little interest to outsiders in the academic environment of the American university” (1998: 118). If the study of Chinese religion is to move out of the ghetto and engage with the broader academic world, it is necessary to ground it in a more realistic model of human cognition and culture-cognition interaction. Embodied cognition and a dual-inheritance model of gene-culture coevolution provide precisely this sort of model, an ideal new starting point for cross-cultural comparative work. Embodied experience of a shared world can serve as a bridge to the cultural “Other” and provide us with powerful new theories of how this shared cognitive structure can be elaborated by culture, language, and history into quite idiosyncratic—but ultimately still comprehensible—forms (Slingerland 2008: chap. 4). If comparative thought and some sense of progressive research agenda are to regain their proper place at the core of religious studies, an approach that combines the best knowledge and practices of both the sciences and the humanities is our most promising way forward.

NOTES

Also see Farmer et al. (2000) for an excellent general critique of the position that holism is somehow unique to China, as well as Michael Puett's commonly ignored, but nonetheless definitive, debunking of claims concerning nature–culture holism in early China (Puett 2001).
Although the traditional starting date for the Shang Dynasty is circa 1600 BCE, our earliest written records—the so-called oracle bones—date from circa 1250 at the earliest. For an introduction to Shang religion, see Eno (2009).
6 Book of Odes, Mao #209.
The fact that these relatively incorporeal spirits could consume at least the invisible essence of food and drink, and even become drunk, is a manifestation of the “weakness” of the folk's dualism that will be discussed later.
7 For helpful introductions, see Poo (1998) and Lai (2005); for an excellent recent survey focused on early Chinese mortuary practices, see Thote (2009).
See the account in Lai (2005); Lai notes that the place where the proper name would be expected is occupied by the character mou 某 (“so-and-so”), suggesting that this was a form text that had, for whatever reason, yet to be filled in. Also see Poo (2009) for a discussion that situates this text in the broader context of Warring States afterlife beliefs.
10 Translations from Chinese texts cited are my own, unless otherwise noted, and keyed to the most commonly used English translations.
11 As I discuss later, it seems that spirits and other supernatural beings in early China were not conceived of as completely immaterial: spirits are invisible and freed of physical bodies, but arguably are viewed as merely very tenuous or diffuse forms of matter. Nonetheless, this spectrum of tangibility/visibility is not continuous: there is a clear divide between the mundane world of physical bodies and other concrete, visible objects and the “numinous” (shen 神, ling 靈) realm of the ghost-demons (gui 鬼) and various spiritual beings and gods.
12 Goldin also links such mind–body dualism to “folk psychology” (232), anticipating the connections to cognitive science work on ToM that I discuss later.
I am indebted to Wayne Kreger for bringing this passage to my attention, and to Paul Goldin for observing that, although this particular passage is almost certainly post-Buddhist, Jeffrey Ritchie has documented probably earlier versions from the Liezi in Ritchie (2011).
15 Anyone familiar with Descartes' writings will note the parallel to Descartes' own obsession with automatons.
Of course the dating—even rough—of texts from the pre-Qin period is controversial, not least of all because, like most preprinting-press texts, they are rather permeable, taking in material from different time periods and subject to scribal and editorial whims. There are currently various factions within the field of early Chinese studies, ranging from scholars who still defend a very clear and “traditional” chronology of pre-Qin texts to what I would characterize as a “radical fringe” that has been arguing for extreme textual indeterminacy in all pre-Han texts (e.g., Brooks and Brooks 1998). I would place myself somewhere in the middle, and would stand by the claim that the three-part periodization that I employed in the study is broadly defensible on both philological and philosophical grounds (see Slingerland 2000 and Goldin 2011). In any case, the contrast between the pre-Warring States texts on the one side and the early- and late-Warring States texts on the other is certainly uncontroversial, and the trends I discuss still hold if we collapse early- and late-Warring States into one category.
An online database of the so-called Guodian corpus of bamboo texts (interred roughly 300 BCE and discovered in 1993), maintained by the Chinese University of Hong Kong (http://bamboo.lib.cuhk.edu.hk/).
The inclusion of archeological texts was intended to help offset the inevitable bias introduced in dealing with transmitted texts, which may have been subjected to editorial selection bias over the centuries. The Guodian corpus was chosen because of the ease of accessing and searching it online. As Paul Goldin has observed (personal communication), however, this introduces a potential new, and more avoidable, source of bias—the selection of one particular archeological corpus among the many now available—that should be corrected in future iterations of this study.
22 Instances of xing 形, shen 身, ti 體, li 力 (“physical strength,” one instance in the late Warring States), and qi 氣 (when used in the sense of physiological energy) were all taken as references to the “body.”
23 Mencius 4:A:19 (Van Norden 2008: 99), where physically taking care of one's parents is characterized as “merely caring for their mouths and limbs” (yang kouti 養口體); this arguably expresses a coordination rather than a contrast.
24 Note that, for the purposes of the final analysis, content codes 13–15 (all referring to various aspects of what one might term “higher cognition”) were collapsed into one code.
25 We did not systematically explore the issue of where these emotions go once they are expunged from the xin, but my qualitative intuition is that they are downloaded onto the qi, or bodily energy.
26 Nothing like this sort of systematic study has yet been performed on post-Qin, let alone contemporary materials, but qualitative analysis suggests that, once Buddhism is introduced to China in the beginning of the Common Era, the conception of xin becomes, if anything, even more disengaged from the body.
27 See, for instance, the study by Clark and Winslett (2011) recently published in this journal, the methodology of which was inspired by an early version of the project reported in Slingerland and Chudek (2011a).
28 See, for instance, James Pennebaker's “Linguistic Inquiry and Word Count” program at http://www.liwc.net/. As part of a large, multiyear grant just awarded to us at the University of British Columbia to study “The Evolution of Religion and Morality,” we will be making such tools (along with summaries of their strengths and weaknesses and instructions on how to use them) available to scholars of religion; consult our web site at: http://www.hecc.ubc.ca/cerc/project-summary/.
One prominent exception, as Luther Martin has observed (personal communication), has been the work of biblical scholars, who since the nineteenth century have used concordances (and more recently electronic databases) to perform word-count studies aimed at, for instance, distinguishing between “genuine” Pauline letters and the deutero-Pauline literature.
31 Perhaps the best recent (and quite readable) introduction to ToM is Bloom (2004).
For example, the fact that “the fabric of our thought … is woven by Indo-European languages” (Jullien 1995: 18), while “the structure of ancient Chinese … gave rise to an interplay of correlations and alternations that led to the expression of constant variation within a process” (2007: 111).
Harper (1998) and Porkert (1974) provide helpful discussions of the medical literature; Ishida (1989) serves as a representative example of how the conception of xin in a particular medical text is too commonly reified into “the” Chinese view, as if that view were both monolithic and eternal.
37 Xunzi, for instance, has a quite “rationalistic” model of the xin, one of the main functions of which is to monitor and control the emotions and desires. “The likes and dislikes, delights and angers, griefs and joys of the inborn nature are called emotions,” he says in one typical passage. “When the emotions are aroused and the mind makes a choice among them, this is called thought” (22/1b; Knoblock 1994: 127). See Yearley (1980) for more on the Xunzian model of the mind.
38 In his “Discourse on Ritual” (lilun 禮論), Xunzi offers an extremely detailed and sophisticated account of how Confucian funerary rites are designed to perform this psychological function (Knoblock 1994: 49–73).
See Lai (2005) for an excellent review of early Chinese funerary materials.
41 Now housed in the Gallerie dell'Accademia, Venice.
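The word-count approach mentioned in these notes—comparing relative frequencies of target terms across corpora to detect differences in usage—can be illustrated with a minimal sketch. The mini-corpora and term lists below are invented purely for illustration; this is not the coding scheme or software of any study cited here:

```python
from collections import Counter

def term_frequencies(tokens, terms):
    """Relative frequency of each target term in a tokenized text."""
    counts = Counter(tokens)
    total = len(tokens)
    return {t: counts[t] / total for t in terms}

# Hypothetical mini-corpora: token lists standing in for two groups of texts.
corpus_a = ["xin", "body", "xin", "spirit", "xin", "qi"]
corpus_b = ["body", "qi", "body", "spirit", "body", "xin"]

targets = ["xin", "body"]
print(term_frequencies(corpus_a, targets))  # corpus A uses "xin" more often
print(term_frequencies(corpus_b, targets))  # corpus B uses "body" more often
```

Real studies of this kind add the hard parts this sketch omits: word segmentation for classical Chinese, normalization for text length, and statistical tests on the frequency differences.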
42 See, for instance, Clifford Geertz's comment that “even to consider people's religious beliefs as attempts to bring anomalous events or experiences … within the circle of the at least potentially explicable seems to smack of Tyloreanism or worse” (1973: 100; emphasis added).
Strong views about mind–body holism are also quite common—if not the default position—in contemporary Chinese scholarship: Zhang Zailin, for instance, observes that, in early Chinese thought, there is no dichotomy of mind versus physical body, but rather a holistic conception whereby mental processes are produced holistically by the body (Zhang 2008: 29; cf. Yang 1996; Tang 2007). Even scholars who might seem, at first glance, to be adopting a stance consistent with weak holism often end up embracing positions that only make sense if one takes holism in a strong sense. Mark Edward Lewis, for instance, observes that “the Chinese”—in contrast to “the Western tradition”—“accepted that the mind was part of the body, more refined and essentialized, but of the same substance” (2006: 20), and then goes on to describe the body in early China as an apparently arbitrarily chosen, culturally constructed “marker of supreme value” (20), and the bodily surface as a constantly “fluid and shifting … zone of exchange” (61). Considering the obvious and intuitive importance of the body as a locus for value and discrete individuality in most Western traditions, this suggests that the early Chinese inhabited an intellectual milieu in which the concepts of body, mind, and self-other boundaries were quite alien to our own. A similar assumption of radical alienness regarding mind and body informs Herbert Fingarette's famous claim that the Confucius of the Analects completely lacked anything like the concept of psychological interiority (Fingarette 1972; cf. 2008).4 Indeed, strong forms of the holist position typically link a complete absence of mind–body dualism in early China to a lack of inner life, individualism, or concept of a personal afterlife. Paolo Santangelo, for instance, claims that, in contrast to “Western cultures,” in China: there is no clear separation between spirit and matter, or soul and body … the concept of “
no
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://brainworldmagazine.com/think-therefore-mind-body-connection/
I Think, Therefore I Am (The Mind-Body Connection) » Brain World
I Think, Therefore I Am (The Mind-Body Connection) In 1637, René Descartes said, “Cogito ergo sum” (“I think, therefore I am”). Originally written in French (“Je pense, donc je suis”), it is best known both in Latin form and as representative of “the mind-body problem” or dualism, a concept intrinsic to modern science and philosophy. In essence, dualism promotes the idea of isolation and separation in the dynamics between thought and matter, between the cerebral and corporal. A striking example of this thinking is found in traditional allopathic medicine, wherein the human mind and body have been treated as separate entities for the past 400 years or so. The Quantum Human: Dualism’s Demise Quantum physics and quantum science are increasingly demonstrating that all matter arises from and is connected to an infinite (and likely unified) field of universal energy, which in its myriad forms creates everything in existence — including us. Physicist David Bohm classified this energy as either invisible (“implicate”) or visible (“explicate”). Humanity represents a miraculous expression of these energies: We are the implicate energy of mind, and the explicate energy of body; there is no separation. Indeed, we appear to be quantum beings! Mind and body are inextricably connected and interactive, an idea reflective of ancient tribal philosophies. Ironically, what they simply accepted intuitively is now being proven by the very science that rejected their thinking as primitive and backward. The Mind-Body The implicate energies of mind — consciousness — comprise three interactive strata: the cognitive conscious, the automatic and autonomic subconscious, and the more ethereal superconscious — the spirit or soul. Living can thus be described as a multitudinous exchange of energies between consciousness and its environment. These energy exchanges create internal energies we call thought. 
Thoughts in turn create and interact with energies we call feelings, triggering the sensation of energies we call emotions. The explicate energies of body provide the mind both a residence and vehicle through which to experience the energy exchanges of living. The body conducts incoming energies through our sensory apparatus to our internal processor, the brain. In turn, the mind, via the brain, uses these inputs to generate somatic sensations and conditions within the body, which then feed into the loop of thoughts, feelings, and emotions. This interconnection and communication defines a human being. The mind is the nexus, generating and integrating thoughts, feelings, and emotions with somatic sensations and perceptions, producing what we perceive as reality. Oops! While that’s how it all apparently works, it’s not how it all necessarily feels. Duality still holds sway; for example, most people would describe pain as distinct and separate from thoughts and emotions. Yet studies show that 70 to 80 percent of our perception of pain is emotionally generated.
I Think, Therefore I Am (The Mind-Body Connection) In 1637, René Descartes said, “Cogito ergo sum” (“I think, therefore I am”). Originally written in French (“Je pense, donc je suis”), it is best known both in Latin form and as representative of “the mind-body problem” or dualism, a concept intrinsic to modern science and philosophy. In essence, dualism promotes the idea of isolation and separation in the dynamics between thought and matter, between the cerebral and corporal. A striking example of this thinking is found in traditional allopathic medicine, wherein the human mind and body have been treated as separate entities for the past 400 years or so. The Quantum Human: Dualism’s Demise Quantum physics and quantum science are increasingly demonstrating that all matter arises from and is connected to an infinite (and likely unified) field of universal energy, which in its myriad forms creates everything in existence — including us. Physicist David Bohm classified this energy as either invisible (“implicate”) or visible (“explicate”). Humanity represents a miraculous expression of these energies: We are the implicate energy of mind, and the explicate energy of body; there is no separation. Indeed, we appear to be quantum beings! Mind and body are inextricably connected and interactive, an idea reflective of ancient tribal philosophies. Ironically, what they simply accepted intuitively is now being proven by the very science that rejected their thinking as primitive and backward. The Mind-Body The implicate energies of mind — consciousness — comprise three interactive strata: the cognitive conscious, the automatic and autonomic subconscious, and the more ethereal superconscious — the spirit or soul. Living can thus be described as a multitudinous exchange of energies between consciousness and its environment. These energy exchanges create internal energies we call thought. 
no
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://www.bartleby.com/essay/Descartes-Meditations-And-The-Separation-Of-Mind-FKNAW2VYS4FF
Descartes' Meditations And The Separation Of Mind And Body ...
Descartes' Meditations take us through what can be called into doubt and what he concludes is absolutely certain. Descartes argues that the mind and body are two distinct things, but he acknowledges that they are somehow connected. Although scholars have noted Descartes' argument for the separation of mind and body, they have missed the importance of how he justifies the connection between the two: because God willed it so. At the start of his meditations Descartes is sifting through his prior assumptions of what he knows, and he makes his claim “I am, I exist”. It is absolutely certain that “I am, I exist” is true and cannot be called into doubt, because the argument presented is that if I will it or understand it, it is true. … The body may still exist, but he wouldn’t be aware of it. Descartes observes a piece of wax to try and discern what “I” means as well as what a body is. I didn't quite understand the experiment until I read: “Surely I am aware of my own self in a truer and more certain way than I am of the wax, and also in a much more distinct and evident way. What leads me to think that the wax exists—namely, that I see it—leads much more obviously to the conclusion that I exist” (Descartes 7-8). I took from this that because Descartes senses the wax, he must exist, for if he were nothing, how could he possibly think he sees the wax? He endorses his claim by stating: “I now know that even bodies are perceived not by the senses or by imagination but by the intellect alone, not through their being touched or seen but through their being understood” (Descartes 8). I interpreted this claim as: I can perceive my body not by pinching my arm or conjuring the image of it in my head, but by acknowledging it as an extension of my mind. Descartes continues to explore the concept of the body as being an extension, though it is unique and distinct from the immaterial. 
Descartes spends the majority of the Meditations proving that the mind and body are two distinct and separate things. The Third Meditation examines the difference between imagination and thought/understanding. … In the Sixth Meditation, Descartes makes the point that there is a distinction between mind and body. It is in Meditation Two that Descartes believes he has shown the mind to be better known than the body. In Meditation Six, however, he goes on to claim that he knows his mind, and knows clearly and distinctly that its essence consists purely of thought; also, that bodies' essences consist purely of extension, and that he can conceive of his mind and body as existing separately. By the power of God, anything that can be clearly and distinctly conceived of as existing separately from something else can be created as existing separately. However, Descartes claims that the mind and body have been created separated without good reason. … Descartes has a very distinct thought when thinking about the mind and how it relates to the body, or more specifically the brain. He seems to want to explain that the mind in itself is independent from the body. A body is merely a physical entity that could be proven to be true scientifically and can also be proven through the senses. Such things are not possible with the metaphysical mind because it is independent of the body. Building on his previous premises, Descartes finally proves whether material things exist or not and determines whether his mind and body are separate from each other or not. In Meditation Six, Descartes lays the foundation for dualism, which has become one of the most important arguments in philosophy. Cartesian dualism is one of the most long-lasting legacies of René Descartes' philosophy. He argues that the mind and body operate as separate entities able to exist without one another. That is, the mind is a thinking, non-extended entity and the body is non-thinking and extended. 
His belief elicited a debate over the nature of the mind and body that has spanned centuries, a debate that is still vociferously argued today. In this essay, I will try to tackle Descartes' claim and come to some conclusion as to whether Descartes is correct to say that the mind and body are distinct. In the Meditations, René Descartes attempts to doubt everything that is possible to doubt. His uncertainty about what exists ranges from God to himself. Then he goes on to start proving that things do exist by first proving that he exists. After he establishes himself he can go on to establish everything else in the world. Next he goes on to prove that the mind is separate from the body. In order to do this he must first prove he has a mind, and then prove that bodily things exist. I do agree with Descartes that the mind is separate from the body. These are the arguments of Descartes that I agree with. … Princess Elisabeth of Bohemia was a noblewoman. She was raised by her grandmother and aunt, but soon left to go live with her parents in Holland. Princess Elisabeth was an intelligent woman who excelled in everything, including music, dancing, and art. She is mainly known for her correspondence with Descartes. … Meditation Three: At the beginning of Meditation Three, Descartes has made substantial progress towards defeating skepticism. Using his methods of Doubt and Analysis he has systematically examined all his beliefs and set aside those which he could call into doubt, until he reached three beliefs which he could not possibly doubt. First, that the evil genius seeking to deceive him could not deceive him into thinking that he did not exist when in fact he did exist. Second, that his essence is to be a thinking thing. Third, that the essence of matter is to be flexible, changeable and extended. Descartes concludes from his first meditation that he is a thinking thing, and as long as he thinks, he exists.
In the Second Meditation, Descartes attempts to define what the “thinking thing” that he concluded himself to be in the First Meditation actually was. Descartes determines that he gains knowledge of the world (that is, knowledge that is separate from the mind) through the senses, and that the senses can deceive. This he outlines within the First Meditation, and mentions in the Second Meditation. Furthermore, in the Second Meditation, Descartes refuses to define himself as a rational animal, instead going back and relying on labeling his mind as a thinking thing. In the fifth and sixth paragraphs of the Second Meditation, Descartes distinguishes the body from the soul. Descartes indicates that the body is present, and it seems to be in the physical world, but he also notes that his mind does not seem to exist in the same manner. Descartes also claims that the ability to perceive is a power of the soul, but inoperable without the body. Descartes then explores another object with physical substance, a piece of wax. The piece of wax is undeniably physical; it takes up space within the material world. The body falls into the same category, just as any other physical object in the material world. The main point of Descartes' Second Meditation is that any given person can know more about their mind than about the world surrounding them. Descartes' Arguments for the Real Distinction of Mind and Body: Descartes has three main arguments for minds and bodies being two distinct types of substance. These are known as arguments for substance dualism and are as follows. * The Argument from Doubt: Descartes argues that while he could pretend or think that he had no body and therefore did not exist in any place, he could not think or pretend he had no mind, as merely having a doubt that he had a mind proves that he does. 
* The Argument from Clear and Distinct Understanding: Descartes argues that if two things can be separated, even if only by God, then they must be two different things. … Furthermore, the mere doubt that you exist is proof that you in fact exist, for how can you doubt … In Descartes' Meditations on First Philosophy, he introduces the divisibility argument for his idea of mind-body dualism. It argues that the mind is distinct from the body and that they are different "substances". The argument has two premises: the mind is indivisible and the body is divisible. In this essay, I will interpret Descartes' argument by discussing the key points of these premises and how they are supported. I will also be incorporating my own thoughts on the argument to determine whether the divisibility argument is enough to validate the idea of mind-body dualism. Descartes' First Meditation: René Descartes' decision to shatter the molds of traditional thinking is still talked about today. He is regarded as an influential abstract thinker, and some of his main ideas are still talked about by philosophers all over the world. While he wrote the "Meditations", he secluded … Descartes: Week Three Question: In Meditation Six, “Concerning the Existence of Material Things, and the Real Distinction between Mind and Body”, René Descartes wrote of his distinctions between the mind and the body, first by reviewing all things that he believed to be true, then assessing the causes and later … The Mind-Body Problem: The mind-body problem, which is still debated even today, raises the question of the relationship between the mind and the body. Theorists such as René Descartes and Thomas Nagel have written extensively on the problem, but they have many dissenting beliefs. 
Descartes, a dualist, contends that the mind … In Meditation Six, entitled “Concerning the Existence of Material Things, and Real Distinction between the Mind and Body”, one important thing Descartes explores is the relationship between the mind and body. Descartes believes the mind and body are separated and that they are two different substances. He believes this to be clearly and distinctly true, which is a Cartesian criterion for true knowledge. I, on the other hand, disagree that the mind and body are separate and that the mind can exist without the body. First, I will present Descartes' position on mind/body dualism and his proof for such ideas. Secondly, I will discuss why I think his argument is weak and offer my own ideas that dispute his reasoning, while I keep in mind how he might … Descartes' Meditation Six explains the distinction between the mind and body. He explains that he is confused as to why his mind is attached to a particular body which he calls his own. He questions why pain or tickling happens in his own body but does not occur in any body outside of his own, and why a tugging feeling in his stomach tells him that he is hungry and that he should eat. From this, he perceives that he is only a thinking thing. The idea of a body is merely extended, and the mind is … Descartes' Meditations are created in pursuit of certainty, or true knowledge. He cannot assume that what he has learned is necessarily true, because he is unsure of the accuracy of its initial source. In order to purge himself of all information that is possibly wrong, he subjects his knowledge to methodic doubt. This results in a (theoretical) doubt of everything he knows. Anything, he reasons, that can sustain such serious doubt must be unquestionable truth, and knowledge can then be built from that base. Eventually, Descartes doubts everything. But by doubting, he must exist, hence his "Cogito ergo sum".
yes
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://www.acsu.buffalo.edu/~duchan/new_history/early_modern/descartes.html
Judy Duchan's History of Speech - Language Pathology
Rene Descartes (1596-1650). Rene Descartes, a French mathematician, philosopher, and physiologist, was a pivotal figure in philosophical thinking in the 17th century, and indeed still is today. His ideas led him to a conceptual separation of the mind (what he referred to as “soul”) and the body. This separation, between what today might be called psychological and physical phenomena, has come to be known as “Cartesian dualism”. Descartes saw the mind as having a separate status from the world of quantifiable material things, such as the physical processes that go on in the body. He explained the body, as well as other material phenomena, with reference to the shapes, sizes and motions of elements of matter. Descartes' conceptual model of mind and body acknowledges that the mind and body intermingle. Sensations, like burning oneself for example, are intertwined with mental reactions of pain. Descartes hypothesized that animal spirits, or pneuma, served as the basis of nerve and muscle function. He theorized further that the finest particles in the blood passed through the pineal gland on their way to the brain, where they turned into animal spirits. The spirits, something like the wind, traveled to the pores in the lining of the ventricles of the brain. The nerves were seen as tubes whose function it was to carry the animal spirits from one part of the body to another. Stimulation of a sense organ activated the filaments in the nerve tubes, which conveyed the sense information to the ventricles. From there they traveled automatically, like a reflex, through the nerves to the muscles. Also operating in Descartes' mechanistic renderings was a rational soul that worked independently of the muscle action. 
The interaction between the soul and body took place, according to Descartes, in the pineal gland, and it was there that the person (but not other animals) was able to have a conscious appreciation of sensation, and there that voluntary movements of the muscles were initiated. The pineal gland acted as a floating valve to the third ventricle and took part in the processing of sensation, imagination and memory, and in the causation of voluntary bodily movements. In this work, Descartes proposed a mechanism for automatic reaction in response to external events. According to his proposal, external motions affect the peripheral ends of the nerve fibrils, which in turn displace the central ends. As the central ends are displaced, the pattern of interfibrillar space is rearranged and the flow of animal spirits is thereby directed into the appropriate nerves. It was Descartes' articulation of this mechanism for automatic, differentiated reaction that led to his being credited with the founding of reflex theory. Although Rene Descartes drew from the tradition of scholastics such as Thomas Aquinas, he created a shift in direction from scholastic research that was to have long-lasting impact. He agreed with Aristotle and his followers that the intellect or soul can operate independently of the body and that it is immortal. But, unlike Aristotle, Descartes saw the soul as more active, serving to create consciousness. Other functions that Aristotle saw as part of the soul, Descartes relegated to the body. These included nutritive functions (nutrition, growth, reproduction) and sense functions (perception, locomotion). Descartes conceived of these bodily functions in mechanistic terms. Light from the object (ABC) enters the eyes and forms visual images on the retina. The images are connected to the walls of the ventricle by hollow tubes representing the optic nerve. 
From the tubes the image is taken by animal spirits to the ventricles and from there to the pineal gland (marked H on the drawing). Here the motor response is initiated and carried through the opening 8 into the nerve to the arm muscle, which it inflates, producing movement. A sampling of the writings of Rene Descartes: Descartes, Rene (1633). The treatise on man. Descartes, Rene (1637). Discourse on method. Descartes, Rene (1641). Meditations on first philosophy. Descartes, Rene (1649). The passions of the soul. Writings about Rene Descartes's theories of the mind and body: Bennett, M. R. & Hacker, P. M. S. (2002). The motor system in neuroscience: A history and analysis of conceptual developments. Progress in Neurobiology, 67, 1-52.
yes
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://medium.com/bottomline-conversations/cartesian-dualism-the-mind-body-problem-in-self-improvement-fe9807d8ddd5
Cartesian Dualism: The Mind-Body Problem in Self-Improvement ...
Is Our Mind Independent From Our Body? Descartes' theory of dualism implies that the mind is not accessible to us. This means that we cannot fully understand what goes on in our own minds, let alone in the minds of others. He also believed the soul was inaccessible to humans; we could only get an inkling of it through our senses. Cartesian dualism is the belief that there are two different kinds of substance in the world: matter and spirit, also known as body and mind. Each one has its unique properties, functions, and capacities. Self-improvement, according to Descartes, is only possible through a correct understanding of the mind. This is because our ability to reason sets us apart from animals. Reason allows us to reflect on our thoughts and actions. It is only by using our reason that we can improve ourselves. The separation between body and mind enables self-improvement through introspection and logical thinking. Descartes' dualism has been influential in the Western world. Dualism is still a widespread belief today, particularly in the philosophical tradition of mind-body dualism. But there are many criticisms of Cartesian dualism. Some argue that it leads to a form of mental slavery, where we are at the mercy of our minds. Others say it is untrue and that the mind is as physical as the body. Nevertheless, Cartesian dualism remains an influential philosophical theory with essential implications for self-improvement. By understanding the mind as a separate entity from the body, we can begin to understand our thoughts and actions and change them for the better. Through reason, we can gain greater control over our lives and improve ourselves both physically and mentally. Principles of Cartesian Dualism: Cartesian dualism is a doctrine in philosophy that states that there are two types of substance, mind and matter.
yes
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://www.herts.ac.uk/link/volume-1-issue-1/our-body-and-mind-are-one
Our body and mind are one | LINK
Our body and mind are one. Professor Helen Payne, Professor of Psychotherapy, School of Education, University of Hertfordshire. 'The mind had to be first about the body, or it could not have been' (Damasio 2006). Neuroscience demonstrates that perception and relationship to others operate through an embodied sense of self (Damasio 1994). Damasio's view is that the mind, brain and body are unbroken rather than separate aspects: It is the entire organism rather than the body alone or the brain alone that interacts with the environment...When we see, or hear, or touch or taste or smell, body proper and brain participate in the interaction with the environment (Damasio 1994:224). Our bodies and minds are profoundly inter-related. Our emotions are anchored in the body and integral to the autonomic nervous system, the nerve complex that controls the involuntary actions of our internal organs, blood vessels and glands. Emotional balance is mediated by the sympathetic and parasympathetic systems. The way we move or hold our bodies can inform us of our emotional states, which are evoked in tandem with sensations. Neuropsychology tells us that our thoughts are governed by our emotions which are, in turn, grounded in our bodies, emotions being part of self-regulation (homeostasis). Descartes famously wrote: 'I think, therefore I am'. However, following research into embodied cognition it is time to replace that claim with 'I move, therefore I think'. In the teaching of reading we might apply this notion by helping children to envision what they are reading to increase comprehension. In adulthood it is often easier to lose the connection with these essential networks of communication, which can enable us to cope with psychological and physical injury/symptoms, stress and trauma (Fogal 2009). For example, if we can learn to explore the emotional content of our physical symptoms, even understand their purpose perhaps, we are more likely to be able to self-manage them. 
Feelings such as fear, anxiety or profound sadness often disconnect us from others. We are told that these feelings mean we are unwell in our mental health, in our minds. This distress is seen as separate and distinct from physical symptoms in our bodies, our physical health. We have a mental and a physical healthcare system, without a connection between the two. Physical symptoms such as irritable bowel syndrome, ME, fibromyalgia, chronic pain, asthma or eczema, although unexplained medically, are generally understood to mean that we are unwell in our bodies. However, ground-breaking research into chronic unexplainable physical symptoms demonstrates that there is a complex and dense inter-connectedness between the brain and the body, resulting in the conclusion that the idea of a split between body and mind is unhelpful (Payne 2009a; 2009b; Payne and Stott 2010; Payne 2011). The outcomes show that a newly developed course in bodily wisdom can reduce symptom distress, anxiety and/or depression and support people to self-manage their bodily symptoms. These outcomes make us realise that we need to work with the whole person, including their bodily symptoms. The brain and the body entwined: The brain is entwined with the whole body through the nervous system, via the spinal cord for example, including all the systems, organs, musculature, liquids and chemicals constantly influencing the brain. There is no separation between body and mind. Your whole being is ever-changing; new pathways are forming in your brain as you read this. We are dynamically pre-disposed to all influences. We can become anxious (termed hyper- {over} aroused) and sad (termed hypo- {under} aroused) at any one time. We can freeze - becoming rigid in body and mind, as in fear - or be shut down and numb, as in deep depression. When balanced in body and mind we can feel at peace, connected with ourselves and others. 
Furthermore, our muscles have a memory, whereby triggers via our own or another's postures and/or gestures can remind us of autobiographical experiences close to our heart and of traumatic events (Konopatsch and Payne 2011). Did you know that when you are not in alignment with your body's wisdom, you are sent all sorts of 'signals' - both big and small? Here are some of the many signals termed 'symptoms' you may have experienced: Engaging with your bodily wisdom can re-connect you with your body and your mind in powerful ways. You may discover insights and practical skills to experience your body as a source of effective knowledge and healing potential. This can help you to transcend many of these common, yet challenging, symptoms. By transforming your relationship with your body in a holistic way you may experience more life force energy, creativity and resilience on a sustained basis over time. Whether you're young or old, in good health or poor, whatever your level of body intelligence, you can go further. Learning to listen to our bodies, to the 'signals' which can be termed symptoms (such as when we feel pain or other physical sensations), can help us to regulate ourselves (Payne and Warnecke 2014). We can use our bodily symptoms as a gateway to the self-healing/management of conditions by accepting that body and mind are connected, in that they are one and the same. Using the BodyMind Approach™: Awareness practices to support feelings of wellbeing and resilience, and to cope with unexplained symptoms such as pain and with life events including trauma, can help us to learn about the body-mind connections. This feeling of connection can enable us to feel more in control of our unexplained symptoms and our feelings of depression and anxiety, and can promote feelings of wellbeing. You can learn to become empowered by your body's natural wisdom. 
Did you know that your innate body intelligence can help you to create better health, prosperity and richer, more authentic relationships? In our everyday lives of sitting in an office, presenting learning materials, screening, studying etc., it is easy to forget that our body and mind are one and the same. Action re-connects us to our bodies as lived experience. Your body is a gift and a source of precious knowledge, if you know how to access it. Moving with awareness is one way to access this wisdom. However, in a culture that emphasises cognitive achievement, targets and output, it is easy to experience a major disconnection from your body's natural, deep intelligence - you are missing out on the full, exciting variety of human experience. Using The BodyMind Approach™ in a pathways2wellbeing (P2W) course will help you to learn how being fully present in your body is a creative and healthy response to, for example, unexplained physical symptoms, everyday stresses and the accelerated pace of our daily lives. The more you honour living in your body, the more all the aspects of your life will become easier and more collaborative. As you master body intelligence, you will feel more empowered to create the kind of life you want. The courses enable you to learn to be more fully present in your body, and to be aware of and listen to your body, empowering you to live well with any persistent bodily signals (symptoms) which do not have any medical explanation. Courses draw from the latest neuroscience research: group work practice; biology; human development; movement psychology; breath-work and the most progressive transformational modalities. You will get inside access to one of the leading experts in this exciting, emergent field - and other professionals including doctors, psychologists, researchers, and group facilitators in health care and wellbeing. 
They will share the secrets for transforming your relationship with your body in a holistic way, so you experience more life force energy, creativity and resilience on a sustained basis over time. Payne, H. (2009a) 'The BodyMind approach to psychotherapeutic groupwork with patients with medically unexplained symptoms: a review of the literature, description of approach and methodology selected for a pilot study'. European Journal for Counselling and Psychotherapy, 11(3), pp. 287–310.
The brain and the body entwined
The brain is entwined with the whole body through the nervous system, via the spinal cord for example, including all the systems, organs, musculature, liquids and chemicals constantly influencing the brain. There is no separation between body and mind. Your whole being is ever-changing; new pathways are forming in your brain as you read this. We are dynamically pre-disposed to all influences. We can become anxious (termed hyper- or over-aroused) and sad (termed hypo- or under-aroused) at any one time. We can freeze - becoming rigid in body and mind, as in fear - or be shut down and numb, as in deep depression. When balanced in body and mind we can feel at peace, connected with ourselves and others. Furthermore, our muscles have a memory whereby triggers via our own or another's postures and/or gestures can remind us of autobiographical experiences close to our heart and traumatic events (Konopatsch and Payne 2011). Did you know that when you are not in alignment with your body's wisdom, you are sent all sorts of 'signals' - both big and small? There are many such signals, termed 'symptoms', that you may have experienced. Engaging with your bodily wisdom can re-connect you with your body and your mind in powerful ways. You may discover insights and practical skills to experience your body as a source of effective knowledge and healing potential. This can help you to transcend many of these common, yet challenging, symptoms. By transforming your relationship with your body in a holistic way you may experience more life force energy, creativity and resilience on a sustained basis over time. Whether you're young or old, in good health or poor, whatever your level of body intelligence, you can go further. Learning to listen to our bodies, to the 'signals' which can be termed symptoms, such as pain or other physical sensations, can help us to regulate ourselves (Payne and Warnecke 2014).
no
Philosophy
Is the mind separate from the body?
yes_statement
the "mind" is "separate" from the "body".. there is a "separation" between the "mind" and the "body".
https://plato.stanford.edu/entries/merleau-ponty/
Maurice Merleau-Ponty (Stanford Encyclopedia of Philosophy)
Maurice Merleau-Ponty Maurice Jean Jacques Merleau-Ponty (1908–1961), French philosopher and public intellectual, was the leading academic proponent of existentialism and phenomenology in post-war France. Best known for his original and influential work on embodiment, perception, and ontology, he also made important contributions to the philosophy of art, history, language, nature, and politics. Associated in his early years with the existentialist movement through his friendship with Jean-Paul Sartre and Simone de Beauvoir, Merleau-Ponty played a central role in the dissemination of phenomenology, which he sought to integrate with Gestalt psychology, psychoanalysis, Marxism, and Saussurian linguistics. Major influences on his thinking include Henri Bergson, Edmund Husserl, Martin Heidegger, Max Scheler, and Jean-Paul Sartre, as well as neurologist Kurt Goldstein, Gestalt theorists such as Wolfgang Köhler and Kurt Koffka, and literary figures including Marcel Proust, Paul Claudel, and Paul Valéry. In turn, he influenced the post-structuralist generation of French thinkers who succeeded him, including Michel Foucault, Gilles Deleuze, and Jacques Derrida, whose similarities with and debt to the later Merleau-Ponty have often been underestimated. Merleau-Ponty published two major theoretical texts during his lifetime: The Structure of Behavior (1942 SC) and Phenomenology of Perception (1945 PP). Other important publications include two volumes of political philosophy, Humanism and Terror (1947 HT) and Adventures of the Dialectic (1955 AdD), as well as two books of collected essays on art, philosophy, and politics: Sense and Non-Sense ([1948]1996b/1964) and Signs (1960/1964). Two unfinished manuscripts appeared posthumously: The Prose of the World (1969/1973), drafted in 1950–51; and The Visible and the Invisible (1964 V&I), on which he was working at the time of his death. 
Lecture notes and student transcriptions of many of his courses at the Sorbonne and the Collège de France have also been published. For most of his career, Merleau-Ponty focused on the problems of perception and embodiment as a starting point for clarifying the relation between the mind and the body, the objective world and the experienced world, expression in language and art, history, politics, and nature. Although phenomenology provided the overarching framework for these investigations, Merleau-Ponty also drew freely on empirical research in psychology and ethology, anthropology, psychoanalysis, linguistics, and the arts. His constant points of historical reference are Descartes, Kant, Hegel, and Marx. The characteristic approach of Merleau-Ponty’s theoretical work is his effort to identify an alternative to intellectualism or idealism, on the one hand, and empiricism or realism, on the other, by critiquing their common presupposition of a ready-made world and failure to account for the historical and embodied character of experience. In his later writings, Merleau-Ponty becomes increasingly critical of the intellectualist tendencies of the phenomenological method as well, although with the intention of reforming rather than abandoning it. The posthumous writings collected in The Visible and the Invisible aim to clarify the ontological implications of a phenomenology that would self-critically account for its own limitations. This leads him to propose concepts such as “flesh” and “chiasm” that many consider to be his most fruitful philosophical contributions. Merleau-Ponty’s thought has continued to inspire contemporary research beyond the usual intellectual history and interpretive scholarship, especially in the areas of feminist philosophy, philosophy of mind and cognitive science, environmental philosophy and philosophy of nature, political philosophy, philosophy of art, philosophy of language, and phenomenological ontology. 
His work has also been widely influential on researchers outside the discipline of philosophy proper, especially in anthropology, architecture, the arts, cognitive science, environmental theory, film studies, linguistics, literature, and political theory.
1. Life and Works
Merleau-Ponty was born in Rochefort-sur-Mer, in the province of Charente-Maritime, on March 14, 1908.[1] After the death in 1913 of his father, a colonial artillery captain and a knight of the Legion of Honor, he moved with his family to Paris. He would later describe his childhood as incomparably happy, and he remained very close to his mother until her death in 1953. Merleau-Ponty pursued secondary studies at the Parisian lycées Janson-de-Sailly and Louis-le-Grand, completing his first course in philosophy at Janson-de-Sailly with Gustave Rodrigues in 1923–24. He won the school’s “Award for Outstanding Achievement” in philosophy that year and would later trace his commitment to the vocation of philosophy to this first course. He was also awarded “First Prize in Philosophy” at Louis-le-Grand in 1924–25. He attended the École Normale Supérieure from 1926 to 1930, where he befriended Simone de Beauvoir and Claude Lévi-Strauss. Some evidence suggests that, during these years, Merleau-Ponty authored a novel, Nord. Récit de l’arctique, under the pseudonym Jacques Heller (Alloa 2013b). His professors at ENS included Léon Brunschvicg and Émile Bréhier, the latter supervising his research on Plotinus for the Diplôme d’études supérieures in 1929. Bréhier would continue to supervise Merleau-Ponty’s research through the completion of his two doctoral dissertations in 1945. During his student years, Merleau-Ponty attended Husserl’s 1929 Sorbonne lectures and Georges Gurvitch’s 1928–1930 courses on German philosophy. He received the agrégation in philosophy in 1930, ranking in second place.
After a year of mandatory military service, Merleau-Ponty taught at the lycée in Beauvais from 1931 to 1933, pursued a year of research on perception funded by a subvention from the Caisse nationale des sciences (the precursor of today’s Centre national de la recherche scientifique) in 1933–34, and taught at the lycée in Chartres in 1934–35. From 1935 to 1940, he was a tutor (agrégé-répétiteur) at the École Normale Supérieure, where his primary duty was to prepare students for the agrégation. During this period, he attended Alexandre Kojève’s lectures on Hegel and Aron Gurwitsch’s lectures on Gestalt psychology. His first publications also appeared during these years, as a series of review essays on Max Scheler’s Ressentiment (1935), Gabriel Marcel’s Being and Having (1936), and Sartre’s Imagination (1936).[2] In 1938, he completed his thèse complémentaire, originally titled Conscience et comportement [Consciousness and Behavior] and published in 1942 as La structure du comportement [The Structure of Behavior, SC]. He was the first outside visitor to the newly established Husserl Archives in Louvain, Belgium, in April 1939, where he met Eugen Fink and consulted Husserl’s unpublished manuscripts, including Ideen II and later sections of Die Krisis. With the outbreak of World War Two, Merleau-Ponty served for a year as lieutenant in the 5th Infantry Regiment and 59th Light Infantry Division, until he was wounded in battle in June 1940, days before the signing of the armistice between France and Germany. He was awarded the Croix de guerre, recognizing bravery in combat. After several months of convalescence, he returned to teaching at the Lycée Carnot in Paris, where he remained from 1940 until 1944. In November 1940, he married Suzanne Jolibois, and their daughter Marianne was born in June 1941.
In the winter of 1940–41, Merleau-Ponty renewed his acquaintance with Jean-Paul Sartre, whom he had met as a student at the École Normale, through their involvement in the resistance group Socialisme et Liberté. The group published around ten issues of an underground review until the arrest of two members in early 1942 led to its dissolution. After the conclusion of the war, in 1945, Merleau-Ponty would collaborate with Sartre and Beauvoir to found Les Temps Modernes, a journal devoted to “littérature engagée”, for which he served as political editor until 1952. At the end of the 1943–44 school year, Merleau-Ponty completed his main thesis, Phénoménologie de la perception [Phenomenology of Perception, PP], and in 1944–45 he taught at the Lycée Condorcet in Paris, replacing Sartre during the latter’s leave from this position. Merleau-Ponty defended his two dissertations in July 1945, fulfilling the requirements for the Docteur ès lettres, which was awarded “with distinction”. In October 1945, Les Temps Modernes published its inaugural issue; Merleau-Ponty was a founding member of the journal’s governing board, managed its daily affairs, and penned many of its editorials that were signed simply “T.M.”, even though he refused to allow his name to be printed on the cover alongside Sartre’s as the review’s Director. That fall, Merleau-Ponty was appointed to the post of Maître de conférences in Psychology at the University of Lyon, where he was promoted to the rank of Professor in the Chair of Psychology in 1948. From 1947 to 1949, he also taught supplementary courses at the École Normale Supérieure, where his students included the young Michel Foucault. Student notes (taken by Jean Deprun) from Merleau-Ponty’s 1947–48 course on “The Union of the Soul and the Body in Malebranche, Biran, and Bergson”—a course that he taught at both Lyon and E.N.S. to prepare students for the agrégation and which was attended by Foucault—were published in 1968 (1997b/2001). 
In 1947, Merleau-Ponty participated regularly in the Collège philosophique, an association formed by Jean Wahl to provide an open venue for intellectual exchange without the academic formality of the Sorbonne, and frequented by many leading Parisian thinkers. Merleau-Ponty published his first book of political philosophy in 1947, Humanisme et terreur, essai sur le problème communiste [Humanism and Terror: An Essay on the Communist Problem, 1969, HT], in which he responded to the developing opposition between liberal democracies and communism by counseling a “wait-and-see” attitude toward Marxism. A collection of essays concerning the arts, philosophy, and politics, Sens et non-sens [Sense and Non-Sense, 1996b/1964], appeared in 1948. In the fall of 1948, Merleau-Ponty delivered a series of seven weekly lectures on French national radio that were subsequently published as Causeries 1948 (2002/2004). Merleau-Ponty declined an invitation to join the Department of Philosophy at the University of Chicago as a Visiting Professor in 1948–49, but instead received a leave from Lyon for the year to present a series of lectures at the University of Mexico in early 1949. Later in 1949, Merleau-Ponty was appointed Professor of Child Psychology and Pedagogy at the University of Paris, and in this position lectured widely on child development, psychoanalysis, phenomenology, Gestalt psychology, and anthropology. His eight courses from the Sorbonne are known from compiled student notes reviewed by him and published in the Sorbonne’s Bulletin de psychologie (1988/2010). Merleau-Ponty held this position for three years until his election, in 1952, to the Chair of Philosophy at the Collège de France, the most prestigious post for a philosopher in France, which he would hold until his death in 1961. At forty-four, Merleau-Ponty was the youngest person ever elected to this position, but his appointment was not without controversy.
Rather than following the typical procedure of ratifying the vote of the General Assembly of Professors, who had selected Merleau-Ponty as their lead candidate, the Académie des sciences morales et politiques made the unprecedented decision to remove his name from the list of candidates; the Académie’s decision was subsequently overturned by the Minister of Education himself, who allowed the faculty vote in favor of Merleau-Ponty to stand. Merleau-Ponty’s January 1953 inaugural lecture at the Collège de France was published under the title Éloge de la Philosophie [In Praise of Philosophy, 1953/1963]. Many of his courses from the Collège de France have subsequently been published, based either on student notes or Merleau-Ponty’s own lecture notes (1964b, 1968/1970, 1995/2003, 1996a, 1998/2002, 2003/2010, 2011, 2013). In the face of growing political disagreements with Sartre set in motion by the Korean War, Merleau-Ponty resigned his role as political editor of Les Temps Modernes in December of 1952 and withdrew from the editorial board altogether in 1953. His critique of Sartre’s politics became public in 1955 with Les Aventures de la dialectique [Adventures of the Dialectic, 1973 AdD], in which Merleau-Ponty distanced himself from revolutionary Marxism and sharply criticized Sartre for “ultrabolshevism”. Beauvoir’s equally biting rebuttal, “Merleau-Ponty and Pseudo-Sartreanism”, published the same year in Les Temps Modernes, accuses Merleau-Ponty of willfully misrepresenting Sartre’s position, opening a rift between the three former friends that would never entirely heal. Merleau-Ponty’s intellectual circle during his years at the Collège de France included Lévi-Strauss and Jacques Lacan, and for several years he was a regular contributor to the popular weekly magazine L’Express.
In October and November 1955, on a commission from Alliance française, Merleau-Ponty visited several African countries, including Tunisia, French Equatorial Africa, the Belgian Congo, and Kenya, where he delivered a series of lectures on the concept of race, colonialism, and development. In 1956, he published Les Philosophes célèbres [Famous Philosophers], a large edited volume of original introductions to key historical and contemporary thinkers (beginning, interestingly, with philosophers from India and China) whose contributors included Gilles Deleuze, Gilbert Ryle, Alfred Schutz, and Jean Starobinski. In April 1957, Merleau-Ponty declined to accept induction into France’s Order of the Legion of Honor, presumably in protest over the inhumane actions of the Fourth Republic, including the use of torture, during the Battle of Algiers. In October and November of 1957, as his second commission from Alliance française, he lectured in Madagascar, Reunion Island, and Mauritius, citing as a primary motivation for accepting the commission his desire to see first-hand the effects of reforms in French policies governing overseas territories. The last book Merleau-Ponty published during his lifetime, Signes [Signs, 1960/1964], appeared in 1960, collecting essays on art, language, the history of philosophy, and politics that spanned more than a decade. His last published essay, “L’Œil et l’esprit” [“Eye and Mind”, 1964a OEE], addressing the ontological implications of painting, appeared in the 1961 inaugural issue of Art de France. Merleau-Ponty died of a heart attack in Paris on May 3rd, 1961, at the age of 53, with Descartes’ Optics open on his desk.
Merleau-Ponty’s friend and former student Claude Lefort published two of his teacher’s unfinished manuscripts posthumously: La prose du monde [The Prose of the World, 1969/1973], an exploration of literature and expression drafted in 1950–51 and apparently abandoned; and Le visible et l’invisible [The Visible and the Invisible, 1968 V&I], a manuscript and numerous working notes from 1959–1961 that present elements of Merleau-Ponty’s mature ontology. The latter manuscript was apparently part of a larger project, Être et Monde [Being and World], for which two additional unpublished sections were substantially drafted in 1957–1958: La Nature ou le monde du silence [Nature or the World of Silence] and Introduction à l’ontologie [Introduction to Ontology] (Saint Aubert 2013: 28).[3] These manuscripts, along with many of Merleau-Ponty’s other unpublished notes and papers, were donated to the Bibliothèque Nationale de France by Suzanne Merleau-Ponty in 1992 and are available for consultation by scholars.[4]
2. The Nature of Perception and The Structure of Behavior
Merleau-Ponty’s lifelong interest in the philosophical status of perception is already reflected in his successful 1933 application for a subvention to study the nature of perception, where he proposes to synthesize recent findings in experimental psychology (especially Gestalt psychology) and neurology to develop an alternative to dominant intellectualist accounts of perception inspired by critical (Kantian) philosophy. Interestingly, this early proposal emphasizes the significance of the perception of one’s own body for distinguishing between the “universe of perception” and its intellectual reconstructions, and it gestures toward the “realist philosophers of England and America” (presumably William James and A. N. Whitehead, as presented in Jean Wahl’s 1932 Vers le concret) for their insights into the irreducibility of the sensory and the concrete to intellectual relations.
While this initial proposal makes no mention of phenomenology, Merleau-Ponty’s subsequent 1934 report on the year’s research, noting the limitations of approaching the philosophical study of perception through empirical research alone, emphasizes the promise of Husserlian phenomenology for providing a distinctively philosophical framework for the investigation of psychology. In particular, Merleau-Ponty mentions the distinction between the natural and transcendental attitudes and the intentionality of consciousness as valuable for “revising the very notions of consciousness and sensation” (NP: 192/78). He also cites approvingly Aron Gurwitsch’s claim that Husserl’s analyses “lead to the threshold of Gestaltpsychologie”, the second area of focus in this early study. The Gestalt is “a spontaneous organization of the sensory field” in which there are “only organizations, more or less stable, more or less articulated” (NP: 193/79). Merleau-Ponty’s brief summary of Gestalt psychology, anticipating research presented in his first two books, emphasizes the figure-ground structure of perception, the phenomena of depth and movement, and the syncretic perception of children. Nevertheless, Merleau-Ponty concludes—again citing Gurwitsch—that the epistemological framework of Gestalt psychology remains Kantian, requiring that one look “in a very different direction, for a very different solution” to the problem of the relation between the world described naturalistically and the world as perceived (NP: 198/82). Merleau-Ponty’s first book, The Structure of Behavior (SC), resumes the project of synthesizing and reworking the insights of Gestalt theory and phenomenology to propose an original understanding of the relationship between “consciousness” and “nature”. 
Whereas the neo-Kantian idealism then dominant in France (e.g., Léon Brunschvicg, Jules Lachelier) treated nature as an objective unity dependent on the synthetic activity of consciousness, the realism of the natural sciences and empirical psychology assumed nature to be composed of external things and events interacting causally. Merleau-Ponty argues that neither approach is tenable: organic life and human consciousness are emergent from a natural world that is not reducible to its meaning for a mind; yet this natural world is not the causal nexus of pre-existing objective realities, since it is fundamentally composed of nested Gestalts, spontaneously emerging structures of organization at multiple levels and degrees of integration. On the one hand, the idealist critique of naturalism should be extended to the naturalistic assumptions framing Gestalt theory. On the other hand, there is a justified truth in naturalism that limits the idealist universalization of consciousness, and this is discovered when Gestalt structures are recognized to be ontologically basic and the limitations of consciousness are thereby exposed. The notion of “behavior”, taken by Merleau-Ponty as parallel to the phenomenological concept of “experience” (in explicit contrast with the American school of behaviorism), is a privileged starting point for the analysis thanks to its neutrality with respect to classical distinctions between the “mental” and the “physiological” (SC: 2/4). The Structure of Behavior first critiques traditional reflex accounts of the relation between stimulus and reaction in light of the findings of Kurt Goldstein and other contemporary physiologists, arguing that the organism is not passive but imposes its own conditions between the given stimulus and the expected response, so that behavior remains inexplicable in purely anatomical or atomistic terms. 
Merleau-Ponty instead describes the nervous system as a “field of forces” apportioned according to “modes of preferred distribution”, a model inspired by Wolfgang Köhler’s Gestalt physics (SC: 48/46). Both physiology and behavior are “forms”, that is, total processes whose properties are not the sum of those which the isolated parts would possess…. [T]here is form wherever the properties of a system are modified by every change brought about in a single one of its parts and, on the contrary, are conserved when they all change while maintaining the same relationship among themselves. (SC: 49–50/47) Form or structure therefore describes dialectical, non-linear, and dynamic relationships that can function relatively autonomously and are irreducible to linear mechanical causality (see Thompson 2007). The critique of physiological atomism is also extended to theories of higher behavior, such as Pavlov’s theory of conditioned reflexes. Merleau-Ponty argues that such accounts rely on gratuitous hypotheses lacking experimental justification and cannot effectively explain brain function or learning. In the case of brain function, experimental work on brain damage demonstrates that localization hypotheses must be rejected in favor of a global process of neural organization comparable to the figure-ground structures of perceptual organization. Similarly, learning cannot be explained in terms of trial-and-error fixing of habitual reactions, but instead involves a general aptitude with respect to typical structures of situations. Merleau-Ponty proposes an alternative tripartite classification of behavior according to the degree to which the structures toward which it is oriented emerge thematically from their content. 
Syncretic behaviors, typical of simpler organisms such as ants or toads, respond to all stimuli as analogues of vital situations for which the organism’s responses are instinctually prescribed by its “species a priori”, with no possibility for adaptive learning or improvisation. Amovable behaviors are oriented toward signals of varying complexity that are not a function of the organism’s instinctual equipment and can lead to genuine learning. Here the organism, guided by its vital norms, responds to signals as relational structures rather than as objective properties of things. Drawing on Köhler’s experimental work with chimpanzees, Merleau-Ponty argues that even intelligent non-human animals lack an orientation toward objective things, which emerges only at the level of symbolic behavior. While amovable behavior remains attached to immediate functional structures, symbolic behavior (here limited to humans) is open to virtual, expressive, and recursive relationships across structures, making possible the human orientation toward objectivity, truth, creativity, and freedom from biologically determined norms. More generally, Merleau-Ponty proposes that matter, life, and mind are increasingly integrative levels of Gestalt structure, ontologically continuous but structurally discontinuous, and distinguished by the characteristic properties emergent at each integrative level of complexity. A form is defined here as a field of forces characterized by a law which has no meaning outside the limits of the dynamic structure considered, and which on the other hand assigns its properties to each internal point so much so that they will never be absolute properties, properties of this point. 
(SC: 148/137–38) Merleau-Ponty argues that this understanding extends to all physical laws, which “express a structure and have meaning only within this structure”; the laws of physics always refer back to “a sensible or historical given” and ultimately to the history of the universe (SC: 149/138, 157/145). At the level of life, form is characterized by a dialectical relation between the organism and its environment that is a function of the organism’s vital norms, its “optimal conditions of activity and its proper manner of realizing equilibrium”, which express its style or “general attitude toward the world” (SC: 161/148). Living things are not oriented toward an objective world but toward an environment that is organized meaningfully in terms of their individual and specific style and vital goals. Mind, the symbolic level of form that Merleau-Ponty identifies with the human, is organized not toward vital goals but by the characteristic structures of the human world: tools, language, culture, and so on. These are not originally encountered as things or ideas, but rather as “significative intentions” embodied within the world. Mind or consciousness cannot be defined formally in terms of self-knowledge or representation, then, but is essentially engaged in the structures and actions of the human world and encompasses all of the diverse intentional orientations of human life. While mind integrates within itself the subordinate structures of matter and life, it goes beyond these in its thematic orientation toward structures as such, which is the condition for such characteristically human symbolic activities as language and expression, the creation of new structures beyond those set by vital needs, and the power of choosing and varying points of view (which make truth and objectivity possible). In short, mind as a second-order or recursive structure is oriented toward the virtual rather than simply toward the real. 
Ideally, the subordinate structure of life would be fully absorbed into the higher order of mind in a fully integrated human being; the biological would be transcended by the “spiritual”. But integration is never perfect or complete, and mind can never be detached from its moorings in a concrete and embodied situation. Merleau-Ponty emphasizes throughout The Structure of Behavior that form, even though ontologically fundamental, cannot be accounted for in the terms of traditional realism; since form is fundamentally perceptual, an “immanent signification”, it retains an essential relationship with consciousness. But the “perceptual consciousness” at stake here is not the transcendental consciousness of critical philosophy. The last chapter of The Structure of Behavior clarifies this revised understanding of consciousness in dialogue with the classical problem of the relation between the soul and the body in order to account for the relative truths of both transcendental philosophy and naturalism. The issue concerns how to reconcile the perspective of consciousness as “universal milieu” (i.e., transcendental consciousness) with consciousness as “enrooted in the subordinated dialectics”, that is, as a Gestalt emerging from lower-order Gestalts (i.e., perceptual consciousness) (SC: 199/184). In the natural attitude of our pre-reflective lives, we are committed to the view that our perceptual experience of things is always situated and perspectival (i.e., that physical objects are presented through “profiles”, Husserl’s Abschattungen), but also that we thereby experience things “in themselves”, as they really are in the mind-independent world; the perspectival character of our opening onto the world is not a limitation of our access but rather the very condition of the world’s disclosure in its inexhaustibility. 
At the level of this prereflective faith in the world, there is no dilemma of the soul’s separation from the body; “the soul remains coextensive with nature” (SC: 203/189). This prereflective unity eventually splinters under our awareness of illness, illusion, and anatomy, which teach us to separate nature, body, and thought into distinct orders of events partes extra partes. This culminates in a naturalism that cannot account for the originary situation of perception that it displaces, yet on which it tacitly relies; perception requires an “internal” analysis, paving the way for transcendental idealism’s treatment of subject and object as “inseparable correlatives” (SC: 215/199). But transcendental idealism in the critical tradition subsequently goes too far: by taking consciousness as “milieu of the universe, presupposed by every affirmation of the world”, it obscures the original character of the perceptual relation and culminates in “the dialectic of the epistemological subject and the scientific object” (SC: 216/200, 217/201). Merleau-Ponty aims to integrate the truth of naturalism and transcendental thought by reinterpreting both through the concept of structure, which accounts for the unity of soul and body as well as their relative distinction. Against the conception of transcendental consciousness as a pure spectator correlated with the world, Merleau-Ponty insists that mind is an accomplishment of structural integration that remains essentially conditioned by the matter and life in which it is embodied; the truth of naturalism lies in the fact that such integration is essentially fragile and incomplete. Since “integration is never absolute and always fails”, the dualism of mind and body is not a simple fact; it is founded in principle—all integration presupposing the normal functioning of subordinated formations, which always demand their own due. 
(SC: 226–27/210) The Structure of Behavior concludes with a call for further investigation of “perceptual consciousness”, a task taken up by its sequel, Phenomenology of Perception. In the concluding pages of Structure, Merleau-Ponty offers a preliminary sketch of phenomenologically inspired approaches to the “problem of perception” that set the stage for his subsequent work, emphasizing (a) the difference between what is directly given as an aspect of individual lived experience and intersubjective significations that are only encountered virtually; and (b) the distinctiveness of one’s own body, which is never experienced directly as one objective thing among many. The book concludes by identifying the “problem of perception” as its encompassing concern: Can one conceptualize perceptual consciousness without eliminating it as an original mode; can one maintain its specificity without rendering inconceivable its relation to intellectual consciousness? (SC: 241/224) The solution requires a “return to perception as to a type of originary experience” by means of an “inversion of the natural movement of consciousness”, an inversion that Merleau-Ponty here equates with Husserl’s phenomenological reduction (SC: 236/220). If successful, this rehabilitation of the status of perception would lead to a redefinition of transcendental philosophy “in such a way as to integrate with it the very phenomenon of the real” (SC: 241/224). 3. Phenomenology of Perception Completed in 1944 and published the following year, Phenomenology of Perception (PP) is the work for which Merleau-Ponty was best known during his lifetime and that established him as the leading French phenomenologist of his generation. Here Merleau-Ponty develops his own distinctive interpretation of phenomenology’s method, informed by his new familiarity with Husserl’s unpublished manuscripts and his deepened engagement with other thinkers in this tradition, such as Eugen Fink and Martin Heidegger. 
Phenomenology of Perception again draws extensively on Gestalt theory and contemporary research in psychology and neurology; the case of Schneider, a brain-damaged patient studied by Adhémar Gelb and Kurt Goldstein, serves as an extended case-study. Psychological research complements and, at times, serves as a counterpoint to phenomenological descriptions of perceptual experience across a wide range of existential dimensions, including sexuality, language, space, nature, intersubjectivity, time, and freedom. In Phenomenology, Merleau-Ponty develops a characteristic rhythm of presenting, first, the realist or empiricist approach to a particular dimension of experience, followed then by its idealist or intellectualist alternative, before developing a third way that avoids the problematic assumption common to both, namely, their “unquestioned belief in the world”: the prejudice that the objective world exists as a ready-made and fully present reality. Phenomenology of Perception introduces its inquiry with a critique of the “classical prejudices” of empiricism and intellectualism. Merleau-Ponty rejects the empiricist understanding of sensation, with its correlative “constancy hypothesis”, and the role empiricism grants to association and the projection of memory for treating the basic units of sensation as determinate atoms rather than as meaningful wholes. These wholes include ambiguities, indeterminacies, and contextual relations that defy explanation in terms of the causal action of determinate things. Intellectualism aims to provide an alternative to empiricism by introducing judgment or attention as mental activities that synthesize experience from the sensory givens, yet it adopts empiricism’s starting point in dispersed, atomic sensations. 
Both approaches are guilty of reading the results of perception (the objective world) back into perceptual experience, thereby falsifying perception’s characteristic structure: the spontaneous organization or configuration of perceived phenomena themselves, with their indeterminacies and ambiguities, and the dynamic character of perception as an historical process involving development and transformation. By treating perception as a causal process of transmission or a cognitive judgment, empiricism and intellectualism deny any meaningful configuration to the perceived as such and treat all values and meanings as projections, leaving no basis in perception itself for distinguishing the true from the illusory. In contrast, Merleau-Ponty argues that the basic level of perceptual experience is the gestalt, the meaningful whole of figure against ground, and that the indeterminate and contextual aspects of the perceived world are positive phenomena that cannot be eliminated from a complete account. Sensing, in contrast with knowing, is a “living communication with the world that makes it present to us as the familiar place of our life” (PP: 79/53), investing the perceived world with meanings and values that refer essentially to our bodies and lives. We forget this “phenomenal field”, the world as it appears directly to perception, as a consequence of perception’s own tendency to forget itself in favor of the perceived that it discloses. Perception orients itself toward the truth, placing its faith in the eventual convergence of perspectives and progressive determination of what was previously indeterminate. But it thereby naturally projects a completed and invariant “truth in itself” as its goal. Science extends and amplifies this natural tendency through increasingly precise measurements of the invariants in perception, leading eventually to the theoretical construction of an objective world of determinate things.
Once this determinism of the “in itself” is extended universally and applied even to the body and the perceptual relation itself, then its ongoing dependence on the “originary faith” of perception is obscured; perception is reduced to “confused appearances” that require methodical reinterpretation, and the eventual result is dualism, solipsism, and skepticism. The “fundamental philosophical act” would therefore be to “return to the lived world beneath the objective world” (PP: 83/57). This requires a transcendental reduction: a reversal of perception’s natural tendency to cover its own tracks and a bracketing of our unquestioned belief in the objective world. Yet this cannot be a recourse to any transcendental consciousness that looks on the world from outside and is not itself emergent from and conditioned by the phenomenal field. Rather than a transcendental ego, Merleau-Ponty speaks of a “transcendental field”, emphasizing that reflection always has a situated and partial perspective as a consequence of being located within the field on which it reflects. The first of the three major parts of Phenomenology concerns the body. As we have seen, perception transcends itself toward a determinate object “in itself”, culminating in an objective interpretation of the body. Part One shows the limits of this objective account and sketches an alternative understanding of the body across a series of domains, including the experience of one’s own body, lived space, sexuality, and language. Through a contrast with pathological cases such as phantom limbs, Merleau-Ponty describes the body’s typical mode of existence as “being-toward-the-world”—a pre-objective orientation toward a vital situation that is explicable neither in terms of third-person causal interactions nor by explicit judgments or representations. 
The body’s orientation toward the world is essentially temporal, involving a dialectic between the present body (characterized, after Husserl, as an “I can”) and the habit body, the sedimentations of past activities that take on a general, anonymous, and autonomous character. While the body’s relation to the world serves as the essential background for the experience of any particular thing, the body itself is experienced in ways that distinguish it in kind from all other things: it is a permanent part of one’s perceptual field, even though one cannot in principle experience all of it directly; it has “double sensations”, such as when one hand touches another, that enact a form of reflexivity; it has affective experiences that are not merely representations; and its kinesthetic sense of its own movements is given directly. This kinesthetic awareness is made possible by a pre-conscious system of bodily movements and spatial equivalences that Merleau-Ponty terms the “body schema”. In contrast with the “positional spatiality” of things, the body has a “situational spatiality” that is oriented toward actual or possible tasks (PP: 129/102). The body’s existence as “being-toward-the-world”, as a projection toward lived goals, is therefore expressed through its spatiality, which forms the background against which objective space is constituted. Merleau-Ponty introduces here the famous case of Schneider, whose reliance on pathological substitutions for normal spatial abilities helps to bring the body’s typical relationship with lived space to light. Schneider lacks the ability to “project” into virtual space; more generally, his injury has disrupted the “intentional arc” that projects around us our past, our future, our human milieu, our physical situation, our ideological situation, and our moral situation, or rather, that ensures that we are situated within all of these relationships. 
(PP: 170/137) The body’s relationship with space is therefore intentional, although as an “I can” rather than an “I think”; bodily space is a multi-layered manner of relating to things, so that the body is not “in” space but lives or inhabits it. Just as bodily space reflects an originary form of intentionality—a pre-cognitive encounter with the world as meaningfully structured—the same is shown to be the case for sexuality and for language. Sexuality takes on a special significance because it essentially expresses the metaphysical drama of the human condition while infusing the atmosphere of our lives with sexual significance. Like space and sexuality, speech is also a form of bodily expression. Language does not initially encode ready-made thoughts but rather expresses through its style or physiognomy as a bodily gesture. We mistake language for a determined code by taking habitual or sedimented language as our model, thereby missing “authentic” or creative speech. Since language, like perception, hides its own operations in carrying us toward its meaning, it offers an ideal of truth as its presumptive limit, inspiring our traditional privileging of thought or reason as detachable from all materiality. But, at a fundamental level, language is comparable to music in the way that it remains tied to its material embodiment; each language is a distinct and ultimately untranslatable manner of “singing the world”, of extracting and expressing the “emotional essence” of our surroundings and relationships (PP: 228/193). Having rediscovered the body as expressive and intentional, Merleau-Ponty turns in Part Two of Phenomenology to the perceived world, with the aim of showing how the pre-reflective unity of co-existence that characterizes the body has as its correlate the synthesis of things and the world; “One’s own body is in the world just as the heart is in the organism” (PP: 245/209), and its expressive unity therefore also extends to the sensible world. 
Merleau-Ponty develops this interpretation of the sensible through detailed studies of sensing, space, and the natural and social worlds. Sensing takes place as the “co-existence” or “communion” of the body with the world that Merleau-Ponty describes as a reciprocal exchange of question and answer: a sensible that is about to be sensed poses to my body a sort of confused problem. I must find the attitude that will provide it with the means to become determinate … I must find the response to a poorly formulated question. And yet I only do this in response to its solicitation… . The sensible gives back to me what I had lent to it, but I received it from the sensible in the first place. (PP: 259/222) As co-existence, sensing is characterized by an intentionality that sympathetically attunes itself to the sensed according to a dialectic in which both terms—the perceiving body and the perceived thing—are equally active and receptive: the thing invites the body to adopt the attitude that will lead to its disclosure. Since the subject of this perception is not the idealist’s “for itself”, neither is the object of perception the realist’s “in itself”; rather, the agent of perception is the pre-reflective and anonymous subjectivity of the body, which remains enmeshed in and “connatural” with the world that it perceives. The senses are unified without losing their distinctness in a fashion comparable to the binocular synthesis of vision, and their anonymity is a consequence of the “historical thickness” of perception as a tradition that operates beneath the level of reflective consciousness (PP: 285/248). For first-person awareness, one’s anonymous perceptual engagement with the world operates as a kind of “original past, a past that has never been present” (PP: 252/252). 
The pre-historical pact between the body and the world informs our encounters with space, revealing a synthesis of space that is neither “spatialized” (as a pre-given container in which things are arranged) nor “spatializing” (like the homogenous and interchangeable relations of geometrical space). Drawing on psychological experiments concerning bodily orientation, depth, and movement, Merleau-Ponty argues that empiricist and intellectualist accounts of space must give way to a conception of space as co-existence or mutual implication characterized by existential “levels”: our orientation toward up and down, or toward what is in motion or stationary, is a function of the body’s adoption of a certain level within a revisable field of possibilities. Lived inherence in space contrasts with the abstract space of the analytical attitude, revindicating the existential space of night, dreams, or myths in relation to the abstract space of the “objective” world. The properties of things that we take to be “real” and “objective” also tacitly assume a reference to the body’s norms and its adoption of levels. An object’s “true” qualities depend on the body’s privileging of orientations that yield maximum clarity and richness. This is possible because the body serves as a template for the style or logic of the world, the concordant system of relations that links the qualities of an object, the configuration of the perceptual field, and background levels such as lighting or movement. In this symbiosis or call-and-response between the body and the world, things have sense as the correlates of my body, and reality therefore always involves a reference to perception. Yet, to be real, things cannot be reducible to correlates of the body or perception; they retain a depth and resistance that provides their existential index. While each thing has its individual style, the world is the ultimate horizon or background style against which any particular thing can appear. 
The perspectival limitations of perception, both spatially and temporally, are the obverse of this world’s depth and inexhaustibility. Through an examination of hallucination and illusions, Merleau-Ponty argues that skepticism about the existence of the world makes a category mistake. While we can doubt any particular perception, illusions can appear only against the background of the world and our primordial faith in it. While we never coincide with the world or grasp it with absolute certainty, we are also never entirely cut off from it; perception essentially aims toward truth, but any truth that it reveals is contingent and revisable. Rejecting analogical explanations for the experience of other people, Merleau-Ponty proposes that the rediscovery of the body as a “third genre of being between the pure subject and the object” makes possible encounters with embodied others (PP: 407/366). We perceive others directly as pre-personal and embodied living beings engaged with a world that we share in common. This encounter at the level of anonymous and pre-personal lives does not, however, present us with another person in the full sense, since our situations are never entirely congruent. The perception of others involves an alterity, a resistance, and a plenitude that are never reducible to what is presented, which is the truth of solipsism. Our common corporeality nevertheless opens us onto a shared social world, a permanent dimension of our being in the mode of the anonymous and general “someone”. The perception of others is therefore a privileged example of the paradox of transcendence running through our encounter with the world as perceived: Whether it is a question of my body, the natural world, the past, birth or death, the question is always to know how I can be open to phenomena that transcend me and that, nevertheless, only exist to the extent that I take them up and live them. 
(PP: 422/381) This “fundamental contradiction” defines our encounters with every form of transcendence and requires new conceptions of consciousness, time, and freedom. The third and final part of Phenomenology explores these three themes, starting with a revision of the concept of the cogito that avoids reducing it to a merely episodic psychological fact or elevating it to a universal certainty of myself and my cogitationes. Merleau-Ponty argues that we cannot separate the certainty of our thoughts from that of our perceptions, since to truly perceive is to have confidence in the veracity of one’s perceptions. Furthermore, we are not transparent to ourselves, since our “inner states” are available to us only in a situated and ambiguous way. The genuine cogito, Merleau-Ponty argues, is a cogito “in action”: we do not deduce “I am” from “I think”, but rather the certainty of “I think” rests on the “I am” of existential engagement. More basic than explicit self-consciousness and presupposed by it is an ambiguous mode of self-experience that Merleau-Ponty terms the silent or “tacit” cogito—our pre-reflective and inarticulate grasp on the world and ourselves that becomes explicit and determinate only when it finds expression for itself. The illusions of pure self-possession and transparency—like all apparently “eternal” truths—are the results of acquired or sedimented language and concepts. Rejecting classic approaches to time that treat it either as an objective property of things, as a psychological content, or as the product of transcendental consciousness, Merleau-Ponty returns to the “field of presence” as our foundational experience of time. This field is a network of intentional relations, of “protentions” and “retentions”, in a single movement of dehiscence or self-differentiation, such that “each present reaffirms the presence of the entire past that it drives away, and anticipates the presence of the entire future or the ‘to-come’” (PP: 483/444).
Time in this sense is “ultimate subjectivity”, understood not as an eternal consciousness, but rather as the very act of temporalization. As with the tacit cogito, the auto-affection of time as ultimate subjectivity is not a static self-identity but involves a dynamic opening toward alterity. In this conception of time as field of presence, which “reveals the subject and the object as two abstract moments of a unique structure, namely, presence” (PP: 494/454–55), Merleau-Ponty sees the resolution to all problems of transcendence as well as the foundation for human freedom. Against the Sartrean position that freedom is either total or null, Merleau-Ponty holds that freedom emerges only against the background of our “universal engagement in a world”, which involves us in meanings and values that are not of our choosing. We must recognize, first, an “autochthonous sense of the world that is constituted in the exchange between the world and our embodied existence” (PP: 504/466), and, second, that the acquired habits and the sedimented choices of our lives have their own inertia. This situation does not eliminate freedom but is precisely the field in which it can be achieved. Taking class consciousness as his example, Merleau-Ponty proposes that this dialectic of freedom and acquisition provides the terms for an account of history, according to which history can develop a meaning and a direction that are neither determined by events nor necessarily transparent to those who live through it. The Preface to Phenomenology of Perception, completed after the main text, offers Merleau-Ponty’s most detailed and systematic exposition of the phenomenological method. His account is organized around four themes: the privileging of description over scientific explanation or idealist reconstruction, the phenomenological reduction, the eidetic reduction, and intentionality.
Phenomenology sets aside all scientific or naturalistic explanations of phenomena in order to describe faithfully the pre-scientific experience that such explanations take for granted. Similarly, since the world exists prior to reflective analysis or judgment, phenomenology avoids reconstructing actual experience in terms of its conditions of possibility or the activity of consciousness. The phenomenological reduction, on his interpretation, is not an idealistic method but an existential one, namely, the reflective effort to disclose our pre-reflective engagement with the world. Through the process of the reduction, we discover the inherence of the one who reflects in the world that is reflected on, and consequently, the essentially incomplete character of every act of reflection, which is why Merleau-Ponty claims that the “most important lesson of the reduction is the impossibility of a complete reduction” (PP: 14/lxxvii). Similarly, the “eidetic reduction”, described by Husserl as the intuition of essential relations within the flux of conscious experience, is necessary if phenomenology is to make any descriptive claims that go beyond the brute facts of a particular experience. But this does not found the actual world on consciousness as the condition of the world’s possibility; instead, “the eidetic method is that of a phenomenological positivism grounding the possible upon the real” (PP: 17/lxxxi). Lastly, Merleau-Ponty reinterprets the phenomenological concept of intentionality, traditionally understood as the recognition that all consciousness is consciousness of something. Following Husserl, he distinguishes the “act intentionality” of judgments and voluntary decisions from the “operative intentionality” that “establishes the natural and pre-predicative unity of the world and of our life” (PP: 18/lxxxii). 
Guided by this broader concept of intentionality, philosophy’s task is to take in the “total intention” of a sensible thing, a philosophical theory, or an historical event, which is its “unique manner of existing” or its “existential structure” (PP: 19–20/lxxxii–lxxxiii). Phenomenology thereby expresses the emergence of reason and meaning in a contingent world, a creative task comparable to that of the artist or the political activist, which requires an ongoing “radical” or self-referential reflection on its own possibilities. On Merleau-Ponty’s presentation, the tensions of phenomenology’s method therefore reflect the nature of its task: The unfinished nature of phenomenology and the inchoative style in which it proceeds are not the sign of failure, they were inevitable because phenomenology’s task was to reveal the mystery of the world and the mystery of reason. (PP: 21–22/lxxxv) 4. Expression, Language, and Art The concepts of expression and style are central to Merleau-Ponty’s thought and already play a key role in his first two books, where they characterize the perceptual exchange between an organism and its milieu, the body’s sensible dialogue with the world, and even the act of philosophical reflection (see Landes 2013). In both works, Merleau-Ponty draws on a range of literary and artistic examples to describe the creative and expressive dimensions of perception and reflection, emphasizing in particular the parallels between the task of the artist and that of the thinker: as the concluding lines of the Preface to Phenomenology of Perception note, Phenomenology is as painstaking as the works of Balzac, Proust, Valéry, or Cézanne—through the same kind of attention and wonder, the same demand for awareness, the same will to grasp the sense of the world or of history in its nascent state. 
(PP: 22/lxxxv) Expression, particularly in language and the arts, plays an increasingly central role in Merleau-Ponty’s thought in the years following Phenomenology, when he aimed to formulate a general theory of expression as the grounding for a philosophy of history and culture.[5] This interest is first reflected in a series of essays addressing painting, literature, and film published in the years immediately following Phenomenology (in Merleau-Ponty 1996b/1964). These include Merleau-Ponty’s first essay on painting, “Cézanne’s Doubt”, which finds in Cézanne a proto-phenomenological effort to capture the birth of perception through painting. Cézanne epitomizes the paradoxical struggle of creative expression, which necessarily relies on the idiosyncrasies of the artist’s individual history and psychology, as well as the resources of the tradition of painting, but can succeed only by risking a creative appropriation of these acquisitions in the service of teaching its audience to see the world anew. Similarly, Leonardo da Vinci’s artistic productivity is explicable neither in terms of his intellectual freedom (Valéry) nor his childhood (Freud) but as the dialectic of spontaneity and sedimentation by which Merleau-Ponty had formerly defined history. In 1951, Merleau-Ponty summarizes his research after Phenomenology as focused on a “theory of truth” exploring how knowledge and communication with others are “original formations with respect to perceptual life, but … also preserve and continue our perceptual life even while transforming it” (UMP: 41–42/287). Expression, language, and symbolism are the key to this theory of truth and provide the foundation for a philosophy of history and of “transcendental” humanity.
Whereas the study of perception could only provide a “bad ambiguity” that mixes “finitude and universality”, Merleau-Ponty sees in the phenomenon of expression a “good ambiguity” that “gathers together the plurality of monads, the past and the present, nature and culture, into a single whole” (UMP: 48/290). Many of Merleau-Ponty’s courses from 1947 through 1953 at the University of Lyon, the Sorbonne, and the Collège de France focus on language, expression, and literature.[6] The manuscript partially completed during these years and published posthumously as The Prose of the World (1969/1973) pursues these themes through a phenomenological investigation of literary language and its relationship with scientific language and painting. Critiquing our commonsense ideal of a pure language that would transparently encode pre-existing thoughts, Merleau-Ponty argues that instituted language—the conventional system of language as an established set of meanings and rules—is derivative from a more primordial function of language as genuinely creative, expressive, and communicative. Here he draws two insights from Saussurian linguistics: First, signs function diacritically, through their lateral relations and differentiations, rather than through a one-to-one correspondence with a conventionally established meaning. Ultimately, signification happens through the differences between terms in a referential system that lacks any fixed or positive terms. This insight into diacritical difference will later prove important to Merleau-Ponty’s understanding of perception and ontology as well (see Alloa 2013a). Second, the ultimate context for the operation of language is effective communication with others, by which new thoughts can be expressed and meanings shared. 
Expression accomplishes itself through a coherent reorganization of the relationships between acquired signs that must teach itself to the reader or listener, and which may afterwards again sediment into a taken-for-granted institutional structure. In a long extract from the manuscript that was revised and published in 1952 as “Indirect Language and the Voices of Silence” (in Merleau-Ponty 1960/1964), Merleau-Ponty brings this understanding of language into conversation with Sartre’s What is Literature? and André Malraux’s The Voices of Silence. Sharing Malraux’s criticisms of the museum’s role in framing the reception of painting, but rejecting his interpretation of modern painting as subjectivist, Merleau-Ponty offers an alternative understanding of “institution” (from Husserl’s Stiftung) as the creative establishment of a new field of meaning that opens an historical development. The style of an artist is not merely subjective but lived as a historical trajectory of expression that begins with perception itself and effects a “coherent deformation” in inherited traditions. Rather than opposed as silent and speaking, painting and language are both continuations of the expressivity of a perceptual style into more malleable mediums. The unfinished character of modern painting is therefore not a turn from the objectivity of representation toward subjective creation but rather a more authentic testament to the paradoxical logic of all expression. Merleau-Ponty returns to the analysis of painting in his final essay, “Eye and Mind” (1964a OEE), where he accords it an ontological priority—between the linguistic arts and music—for revealing the “there is” of the world that the operationalism of contemporary science has occluded. 
It is by “lending his body to the world that the artist changes the world into paintings” (OEE: 16/353), and this presupposes that the artist’s body is immersed in and made of the same stuff as the world: to touch, one must be tangible, and to see, visible. Merleau-Ponty describes this as an “intertwining” or “overlapping”, in which the artist’s situated embodiment is the other side of its opening to the world. There is as yet no sharp division between the sensing and the sensed, between body and things as one common “flesh”, and painting arises as the expression of this relation: it is a “visible to the second power, a carnal essence or icon” of embodied vision (OEE: 22/355). Descartes’s efforts, in the Optics, to reconstruct vision from thought lead him to focus on the “envelope” or form of the object, as presented in engraved lines, and to treat depth as a third dimension modeled after height and width. This idealization of space has its necessity, yet, once elevated to a metaphysical status by contemporary science, it culminates in an understanding of being as purely positive and absolutely determinate. The ontological significance of modern painting and the plastic arts—e.g., Klee, de Staël, Cézanne, Matisse, Rodin—lies in the alternative philosophy that they embody, as revealed through their treatment of depth, color, line, and movement. Ultimately, such works teach us anew what it means to see: Vision is not a certain mode of thought or presence to self; it is the means given me for being absent from myself, for being present from the inside at the fission of Being only at the end of which do I close up into myself. (OEE: 81/374) 5. Political Philosophy From the first issue of Les Temps Modernes in October 1945 until his death, Merleau-Ponty wrote regularly on politics, including reflections on contemporary events as well as explorations of their philosophical underpinnings and the broader political significance of his times.
During his eight-year tenure as unofficial managing editor of Les Temps Modernes, he charted the review’s political direction and penned many of its political editorials. After leaving Les Temps Modernes in 1953, Merleau-Ponty found new outlets for his political writings, including L’Express, a weekly newspaper devoted to the non-communist left. Both of the essay collections that he published during his lifetime, Sense and Non-Sense and Signs, devote significant space to his political writings. He also published two volumes devoted entirely to political philosophy, Humanism and Terror (HT) and Adventures of the Dialectic (1955 AdD). Always writing from the left, Merleau-Ponty’s position gradually shifted from a qualified Marxism, maintaining a critical distance from liberal democracy as well as from Soviet communism, to the rejection of revolutionary politics in favor of a “new liberalism”. His political writings have received relatively scant attention compared with other aspects of his philosophy, perhaps because of their close engagement with the political situations and events of his day. Nevertheless, scholars of his political thought emphasize its continuity with his theoretical writings and ongoing relevance for political philosophy (see Coole 2007; Whiteside 1998). The 1947 publication of Humanism and Terror responded to growing anti-communist sentiment in France fueled in part by the fictional account of the Moscow trials in Arthur Koestler’s popular novel Darkness at Noon. Merleau-Ponty sought to articulate an alternative to the choice Europe apparently faced in the solidifying opposition between the United States and the Soviet Union. 
Humanism and Terror criticizes Koestler’s portrayal of the fictional Rubashov, modeled on Nikolai Bukharin, for replacing the “mutual praxis” of genuine Marxism (HT: 102/18) with an opposition between pure freedom and determined history, the “yogi” who withdraws into spiritual ideals or the “commissar” who acts by any means necessary. Turning to an examination of Bukharin’s 1938 trial, Merleau-Ponty finds there an example of “revolutionary justice” that “judges in the name of the Truth that the Revolution is about to make true” (HT: 114/28), even though the historical contingency that this entails is denied by the procedures of the trials themselves. On the other hand, Trotsky’s condemnation of Stalinism as counter-revolutionary similarly misses the ambiguity of genuine history. Ultimately, the dimension of terror that history harbors is a consequence of our unavoidable responsibility in the face of its essential contingency and ambiguity. Although violence is a consequence of the human condition and therefore the starting point for politics, Merleau-Ponty finds hope in the theory of the proletariat for a fundamental transformation in the terms of human recognition:

The proletariat is universal de facto, or manifestly in its very condition of life…. [I]t is the sole authentic intersubjectivity because it alone lives simultaneously the separation and union of individuals. (HT: 221/116–17)

A genuinely historical Marxism must recognize that nothing guarantees progress toward a classless society, but also that this end cannot be brought about by non-proletarian means, which is what Soviet communism had apparently forgotten. Despite the failures of the Soviet experiment, Merleau-Ponty remains committed to a humanist Marxism: Marxism is not just any hypothesis that might be replaced tomorrow by some other.
It is the simple statement of those conditions without which there would be neither any humanism, in the sense of a mutual relation between men, nor any rationality in history. In this sense Marxism is not a philosophy of history; it is the philosophy of history and to renounce it is to dig the grave of Reason in history. (HT: 266/153)

Even if the proletariat is not presently leading world history, its time may yet come. Merleau-Ponty therefore concludes with a “wait-and-see” Marxism that cautions against decontextualized criticisms of Soviet communism as well as apologetics for liberal democracies that whitewash their racist and colonial violence.

Revelations about the Gulag camps and the outbreak of the Korean War forced Merleau-Ponty to revise his position on Marxism and revolutionary politics, culminating in the 1955 Adventures of the Dialectic (AdD). The book begins with the formulation of a general theory of history in conversation with Max Weber. Historians necessarily approach the past through their own perspectives, but, since they are themselves a part of history’s movement, this need not compromise their objectivity. The historical events and periods within which the historian traces a particular style or meaning emerge in conjunction with historical agents, political actors or classes, who exercise a creative action parallel to the expressive gesture of the artist or the writer. History may eliminate false paths, but it guarantees no particular direction, leaving to historical agents the responsibility for the continuation or transformation of what is inherited from the past through a genius for inventing what the times demand: “In politics, truth is perhaps only this art of inventing what will later appear to have been required by the time” (AdD: 42/29).
Merleau-Ponty finds a similar position articulated by the young Georg Lukacs, for whom “There is only one knowledge, which is the knowledge of our world in a state of becoming, and this becoming embraces knowledge itself” (AdD: 46/32). History forms a third order, beyond subjects and objects, of interhuman relations inscribed in cultural objects and institutions, and with its own logic of sedimentation and spontaneity. The self-consciousness that emerges within this third order is precisely the proletariat, whose consciousness is not that of an “I think” but rather the praxis of their common situation and system of action. Historical truth emerges from the movement of creative expression whereby the Party brings the life of the proletariat to explicit awareness, which requires, in return, that the working class recognize and understand itself in the Party’s formulations. With this understanding, Lukacs aims to preserve the dialectic of history, to prevent it from slipping into a simple materialism, and thereby to discover the absolute in the relative. But Lukacs backtracks on this position after its official rejection by the communist establishment in favor of a metaphysical materialism, and Merleau-Ponty finds a parallel in Marx’s own turn away from genuine dialectic toward a simple naturalism that justifies any action in the name of a historical necessity inscribed in things. For the lack of a genuine concept of institution that can recognize dialectic in embodied form, Marxist materialism repeatedly abandons its dialectical aspirations, as Merleau-Ponty further illustrates through the example of Trotsky’s career. In the final chapter of Adventures, Merleau-Ponty turns his sights toward Sartre’s endorsement of communism in The Communists and Peace.
On Merleau-Ponty’s interpretation, Sartre’s ontological commitment to a dualism of being and nothingness, where the full positivity of determinate things juxtaposes with the negating freedom of consciousness, eliminates any middle ground for history or praxis. Since consciousness is unconstrained by any sedimentation or by the autonomous life of cultural acquisitions, it can recognize no inertia or spontaneity at the level of institutions, and therefore no genuine historical becoming. More centrally, by interpreting the relation between the Party and the proletariat through his own conception of consciousness as pure freedom, Sartre rules out in principle any possibility for their divergence. This leads Sartre to an “ultrabolshevism” according to which the Party’s position is identified with the revolutionary agenda, any opposition to which must be suppressed. In the Epilogue that summarizes Merleau-Ponty’s own position, he explains his rejection of revolutionary action, understood as proletarian praxis, for remaining equivocal rather than truly dialectical. The illusion that has brought dialectic to a halt is precisely the investment of history’s total meaning in the proletariat, ultimately equating the proletariat with dialectic as such, which leads to the conviction that revolution would liquidate history itself. But it is essential to the very structure of revolutions that, when successful, they betray their own revolutionary character by sedimenting into institutions. Drawing on the extended example of the French revolution, Merleau-Ponty argues that every revolution mistakes the structure of history for its contents, believing that eliminating the latter will absolutely transform the former. Thus, “The very nature of revolution is to believe itself absolute and to not be absolute precisely because it believes itself to be so” (AdD: 298/222). 
While Soviet communism may continue to justify itself in absolute terms, it is concretely a progressivism that tacitly recognizes the relativity of revolution and the gradual nature of progress. The alternative that Merleau-Ponty endorses is the development of a “noncommunist left”, an “a-communism”, or a “new liberalism”, the first commitment of which would be to reject the description of the rivalry between the two powers as one between “free enterprise” and Marxism (AdD: 302–3/225). This noncommunist left would occupy a “double position”, “posing social problems in terms of [class] struggle” while also “refusing the dictatorship of the proletariat” (AdD: 304/226). This pursuit must welcome the resources of parliamentary debate, in clear recognition of their limitations, since Parliament is “the only known institution that guarantees a minimum of opposition and of truth” (AdD: 304/226). Exercising “methodical doubt” toward the established powers, and denying that they exhaust political and economic options, opens the possibility of a genuine dialectic that advances social justice while respecting political freedom.

6. The Visible and the Invisible

The manuscript and working notes published posthumously as The Visible and the Invisible (1964 V&I), extracted from a larger work underway at the time of Merleau-Ponty’s death, is considered by many to be the best presentation of his later ontology. The main text, drafted in 1959 and 1960, is contemporaneous with “Eye and Mind” and the Preface to Signs, Merleau-Ponty’s final collection of essays. The first three chapters progressively develop an account of “philosophical interrogation” in critical dialogue with scientism, the philosophies of reflection (Descartes and Kant), Sartrean negation, and the intuitionisms of Bergson and Husserl. These are followed by a stand-alone chapter, “The Intertwining—The Chiasm”, presenting Merleau-Ponty’s ontology of flesh.
The published volume also includes a brief abandoned section of the text as an appendix and more than a hundred pages of selected working notes composed between 1959 and 1961.[7] Merleau-Ponty frames the investigation with a description of “perceptual faith”, our shared pre-reflective conviction that perception presents us with the world as it actually is, even though this perception is mediated, for each of us, by our bodily senses. This apparent paradox creates no difficulties in our everyday lives, but it becomes incomprehensible when thematized by reflection:

The “natural” man holds on to both ends of the chain, thinks at the same time that his perception enters into the things and that it is formed this side of his body. Yet coexist as the two convictions do without difficulty in the exercise of life, once reduced to theses and to propositions they destroy one another and leave us in confusion. (V&I: 23–24/8)

For Merleau-Ponty, this “unjustifiable certitude of a sensible world” is the starting point for developing an alternative account of perception, the world, intersubjective relations, and ultimately being as such. Neither the natural sciences nor psychology provide an adequate clarification of this perceptual faith, since they rely on it without acknowledgment even as their theoretical constructions rule out its possibility. Philosophies of reflection, exemplified by Descartes and Kant, also fail in their account of perception, since they reduce the perceived world to an idea, equate the subject with thought, and undermine any understanding of intersubjectivity or a world shared in common (V&I: 62/39, 67/43).
Sartre’s dialectic of being (in-itself) and nothingness (for-itself) makes progress over philosophies of reflection insofar as it recognizes the ecceity of the world, with which the subject engages not as one being alongside others but rather as a nothingness, that is, as a determinate negation of a concrete situation that can co-exist alongside other determinate negations. Even so, for Sartre, pure nothingness and pure being remain mutually exclusive, ambivalently identical in their perfect opposition, which brings any movement of their dialectic to a halt. The “philosophy of negation” is therefore shown to be a totalizing or “high-altitude” thought that remains abstract, missing the true opening onto the world made possible by the fact that nothingness is “sunken into being” (V&I: 121–122/88–89). This “bad” dialectic must therefore give way to a “hyperdialectic” that remains self-critical about its own tendency to reify into fixed and opposed theses (V&I: 129/94). The philosophy of intuition takes two forms: the Wesenschau of Husserl, which converts lived experience into ideal essences before a pure spectator, and Bergsonian intuition, which seeks to coincide with its object by experiencing it from within. Against the first, Merleau-Ponty argues that the world’s givenness is more primordial than the ideal essence; the essence is a variant of the real, not its condition of possibility. Essences are not ultimately detachable from the sensible but are its “invisible” or its latent structure of differentiation. Against a return to the immediacy of coincidence or a nostalgia for the pre-reflective, Merleau-Ponty holds that there is no self-identical presence to rejoin; the “immediate” essentially involves distance and non-coincidence. Consequently, truth must be redefined as “a privative non-coinciding, a coinciding from afar, a divergence, and something like a ‘good error’” (V&I: 166/124–25). 
In the final chapter, “The Intertwining—The Chiasm”, Merleau-Ponty turns directly to the positive project of describing his ontology of “flesh”. Intertwining [entrelacs] here translates Husserl’s Verflechtung, entanglement or interweaving, like the woof and warp of a fabric. Chiasm has two senses in French and English that are both relevant to Merleau-Ponty’s project: a physiological sense that refers to anatomical or genetic structures with a crossed arrangement (such as the optic nerves), and a literary sense referring to figures of speech that repeat structures in reverse order (AB:BA). For Merleau-Ponty, the chiasm is a structure of mediation that combines the unity-in-difference of its physiological sense with the reversal and circularity of its literary usage (see Toadvine 2012; Saint Aubert 2005). A paradigmatic example of chiasmic structure is the body’s doubling into sensible and sentient aspects during self-touch. Elaborating on Husserl’s descriptions of this phenomenon, Merleau-Ponty emphasizes three consequences: First, the body as sensible-sentient is an “exemplar sensible” that demonstrates the kinship or ontological continuity between subject and object among sensible things in general. Second, this relationship is reversible, like “obverse and reverse” or “two segments of one sole circular course” (V&I: 182/138). Third, the sentient and sensible never strictly coincide but are always separated by a gap or divergence [écart] that defers their unity. Chiasm is therefore a crisscrossing or a bi-directional becoming or exchange between the body and things that justifies speaking of a “flesh” of things, a kinship between the sensing body and sensed things that makes their communication possible. Flesh in this sense is a “general thing” between the individual and the idea that does not correspond to any traditional philosophical concept, but is closest to the notion of an “element” in the classical sense (V&I: 184/139). 
Merleau-Ponty denies that this is a subjective or anthropocentric projection:

carnal being, as a being of depths, of several leaves or several faces, a being in latency, and a presentation of a certain absence, is a prototype of Being, of which our body, the sensible sentient, is a very remarkable variant, but whose constitutive paradox already lies in every visible. (V&I: 179/136)

The generality of flesh embraces an intercorporeity, an anonymous sensibility shared out among distinct bodies: just as my two hands communicate across the lateral synergy of my body, I can touch the sensibility of another: “The handshake too is reversible” (V&I: 187/142). Sensible flesh—what Merleau-Ponty calls the “visible”—is not all there is to flesh, since flesh also “sublimates” itself into an “invisible” dimension: the “rarified” or “glorified” flesh of ideas. Taking as his example the “little phrase” from Vinteuil’s sonata (in Swann’s Way), Merleau-Ponty describes literature, music, and the passions as “the exploration of an invisible and the disclosure of a universe of ideas”, although in such cases these ideas “cannot be detached from the sensible appearance and be erected into a second positivity” (V&I: 196/149). Creative language necessarily carries its meaning in a similarly embodied fashion, while the sediments of such expression result in language as a system of formalized relations. What we treat as “pure ideas” are nothing more than a certain divergence and ongoing process of differentiation, now occurring within language rather than sensible things. Ultimately we find a relation of reversibility within language like that holding within sensibility: just as, in order to see, my body must be part of the visible and capable of being seen, so, by speaking, I make myself one who can be spoken to (allocutary) and one who can be spoken about (delocutary).
While all of the possibilities of language are already outlined or promised within the sensible world, reciprocally the sensible world itself is unavoidably inscribed with language. This final chapter of The Visible and the Invisible illustrates chiasmic mediation across a range of relations, including sentient and sensed, touch and vision, body and world, self and other, fact and essence, perception and language. There is not one chiasm but rather various chiasmic structures at different levels. As Renaud Barbaras notes,

It is necessary … to picture the universe as intuited by Merleau-Ponty as a proliferation of chiasms that integrate themselves according to different levels of generality. (1991, 352/2004, 307)

The ultimate ontological chiasm, that between the sensible and the intelligible, is matched by an ultimate epistemological chiasm, that of philosophy itself. As Merleau-Ponty writes in a working note from November 1960,

the idea of chiasm, that is: every relation with being is simultaneously a taking and a being held, the hold is held, it is inscribed and inscribed in the same being that it takes hold of. Starting from there, elaborate an idea of philosophy… . It is the simultaneous experience of the holding and the held in all orders. (V&I: 319/266; see also Saint Aubert 2005: 162–64)

7. Influence and Current Scholarship

While the generation of French post-structuralist thinkers who succeeded Merleau-Ponty, including Deleuze, Derrida, Irigaray, and Foucault, typically distanced themselves from his work, lines of influence are often recognizable (see Lawlor 2006, 2003; Reynolds 2004). Irigaray (1993) suggests that Merleau-Ponty’s ontology of flesh tacitly relies on feminine and maternal metaphors while rendering sexual difference invisible.
Derrida’s most detailed engagement with Merleau-Ponty, in Le Toucher, Jean-Luc Nancy (On Touching—Jean-Luc Nancy, 2000/2005), criticizes the latter’s account of touch and ontology of flesh for its tendency to privilege immediacy, continuity, and coincidence over rupture, distance, and untouchability. Nevertheless, Derrida ultimately suspends judgment over the relation between these two tendencies in Merleau-Ponty’s final writings. The legacy of Merleau-Ponty’s philosophy of embodiment and ontology of flesh is also apparent in the work of subsequent French phenomenologists, including Françoise Dastur, Michel Henry, Henri Maldiney, Jean-Louis Chrétien, and Jacob Rogozinski. Recent English-language scholarship on Merleau-Ponty, inspired by the availability of new materials from his course notes and unpublished writings, has focused on his concept of subjectivity (Morris and Maclaren 2015; Welsh 2013; Marratto 2012), his relationship to literature, architecture, and the arts (Carbone 2015; Locke & McCann 2015; Wiskus 2014; Kaushik 2013; Johnson 2010), and his later ontology and philosophy of nature (Foti 2013; Toadvine 2009). His work has also made important contributions to debates in cognitive science (Thompson 2007; Gallagher 2005), feminism (Olkowski and Weiss 2006; Heinämaa 2003), animal studies (Westling 2014; Buchanan 2009; Oliver 2009), and environmental philosophy (Cataldi & Hamrick 2007; Abram 1996).

Bibliography

Cited Works by Merleau-Ponty

Citations of these texts list the French pagination first followed by that of the English translation.
At the level of this prereflective faith in the world, there is no dilemma of the soul’s separation from the body; “the soul remains coextensive with nature” (SC: 203/189). This prereflective unity eventually splinters under our awareness of illness, illusion, and anatomy, which teach us to separate nature, body, and thought into distinct orders of events partes extra partes. This culminates in a naturalism that cannot account for the originary situation of perception that it displaces, yet on which it tacitly relies; perception requires an “internal” analysis, paving the way for transcendental idealism’s treatment of subject and object as “inseparable correlatives” (SC: 215/199). But transcendental idealism in the critical tradition subsequently goes too far: by taking consciousness as “milieu of the universe, presupposed by every affirmation of the world”, it obscures the original character of the perceptual relation and culminates in “the dialectic of the epistemological subject and the scientific object” (SC: 216/200, 217/201). Merleau-Ponty aims to integrate the truth of naturalism and transcendental thought by reinterpreting both through the concept of structure, which accounts for the unity of soul and body as well as their relative distinction. Against the conception of transcendental consciousness as a pure spectator correlated with the world, Merleau-Ponty insists that mind is an accomplishment of structural integration that remains essentially conditioned by the matter and life in which it is embodied; the truth of naturalism lies in the fact that such integration is essentially fragile and incomplete.
https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism
Mind–body dualism - Wikipedia
Aristotle shared Plato's view of multiple souls and further elaborated a hierarchical arrangement, corresponding to the distinctive functions of plants, animals, and humans: a nutritive soul of growth and metabolism that all three share; a perceptive soul of pain, pleasure, and desire that only humans and other animals share; and the faculty of reason that is unique to humans. In this view, a soul is the hylomorphic form of a viable organism, wherein each level of the hierarchy formally supervenes upon the substance of the preceding level. For Aristotle, the first two souls, based on the body, perish when the living organism dies,[3][4] whereas the intellect remains an immortal and perpetual part of the mind.[5] For Plato, however, the soul was not dependent on the physical body; he believed in metempsychosis, the migration of the soul to a new physical body.[6] Dualism has been considered a form of reductionism by some philosophers, since it encourages the tendency to ignore large groups of variables on the basis of their assumed association with the mind or the body, rather than their real value in explaining or predicting a studied phenomenon.[7] Dualism is closely associated with the thought of René Descartes (1641), which holds that the mind is a nonphysical—and therefore, non-spatial—substance. Descartes clearly identified the mind with consciousness and self-awareness and distinguished this from the brain as the seat of intelligence.[8] Hence, he was the first documented Western philosopher to formulate the mind–body problem in the form in which it exists today.[9] Dualism is contrasted with various kinds of monism. Substance dualism is contrasted with all forms of materialism, but property dualism may be considered a form of emergent materialism or non-reductive physicalism in some sense.
Substance dualism, or Cartesian dualism, most famously defended by René Descartes, argues that there are two kinds of foundation: mental and physical.[8] This philosophy states that the mental can exist outside of the body, and the body cannot think. Substance dualism is important historically for having given rise to much thought regarding the famous mind–body problem. The Copernican Revolution and the scientific discoveries of the 17th century reinforced the belief that the scientific method was the unique way of knowledge. Bodies were seen as biological organisms to be studied in their constituent parts (materialism) by means of anatomy, physiology, biochemistry and physics (reductionism).[10] Mind–body dualism remained the biomedical paradigm and model for the following three centuries.[10] Substance dualism is a philosophical position compatible with most theologies which claim that immortal souls occupy an independent realm of existence distinct from that of the physical world.[1] In contemporary discussions of substance dualism, philosophers propose dualist positions that are significantly less radical than Descartes's: for instance, a position defended by William Hasker called Emergent Dualism[11] seems, to some philosophers, more intuitively attractive than the substance dualism of Descartes in virtue of its being in line with (inter alia) evolutionary biology. Property dualism asserts that an ontological distinction lies in the differences between properties of mind and matter, and that consciousness may be ontologically irreducible to neurobiology and physics. It asserts that when matter is organized in the appropriate way (i.e., in the way that living human bodies are organized), mental properties emerge. Hence, it is a sub-branch of emergent materialism. What views properly fall under the property dualism rubric is itself a matter of dispute.
There are different versions of property dualism, some of which claim independent categorisation.[12] Non-reductive physicalism is a form of property dualism in which it is asserted that all mental states are causally reducible to physical states. One argument for this has been made in the form of anomalous monism expressed by Donald Davidson, where it is argued that mental events are identical to physical events, however, relations of mental events cannot be described by strict law-governed causal relationships. Another argument for this has been expressed by John Searle, who is the advocate of a distinctive form of physicalism he calls biological naturalism. His view is that although mental states are ontologically irreducible to physical states, they are causally reducible. He has acknowledged that "to many people" his views and those of property dualists look a lot alike, but he thinks the comparison is misleading.[12] Epiphenomenalism is a form of property dualism, in which it is asserted that one or more mental states do not have any influence on physical states (both ontologically and causally irreducible). It asserts that while material causes give rise to sensations, volitions, ideas, etc., such mental phenomena themselves cause nothing further: they are causal dead-ends. This can be contrasted to interactionism, on the other hand, in which mental causes can produce material effects, and vice versa.[13] Predicate dualism is a view espoused by such non-reductive physicalists as Donald Davidson and Jerry Fodor, who maintain that while there is only one ontological category of substances and properties of substances (usually physical), the predicates that we use to describe mental events cannot be redescribed in terms of (or reduced to) physical predicates of natural languages.[14][15] Predicate dualism is most easily defined as the negation of predicate monism. 
Predicate monism can be characterized as the view subscribed to by eliminative materialists, who maintain that such intentional predicates as believe, desire, think, feel, etc., will eventually be eliminated from both the language of science and from ordinary language because the entities to which they refer do not exist. Predicate dualists believe that so-called "folk psychology," with all of its propositional attitude ascriptions, is an ineliminable part of the enterprise of describing, explaining, and understanding human mental states and behavior. For example, Davidson subscribes to anomalous monism, according to which there can be no strict psychophysical laws which connect mental and physical events under their descriptions as mental and physical events. However, all mental events also have physical descriptions. It is in terms of the latter that such events can be connected in law-like relations with other physical events. Mental predicates are irreducibly different in character (rational, holistic, and necessary) from physical predicates (contingent, atomic, and causal).[14]

[Figure: Four varieties of dualist causal interaction. Arrows indicate the direction of causation; mental and physical states are shown in red and blue, respectively.]

The discussion that follows concerns causation between properties and states of the thing under study, not its substances or predicates. Here a state is the set of all properties of what is being studied, so each state describes only one point in time. Interactionism is the view that mental states, such as beliefs and desires, causally interact with physical states. This is a position which is very appealing to common-sense intuitions, notwithstanding the fact that it is very difficult to establish its validity or correctness by way of logical argumentation or empirical proof.
It seems to appeal to common-sense because we are surrounded by such everyday occurrences as a child's touching a hot stove (physical event) which causes him to feel pain (mental event) and then yell and scream (physical event) which causes his parents to experience a sensation of fear and protectiveness (mental event) and so on.[8] Non-reductive physicalism is the idea that while there is one kind of substance (physical), it has two kinds of properties: mental and physical. Mental properties aren't reducible to physical properties, in that an ontological distinction lies in the differences between the properties of mind and matter. According to non-reductive physicalism all mental states may be causally reducible to physical states where mental properties map to physical properties and vice versa. A prominent form of non-reductive physicalism, called anomalous monism, was first proposed by Donald Davidson in his 1970 paper "Mental events", in which he claims that mental events are identical with physical events, and that the mental is anomalous, i.e. under their mental descriptions these mental events are not regulated by strict physical laws. Epiphenomenalism states that all mental events are caused by a physical event and have no physical consequences, and that one or more mental states do not have any influence on physical states. So, the mental event of deciding to pick up a rock ("M1") is caused by the firing of specific neurons in the brain ("P1"). When the arm and hand move to pick up the rock ("P2") this is not caused by the preceding mental event M1, nor by M1 and P1 together, but only by P1. The physical causes are in principle reducible to fundamental physics, and therefore mental causes are eliminated using this reductionist explanation. 
If P1 causes both M1 and P2, there is no overdetermination in the explanation for P2.[8] The idea that even if the animal were conscious nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874).[16] Jackson gave a subjective argument for epiphenomenalism, but later rejected it and embraced physicalism.[17] Psychophysical parallelism is a very unusual view about the interaction between mental and physical events which was most prominently, and perhaps only truly, advocated by Gottfried Wilhelm von Leibniz. Like Malebranche and others before him, Leibniz recognized the weaknesses of Descartes' account of causal interaction taking place in a physical location in the brain. Malebranche decided that such a material basis of interaction between material and immaterial was impossible and therefore formulated his doctrine of occasionalism, stating that the interactions were really caused by the intervention of God on each individual occasion. Leibniz's idea is that God has created a pre-established harmony such that it only seems as if physical and mental events cause, and are caused by, one another. In reality, mental causes only have mental effects and physical causes only have physical effects. Hence, the term parallelism is used to describe this view.[13] Occasionalism is a philosophical doctrine about causation which says that created substances cannot be efficient causes of events. Instead, all events are taken to be caused directly by God itself. The theory states that the illusion of efficient causation between mundane events arises out of a constant conjunction that God had instituted, such that every instance where the cause is present will constitute an "occasion" for the effect to occur as an expression of the aforementioned power.
This "occasioning" relation, however, falls short of efficient causation. In this view, it is not the case that the first event causes God to cause the second event: rather, God first caused one and then caused the other, but chose to regulate such behaviour in accordance with general laws of nature. Some of its most prominent historical exponents have been Al-Ghazali, Louis de la Forge, Arnold Geulincx, and Nicolas Malebranche.[18] According to the philosophy of Immanuel Kant, there is a distinction between actions done by desire and those performed by reason in liberty (categorical imperative). Thus, not all physical actions are caused either by matter alone or by freedom alone. Some actions are purely animal in nature, while others are the result of mind's free action on matter. Hermotimus of Clazomenae (fl. c. 6th century BCE) was a philosopher who first proposed the idea of mind being fundamental in the cause of change.[19] He proposed that physical entities are static, while reason[20] causes the change. Sextus Empiricus places him with Hesiod, Parmenides, and Empedocles, as belonging to the class of philosophers who held a dualistic theory of a material and an active principle being together the origin of the universe.[21] Similar ideas were expounded by Anaxagoras. In the dialogue Phaedo, Plato formulated his famous Theory of Forms as distinct and immaterial substances of which the objects and other phenomena that we perceive in the world are nothing more than mere shadows.[6] In the Phaedo, Plato makes it clear that the Forms are the universalia ante res, i.e. they are ideal universals, by which we are able to understand the world. In his allegory of the cave, Plato likens the achievement of philosophical understanding to emerging into the sun from a dark cave, where only vague shadows of what lies beyond that prison are cast dimly upon the wall. Plato's forms are non-physical and non-mental. 
They exist nowhere in time or space, but neither do they exist in the mind, nor in the pleroma of matter; rather, matter is said to "participate" in form (μεθεξις, methexis). It remained unclear however, even to Aristotle, exactly what Plato intended by that. Aristotle argued at length against many aspects of Plato's forms, creating his own doctrine of hylomorphism wherein form and matter coexist. Ultimately however, Aristotle's aim was to perfect a theory of forms, rather than to reject it. Although Aristotle strongly rejected the independent existence Plato attributed to forms, his metaphysics do agree with Plato's a priori considerations quite often. For example, Aristotle argues that changeless, eternal substantial form is necessarily immaterial. Because matter provides a stable substratum for a change in form, matter always has the potential to change. Thus, if given an eternity in which to do so, it will, necessarily, exercise that potential. Part of Aristotle's psychology, the study of the soul, is his account of the ability of humans to reason and the ability of animals to perceive. In both cases, perfect copies of forms are acquired, either by direct impression of environmental forms, in the case of perception, or else by virtue of contemplation, understanding and recollection. He believed the mind can literally assume any form being contemplated or experienced, and it was unique in its ability to become a blank slate, having no essential form. As thoughts of earth are not heavy, any more than thoughts of fire are causally efficient, they provide an immaterial complement for the formless mind.[3] The philosophical school of Neoplatonism, most active in Late Antiquity, claimed that the physical and the spiritual are both emanations of the One. 
Neoplatonism exerted a considerable influence on Christianity, as did the philosophy of Aristotle via scholasticism.[22] In the scholastic tradition of Saint Thomas Aquinas, a number of whose doctrines have been incorporated into Roman Catholic dogma, the soul is the substantial form of a human being.[23] Aquinas held the Quaestiones disputatae de anima, or 'Disputed questions on the soul', at the Roman studium provinciale of the Dominican Order at Santa Sabina, the forerunner of the Pontifical University of Saint Thomas Aquinas, Angelicum, during the academic year 1265–1266.[24] By 1268 Aquinas had written at least the first book of the Sententia Libri De anima, Aquinas' commentary on Aristotle's De anima, the translation of which from the Greek was completed by Aquinas' Dominican associate at Viterbo, William of Moerbeke, in 1267.[25] Like Aristotle, Aquinas held that the human being was a unified composite substance of two substantial principles: form and matter. The soul is the substantial form and so the first actuality of a material organic body with the potentiality for life.[26] While Aquinas defended the unity of human nature as a composite substance constituted by these two inextricable principles of form and matter, he also argued for the incorruptibility of the intellectual soul,[23] in contrast to the corruptibility of the vegetative and sensitive animation of plants and animals.[23] His argument for the subsistence and incorruptibility of the intellectual soul takes its point of departure from the metaphysical principle that operation follows upon being (agere sequitur esse), i.e., the activity of a thing reveals the mode of being and existence it depends upon. Since the intellectual soul exercises its own per se intellectual operations without employing material faculties, i.e. intellectual operations are immaterial, the intellect itself and the intellectual soul must likewise be immaterial and so incorruptible.
Even though the intellectual soul of man is able to subsist upon the death of the human being, Aquinas does not hold that the human person is able to remain integrated at death. The separated intellectual soul is neither a man nor a human person. The intellectual soul by itself is not a human person (i.e., an individual supposit of a rational nature).[27] Hence, Aquinas held that "soul of St. Peter pray for us" would be more appropriate than "St. Peter pray for us", because all things connected with his person, including memories, ended with his corporeal life.[28] The Catholic doctrine of the resurrection of the body does not subscribe to that view; it sees body and soul as forming a whole and states that at the second coming, the souls of the departed will be reunited with their bodies as a whole person (substance) and witness to the apocalypse. The thorough consistency between dogma and contemporary science was maintained here[29] in part through serious attention to the principle that there can be only one truth. Consistency with science, logic, philosophy, and faith remained a high priority for centuries, and a university doctorate in theology generally included the entire science curriculum as a prerequisite. This doctrine is not universally accepted by Christians today. Many believe that one's immortal soul goes directly to Heaven upon death of the body.[30] In his Meditations on First Philosophy, René Descartes embarked upon a quest in which he called all his previous beliefs into doubt, in order to find out what he could be certain of.[9] In so doing, he discovered that he could doubt whether he had a body (it could be that he was dreaming of it or that it was an illusion created by an evil demon), but he could not doubt whether he had a mind. This gave Descartes his first inkling that the mind and body were different things. The mind, according to Descartes, was a "thinking thing" (Latin: res cogitans), and an immaterial substance.
This "thing" was the essence of himself, that which doubts, believes, hopes, and thinks. The body, "the extended thing" (res extensa), regulates normal bodily functions (such as heart and liver). According to Descartes, animals only had a body and not a soul (which distinguishes humans from animals). The distinction between mind and body is argued in Meditation VI as follows: I have a clear and distinct idea of myself as a thinking, non-extended thing, and a clear and distinct idea of body as an extended and non-thinking thing. Whatever I can conceive clearly and distinctly, God can so create. Hence, since mind and body can each be conceived clearly and distinctly apart from the other, they can exist apart, and are really distinct. The central claim of what is often called Cartesian dualism, in honor of Descartes, is that the immaterial mind and the material body, while being ontologically distinct substances, causally interact. This is an idea that continues to feature prominently in many non-European philosophies. Mental events cause physical events, and vice versa. But this leads to a substantial problem for Cartesian dualism: How can an immaterial mind cause anything in a material body, and vice versa? This has often been called the "problem of interactionism." Descartes himself struggled to come up with a feasible answer to this problem. In his letter to Elisabeth of Bohemia, Princess Palatine, he suggested that spirits interacted with the body through the pineal gland, a small gland in the centre of the brain, between the two hemispheres.[9] The term Cartesian dualism is also often associated with this more specific notion of causal interaction through the pineal gland. However, this explanation was not satisfactory: how can an immaterial mind interact with the physical pineal gland? Because Descartes' theory was so difficult to defend, some of his disciples, such as Arnold Geulincx and Nicolas Malebranche, proposed a different explanation: That all mind–body interactions required the direct intervention of God.
According to these philosophers, the appropriate states of mind and body were only the occasions for such intervention, not real causes. These occasionalists maintained the strong thesis that all causation was directly dependent on God, instead of holding that all causation was natural except for that between mind and body.[18] In addition to the already discussed theories of dualism (particularly the Christian and Cartesian models), there are new theories in the defense of dualism. Naturalistic dualism comes from Australian philosopher David Chalmers (born 1966), who argues there is an explanatory gap between objective and subjective experience that cannot be bridged by reductionism because consciousness is, at least, logically autonomous of the physical properties upon which it supervenes. According to Chalmers, a naturalistic account of property dualism requires a new fundamental category of properties described by new laws of supervenience; the challenge being analogous to that of understanding electricity based on the mechanistic and Newtonian models of materialism prior to Maxwell's equations. A similar defense comes from Australian philosopher Frank Jackson (born 1943), who revived the theory of epiphenomenalism, which argues that mental states do not play a role in physical states. Jackson argues that there are two kinds of dualism: substance dualism, which assumes there is a second, non-corporeal form of reality, in which body and soul are two different substances; and property dualism, which says that body and soul are different properties of the same body. He claims that functions of the mind/soul are internal, very private experiences that are not accessible to observation by others, and therefore not accessible by science (at least not yet). We can know everything, for example, about a bat's facility for echolocation, but we will never know how the bat experiences that phenomenon. Another of Descartes' illustrations shows how, on his account, a purely mechanical chain of events produces a bodily response:
The fire displaces the skin, which pulls a tiny thread, which opens a pore in the ventricle (F) allowing the "animal spirit" to flow through a hollow tube, which inflates the muscle of the leg, causing the foot to withdraw. An important fact is that minds perceive intra-mental states differently from sensory phenomena,[31] and this cognitive difference results in mental and physical phenomena having seemingly disparate properties. The subjective argument holds that these properties are irreconcilable under a physical mind. Mental events have a certain subjective quality to them, whereas physical ones seem not to. So, for example, one may ask what a burned finger feels like, or what the blueness of the sky looks like, or what nice music sounds like.[32] Philosophers of mind call the subjective aspects of mental events qualia. There is something that it's like to feel pain, to see a familiar shade of blue, and so on. There are qualia involved in these mental events. And the claim is that qualia cannot be reduced to anything physical.[1] Thomas Nagel first characterized the problem of qualia for physicalistic monism in his article, "What Is It Like to Be a Bat?". Nagel argued that even if we knew everything there was to know from a third-person, scientific perspective about a bat's sonar system, we still wouldn't know what it is like to be a bat. However, others argue that qualia are a consequence of the same neurological processes that engender the bat's mind, and will be fully understood as the science develops.[33] Frank Jackson formulated his well-known knowledge argument based upon similar considerations. In this thought experiment, known as Mary's room, he asks us to consider a neuroscientist, Mary, who was born, and has lived all of her life, in a black and white room with a black and white television and computer monitor where she collects all the scientific data she possibly can on the nature of colours.
Jackson asserts that as soon as Mary leaves the room, she will come to have new knowledge which she did not possess before: the knowledge of the experience of colours (i.e., what they are like). Although Mary knows everything there is to know about colours from an objective, third-person perspective, she has never known, according to Jackson, what it was like to see red, orange, or green. If Mary really learns something new, it must be knowledge of something non-physical, since she already knew everything about the physical aspects of colour.[34] However, Jackson later rejected his argument and embraced physicalism.[35] He notes that Mary obtains knowledge not of color, but of a new intramental state, seeing color.[17] Also, he notes that Mary might say "wow," and as a mental state affecting the physical, this clashed with his former view of epiphenomenalism. David Lewis' response to this argument, now known as the ability argument, is that what Mary really came to know was simply the ability to recognize and identify color sensations to which she had previously not been exposed.[36] Daniel Dennett and others also provide arguments against this notion. Chalmers' argument is that it seems plausible that a philosophical zombie, a being physically identical to a human but lacking conscious experience, could exist because all that is needed is that all and only the things that the physical sciences describe and observe about a human being must be true of the zombie. None of the concepts involved in these sciences make reference to consciousness or other mental phenomena, and any physical entity can be described scientifically via physics whether it is conscious or not. The mere logical possibility of a p-zombie demonstrates that consciousness is a natural phenomenon beyond the current unsatisfactory explanations. Chalmers states that one probably could not build a living p-zombie because living things seem to require a level of consciousness. However (unconscious?) robots built to simulate humans may become the first real p-zombies.
Hence Chalmers half-jokingly calls for the construction of a "consciousness meter" to ascertain if any given entity, human or robot, is conscious or not.[37][38] Others such as Dennett have argued that the notion of a philosophical zombie is an incoherent,[39] or unlikely,[40] concept. In particular, nothing proves that an entity (e.g., a computer or robot) which would perfectly mimic human beings, and especially perfectly mimic expressions of feelings (like joy, fear, anger, ...), would not indeed experience them, thus having similar states of consciousness to what a real human would have. It is argued that under physicalism, one must either believe that anyone including oneself might be a zombie, or that no one can be a zombie—following from the assertion that one's own conviction about being (or not being) a zombie is a product of the physical world and is therefore no different from anyone else's. Howard Robinson argues that, if predicate dualism is correct, then there are "special sciences" that are irreducible to physics. These allegedly irreducible subjects, which contain irreducible predicates, differ from hard sciences in that they are interest-relative. Here, interest-relative fields depend on the existence of minds that can have interested perspectives.[13] Psychology is one such science; it completely depends on and presupposes the existence of the mind. Physics is the general analysis of nature, conducted in order to understand how the universe behaves. On the other hand, the study of meteorological weather patterns or human behavior is only of interest to humans themselves. The point is that having a perspective on the world is a psychological state. Therefore, the special sciences presuppose the existence of minds which can have these states. If one is to avoid ontological dualism, then the mind that has a perspective must be part of the physical reality to which it applies its perspective.
If this is the case, then in order to perceive the physical world as psychological, the mind must have a perspective on the physical. This, in turn, presupposes the existence of mind.[13] Another argument for dualism concerns the differences between the applicability of counterfactual conditionals to physical objects, on the one hand, and to conscious, personal agents on the other.[50] In the case of any material object, e.g. a printer, we can formulate a series of counterfactuals in the following manner: This printer could have been made of straw. This printer could have been made of some other kind of plastic and vacuum-tube transistors. This printer could have been made of 95% of what it is actually made of and 5% vacuum-tube transistors, etc. Somewhere along the way from the printer's being made up exactly of the parts and materials which actually constitute it to the printer's being made up of some different matter at, say, 20%, the question of whether this printer is the same printer becomes a matter of arbitrary convention. Imagine the case of a person, Frederick, who has a counterpart born from the same egg and a slightly genetically modified sperm. Imagine a series of counterfactual cases corresponding to the examples applied to the printer. Somewhere along the way, one is no longer sure about the identity of Frederick. In this latter case, it has been claimed, overlap of constitution cannot be applied to the identity of mind. As Madell puts it:[50] But while my present body can thus have its partial counterpart in some possible world, my present consciousness cannot. Any present state of consciousness that I can imagine either is or is not mine. There is no question of degree here. If the counterpart of Frederick, Frederickus, is 70% constituted of the same physical substance as Frederick, does this mean that it is also 70% mentally identical with Frederick?
Does it make sense to say that something is mentally 70% Frederick?[51] A possible solution to this dilemma is that of open individualism. Richard Swinburne, in his book The Existence of God, put forward an argument for mind-body dualism based upon personal identity. He states that the brain is composed of two hemispheres and a cord linking the two and that, as modern science has shown, either of these can be removed without the person losing any memories or mental capacities. He then cites a thought-experiment for the reader, asking what would happen if each of the two hemispheres of one person were placed inside two different people. Either, Swinburne claims, one of the two is me or neither is—and there is no way of telling which, as each will have similar memories and mental capacities to the other. In fact, Swinburne claims, even if one's mental capacities and memories are far more similar to the original person than the others' are, they still may not be him. From here, he deduces that even if we know what has happened to every single atom inside a person's brain, we still do not know what has happened to 'them' as an identity. From here it follows that a part of our mind, or our soul, is immaterial, and, as a consequence, that mind-body dualism is true.[52] Philosophers and scientists such as Victor Reppert, William Hasker, and Alvin Plantinga have developed an argument for dualism dubbed the "argument from reason". They credit C.S. Lewis with first bringing the argument to light in his book Miracles; Lewis called the argument "The Cardinal Difficulty of Naturalism", which was the title of chapter three of Miracles.[53] The argument postulates that if, as naturalism entails, all of our thoughts are the effect of a physical cause, then we have no reason for assuming that they are also the consequent of a reasonable ground. However, knowledge is apprehended by reasoning from ground to consequent. 
Therefore, if naturalism were true, there would be no way of knowing it (or anything else), except by a fluke.[53] Through this logic, the statement "I have reason to believe naturalism is valid" is inconsistent in the same manner as "I never tell the truth."[54] That is, to conclude its truth would eliminate the grounds from which to reach it. To summarize the argument in the book, Lewis quotes J. B. S. Haldane, who appeals to a similar line of reasoning:[55] If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true...and hence I have no reason for supposing my brain to be composed of atoms. — J. B. S. Haldane, Possible Worlds, p. 209 In his essay "Is Theology Poetry?", Lewis himself summarises the argument in a similar fashion when he writes: If minds are wholly dependent on brains, and brains on biochemistry, and biochemistry (in the long run) on the meaningless flux of the atoms, I cannot understand how the thought of those minds should have any more significance than the sound of the wind in the trees. Descartes puts forward two main arguments for dualism in Meditations: firstly, the "modal argument," or the "clear and distinct perception argument," and secondly the "indivisibility" or "divisibility" argument. The modal argument is distinguished from the zombie argument as it establishes that the mind could continue to exist without the body, rather than that the unaltered body could exist without the mind.[60] Alvin Plantinga,[61] J. P. Moreland,[62] and Edward Feser[63] have all supported the argument, although Feser and Moreland think that it must be carefully reformulated in order to be effective.
The indivisibility argument for dualism was phrased by Descartes as follows:[64] [T]here is a great difference between a mind and a body, because the body, by its very nature, is something divisible, whereas the mind is plainly indivisible…insofar as I am only a thing that thinks, I cannot distinguish any parts in me.… Although the whole mind seems to be united to the whole body, nevertheless, were a foot or an arm or any other bodily part amputated, I know that nothing would be taken away from the mind… The argument relies upon Leibniz's principle of the identity of indiscernibles, which states that two things are the same if and only if they share all their properties. A counterargument is the idea that matter is not infinitely divisible, and thus that the mind could be identified with material things that cannot be divided, or potentially Leibnizian monads.[65] One argument against dualism concerns causal interaction. If consciousness (the mind) can exist independently of physical reality (the brain), one must explain how physical memories are created concerning consciousness. Dualism must therefore explain how consciousness affects physical reality. One of the main objections to dualistic interactionism is lack of explanation of how the material and immaterial are able to interact. Varieties of dualism according to which an immaterial mind causally affects the material body and vice versa have come under strenuous attack from different quarters, especially in the 20th century. Critics of dualism have often asked how something totally immaterial can affect something totally material—this is the basic problem of causal interaction. First, it is not clear where the interaction would take place. For example, burning one's finger causes pain.
Apparently there is some chain of events, leading from the burning of skin, to the stimulation of nerve endings, to something happening in the peripheral nerves of one's body that lead to one's brain, to something happening in a particular part of one's brain, and finally resulting in the sensation of pain. But pain is not supposed to be spatially locatable. It might be responded that the pain "takes place in the brain." But evidently, the pain is in the finger. This may not be a devastating criticism. However, there is a second problem about the interaction. Namely, the question of how the interaction takes place, where in dualism "the mind" is assumed to be non-physical and by definition outside of the realm of science. The mechanism which explains the connection between the mental and the physical would therefore be a philosophical proposition as compared to a scientific theory. For example, compare such a mechanism to a physical mechanism that is well understood. Take a very simple causal relation, such as when a cue ball strikes an eight ball and causes it to go into the pocket. What happens in this case is that the cue ball has a certain amount of momentum as its mass moves across the pool table with a certain velocity, and then that momentum is transferred to the eight ball, which then heads toward the pocket. Compare this to the situation in the brain, where one wants to say that a decision causes some neurons to fire and thus causes a body to move across the room. The intention to "cross the room now" is a mental event and, as such, it does not have physical properties such as force. If it has no force, then it would seem that it could not possibly cause any neuron to fire. However, with dualism, an explanation is required of how something without any physical properties has physical effects.[66] At the time C. S.
Lewis wrote Miracles,[68] quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, but still Lewis stated the logical possibility that, if the physical world was proved to be indeterministic, this would provide an entry (interaction) point into the traditionally viewed closed system, where a scientifically described physically probable/improbable event could be philosophically described as an action of a non-physical entity on physical reality. He states, however, that none of the arguments in his book will rely on this. Although some interpretations of quantum mechanics consider wave function collapse to be indeterminate, in others this event is defined as deterministic.[69] The argument from physics is closely related to the argument from causal interaction. Many physicists and consciousness researchers have argued that any action of a nonphysical mind on the brain would entail the violation of physical laws, such as the conservation of energy.[70][71][72][73] By assuming a deterministic physical universe, the objection can be formulated more precisely. When a person decides to walk across a room, it is generally understood that the decision to do so, a mental event, immediately causes a group of neurons in that person's brain to fire, a physical event, which ultimately results in his walking across the room. The problem is that if there is something totally non-physical causing a bunch of neurons to fire, then there is no physical event which causes the firing. This means that some physical energy is required to be generated against the physical laws of the deterministic universe—this is by definition a miracle and there can be no scientific explanation of (repeatable experiment performed regarding) where the physical energy for the firing came from.[74] Such interactions would violate the fundamental laws of physics.
In particular, if some external source of energy is responsible for the interactions, then this would violate the law of the conservation of energy.[75] Dualistic interactionism has therefore been criticized for violating a general heuristic principle of science: the causal closure of the physical world. The Stanford Encyclopedia of Philosophy[8] and the New Catholic Encyclopedia[76] provide two possible replies to the above objections. The first reply is that the mind may influence the distribution of energy, without altering its quantity. The second possibility is to deny that the human body is causally closed, as the conservation of energy applies only to closed systems. However, physicalists object that no evidence exists for the causal non-closure of the human body.[77] Robin Collins responds[78] that energy conservation objections misunderstand the role of energy conservation in physics. Well-understood scenarios in general relativity violate energy conservation and quantum mechanics provides precedent for causal interactions, or correlation without energy or momentum exchange.[79] However, this does not mean that the mind expends energy, and it still does not exclude the supernatural. Another reply is akin to parallelism—Mills holds that behavioral events are causally overdetermined, and can be explained by either physical or mental causes alone.[80] An overdetermined event is fully accounted for by multiple causes at once.[81] However, J. J. C.
Smart and Paul Churchland have pointed out that if physical phenomena fully determine behavioral events, then by Occam's razor an unphysical mind is unnecessary.[82] Robinson suggests that the interaction may involve dark energy, dark matter or some other currently unknown scientific process.[13] However, such processes would necessarily be physical, and in this case dualism is replaced with physicalism, or the interaction point is left for study at a later time when these physical processes are understood.[citation needed] Yet another reply to the interaction problem is to note that it doesn't seem that there is an interaction problem for all forms of substance dualism. For instance, Thomistic dualism doesn't obviously face any issue with regard to interaction.[85] The argument from brain damage has been formulated by Paul Churchland, among others. The point is that, in instances of some sort of brain damage (e.g. caused by automobile accidents, drug abuse, pathological diseases, etc.), it is always the case that the mental substance and/or properties of the person are significantly changed or compromised. If the mind were a completely separate substance from the brain, how could it be possible that every single time the brain is injured, the mind is also injured? Indeed, it is very frequently the case that one can even predict and explain the kind of mental or psychological deterioration or change that human beings will undergo when specific parts of their brains are damaged. So the question for the dualist to try to confront is how can all of this be explained if the mind is a separate and immaterial substance from, or if its properties are ontologically independent of, the brain.[86] Property dualism and William Hasker's "emergent dualism"[87] seek to avoid this problem. They assert that the mind is a property or substance that emerges from the appropriate arrangement of physical matter, and therefore could be affected by any rearrangement of matter.
Phineas Gage, who suffered destruction of one or both frontal lobes by a projectile iron rod, is often cited as an example illustrating that the brain causes mind. Gage certainly exhibited some mental changes after his accident. This physical event, the destruction of part of his brain, therefore caused some kind of change in his mind, suggesting a correlation between brain states and mental states. Similar examples abound; neuroscientist David Eagleman describes the case of another individual who exhibited escalating pedophilic tendencies at two different times, and in each case was found to have tumors growing in a particular part of his brain.[88][89] Case studies aside, modern experiments have demonstrated that the relation between brain and mind is much more than simple correlation. By damaging, or manipulating, specific areas of the brain repeatedly under controlled conditions (e.g. in monkeys) and reliably obtaining the same results in measures of mental state and abilities, neuroscientists have shown that the relation between damage to the brain and mental deterioration is likely causal. This conclusion is further supported by data from the effects of neuro-active chemicals (e.g., those affecting neurotransmitters) on mental functions,[90] but also from research on neurostimulation (direct electrical stimulation of the brain, including transcranial magnetic stimulation).[91] Another common argument against dualism consists in the idea that since human beings (both phylogenetically and ontogenetically) begin their existence as entirely physical or material entities and since nothing outside of the domain of the physical is added later on in the course of development, then we must necessarily end up being fully developed material beings. 
There is nothing non-material or mentalistic involved in conception, the formation of the blastula, the gastrula, and so on.[92] The postulation of a non-physical mind would seem superfluous.[citation needed] In some contexts, the decisions that a person makes can be detected up to 10 seconds in advance by means of scanning their brain activity.[93] Subjective experiences and covert attitudes can be detected,[94] as can mental imagery.[95] This is strong empirical evidence that cognitive processes have a physical basis in the brain.[96][97] The argument from simplicity is probably the simplest and also the most common form of argument against dualism of the mental. The dualist is always faced with the question of why anyone should find it necessary to believe in the existence of two ontologically distinct entities (mind and brain), when it seems possible, and would make for a simpler thesis to test against scientific evidence, to explain the same events and properties in terms of one. It is a heuristic principle in science and philosophy not to assume the existence of more entities than is necessary for clear explanation and prediction. This argument was criticized by Peter Glassen in a debate with J. J. C. Smart in the pages of Philosophy in the late 1970s and early 1980s.[98][99][100] Glassen argued that, because it is not a physical entity, Occam's razor cannot consistently be appealed to by a physicalist or materialist as a justification of mental states or events, such as the belief that dualism is false. The idea is that Occam's razor may not be as "unrestricted" as it is normally described (applying to all qualitative postulates, even abstract ones) but instead concrete (applying only to physical objects). If one applies Occam's razor unrestrictedly, then it recommends monism until pluralism either receives more support or is disproved.
If one applies Occam's razor only concretely, then it may not be used on abstract concepts (this route, however, has serious consequences for selecting between hypotheses about the abstract).[101] ^Stanford Encyclopedia of Philosophy, "Emergent Properties". Excerpt: "William Hasker (1999) goes one step further in arguing for the existence of the mind conceived as a non-composite substance which ‘emerges’ from the brain at a certain point in its development. He dubs his position ‘emergent dualism,’ and claims for it all the philosophical advantages of traditional, Cartesian substance dualism while being able to overcome a central difficulty, viz., explaining how individual brains and mental substances come to be linked in a persistent, ‘monogamous’ relationship. Here, Hasker is using the term to express a view structurally like one (vitalism) that the British emergentists were anxious to disavow, thus proving that the term is capable of evoking all manner of ideas for metaphysicians." ^Stanford Encyclopedia of Philosophy, "Simplicity". Excerpt: "Perhaps scientists apply an unrestricted version of Occam's Razor to that portion of reality in which they are interested, namely the concrete, causal, spatiotemporal world. Or perhaps scientists apply a 'concretized' version of Occam's Razor unrestrictedly. Which is the case? The answer determines which general philosophical principle we end up with: ought we to avoid the multiplication of objects of whatever kind, or merely the multiplication of concrete objects? The distinction here is crucial for a number of central philosophical debates. Unrestricted Occam's Razor favors monism over dualism, and nominalism over platonism. By contrast, 'concretized' Occam's Razor has no bearing on these debates, since the extra entities in each case are not concrete". Amoroso, Richard L. 2010. Complementarity of Mind and Body: Realizing the Dream of Descartes, Einstein and Eccles. ISBN 978-1-61668-203-3.
A history-making volume with the first comprehensive model of dualism-interactionism that is also empirically testable. Bracken, Patrick, and Philip Thomas. 2002. "Time to move beyond the mind–body split." British Medical Journal 325:1433–1434. doi:10.1136/bmj.325.7378.1433. A controversial perspective on the use and possible overuse of the mind–body split and its application in medical practice.
The body, "the extended thing" (res extensa), regulates normal bodily functions (such as the heart and liver). According to Descartes, animals only had a body and not a soul (which distinguishes humans from animals). The distinction between mind and body is argued in Meditation VI as follows: I have a clear and distinct idea of myself as a thinking, non-extended thing, and a clear and distinct idea of body as an extended and non-thinking thing. Whatever I can conceive clearly and distinctly, God can so create. The central claim of what is often called Cartesian dualism, in honor of Descartes, is that the immaterial mind and the material body, while being ontologically distinct substances, causally interact. This is an idea that continues to feature prominently in many non-European philosophies. Mental events cause physical events, and vice versa. But this leads to a substantial problem for Cartesian dualism: How can an immaterial mind cause anything in a material body, and vice versa? This has often been called the "problem of interactionism." Descartes himself struggled to come up with a feasible answer to this problem. In his letter to Elisabeth of Bohemia, Princess Palatine, he suggested that spirits interacted with the body through the pineal gland, a small gland in the centre of the brain, between the two hemispheres.[9] The term Cartesian dualism is also often associated with this more specific notion of causal interaction through the pineal gland. However, this explanation was not satisfactory: how can an immaterial mind interact with the physical pineal gland? Because Descartes's theory was so difficult to defend, some of his disciples, such as Arnold Geulincx and Nicolas Malebranche, proposed a different explanation: that all mind–body interactions required the direct intervention of God. According to these philosophers, the appropriate states of mind and body were only the occasions for such intervention, not real causes.
yes
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://en.wikipedia.org/wiki/Geology_of_the_Moon
Geology of the Moon - Wikipedia
The Moon is the only extraterrestrial body for which we have samples with a known geologic context. A handful of lunar meteorites have been recognized on Earth, though their source craters on the Moon are unknown. A substantial portion of the lunar surface has not been explored, and a number of geological questions remain unanswered. For a long period of time, the fundamental question regarding the history of the Moon was of its origin. Early hypotheses included fission from Earth, capture, and co-accretion. Today, the giant-impact hypothesis is widely accepted by the scientific community.[15] Cliffs in the lunar crust indicate the Moon shrank globally in the geologically recent past and is still shrinking today. The geological history of the Moon has been divided into six major epochs, called the lunar geologic timescale. Starting about 4.5 billion years ago,[16] the newly formed Moon was in a molten state and was orbiting much closer to Earth, resulting in tidal forces.[17] These tidal forces deformed the molten body into an ellipsoid, with the major axis pointed towards Earth. The first important event in the geologic evolution of the Moon was the crystallization of the near global magma ocean. It is not known with certainty what its depth was, but several studies imply a depth of about 500 km or greater. The first minerals to form in this ocean were the iron and magnesium silicates olivine and pyroxene. Because these minerals were denser than the molten material around them, they sank. After crystallization was about 75% complete, less dense anorthositic plagioclase feldspar crystallized and floated, forming an anorthositic crust about 50 km in thickness. The majority of the magma ocean crystallized quickly (within about 100 million years or less), though the final remaining KREEP-rich magmas, which are highly enriched in incompatible and heat-producing elements, could have remained partially molten for several hundred million (or perhaps 1 billion) years.
It appears that the final KREEP-rich magmas of the magma ocean eventually became concentrated within the region of Oceanus Procellarum and the Imbrium basin, a unique geologic province that is now known as the Procellarum KREEP Terrane. Soon after the lunar crust formed, or even as it was forming, different types of magmas that would give rise to the Mg-suite norites and troctolites[18] began to form, although the exact depths at which this occurred are not known precisely. Recent theories suggest that Mg-suite plutonism was largely confined to the region of the Procellarum KREEP Terrane, and that these magmas are genetically related to KREEP in some manner, though their origin is still highly debated in the scientific community. The oldest of the Mg-suite rocks have crystallization ages of about 3.85 Ga. However, the last large impact that could have excavated deep into the crust (the Imbrium basin) also occurred at 3.85 Ga before present. Thus, it seems probable that Mg-suite plutonic activity continued for a much longer time, and that younger plutonic rocks exist deep below the surface. Analysis of lunar samples seems to show that many of the Moon's impact basins formed within a short interval, between about 4 and 3.85 Ga ago. This hypothesis is referred to as the lunar cataclysm or late heavy bombardment. However, it is now recognized that ejecta from the Imbrium impact basin (one of the youngest large impact basins on the Moon) should be found at all of the Apollo landing sites. It is thus possible that ages for some impact basins (in particular Mare Nectaris) could have been mistakenly assigned the same age as Imbrium. The lunar maria represent ancient flood basaltic eruptions. In comparison to terrestrial lavas, these contain higher iron abundances, have low viscosities, and some contain highly elevated abundances of the titanium-rich mineral ilmenite.
The majority of basaltic eruptions occurred between about 3 and 3.5 Ga ago, though some mare samples have ages as old as 4.2 Ga. The youngest (based on the method of crater counting) was long thought to date to 1 billion years ago,[4] but research in the 2010s has found evidence of eruptions from less than 50 million years in the past.[6][19] Along with mare volcanism came pyroclastic eruptions, which launched molten basaltic materials hundreds of kilometers away from the volcano. A large portion of the maria formed in, or flowed into, the low elevations associated with the nearside impact basins. However, Oceanus Procellarum does not correspond to any known impact structure, and the lowest elevations of the Moon within the farside South Pole-Aitken basin are only modestly covered by mare (see lunar mare for a more detailed discussion). Impacts by meteorites and comets are the only abrupt geologic force acting on the Moon today, though the variation of Earth tides on the scale of the lunar anomalistic month causes small variations in stresses.[20] Some of the most important craters used in lunar stratigraphy formed in this recent epoch. For example, the crater Copernicus, which has a depth of 3.76 km and a diameter of 93 km, is estimated to have formed about 900 million years ago (though this is debatable). The Apollo 17 mission landed in an area in which the material coming from the crater Tycho might have been sampled. The study of these rocks seems to indicate that this crater could have formed 100 million years ago, though this is debatable as well. The surface has also experienced space weathering due to high energy particles, solar wind implantation, and micrometeorite impacts. This process causes the ray systems associated with young craters to darken until they match the albedo of the surrounding surface.
However, if the composition of the ray is different from the underlying crustal materials (as might occur when a "highland" ray is emplaced on the mare), the ray can remain visible for much longer times. After the resumption of lunar exploration in the 1990s, it was discovered that there are scarps across the globe caused by contraction due to cooling of the Moon.[21] The stratigraphy of Mercury closely resembles that of the Moon. At the top of the lunar stratigraphy is the Copernican unit, consisting of craters with a ray system. Below this is the Eratosthenian unit, defined by craters with established impact crater morphology but lacking the ray system of the Copernican. These two units are present in smaller spots on the lunar surface. Further down the stratigraphy are the mare units (previously known as the Procellarian unit), and the Imbrian unit, which is related to ejecta and tectonics from the Imbrium basin. The bottom of the lunar stratigraphy is the pre-Nectarian unit, which consists of old crater plains. All of these are also found on Mercury in similar arrangements. The most distinctive aspect of the Moon is the contrast between its bright and dark zones. Lighter surfaces are the lunar highlands, which receive the name of terrae (singular terra, from the Latin for earth, land), and the darker plains are called maria (singular mare, from the Latin for sea), after Johannes Kepler, who introduced the names in the 17th century. The highlands are anorthositic in composition, whereas the maria are basaltic. The maria often coincide with the "lowlands," but it is important to note that the lowlands (such as within the South Pole-Aitken basin) are not always covered by maria. The highlands are older than the visible maria, and hence are more heavily cratered. The major products of volcanic processes on the Moon are evident to Earth-bound observers in the form of the lunar maria.
These are large flows of basaltic lava that correspond to low-albedo surfaces covering nearly a third of the near side. Only a few percent of the farside has been affected by mare volcanism. Even before the Apollo missions confirmed it, most scientists already thought that the maria were lava-filled plains, because they have lava flow patterns and collapses attributed to lava tubes. The ages of the mare basalts have been determined both by direct radiometric dating and by the technique of crater counting. The oldest radiometric ages are about 4.2 Ga (billion years), and ages of most of the youngest maria lavas have been determined from crater counting to be about 1 Ga. Due to better resolution of more recent imagery, about 70 small areas called irregular mare patches (each area only a few hundred meters or a few kilometers across) have been found in the maria that crater counting suggests were sites of volcanic activity in the geologically much more recent past (less than 50 million years).[6] Volumetrically, most of the mare formed between about 3 and 3.5 Ga before present. The youngest lavas erupted within Oceanus Procellarum, whereas some of the oldest appear to be located on the farside. The maria are clearly younger than the surrounding highlands given their lower density of impact craters. A large portion of the maria erupted within, or flowed into, the low-lying impact basins on the lunar nearside. However, it is unlikely that a causal relationship exists between the impact event and mare volcanism, because the impact basins are much older (by about 500 million years) than the mare fill. Furthermore, Oceanus Procellarum, which is the largest expanse of mare volcanism on the Moon, does not correspond to any known impact basin. It is commonly suggested that the reason the mare only erupted on the nearside is that the nearside crust is thinner than the farside.
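The radiometric half of the dating toolkit mentioned above reduces to simple decay arithmetic. The sketch below uses a generic closed-system model age with an assumed zero initial daughter abundance; the Rb–Sr half-life is a standard figure, and the ratio is illustrative, not sample data:

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Closed-system model age: t = ln(1 + D/P) / lambda, where
    lambda = ln(2) / half-life. Assumes no initial daughter isotope;
    real lunar chronology relaxes this assumption with isochrons."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# Illustrative check: when D/P = 1, exactly one half-life has elapsed.
rb_sr_half_life = 48.8e9  # years, Rb-87 -> Sr-87
age = radiometric_age(1.0, rb_sr_half_life)
```

In practice an isochron is fitted across several minerals from the same rock, so the initial daughter abundance is solved for rather than assumed to be zero.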
Although variations in the crustal thickness might act to modulate the amount of magma that ultimately reaches the surface, this hypothesis does not explain why the farside South Pole-Aitken basin, whose crust is thinner than Oceanus Procellarum, was only modestly filled by volcanic products. Another type of deposit associated with the maria, although it also covers the highland areas, are the "dark mantle" deposits. These deposits cannot be seen with the naked eye, but they can be seen in images taken from telescopes or orbiting spacecraft. Before the Apollo missions, scientists predicted that they were deposits produced by pyroclastic eruptions. Some deposits appear to be associated with dark elongated ash cones, reinforcing the idea of pyroclasts. The existence of pyroclastic eruptions was later confirmed by the discovery of glass spherules similar to those found in pyroclastic eruptions here on Earth. Many of the lunar basalts contain small holes called vesicles, which were formed by gas bubbles exsolving from the magma at the vacuum conditions encountered at the surface. It is not known with certainty which gases escaped these rocks, but carbon monoxide is one candidate. The samples of pyroclastic glasses are of green, yellow, and red tints. The difference in color indicates the concentration of titanium that the rock has, with the green particles having the lowest concentrations (about 1%), and red particles having the highest concentrations (up to 14%, much more than the basalts with the highest concentrations). Rilles on the Moon sometimes resulted from the formation of localized lava channels. These generally fall into three categories, consisting of sinuous, arcuate, or linear shapes. By following these meandering rilles back to their source, they often lead to an old volcanic vent. One of the most notable sinuous rilles is the Vallis Schröteri feature, located in the Aristarchus plateau along the eastern edge of Oceanus Procellarum. 
An example of a sinuous rille exists at the Apollo 15 landing site, Rima Hadley, located on the rim of the Imbrium Basin. Based on observations from the mission, it is generally thought that this rille was formed by volcanic processes, a topic long debated before the mission took place. A variety of shield volcanoes can be found in selected locations on the lunar surface, such as on Mons Rümker. These are thought to be formed by relatively viscous, possibly silica-rich lava, erupting from localized vents. The resulting lunar domes are wide, rounded, circular features with a gentle slope rising in elevation a few hundred meters to the midpoint. They are typically 8–12 km in diameter, but can be up to 20 km across. Some of the domes contain a small pit at their peak. Wrinkle ridges are features created by compressive tectonic forces within the maria. These features represent buckling of the surface and form long ridges across parts of the maria. Some of these ridges may outline buried craters or other features beneath the maria. A prime example of such an outlined feature is the crater Letronne. Grabens are tectonic features that form under extensional stresses. Structurally, they are composed of two normal faults, with a down-dropped block between them. Most grabens are found within the lunar maria near the edges of large impact basins. The origin of the Moon's craters as impact features became widely accepted only in the 1960s. This realization allowed the impact history of the Moon to be gradually worked out by means of the geologic principle of superposition. That is, if a crater (or its ejecta) overlaid another, it must be the younger. The amount of erosion experienced by a crater was another clue to its age, though this is more subjective. 
Adopting this approach in the late 1950s, Gene Shoemaker took the systematic study of the Moon away from the astronomers and placed it firmly in the hands of the lunar geologists.[22] Impact cratering is the most notable geological process on the Moon. The craters are formed when a solid body, such as an asteroid or comet, collides with the surface at a high velocity (mean impact velocities for the Moon are about 17 km per second). The kinetic energy of the impact creates a compression shock wave that radiates away from the point of entry. This is succeeded by a rarefaction wave, which is responsible for propelling most of the ejecta out of the crater. Finally there is a hydrodynamic rebound of the floor that can create a central peak. These craters appear in a continuum of diameters across the surface of the Moon, ranging in size from tiny pits to the immense South Pole–Aitken basin with a diameter of nearly 2,500 km and a depth of 13 km. In a very general sense, the lunar history of impact cratering follows a trend of decreasing crater size with time. In particular, the largest impact basins were formed during the early periods, and these were successively overlaid by smaller craters. The size frequency distribution (SFD) of crater diameters on a given surface (that is, the number of craters as a function of diameter) approximately follows a power law with increasing number of craters with decreasing crater size. The vertical position of this curve can be used to estimate the age of the surface. The lunar crater King displays the characteristic features of a large impact formation, with a raised rim, slumped edges, terraced inner walls, a relatively flat floor with some hills, and a central ridge. The Y-shaped central ridge is unusually complex in form. The most recent impacts are distinguished by well-defined features, including a sharp-edged rim. Small craters tend to form a bowl shape, whereas larger impacts can have a central peak with flat floors. 
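The power-law size-frequency distribution described above can be illustrated with a toy crater population; the exponent and normalization below are invented for the example and are not a calibrated lunar production function:

```python
import math

def cumulative_counts(diameters_km, thresholds_km):
    """Cumulative size-frequency: number of craters with diameter >= D."""
    return [sum(1 for d in diameters_km if d >= t) for t in thresholds_km]

def fit_power_law(thresholds_km, counts):
    """Least-squares fit of log10(N) vs log10(D); for N = k * D**(-b)
    this returns (b, log10_k)."""
    pts = [(math.log10(t), math.log10(n))
           for t, n in zip(thresholds_km, counts) if n > 0]
    m = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    intercept = (sy - slope * sx) / m
    return -slope, intercept

# Toy population drawn from N = 1000 * D**-2 (purely illustrative).
diams = [(1000.0 / n) ** 0.5 for n in range(1, 1001)]
thresholds = [1.0, 2.0, 4.0, 8.0]
b, log_k = fit_power_law(thresholds, cumulative_counts(diams, thresholds))
```

The fitted exponent b recovers the assumed slope of 2, and the intercept sets the curve's vertical position; converting that position into an absolute surface age requires a calibrated production/chronology function, which this toy does not attempt.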
Larger craters generally display slumping features along the inner walls that can form terraces and ledges. The largest impact basins, the multiring basins, can even have secondary concentric rings of raised material. The impact process excavates high albedo materials that initially gives the crater, ejecta, and ray system a bright appearance. The process of space weathering gradually decreases the albedo of this material such that the rays fade with time. Gradually the crater and its ejecta undergo impact erosion from micrometeorites and smaller impacts. This erosional process softens and rounds the features of the crater. The crater can also be covered in ejecta from other impacts, which can submerge features and even bury the central peak. The ejecta from large impacts can include large blocks of material that reimpact the surface to form secondary impact craters. These craters are sometimes formed in clearly discernible radial patterns, and generally have shallower depths than primary craters of the same size. In some cases an entire line of these blocks can impact to form a valley. These are distinguished from catena, or crater chains, which are linear strings of craters that are formed when the impact body breaks up prior to impact. Generally speaking, a lunar crater is roughly circular in form. Laboratory experiments at NASA's Ames Research Center have demonstrated that even very low-angle impacts tend to produce circular craters, and that elliptical craters start forming at impact angles below five degrees. However, a low angle impact can produce a central peak that is offset from the midpoint of the crater. 
Additionally, the ejecta from oblique impacts show distinctive patterns at different impact angles: asymmetry starting around 60˚ and a wedge-shaped "zone of avoidance" free of ejecta in the direction the projectile came from starting around 45˚.[23] Dark-halo craters are formed when an impact excavates lower albedo material from beneath the surface, then deposits this darker ejecta around the main crater. This can occur when an area of darker basaltic material, such as that found on the maria, is later covered by lighter ejecta derived from more distant impacts in the highlands. This covering conceals the darker material below, which is later excavated by subsequent craters. The largest impacts produced melt sheets of molten rock that covered portions of the surface that could be as thick as a kilometer. Examples of such impact melt can be seen in the northeastern part of the Mare Orientale impact basin. The surface of the Moon has been subject to billions of years of collisions with both small and large asteroidal and cometary materials. Over time, these impact processes have pulverized and "gardened" the surface materials, forming a fine-grained layer termed regolith. The thickness of the lunar regolith varies between 2 meters (6.6 ft) beneath the younger maria, to up to 20 meters (66 ft) beneath the oldest surfaces of the lunar highlands. The regolith is predominantly composed of materials found in the region, but also contains traces of materials ejected by distant impact craters. The term mega-regolith is often used to describe the heavily fractured bedrock directly beneath the near-surface regolith layer. The regolith contains rocks, fragments of minerals from the original bedrock, and glassy particles formed during the impacts. In most of the lunar regolith, half of the particles are made of mineral fragments fused by the glassy particles; these objects are called agglutinates. 
The chemical composition of the regolith varies according to its location; the regolith in the highlands is rich in aluminium and silica, as are the rocks in those regions.[citation needed] The regolith in the maria is rich in iron and magnesium and is silica-poor, as are the basaltic rocks from which it is formed. The lunar regolith is very important because it also stores information about the history of the Sun. The atoms that compose the solar wind – mostly hydrogen, helium, neon, carbon and nitrogen – hit the lunar surface and insert themselves into the mineral grains. Upon analyzing the composition of the regolith, particularly its isotopic composition, it is possible to determine if the activity of the Sun has changed with time. The gases of the solar wind could be useful for future lunar bases, because oxygen, hydrogen (water), carbon and nitrogen are not only essential to sustain life, but are also potentially very useful in the production of fuel. The composition of the lunar regolith can also be used to infer its source origin. Lunar lava tubes form a potentially important location for constructing a future lunar base, which may be used for local exploration and development, or as a human outpost to serve exploration beyond the Moon. The potential of lunar lava caves has long been suggested and discussed in the literature.[24] Any intact lava tube on the Moon could serve as a shelter from the severe environment of the lunar surface, with its frequent meteorite impacts, high-energy ultraviolet radiation and energetic particles, and extreme diurnal temperature variations.[25][26][27] Following the launch of the Lunar Reconnaissance Orbiter, many lunar lava tubes have been imaged.[28] These lunar pits are found in several locations across the Moon, including Marius Hills, Mare Ingenii and Mare Tranquillitatis. The first rocks brought back by Apollo 11 were basalts.
Although the mission landed on Mare Tranquillitatis, a few millimetric fragments of rocks coming from the highlands were picked up. These are composed mainly of plagioclase feldspar; some fragments were composed exclusively of anorthite. The identification of these mineral fragments led to the bold hypothesis that a large portion of the Moon was once molten, and that the crust formed by fractional crystallization of this magma ocean. A natural outcome of the hypothetical giant-impact event is that the materials that re-accreted to form the Moon must have been hot. Current models predict that a large portion of the Moon would have been molten shortly after the Moon formed, with estimates for the depth of this magma ocean ranging from about 500 km to complete melting. Crystallization of this magma ocean would have given rise to a differentiated body with a compositionally distinct crust and mantle and accounts for the major suites of lunar rocks. As crystallization of the lunar magma ocean proceeded, minerals such as olivine and pyroxene would have precipitated and sunk to form the lunar mantle. After crystallization was about three-quarters complete, anorthositic plagioclase would have begun to crystallize, and because of its low density, float, forming an anorthositic crust. Importantly, elements that are incompatible (i.e., those that partition preferentially into the liquid phase) would have been progressively concentrated into the magma as crystallization progressed, forming a KREEP-rich magma that initially should have been sandwiched between the crust and mantle. Evidence for this scenario comes from the highly anorthositic composition of the lunar highland crust, as well as the existence of KREEP-rich materials.
Additionally, zircon analysis of Apollo 14 samples suggests the lunar crust differentiated 4.51±0.01 billion years ago.[29] The Apollo program brought back 380.05 kilograms (837.87 lb) of lunar surface material,[30] most of which is stored at the Lunar Receiving Laboratory in Houston, Texas, and the uncrewed Soviet Luna programme returned 326 grams (11.5 oz) of lunar material. These rocks have proved to be invaluable in deciphering the geologic evolution of the Moon. Lunar rocks are in large part made of the same common rock-forming minerals as found on Earth, such as olivine, pyroxene, and plagioclase feldspar (anorthosite). Plagioclase feldspar is mostly found in the lunar crust, whereas pyroxene and olivine are typically seen in the lunar mantle.[31] The mineral ilmenite is highly abundant in some mare basalts, and a new mineral named armalcolite (named for Armstrong, Aldrin, and Collins, the three members of the Apollo 11 crew) was first discovered in the lunar samples. The maria are composed predominantly of basalt, whereas the highland regions are iron-poor and composed primarily of anorthosite, a rock composed primarily of calcium-rich plagioclase feldspar. Another significant component of the crust are the igneous Mg-suite rocks, such as the troctolites, norites, and KREEP-basalts. These rocks are thought to be related to the petrogenesis of KREEP. Composite rocks on the lunar surface often appear in the form of breccias. Of these, the subcategories are called fragmental, granulitic, and impact-melt breccias, depending on how they were formed. The mafic impact melt breccias, which are typified by the low-K Fra Mauro composition, have a higher proportion of iron and magnesium than typical upper crust anorthositic rocks, as well as higher abundances of KREEP. The main characteristic of the basaltic rocks with respect to the rocks of the lunar highlands is that the basalts contain higher abundances of olivine and pyroxene, and less plagioclase.
They are richer in iron than terrestrial basalts, and also have lower viscosities. Some of them have high abundances of a ferro-titanic oxide called ilmenite. Because the first sampling of rocks contained a high content of ilmenite and other related minerals, they received the name of "high titanium" basalts. The Apollo 12 mission returned to Earth with basalts of lower titanium concentrations, and these were dubbed "low titanium" basalts. Subsequent missions, including the Soviet robotic probes, returned with basalts with even lower concentrations, now called "very low titanium" basalts. The Clementine space probe returned data showing that the mare basalts have a continuum in titanium concentrations, with the highest concentration rocks being the least abundant. The temperature and pressure of the Moon's interior increase with depth. The current model of the interior of the Moon was derived using seismometers left behind during the crewed Apollo program missions, as well as investigations of the Moon's gravity field and rotation. The mass of the Moon is sufficient to eliminate any voids within the interior, so it is estimated to be composed of solid rock throughout. Its low bulk density (~3346 kg m−3) indicates a low metal abundance. Mass and moment of inertia constraints indicate that the Moon likely has an iron core that is less than about 450 km in radius. Studies of the Moon's physical librations (small perturbations to its rotation) furthermore indicate that the core is still molten. Most planetary bodies and moons have iron cores that are about half the size of the body. The Moon is thus anomalous in having a core whose size is only about one quarter of its radius. The crust of the Moon is on average about 50 km thick (though this is uncertain by about ±15 km).
It is estimated that the far-side crust is on average thicker than the near side by about 15 km.[32] Seismology has constrained the thickness of the crust only near the Apollo 12 and Apollo 14 landing sites. Although the initial Apollo-era analyses suggested a crustal thickness of about 60 km at this site, recent reanalyses of this data suggest that it is thinner, somewhere between about 30 and 45 km. Compared with that of Earth, the Moon has only a very weak external magnetic field. Other major differences are that the Moon does not currently have a dipolar magnetic field (as would be generated by a geodynamo in its core), and the magnetizations that are present are almost entirely crustal in origin. One hypothesis holds that the crustal magnetizations were acquired early in lunar history when a geodynamo was still operating. The small size of the lunar core, however, is a potential obstacle to this hypothesis. Alternatively, it is possible that on airless bodies such as the Moon, transient magnetic fields could be generated during impact processes. In support of this, it has been noted that the largest crustal magnetizations appear to be located near the antipodes of the largest impact basins. Although the Moon does not have a dipolar magnetic field like Earth's, some of the returned rocks do have strong magnetizations. Furthermore, measurements from orbit show that some portions of the lunar surface are associated with strong magnetic fields. ^Ivankov, A. "Luna 16". National Space Science Data Center Catalog. NASA. Retrieved 13 October 2018. The drill was deployed and penetrated to a depth of 35 cm before encountering hard rock or large fragments of rock. The column of regolith in the drill tube was then transferred to the soil sample container... the hermetically sealed soil sample container, lifted off from the Moon carrying 101 grams of collected material
The Moon is the only extraterrestrial body for which we have samples with a known geologic context. A handful of lunar meteorites have been recognized on Earth, though their source craters on the Moon are unknown. A substantial portion of the lunar surface has not been explored, and a number of geological questions remain unanswered. For a long period of time, the fundamental question regarding the history of the Moon was that of its origin. Early hypotheses included fission from Earth, capture, and co-accretion. Today, the giant-impact hypothesis is widely accepted by the scientific community.[15] Cliffs in the lunar crust indicate the Moon shrank globally in the geologically recent past and is still shrinking today. The geological history of the Moon has been divided into six major epochs, called the lunar geologic timescale. Starting about 4.5 billion years ago,[16] the newly formed Moon was in a molten state and was orbiting much closer to Earth, resulting in strong tidal forces.[17] These tidal forces deformed the molten body into an ellipsoid, with the major axis pointed towards Earth. The first important event in the geologic evolution of the Moon was the crystallization of the near global magma ocean. It is not known with certainty what its depth was, but several studies imply a depth of about 500 km or greater. The first minerals to form in this ocean were the iron and magnesium silicates olivine and pyroxene. Because these minerals were denser than the molten material around them, they sank. After crystallization was about 75% complete, less dense anorthositic plagioclase feldspar crystallized and floated, forming an anorthositic crust about 50 km in thickness. The majority of the magma ocean crystallized quickly (within about 100 million years or less), though the final remaining KREEP-rich magmas, which are highly enriched in incompatible and heat-producing elements, could have remained partially molten for several hundred million (or perhaps 1 billion) years.
yes
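The low bulk density cited above (~3346 kg m^-3) follows directly from the Moon's mass and radius. A quick Python check, using standard values for the lunar mass and mean radius (assumed here, not stated in the passage):

```python
import math

# Standard figures for the Moon (assumed illustrative values, not from the text):
M = 7.342e22      # mass, kg
R = 1.7374e6      # mean radius, m

# Bulk density = mass / volume of a sphere
volume = (4.0 / 3.0) * math.pi * R**3   # m^3
density = M / volume                    # kg/m^3

print(f"bulk density ≈ {density:.0f} kg/m^3")  # close to the ~3346 kg m^-3 quoted
```

The result sits near 3340 kg/m^3, well below the ~5500 kg/m^3 of the Earth, which is the basis for the low-metal-abundance inference in the passage.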
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
http://astronomy.nmsu.edu/candaceg/Europa/Tidal_Heating.html
Geomorphic Features Of Europa Tidal Heating
It is interesting to see moons in our solar system that are so geologically active while our own Moon is not. Due to its size, our Moon has had enough time to radiate its heat into space, causing its interior to cool, prohibiting geological activity. However, Europa is smaller than our Moon (Table 1), yet it is still geologically active. The reason for its continued activity is tidal heating - a continual flexing and stretching of Europa caused by the shape of its orbit and the gravitational pull from Jupiter, Io, and Ganymede. The magnitude of the gravitational force is defined as: F = GMm/r^2 where G is the gravitational constant, M is the mass of the more massive body (Jupiter), m is the mass of the less massive body (Europa), and r is the distance between the two bodies. It can be seen that as the distance between the two bodies decreases, the force increases as 1/r^2, and as the masses of the bodies increase the force also increases. Tidal force is the differential gravitational force felt across a body. Since the front face of Europa is closer to Jupiter than the far side, it will feel a greater gravitational force. The magnitude of the force felt on the side of Europa facing Jupiter is: F_front = GMm/r^2 The force felt on the side facing away from Jupiter is: F_back = GMm/(r+d)^2 where d is the diameter of the moon. The difference between these two forces is the tidal force felt across Europa: F_tidal = 2GMmd/r^3 It can be seen that larger moons have greater tidal forces across them. However, it is not tidal force alone that causes tidal heating within a body. In order for heating to occur, the tidal force must change. This can be imagined if you take a rubber band and stretch it repeatedly. A band that is stretched and left stretched will not generate heat. However, if the band is repeatedly stretched and flexed, heat will be generated. This is because stretching and flexing induce frictional heating, similar to rubbing your hands together on a cold day. 
There is enough tidal heating occurring in Europa to keep its interior warm and the moon geologically active, even though it is smaller than our own Moon. The change in tidal force (also known as tidal stress) occurs because of the orbits of Io, Europa, and Ganymede. Table 1 shows that the first three Galilean moons are locked in a 4:2:1 orbital resonance. For every orbit Ganymede completes, Europa completes two and Io completes four. This is illustrated in the diagram below (Figure 4). This resonance forces Europa to have an eccentricity of e = 0.01 (Greenberg 1981, Peale 1986). Eccentricity is a measure of how much an orbit deviates from a perfect circle. A perfect circle has an eccentricity of 0, an elliptical orbit has an eccentricity between 0 < e < 1, e = 1 is a parabolic orbit, and e > 1 is a hyperbolic orbit. Since Europa does not have a perfectly circular orbit, its distance from Jupiter changes. This causes a change in the tidal force on the moon. In addition, Io and Ganymede change their distance with respect to Europa and also introduce a tidal stress. Figure 4. Orbital Resonances of Io, Europa, and Ganymede. Europa is locked into a synchronous rotation around Jupiter, just like our Moon is around the Earth. For every revolution Europa makes around Jupiter it also completes one rotation. It has a 3.55 day orbital period and rotation period, so the same side of Europa always faces Jupiter. Thus, it makes one revolution around Jupiter every Europa day. The tidal stress experienced over one orbit is referred to as the "diurnal" stress. The changing tidal force induced by the eccentricity of Europa and the tug from the other moons deforms Europa by as much as 3% each Europa day (Greenly et al. 2004). Moore and Schubert (2000) calculate that this should result in a deformation anywhere between 1 m and 30 m, depending on whether the ice shell is solid down to the silicate layer or the ice shell is thinner and sits above a liquid water layer. 
The dissipation of this strain heats the interior of the moon, and flexing from the diurnal stress produces the geomorphic features seen across the surface.
It is interesting to see moons in our solar system that are so geologically active while our own Moon is not. Due to its size, our Moon has had enough time to radiate its heat into space, causing its interior to cool, prohibiting geological activity. However, Europa is smaller than our Moon (Table 1), yet it is still geologically active. The reason for its continued activity is tidal heating - a continual flexing and stretching of Europa caused by the shape of its orbit and the gravitational pull from Jupiter, Io, and Ganymede. The magnitude of the gravitational force is defined as: F = GMm/r^2 where G is the gravitational constant, M is the mass of the more massive body (Jupiter), m is the mass of the less massive body (Europa), and r is the distance between the two bodies. It can be seen that as the distance between the two bodies decreases, the force increases as 1/r^2, and as the masses of the bodies increase the force also increases. Tidal force is the differential gravitational force felt across a body. Since the front face of Europa is closer to Jupiter than the far side, it will feel a greater gravitational force. The magnitude of the force felt on the side of Europa facing Jupiter is: F_front = GMm/r^2 The force felt on the side facing away from Jupiter is: F_back = GMm/(r+d)^2 where d is the diameter of the moon. The difference between these two forces is the tidal force felt across Europa: F_tidal = 2GMmd/r^3 It can be seen that larger moons have greater tidal forces across them. However, it is not tidal force alone that causes tidal heating within a body. In order for heating to occur, the tidal force must change. This can be imagined if you take a rubber band and stretch it repeatedly. A band that is stretched and left stretched will not generate heat. However, if the band is repeatedly stretched and flexed, heat will be generated. This is because stretching and flexing induce frictional heating, similar to rubbing your hands together on a cold day.
no
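The F_tidal = 2GMmd/r^3 scaling described in the passage can be made concrete. The sketch below evaluates the tidal force at Europa's closest and farthest points from Jupiter for e = 0.01, to show how much the diurnal stress actually varies over one 3.55-day orbit. The Jupiter and Europa masses, diameter, and orbital distance are standard figures assumed for illustration, not values given in the passage:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.898e27    # mass of Jupiter, kg (assumed standard value)
m = 4.80e22     # mass of Europa, kg (assumed standard value)
d = 3.12e6      # diameter of Europa, m (assumed standard value)
a = 6.71e8      # semi-major axis of Europa's orbit, m (assumed standard value)
e = 0.01        # orbital eccentricity (from the text)

def f_tidal(r):
    """Differential (tidal) force across Europa at distance r from Jupiter."""
    return 2 * G * M * m * d / r**3

f_peri = f_tidal(a * (1 - e))   # perijove: closest approach
f_apo  = f_tidal(a * (1 + e))   # apojove: farthest point

# Because F_tidal goes as 1/r^3, even a 1% eccentricity changes the
# tidal force by about 6% each orbit - this periodic change is what
# drives the frictional (tidal) heating.
variation = f_peri / f_apo
print(f"tidal force changes by a factor of {variation:.3f} per orbit")
```

A constant tidal force, however large, would deform Europa once and generate no heat; it is this few-percent oscillation every orbit that does the work, exactly as the rubber-band analogy in the text suggests.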
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://pages.uoregon.edu/imamura/121/lecture-9/lecture-9.html
Surface Features of the Terrestrial Planets
The surface features of Terrestrial planets may be young or old based upon whether the planet shows ongoing active geology. Young surfaces are geologically active while old surfaces are not. In the latter case, the time at which active geology ceased determines which processes shaped the surface we see today. Today, the appearances of the Terrestrial planets differ strongly. We look at the Earth and the Moon in detail but, this time, we will also look at Venus and Mars as well. In terms of their surfaces, the Terrestrial planets may be divided into two groups: (1) Venus and Earth; and (2) Mercury and the Moon. Mars is intermediate between the two groups. The division is based on the amount of ongoing evolution and appearance of the surface features of the planets (which is determined by the size of the planet). Group 1: the larger planets have hot interiors, which leads to significant heat flow from their centers to their surfaces. The heat flow drives geology (crustal motions), which has the dominant effect on the evolution of the surface features of the planets. The geologically active Earth has a few rocks almost 4 billion years old (with perhaps some as old as 4.4 billion years) on its continents, but there are much younger features such as the oceanic basins, which are less than a few hundred million years old, and other features such as mountain ranges and volcanoes as young as millions of years to tens of millions of years old. The Earth has a young surface. Group 2: the smaller planets have cold interiors and are geologically inactive. The geologically dead Moon has surface features as old as 4.1 billion years, with nearly all features older than 3 billion years. The Moon has an old surface. Its surface has been shaped primarily by impacts. I. THE MOON (GROUP 2) A. Surface Features (1) A neat animation shows the rotating Moon put together using Clementine images. (2) Two pairs of images are shown below. 
The top two are photographs of the Moon, one of the near side (the Earth-facing side) and one of the far side (the side never presented to us). The Moon rotates at the same rate as it orbits the Earth so that it always presents the same face to the Earth. The second pair of images are relief maps (topographic maps) for the near side and far side of the Moon. The Moon is divided into two types of terrain: maria, the lightly cratered, dark-colored lowland basins which cover 15 % of the lunar surface. The maria are large basins (old impact craters) which have been filled in by large lava flows. Their surfaces are dark and smooth with little evidence of cratering. The largest impact basin on the Moon is the South Pole-Aitken basin. It is 2,500 kilometers in diameter! The circumference of the Moon is 11,000 kilometers! The South Pole-Aitken basin is 4.2-4.3 billion years old. highlands, the light-colored, heavily cratered regions which cover 85 % of the lunar surface. Because of their heavy cratering, the highland regions are inferred to be ancient while the lightly-cratered maria are inferred to be younger. The lunar mountains in the highland regions are produced by the large impacts uplifting the terrain. They are not formed as are mountains on Earth; that is, they are not formed through crustal motions driven by geologic forces (e.g., on Earth, driven by the motion of lithospheric plates). Features similar to those found on the Moon are also seen on Mars and Mercury. The far side of the Moon is similar to the near side of the Moon in terms of cratering rate. There are some differences in cratering though. (1) There are fewer maria on the far side of the Moon, probably because of the asymmetry in the crustal thickness of the Moon. (2) Recent studies find that impact basins are smaller in size on the far side of the Moon than those on the Earth-facing side of the Moon. What are possible implications of this result? 
Crater Origin: The craters on the Moon and on most other Terrestrial planets (see Mercury) were produced by impacts and not by volcanism. The lunar craters are low-lying depressions, below the surrounding terrain, unlike the calderas of volcanoes. The primary factors that affect crater size are the mass of the impacting object, the speed with which the object strikes the ground, and the local geology (type of rock). Copernicus (93 km) and Tycho (85 km) are fairly young craters on the Moon, 800 million years and 108 million years old, respectively. Suppose the objects which produced Copernicus and Tycho on the Moon had actually struck the Earth; would the craters on Earth be larger or smaller than Copernicus and Tycho? B. Lunar Chronology The relative ages of different regions on the Moon are easy to determine. If we count the number of craters in different regions, we infer: The higher the crater density, the older the region. The lower the crater density, the younger the region. So, for the Moon, we knew early on that the heavily cratered highland regions were older than the lightly cratered maria. After we went to the Moon, we could take the next step. We could find the chronology for the Moon, in that we could determine actual ages in terms of years for features, not just whether they were younger or older. We could do this because we have been there, seen it. This allowed us to gather rock samples from various terrains on the Moon and determine their ages using radioactive age dating. This allowed us to set a firm timeline for the evolution of the lunar surface features and to set the chronology for lunar cratering. This is valuable because the lunar cratering history is, presumably, the same as the cratering history of the inner Solar System planets. The cratering density has, so far, been the most useful way to judge ages of features on Mars. 
Lunar Chronology Using radioactive age dating, Moon rocks were found to be: Mare Basalt: 3.1 to 3.8 billion years old. Highland Breccia: 3.8 to 4.0 billion years old. Highland Anorthosite: proved difficult for absolute dating, but estimated to be 4.4 billion years or older. Given these ages, we can construct the lunar chronology shown to the right. The abbreviation Ga denotes 1 billion years. C. Cratering Rate We think about how this works below, but first, let me point out that because the Moon has an old surface, the cratering history goes back further than it does on the Earth. The left figure shows the cratering density for the Moon and how it has changed over the years. The first step is to simply count the numbers of craters in some region, say a mare. For maria, we find that there are relatively few craters; maria are definitely younger than the highland regions. Next, find the ages of the craters in the mare. The ages measured for the craters then tell us the rate at which craters were formed in the mare. In this way we found that the maria are fairly old, between 3.2 and 3.8 billion years or so, despite being lightly cratered. This shows us that the cratering rate for the Moon was low over the lifetime of the maria. Looking at other regions on the Moon, we can get the cratering rate for nearly the entire lifetime of the Moon. The crater counting is consistent with a cratering rate on the Moon that has been slow and steady for the past 3 billion or so years. Around 4 billion years ago there isn't a jump, but it was around this time that the large impact basins that formed the maria were made. Apparently an increased rate of large impactors happened at this time, the so-called Late Heavy Bombardment. Not apparent in the plot, but there was also a jump in the cratering rate around 290 million years ago, when the rate jumped by around a factor of 3. We can do a similar exercise for the Earth. 
Surprisingly, studies suggest that the cratering history for land masses on Earth is complete for craters larger than 6 km over the last billion years or so. This result says that we have found nearly all of the craters on the Earth with diameters larger than 6 km over this time. Craters smaller than 6 km in diameter are destroyed by erosion and geological effects. Older craters suffer erosion and effects due to geology. So, although this result is great, it could be better in that it doesn't reach back to Late Heavy Bombardment times. The cratering rate for the Earth can then be plotted; the shape of the plot is similar to that for the Moon, suggesting we share a common source for the impactors. A dinosaur killer occurs roughly every 100 million years or so. The dinosaur killer produced the Chicxulub crater 65 million years ago. II. GEOLOGIC ACTIVITY ON EARTH (GROUP 1) The most massive Terrestrial planets, Earth and Venus, are likely to show active geologies. Mars may show current or recent geological activity as well. Mercury and the Moon are probably dead from a geological standpoint. The sorts of geology we expect are volcanism and crustal motion, both manifestations of the fact that the interiors of the planets are hot. A consequence of geological activity (and wind and water erosion on the Earth) is to produce a relatively young surface. The surface of Venus is around 300-800 million years old, while the majority of the Earth's surface features are much younger. For example: Ocean basins are < a couple hundred million years old. The Hawaiian Islands, an island chain, are ~ millions of years old. The Grand Canyon (cut by the Colorado River) is < 10 million years old. The Himalayas and Andes mountain ranges are < tens of millions of years old. These ages are to be compared to the lunar chronology, where the youngest features, the maria, are older than 3 billion years. Geologic Activity on the Terrestrial Planets? The Earth shows plate tectonics. 
In addition to showing vertical crustal motion, the Earth has a segmented lithosphere that moves horizontally. This proves to be crucial for how our atmosphere evolves. It is thus of interest to know if other Terrestrial planets show plate tectonics. Venus and Mars both show plenty of geologic activity but neither shows compelling evidence for plate tectonics. For some reason both planets do not have segmented lithospheres; their lithospheres are single solid pieces. (1) Venus's lithosphere may not be as brittle as Earth's because Venus is hot. Even if Venus had a segmented lithosphere it might still not have plate tectonics because it is so dry. On Earth, convection drives plate tectonics and water acts as lubrication in subduction zones. (2) Mars's lithosphere is thicker than the Earth's (likely because Mars is smaller and thus cools faster than does Earth). Consequently, Mars's lithosphere may not break as easily as does the Earth's.
The geologically active Earth has a few rocks almost 4 billion years old (with perhaps some as old as 4.4 billion years) on its continents, but there are much younger features such as the oceanic basins, which are less than a few hundred million years old, and other features such as mountain ranges and volcanoes as young as millions of years to tens of millions of years old. The Earth has a young surface. Group 2: the smaller planets have cold interiors and are geologically inactive. The geologically dead Moon has surface features as old as 4.1 billion years, with nearly all features older than 3 billion years. The Moon has an old surface. Its surface has been shaped primarily by impacts. I. THE MOON (GROUP 2) A. Surface Features (1) A neat animation shows the rotating Moon put together using Clementine images. (2) Two pairs of images are shown below. The top two are photographs of the Moon, one of the near side (the Earth-facing side) and one of the far side (the side never presented to us). The Moon rotates at the same rate as it orbits the Earth so that it always presents the same face to the Earth. The second pair of images are relief maps (topographic maps) for the near side and far side of the Moon. The Moon is divided into two types of terrain: maria, the lightly cratered, dark-colored lowland basins which cover 15 % of the lunar surface. The maria are large basins (old impact craters) which have been filled in by large lava flows. Their surfaces are dark and smooth with little evidence of cratering. The largest impact basin on the Moon is the South Pole-Aitken basin. It is 2,500 kilometers in diameter! The circumference of the Moon is 11,000 kilometers!
no
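The crater-counting logic in the lecture above - calibrate crater density against radiometrically dated Apollo samples, then read off ages for other surfaces - can be sketched in a few lines. The sketch assumes a constant cratering rate, which the lecture notes is only a good approximation for the past ~3 billion years (the Late Heavy Bombardment makes older surfaces disproportionately cratered); all the numbers in the example call are hypothetical:

```python
def age_from_crater_density(density, ref_density, ref_age_gyr):
    """
    Estimate a surface age by scaling from a reference surface whose
    age is known from radiometric dating of returned rock samples.

    Assumes a constant cratering rate, so it is only valid for
    surfaces younger than roughly 3 billion years.
    """
    return ref_age_gyr * density / ref_density

# Hypothetical illustration: a surface with half the crater density of
# a mare radiometrically dated at 3.4 Gyr comes out around 1.7 Gyr old.
age = age_from_crater_density(density=50, ref_density=100, ref_age_gyr=3.4)
print(f"estimated age ≈ {age:.1f} Gyr")  # 1.7 Gyr
```

This is why the Apollo samples matter so much: without them, crater counting gives only the relative ordering (denser = older) described in the lecture, not absolute ages.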
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
http://astronomy.nmsu.edu/candaceg/Europa/Europa.html
Untitled
In 1610, Galileo Galilei observed Jupiter and found four objects which appeared to orbit the planet. These were later identified as the four largest moons of Jupiter, which are now referred to as the Galilean moons. Their order, from closest to furthest from Jupiter, is: Io, Europa, Ganymede, and Callisto. In 1979, the Voyager spacecraft passed Jupiter and took the first close-up pictures of the Galilean moons. The geology that was observed astonished astronomers. It was clear that these moons were very different from our own Moon. Our Moon is a heavily cratered, geologically dead body. Due to its small size, it has had enough time to sufficiently cool and is no longer geologically active. However, images of the Galilean moons show the opposite. Both Io and Europa are very geologically active bodies. Voyager images showed Io to be covered in volcanoes. In fact, it is the most volcanically active body in the solar system and is continually being resurfaced, causing a lack of craters to be observed. Detailed images of Io's surface are shown in Figure 1. The wide range of colors is due to large sulfur deposits originating from the volcanoes. Figure 1. Io. Courtesy NASA/JPL. Europa is quite the opposite of Io. It is covered in a thick shell of water-ice that encompasses the entire surface. Its density is ~ 3 g/cm3, suggesting the water-ice shell is ~100 - 150 km thick. This shell is likely liquid past a certain depth, which makes Europa a prime target in the search for extraterrestrial life. Like Io, there are very few craters present on the surface of Europa, indicating that it is a geologically active body possessing a "young" surface. The reason for its geological activity is tidal forces exerted by Jupiter, Io, and Ganymede, which lead to stretching and flexing of Europa. This continual stretching and flexing causes tidal heating to occur, keeping the interior of the moon warm and creating many geomorphic features. 
Figure 2 illustrates ridges, bands, and troughs created by tidal stress. There is also a large range of colors observed, especially on the anti-jovian side of Europa, as seen in Figure 3. These colors represent different chemical compositions and likely result from upwelling material beneath the ice shell. Figure 2. Ridges, bands, and wedges on Europa. Courtesy NASA/JPL. Figure 3. Europa. Courtesy NASA/JPL. Several more spacecraft have visited the moons of Jupiter since Voyager, including the Galileo spacecraft, which sent back high-resolution, detailed images of Europa's surface. These images are helping astronomers determine the geological activity responsible for the intriguing features seen around the moon, which are described in the following sections.
In 1610, Galileo Galilei observed Jupiter and found four objects which appeared to orbit the planet. These were later identified as the four largest moons of Jupiter, which are now referred to as the Galilean moons. Their order, from closest to furthest from Jupiter, is: Io, Europa, Ganymede, and Callisto. In 1979, the Voyager spacecraft passed Jupiter and took the first close-up pictures of the Galilean moons. The geology that was observed astonished astronomers. It was clear that these moons were very different from our own Moon. Our Moon is a heavily cratered, geologically dead body. Due to its small size, it has had enough time to sufficiently cool and is no longer geologically active. However, images of the Galilean moons show the opposite. Both Io and Europa are very geologically active bodies. Voyager images showed Io to be covered in volcanoes. In fact, it is the most volcanically active body in the solar system and is continually being resurfaced, causing a lack of craters to be observed. Detailed images of Io's surface are shown in Figure 1. The wide range of colors is due to large sulfur deposits originating from the volcanoes. Figure 1. Io. Courtesy NASA/JPL. Europa is quite the opposite of Io. It is covered in a thick shell of water-ice that encompasses the entire surface. Its density is ~ 3 g/cm3, suggesting the water-ice shell is ~100 - 150 km thick. This shell is likely liquid past a certain depth, which makes Europa a prime target in the search for extraterrestrial life. Like Io, there are very few craters present on the surface of Europa, indicating that it is a geologically active body possessing a "young" surface. The reason for its geological activity is tidal forces exerted by Jupiter, Io, and Ganymede, which lead to stretching and flexing of Europa. This continual stretching and flexing causes tidal heating to occur, keeping the interior of the moon warm and creating many geomorphic features. Figure 2 illustrates ridges,
no
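The passage infers a ~100-150 km water-ice shell from Europa's bulk density of ~3 g/cm^3. A toy two-layer model (rocky interior under an ice shell) reproduces that inference. Europa's radius and the assumed rock density below are illustrative values not given in the text, so treat the result as a sketch of the reasoning rather than a measurement:

```python
# Toy two-layer model: rocky interior of radius (R - t) beneath an
# ice shell of thickness t, with t chosen so the volume-weighted
# density matches the observed bulk density.
R        = 1560.8   # radius of Europa, km (assumed standard value)
rho_bulk = 3.0      # bulk density, g/cm^3 (from the text)
rho_ice  = 0.92     # density of water ice, g/cm^3
rho_rock = 3.5      # density of the rocky interior, g/cm^3 (assumed)

# rho_bulk * R^3 = rho_rock * (R - t)^3 + rho_ice * (R^3 - (R - t)^3)
# Solve for the rocky-core volume fraction, then for t.
core_fraction = (rho_bulk - rho_ice) / (rho_rock - rho_ice)
t = R * (1 - core_fraction ** (1.0 / 3.0))

print(f"implied ice+water layer thickness ≈ {t:.0f} km")  # inside the 100-150 km range
```

Even this crude model lands inside the 100-150 km range quoted in the passage, which shows how a single bulk-density measurement can constrain the thickness of Europa's water layer.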
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://solarsystem.nasa.gov/moons/jupiter-moons/europa/in-depth/
In Depth | Europa – NASA Solar System Exploration
Decades ago, science fiction offered a hypothetical scenario: What if alien life were thriving in an ocean beneath the icy surface of Jupiter’s moon Europa? The notion pulled Europa out of obscurity and into the limelight where it has remained, stoking the imaginations of people both within and outside the science community who fantasize about humans discovering life beyond Earth. That fantasy, however, may be grounded in reality. In 1972, scientists using a telescope at Kitt Peak National Observatory in Tucson, Arizona, made spectroscopic observations that showed that Europa's surface composition is mostly water ice. Thermal models dating back to 1971 also suggested that the interior of Europa could contain a layer of liquid water. NASA’s Pioneer 10 and 11 spacecraft flew by Jupiter in the early 1970s, but the first spacecraft to image the surfaces of Jupiter's moons in significant detail were the Voyager 1 and 2 spacecraft. This picture of Europa was taken on March 4, 1979, from a distance of about 1.2 million miles (2 million kilometers) by NASA's Voyager 1. Credit: NASA/JPL-Caltech Voyager 1's closest approach to Jupiter occurred on March 4, 1979. The spacecraft snapped a full global image of Europa from a distance of about 1.2 million miles (2 million kilometers). A few months later, Voyager 2 had its closest encounter with Europa on July 9, 1979. Images from the two Voyagers revealed a surface brighter than that of Earth's moon, crisscrossed with numerous bands and ridges, and with a surprising lack of large impact craters, tall cliffs, or mountains. In other words, Europa has a very smooth surface, relative to the other icy moons. Even though the Voyagers did not pass extremely close to Europa, their images were of high enough quality that researchers noted some of the dark bands had opposite sides that matched each other extremely well, like pieces of a jigsaw puzzle. 
These cracks had separated, and dark, icy material appeared to have flowed into the opened gaps, suggesting that the surface had been active at some time in the past. This color image of Europa was taken by Voyager 2 during its close encounter on July 9, 1979. Credit: NASA/JPL-Caltech Voyager images showed only a handful of impact craters, which are expected to build up over time as a planetary surface is constantly bombarded by meteorites over billions of years until the surface is covered in craters. Thus, a lack of large impact craters suggested that the moon's surface was relatively young and implied that something had erased them - such as icy, volcanic flows, or settling of the icy crust under its own weight. These intriguing findings led to a strong sense of anticipation for NASA’s Galileo mission, which launched in 1989 and entered orbit around Jupiter in 1995. The puzzling, fascinating surface of Jupiter's icy moon Europa looms large in this view made from images taken by NASA's Galileo spacecraft in the late 1990s. Credit: NASA/JPL-Caltech Galileo's primary mission included observations of each of the four Galilean moons during repeated flybys. The information about Europa that Galileo sent was so intriguing that the mission was extended for a two-year follow-on journey, known as the Galileo Europa mission. In all, the spacecraft made a total of 12 close flybys of the icy moon. One of the most important discoveries made by Galileo showed how Jupiter's magnetic field was disrupted in the space around Europa. This measurement strongly implied that a special type of magnetic field is being created (induced) within Europa by a deep layer of some electrically conductive fluid beneath the surface. Based on Europa's icy composition, scientists think the most likely material to create this magnetic signature is a global ocean of salty water. 
Scientists think Europa’s ice shell is 10 to 15 miles (15 to 25 kilometers) thick, floating on an ocean 40 to 100 miles (60 to 150 kilometers) deep. So while Europa is only one-fourth the diameter of Earth, its ocean may contain twice as much water as Earth’s global ocean. Europa’s ocean is considered one of the most promising places in the solar system to look for life beyond Earth. While no plumes were observed while the Galileo spacecraft was in the Jupiter system in the 1990s, more recent observations from telescopes such as the Hubble Space Telescope, as well as a reanalysis of some data from the Galileo spacecraft, have suggested that it is possible that thin plumes of water are being ejected 100 miles (160 kilometers) above Europa’s surface. In November 2019, an international research team led by NASA announced it had directly detected water vapor for the first time above Europa’s surface. The team measured the vapor using a spectrograph at the Keck Observatory in Hawaii that measures the chemical composition of planetary atmospheres through the infrared light they emit or absorb. If the plumes do exist, and if their source is linked to Europa’s ocean, then a spacecraft could travel through the plume to sample and analyze it from orbit, and it would essentially be analyzing the moon’s ocean. NASA’s Cassini spacecraft performed this feat at Saturn’s moon Enceladus, which is known to have an ocean spraying into space. Even if Europa isn’t ejecting samples into space, a 2018 study concluded that samples of Europa’s ocean could get frozen into the base of the moon’s ice shell, where the ice makes contact with the ocean. As the ice shell distorts and flexes from tidal forces, warmer and less-dense ice would rise, carrying the ocean samples to the surface where a spacecraft could analyze it remotely, using infrared and ultraviolet instruments, among others. 
Scientists could then study the material’s composition to determine whether Europa’s ocean might be hospitable for some form of life. Namesake Europa is named for a woman who, in Greek mythology, was abducted by the god Zeus – Jupiter in Roman mythology. Potential for Life Life as we know it seems to have three main requirements: liquid water, certain chemical elements, and an energy source. Also, life takes time to develop. Europa’s ocean may have existed for the entirety of our solar system’s history, approximately 4 billion years, so sufficient time has passed for life to develop. Astrobiologists – scientists who study the origin, evolution, and future of life in the universe – believe Europa has abundant water and the right chemical elements – the building blocks of life – including carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur. The third ingredient for life is energy. All lifeforms need energy to survive. On Earth, most of that energy comes from the Sun. For example, plants grow and thrive through photosynthesis, a process that converts sunlight into energy. The energy is transferred to humans, animals, and other organisms when the plants are eaten. But the type of life that might inhabit Europa likely would be powered purely by chemical reactions instead of by photosynthesis, because any life at Europa would exist beneath the ice, where there is no sunlight. Europa's surface is blasted by radiation from Jupiter. That's a bad thing for life on the surface – it couldn't survive. But the radiation may create fuel for life in an ocean below the surface. If we eventually find some form of life at Europa, it may look like microbes, or maybe something more complex. 
If it can be demonstrated that life formed independently in two places around the same star, it would then be reasonable to suggest that life springs up in the universe fairly easily once the necessary ingredients are present, and that life might be found throughout our galaxy and the universe. Size and Distance With an equatorial diameter of 1,940 miles (3,100 kilometers), Europa is about 90% the size of Earth’s Moon. So if we replaced our Moon with Europa, it would appear roughly the same size in the sky as our Moon does, but brighter – much, much brighter. Europa’s surface is made of water ice, and so it reflects about 5.5 times as much sunlight as our Moon does. Europa orbits Jupiter at about 417,000 miles (671,000 kilometers) from the planet, which itself orbits the Sun at a distance of roughly 500 million miles (780 million kilometers), or 5.2 astronomical units (AU). One AU is the distance from Earth to the Sun. Light from the Sun takes about 45 minutes to reach Europa. Because of the distance, sunlight is about 25 times fainter at Jupiter and Europa than at Earth. Orbit and Rotation Europa orbits Jupiter every 3.5 days and is locked by gravity to Jupiter, so the same hemisphere of the moon always faces the planet. Jupiter takes about 4,333 Earth days (or about 12 Earth years) to orbit the Sun (a Jovian year). Jupiter’s equator (and the orbital plane of its moons) is tilted with respect to Jupiter’s orbital path around the Sun by only 3 degrees (Earth is tilted 23.5 degrees). This means Jupiter spins nearly upright, so the planet, as well as Europa and Jupiter’s other moons, does not have seasons like Earth. Jupiter’s moons Io, Europa, and Ganymede are in what is called a resonance – every time Ganymede orbits Jupiter once, Europa orbits twice, and Io orbits four times. 
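The roughly 45-minute light-travel time and roughly 25-times dimming quoted above both follow directly from the 5.2 AU distance and the inverse-square law. A quick check (the AU and speed-of-light constants are standard values, not from the text):

```python
AU_KM = 149_597_870.7     # kilometers per astronomical unit (standard value)
C_KM_S = 299_792.458      # speed of light in km/s (standard value)

jupiter_au = 5.2                                  # distance quoted above
light_minutes = jupiter_au * AU_KM / C_KM_S / 60  # light travel time, Sun to Jupiter
dimming = jupiter_au ** 2                         # inverse-square law falloff
print(f"light travel ~ {light_minutes:.0f} min; sunlight ~ {dimming:.0f}x fainter than at Earth")
```

The small differences from the article's rounded figures come from Europa's varying distance as both Jupiter and Earth move along their orbits.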
Over time, the orbits of most large satellites or planets tend to become circular, but in the case of these three satellites, the resonance produces a forced eccentricity: the satellites line up with each other at the same points in their orbits over and over, giving each other a small gravitational tug that keeps their orbits from becoming circular. Because Europa's orbit is elliptical (slightly stretched out from circular), its distance from Jupiter varies, and the moon’s near side feels Jupiter’s gravity more strongly than its far side. The magnitude of this difference changes as Europa orbits, creating tides that stretch and relax the moon’s surface. Flexing from the tides likely creates the moon’s surface fractures. If Europa's ocean exists, the tidal heating could also lead to volcanic or hydrothermal activity on the seafloor, supplying nutrients that could make the ocean suitable for living things. Formation Jupiter’s large Galilean moons – Io, Europa, Ganymede, and Callisto – likely formed out of leftover material after Jupiter condensed from the initial cloud of gas and dust surrounding the Sun, early in the history of the solar system. Those four moons are likely about the same age as the rest of the solar system – about 4.5 billion years old. In fact, the Galilean moons are sometimes called a “mini solar system” since they formed from the leftovers of Jupiter, similar to how Earth and the other planets formed from gas and dust left over from the formation of the Sun. The similarities don’t end there. In general, the inner planets of the solar system are denser the closer they are to the Sun – Mars, for example, is less dense than Earth, and once self-compression is accounted for, Mercury’s uncompressed density is the highest of all. The Galilean moons follow the same principle, being less dense the farther they are from Jupiter. 
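The 1:2:4 Io-Europa-Ganymede resonance described above can be verified from the moons' orbital periods (the period values are standard published figures, not given in the text):

```python
# Orbital periods in Earth days (standard values, not taken from the text).
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

europa_per_io = periods["Europa"] / periods["Io"]
ganymede_per_europa = periods["Ganymede"] / periods["Europa"]
# Both ratios sit very near 2, which is the 1:2:4 Laplace resonance.
print(f"Europa/Io = {europa_per_io:.3f}, Ganymede/Europa = {ganymede_per_europa:.3f}")
```

The ratios are close to, but not exactly, 2 – it is the repeated near-alignment, not a perfect integer ratio, that forces the eccentricity.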
The reduced density at greater distances is likely due to temperature: denser rocky and metallic material condenses out first, close to Jupiter or the Sun, while lighter-weight icy material only condenses out at larger distances, where it is colder. Distance from Jupiter also determines how much tidal heating the Galilean satellites experience – Io, closest to Jupiter, is heated so much that it is the most volcanically active body in the solar system. Ancient volcanic activity likely evaporated, long ago, any water Io had when it formed. Europa has a layer of ice and water on top of a rocky and metallic interior, while Ganymede and Callisto actually have higher proportions of water ice and lower densities. Structure Like our planet, Europa is thought to have an iron core, a rocky mantle, and an ocean of salty water. Unlike Earth, however, Europa’s ocean lies below a shell of ice probably 10 to 15 miles (15 to 25 kilometers) thick, and has an estimated depth of 40 to 100 miles (60 to 150 kilometers). While evidence for an internal ocean is strong, its presence awaits confirmation by a future mission. Surface Europa’s water-ice surface is crisscrossed by dark, reddish-brown cracks. Based on the small number of observable craters, the surface of this moon appears to be no more than 40 to 90 million years old, which is youthful in geologic terms (the surface of Callisto, another of Jupiter’s moons, is estimated to be a few billion years old). All along Europa's many fractures, and in splotchy patterns across its surface, is a reddish-brown material whose composition is not known for certain, but it likely contains salts and sulfur compounds that have been mixed with the water ice and modified by radiation. This surface composition may hold clues to the moon's potential as a habitable world. The image on the left shows a region of Europa's crust made up of blocks which are thought to have broken apart and "rafted" into new positions. 
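The density-falls-with-distance pattern described above is easy to check against the Galilean moons' mean densities (the numbers themselves are standard published values assumed here, not from the text):

```python
# Mean densities in g/cm^3 (standard published values, assumed here),
# listed in order of increasing distance from Jupiter.
moons = [("Io", 3.53), ("Europa", 3.01), ("Ganymede", 1.94), ("Callisto", 1.83)]

densities = [d for _, d in moons]
assert densities == sorted(densities, reverse=True), "density should fall with distance"
print(", ".join(f"{name}: {d} g/cm^3" for name, d in moons))
```

Rock-dominated Io and Europa sit above 3 g/cm^3, while ice-rich Ganymede and Callisto fall below 2, matching the condensation-temperature argument above.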
These images were obtained by NASA's Galileo spacecraft in 1996 and 1997 at a distance of 417,489 miles (677,000 kilometers). Credit: NASA/JPL/University of Arizona NASA's Galileo spacecraft explored the Jupiter system from 1995 to 2003 and made numerous flybys of Europa. Galileo revealed strange pits and domes that suggest Europa’s ice layer could be slowly churning, or convecting (cooler, denser ice sinks, while warmer, less-dense ice rises) due to heat from below. Long, linear fractures are often only about a half mile to a mile wide (1-2 kilometers) but can extend for thousands of kilometers across Europa’s surface. Some of these fractures have built up into ridges hundreds of meters tall, while others appear to have pulled apart into wide bands of multiple parallel fractures. Galileo also found regions called "chaos terrain," where broken, blocky landscapes were covered in mysterious reddish material. In 2011, scientists studying Galileo data proposed that chaos terrains could be places where the surface collapsed above lens-shaped lakes embedded within the ice. Atmosphere Europa has only a tenuous atmosphere of oxygen, but in 2013, NASA announced that researchers using the Hubble Space Telescope found evidence that Europa might be actively venting water into space. This would mean the moon is geologically active in the present day. In November 2019, an international research team led by NASA announced it had directly detected water vapor for the first time above Europa’s surface. If the plumes do exist, and if their source is linked to Europa’s ocean, then a spacecraft could travel through the plume to sample and analyze it from orbit, and it would essentially be analyzing the moon’s ocean. NASA’s Cassini spacecraft performed this feat at Saturn’s moon Enceladus, which is known to have an ocean spraying into space. 
Magnetosphere One of the most important measurements made by the Galileo mission showed how Jupiter's magnetic field was disrupted in the space around Europa. The measurement strongly implied that a special type of magnetic field is being created (induced) within Europa by a deep layer of some electrically conductive fluid beneath the surface. Based on Europa's icy composition, scientists think the most likely material to create this magnetic signature is a global ocean of salty water, and this magnetic field result is still the best evidence we have for the existence of an ocean on Europa.
Distance from Jupiter also determines how much tidal heating the Galilean satellites experience – Io, closest to Jupiter, is heated so much that it is the most volcanically active body in the solar system. Ancient volcanic activity likely evaporated, long ago, any water Io had when it formed. Europa has a layer of ice and water on top of a rocky and metallic interior, while Ganymede and Callisto actually have higher proportions of water ice and lower densities. Structure Like our planet, Europa is thought to have an iron core, a rocky mantle, and an ocean of salty water. Unlike Earth, however, Europa’s ocean lies below a shell of ice probably 10 to 15 miles (15 to 25 kilometers) thick, and has an estimated depth of 40 to 100 miles (60 to 150 kilometers). While evidence for an internal ocean is strong, its presence awaits confirmation by a future mission. Surface Europa’s water-ice surface is crisscrossed by dark, reddish-brown cracks. Based on the small number of observable craters, the surface of this moon appears to be no more than 40 to 90 million years old, which is youthful in geologic terms (the surface of Callisto, another of Jupiter’s moons, is estimated to be a few billion years old). All along Europa's many fractures, and in splotchy patterns across its surface, is a reddish-brown material whose composition is not known for certain, but it likely contains salts and sulfur compounds that have been mixed with the water ice and modified by radiation. This surface composition may hold clues to the moon's potential as a habitable world. The image on the left shows a region of Europa's crust made up of blocks which are thought to have broken apart and "rafted" into new positions. These images were obtained by NASA's Galileo spacecraft in 1996 and 1997 at a distance of 417,489 miles (677,000 kilometers). Credit:
yes
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://phys.org/news/2019-05-moon-geologically.html
The moon is still geologically active, study suggests
The moon is still geologically active, study suggests The seismometer deployed on the moon by Apollo 14 (nearest of the three instruments). Credit: NASA We tend to think of the moon as the archetypal "dead" world. Not only is there no life, almost all its volcanic activity died out billions of years ago. Even the youngest lunar lava is old enough to have become scarred by numerous impact craters that have been collected over the aeons as cosmic debris crashed into the ground. Hints that the moon is not quite geologically dead, though, have been around since the Apollo era, 50 years ago. Apollo missions 12, 14, 15 and 16 left working "moonquake detectors" (seismometers) on the lunar surface. These transmitted recorded data to Earth until 1977, showing vibrations caused by internal "moonquakes". But no one was sure whether any of these were associated with actual moving faults breaking the surface of the moon or purely internal movements that could also cause tremors. Now a new study, published in Nature Geoscience, suggests the moon may indeed have active faults today. Another clue that something is still going on at the moon came in 1972, when Apollo 17 astronauts Gene Cernan and Jack Schmitt inspected a step in the terrain, a few tens of metres high, that they called "the Lee-Lincoln scarp". They, and their team of advisers back on Earth, thought it might be a geological fault (where one tract of crustal rock has moved relative to another), but they weren't sure. A handful of similar examples were noted in photographs taken from Apollo craft as they orbited near the moon's equator, but it was not until 2010 that the Lunar Reconnaissance Orbiter Camera, capable of recording details less than a metre across, revealed that such scarps can be found scattered across the whole globe. The Lee-Lincoln scarp sweeping across the valley floor and making a turn as it cuts up the valley side on the right. NASA Apollo 17 image library (frame AS17-137-20897). 
Credit: NASA It is now widely agreed that these are thrust faults, caused as the moon cools down from its hot birth. As it does, "thermal contraction" causes its volume to shrink and compresses the surface. That means that the moon is shrinking slightly. However, thrust faults don't necessarily have to be active and moving, causing further tremors. The same thing has been happening on Mercury on a far grander scale, where the planetary radius has shrunk by 7 km during the past 3 billion years. There, the biggest scarps are nearly a hundred times larger than those on the moon. Active faults Analysis shows that these faults are relatively young, not older than about 50 million years. But are they active and still moving today? In the new study, Tom Watters of the Smithsonian Institution in the US and colleagues employed a new way to pinpoint the locations of the near-surface moonquakes in the Apollo data more precisely than was previously possible. A 3.5 km wide view of part of the moon disturbed by faults. The team discovered that of the 28 detected shallow quakes, eight are close to (within 30 km of) fault scarps, suggesting these faults may indeed be active. Six of them happened when the moon was almost at its greatest distance from Earth in its orbit. At this point, the contraction stress across the surface would be expected to peak, and quakes would be most likely to be triggered. The team also investigated fresh-looking tracks left by boulders that have been dislodged. This was presumably a result of ground shaking, because the tracks are also seen close to fault scarps – the boulders have rolled or bounced down a slope. There are also traces of landslide deposits. This, they say, all adds up to a strong case that fault movements are still occurring on the moon. So does this mean that the moon is unsafe for human exploration? The US recently announced plans to go there in the next five years, with the aim to set up a lunar base. 
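The scale of the "thermal contraction" on Mercury is easy to quantify: a 7 km reduction in radius, as quoted above, shortens the planet's circumference by 2π × 7 km, a strain of only a few tenths of a percent (Mercury's radius is a standard value assumed here, not from the text):

```python
import math

mercury_radius = 2439.7   # km, Mercury's mean radius (standard value, assumption)
radial_shrink = 7.0       # km of contraction, as quoted above

circumference_loss = 2 * math.pi * radial_shrink   # km shaved off the equator
strain = radial_shrink / mercury_radius            # fractional contraction
print(f"circumference shortened by ~{circumference_loss:.0f} km ({strain:.2%} strain)")
```

Even this small fractional strain is enough to buckle a brittle crust into thrust-fault scarps; the moon's contraction, and its scarps, are smaller still.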
Luckily, none of the new findings mean that the moon is a hotbed of ground tremors. Moonquakes are rarer and weaker than on Earth, but there are a few places close to the faults that it might be best to avoid when planning moon bases. The tracks of two boulders that rolled downhill towards the Apollo 17 landing site. Each boulder is at the southern end of its track, where it casts a shadow to its left. Credit: NASA/GSFC/Arizona State University Citation: The moon is still geologically active, study suggests (2019, May 14) retrieved 16 August 2023 from https://phys.org/news/2019-05-moon-geologically.html 
The moon is still geologically active, study suggests The seismometer deployed on the moon by Apollo 14 (nearest of the three instruments). Credit: NASA We tend to think of the moon as the archetypal "dead" world. Not only is there no life, almost all its volcanic activity died out billions of years ago. Even the youngest lunar lava is old enough to have become scarred by numerous impact craters that have been collected over the aeons as cosmic debris crashed into the ground. Hints that the moon is not quite geologically dead, though, have been around since the Apollo era, 50 years ago. Apollo missions 12, 14, 15 and 16 left working "moonquake detectors" (seismometers) on the lunar surface. These transmitted recorded data to Earth until 1977, showing vibrations caused by internal "moonquakes". But no one was sure whether any of these were associated with actual moving faults breaking the surface of the moon or purely internal movements that could also cause tremors. Now a new study, published in Nature Geoscience, suggests the moon may indeed have active faults today. Another clue that something is still going on at the moon came in 1972, when Apollo 17 astronauts Gene Cernan and Jack Schmitt inspected a step in the terrain, a few tens of metres high, that they called "the Lee-Lincoln scarp". They, and their team of advisers back on Earth, thought it might be a geological fault (where one tract of crustal rock has moved relative to another), but they weren't sure. A handful of similar examples were noted in photographs taken from Apollo craft as they orbited near the moon's equator, but it was not until 2010 that the Lunar Reconnaissance Orbiter Camera, capable of recording details less than a metre across, revealed that such scarps can be found scattered across the whole globe. The Lee-Lincoln scarp sweeping across the valley floor and making a turn as it cuts up the valley side on the right.
yes
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://www.spaceflightinsider.com/missions/solar-system/cassini-to-study-activity-on-enceladus-in-three-separate-flybys/
Cassini studies activity on Enceladus in three separate flybys ...
Spaceflight Insider Cassini studies activity on Enceladus in three separate flybys CGI rendition of Cassini’s flyby of Enceladus with Saturn in the background. Image Credit: David Robinson / Bambam131.com NASA’s Cassini orbiter is conducting three separate flybys of Saturn’s icy moon Enceladus to further study the surprising levels of activity occurring on and below its surface and to obtain images and data that could help scientists determine whether the Saturnian moon could be habitable for microbial life. The first flyby will provide an opportunity to closely observe Enceladus’ north polar region, now illuminated by summer Sun. Earlier approaches to the icy moon during Cassini’s 11-year mission could conduct only limited observations of this area, which was long shrouded in winter darkness. That encounter (E-20) took place on Wednesday, October 14, and is considered a moderately close approach. Cassini observed the moon from an altitude of 1,142 miles (1,839 km) and searched for signs of ancient geological activity similar to that known to have occurred on the icy moon’s south pole. Earlier in its mission, Cassini discovered the south polar region spewing geysers and covered with “tiger stripe” fractures produced by hydrothermal activity beneath Enceladus’ frozen surface. Cassini imaging scientists used views like this one to help them identify the source locations for individual jets spurting ice particles, water vapor and trace organic compounds from the surface of Saturn’s moon Enceladus. Image & Caption Credit: NASA / JPL / Space Science Institute On Wednesday, October 28, the spacecraft will make its closest approach (E-21) to Enceladus, a daring flight just 30 miles (49 km) above its south polar region. That trajectory will plunge Cassini through the moon’s icy plumes, where data will be collected and images taken. 
Mission scientists hope the close-up data and images will reveal the level of hydrothermal activity occurring in Enceladus’ subsurface ocean as well as ways that activity affects the ocean’s potential habitability for microbial life. The spacecraft’s final encounter (E-22) with Enceladus will take place on Saturday, December 19, from an altitude of 3,106 miles (4,999 km), with the primary goal of determining the amount of heat coming from the icy moon’s interior. Beginning in November, the Cassini team plans to slowly raise the spacecraft’s orbit, moving it out of Saturn’s equatorial region, where flybys of the large moons have taken place, and head for the smaller moons, located near the planet’s famous rings. Enceladus has proven to be one of Cassini’s most notable successes in terms of discoveries. Its icy plumes were first spotted by the spacecraft in 2005, and subsequent flybys yielded more insight into the material being spewed out of warm fractures in its south polar region. Within just the last year, mission scientists determined the geologically active moon hosts a global ocean beneath its icy crust and found evidence that hydrothermal activity could be occurring on the ocean floor. Discovering that Enceladus wobbles slightly as it orbits Saturn led scientists to conclude the underground ocean is a global one. Such wobbling can occur only if the moon’s icy outer shell is not frozen solid through its interior. “If the surface and core were rigidly connected, the core would provide so much dead weight the wobble would be far smaller than we observe it to be,” said Cassini scientist Matthew Tiscareno of the SETI Institute. 
“This proves that there must be a global layer of liquid separating the surface from the core.” The plumes emitted from fractured areas near Enceladus’ south pole, made up of water vapor, icy particles, and simple organic molecules, are likely supplied by the vast reservoir beneath the surface, mission team members stated in an article published in the journal Icarus last month. The discoveries have propelled Enceladus to a top destination for future space missions. Bonnie Buratti, a specialist in icy moons at NASA’s Jet Propulsion Laboratory (JPL) and a member of the Cassini team, said, “We’ve been following a trail of clues on Enceladus for 10 years now. The amount of activity on and beneath this moon’s surface has been a huge surprise to us. We’re still trying to figure out what its history has been, and how it came to be this way.” With a global underground ocean and possibly hydrothermal activity on the ocean’s floor, Enceladus’ subsurface environment could be similar to ocean floors on Earth, said Cassini mission scientist Jonathan Lunine of Cornell University. “It is therefore very tempting to imagine that life could exist in such a habitable realm, a billion miles from our home,” he emphasized. Laurel Kornfeld is an amateur astronomer and freelance writer from Highland Park, NJ, who enjoys writing about astronomy and planetary science. She studied journalism at Douglass College, Rutgers University, and earned a Graduate Certificate of Science from Swinburne University’s Astronomy Online program. Her writings have been published online in The Atlantic, Astronomy magazine’s guest blog section, the UK Space Conference, the 2009 IAU General Assembly newspaper, The Space Reporter, and newsletters of various astronomy clubs. She is a member of the Cranford, NJ-based Amateur Astronomers, Inc. 
Especially interested in the outer solar system, Laurel gave a brief presentation at the 2008 Great Planet Debate held at the Johns Hopkins University Applied Physics Lab in Laurel, MD.
’ subsurface ocean as well as ways that activity affects the ocean’s potential habitability for microbial life. The spacecraft’s final encounter (E-22) with Enceladus will take place on Saturday, December 19, from an altitude of 3,106 miles (4,999 km), with the primary goal of determining the amount of heat coming from the icy moon’s interior. Beginning in November, the Cassini team plans to slowly raise the spacecraft’s orbit, moving it out of Saturn’s equatorial region, where flybys of the large moons have taken place, and head for the smaller moons, located near the planet’s famous rings. Enceladus has proven to be one of Cassini’s most notable successes in terms of discoveries. Its icy plumes were first spotted by the spacecraft in 2005, and subsequent flybys yielded more insight into the material being spewed out of warm fractures in its south polar region. Within just the last year, mission scientists determined the geologically active moon hosts a global ocean beneath its icy crust and found evidence that hydrothermal activity could be occurring on the ocean floor. Discovering that Enceladus wobbles slightly as it orbits Saturn led scientists to conclude the underground ocean is a global one. Such wobbling can occur only if the moon’s icy outer shell is not frozen solid through its interior. “If the surface and core were rigidly connected, the core would provide so much dead weight the wobble would be far smaller than we observe it to be,” said Cassini scientist Matthew Tiscareno of the SETI Institute. “This proves that there must be a global layer of liquid separating the surface from the core.” The plumes emitted from fractured areas near Enceladus’ south pole, made up of water vapor, icy particles, and simple organic molecules, are likely supplied by the vast reservoir beneath the surface, mission team members stated in an article published in the journal Icarus last month. 
The discoveries have propelled Enceladus to a top destination for future space missions.
yes
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://www.lpi.usra.edu/science/kiefer/Education/SSRG2-Europa/europa.html
Europa and Titan: Oceans in the Outer Solar System
Biologists believe that life requires the presence of some sort of liquid to serve as a medium for the chemical reactions needed to sustain life. On Earth, liquid water plays this role. Water has some chemical properties that make it particularly favorable as a medium for life, although we probably should not rule out the possibility that other types of liquid, such as organic liquids, might play this role in other types of biology. If liquids truly are necessary for life, then the potential abodes for life in the outer Solar System are quite limited. Europa and Titan both have been proposed to have oceans and are therefore the best possible candidate locations for life in the outer Solar System. Europa Among its many moons, Jupiter has four large moons that are known collectively as the Galilean satellites. Of these four, Europa is the smallest and the second closest to Jupiter. With a radius of 1560 kilometers, it is slightly smaller than Earth’s Moon. Objects of this size are expected to lose their internal heat in a relatively short period of time. For comparison, most volcanic activity on the Moon ended more than 3 billion years ago and the last gasps of lunar volcanism occurred roughly 1 billion years ago. Naively, we might therefore expect Europa to be geologically dead. It turns out, however, that there is an energy source that has kept Europa geologically active to the present day. Three of Jupiter’s moons, Io, Europa, and Ganymede, orbit in a condition known as a resonance. Every time Io orbits Jupiter twice, Europa completes one orbit. Similarly, every time Europa orbits Jupiter twice, Ganymede completes one orbit. The gravity of each moon tugs slightly on the other moons, and because of the orbital resonance, the tugs occur repeatedly in the same part of each moon’s orbit. The consequence is that Io and Europa have orbits that are slightly elliptical. 
If these orbits were perfectly circular, the gravitational force of Jupiter on the moons would be the same everywhere in the orbit. However, because the orbits are not circular, the gravitational force is not the same everywhere in the orbit. As a result, both Io and Europa are slightly deformed by tides as they orbit Jupiter. As the outer portions of these moons flex up and down in response to these tides, the friction of rock and ice grains sliding across each other releases heat. This heat is very important to understanding the geologic history of these objects. On Io, this heating produces volcanic activity that is far more active and intense than volcanism on Earth. On Europa, the tidal heating may have heated its interior enough to melt ice and produce a subsurface ocean. The Earth and Moon also experience tidal deformation as the Moon orbits Earth, but the heat released is not significant for either object. 1. Global View As this global view shows, much of Europa’s surface is covered by a series of dark bands. When studied by NASA’s Voyager spacecraft in 1979, the nature of these bands was enigmatic, but was presumed to reflect some sort of faulting or other type of surface deformation. The virtual absence of impact craters indicates that the surface of Europa is quite young. 2. Wedges Region Observations by NASA’s Galileo spacecraft since 1996 have provided a much clearer view of Europa. This Galileo image is 230 kilometers across and shows some of the dark bands in greater detail. In some cases, these structures can be seen to be low ridges or pairs of ridges. (You can tell whether a feature is high or low by the nature of the shadow it casts. In this image, the illumination is from the left.) The dark band that originates at the bottom center of the image and runs to the left center is wedge-shaped. This wedge-shaped band probably formed by the gradual spreading of Europa’s surface — think for example of the spreading as a door opens on a hinge. 
As in image 1, there is a noticeable absence of impact craters. 3. Ice Rafts This Galileo image is 42 kilometers across and is illuminated from the right. It shows a series of “ice rafts” that have been disrupted and jostled about. Although we saw indications of surface motions in image 2, this image is by far the clearest evidence for large motions of blocks of material across the surface of Europa. When NASA scientists reported in the spring of 1997 that they had evidence of an ocean below the surface of Europa, this image was their “smoking gun”. The ocean interpretation rests on the belief that the existence of so much lateral motion across the surface requires the presence of some sort of layer to lubricate the flow at depth. These scientists assume that this lubrication requires a liquid, and hence favor the existence of an ocean. As a possible counter-example, consider the physics controlling plate tectonics on Earth. As a general rule, temperature increases with depth inside a planet, and as materials increase in temperature, they tend to become less viscous (less rigid, or more colloquially, softer). The Earth’s surface consists of about 12 large tectonic plates, which move about at speeds of up to 10 centimeters per year, producing all of the earthquakes, volcanic activity, and mountain belt formation that occurs on Earth. These plates move over a mantle which is solid virtually everywhere (we know this because of the way seismic waves travel through the mantle). The “lubrication” that allows all of this motion and geologic activity is actually solid rock that is simply hotter and thus less viscous than the rock above it. The Earth’s example demonstrates that we should consider the possibility that the motion which we see on Europa is lubricated by warm, soft ice rather than by a liquid ocean. 4. Chaos Region This Galileo image is 175 kilometers across and is illuminated from the left. 
The major feature is a mitten-shaped region of chaotically disrupted terrain in the center of the image. This chaos region is superimposed on the surrounding plains and ridges, so it must be the youngest feature in this region. Based on the pattern of sunlight and shadows around the edge of the chaos region, the chaos region is slightly elevated compared with the surrounding plains. On the west (left) side of the structure, there is a narrow trough separating the plains from the uplifted chaos terrain. Similar chaos units are found in many parts of Europa. Some scientists believe that these regions form when the subsurface ocean melts through a relatively thin outer ice shell. Other scientists believe that the chaos regions are uplifted and disrupted where a diapir (“blob”) of relatively warm ice rose through the surrounding crust of colder ice. Numerous ridges also cross this image. The relative ages of these ridges can be determined by observing the intersections between ridges (the younger ridge will appear to cut the older ridge). 5. Impact Craters This image shows four of the largest impact craters found on Europa. Because impact craters excavate into the crust of a planet, they serve as natural core samples into the structure of the upper crust. Generally, the excavation depth of a crater increases as the size of the crater increases. In other words, small craters make shallow holes and larger craters make deeper holes. If an impacting object penetrated all the way through the solid ice shell on Europa to an underlying ocean, the sudden loss of material strength in the crust would cause the crater to collapse (think about the “hole” that is made when you throw a rock into a pond!). Based on the known depths of the largest craters on Europa, it appears that the ice shell of Europa remains solid to a depth of at least 19 to 25 kilometers. 
The pattern of crater depths as a function of crater diameter suggests that either an ocean or a layer of warm (and thus soft and weak) ice occurs below this depth. 6. Internal Structure This image shows cross-sectional views of Europa’s internal structure. Our current knowledge of the interior of Europa comes from observations of its gravitational and magnetic fields. Europa’s relatively high density of 3.04 grams per cubic centimeter implies that it is composed mostly of rock and metal, with relatively little water ice. This material has probably separated into a metal-rich core and a rock-rich mantle, with the core having a radius of 500 to 1000 kilometers. The surface of Europa is known to be predominantly water ice, probably with some rock mixed in, based on spectroscopy studies. This outer shell of water ice is 100 to 200 kilometers thick. The right side of the image highlights two fundamentally different views about the nature of the ice shell on Europa. The available gravity observations do not indicate whether this layer is entirely solid or if there is a subsurface ocean on Europa. However, magnetic field observations do indicate the presence of an ocean: the salts that would likely be dissolved in such an ocean would be good electrical conductors and hence modify Jupiter’s magnetic field in the vicinity of Europa. This effect has been observed by Galileo and is the strongest present evidence for a subsurface ocean inside Europa. This ocean must be globally distributed. Solid ice and rock cannot explain the observed magnetic signature. The magnetic evidence requires that the ocean be at least 10 kilometers thick, but does not tightly constrain the depth at which this ocean begins. As noted in the captions for other images, geological arguments have been made for both a thin ice layer and a thick ice layer. In the thin ice shell model, the ice shell might be just 1–2 kilometers thick.
In this model, the ocean might frequently break through to the surface, and the various ridges and faults are assumed to be related to tidal forces in the ocean. In the thick ice shell model, the ocean occurs at much greater depths, at least 20 kilometers beneath the surface. There are scientists who argue with great passion for each model. In my personal view, the cratering evidence (image 5) is a strong constraint favoring a relatively thick ice shell. It is possible that heat pulses do occasionally produce regions with a thin ice shell (for example, the chaos regions shown in image 3 and image 4). However, such regions of thin ice are probably restricted to geographically limited regions and to short intervals of time. Future Exploration of Europa NASA has considered a Europa Orbiter mission that might provide clearer evidence about the nature of Europa’s subsurface ocean. The orbiter would use very long-wavelength radar to attempt to see through the ice to an underlying ocean. A radar flown on the Space Shuttle in 1981 was able to “look” below the Sahara Desert and detect ancient river channels that are now buried under 1 to 2 meters of sand. On Apollo 17, a radar system was able to look through the upper kilometer of rock and image buried lava flows on the Moon. Similar radars are planned for launch to Mars in 2003 and 2005 to image the subsurface distribution of water and ice. The Europa Orbiter would also carry an altimeter to accurately measure Europa’s shape. The shape changes over the course of an orbit about Jupiter because of tidal deformation. The amount of tidal deformation depends on whether there is an ocean just below the surface or if Europa is solid throughout. Thus, precise measurements of the shape of Europa may provide details about the structure of the subsurface ocean. The Europa Orbiter would also collect additional high resolution images and gravity observations of Europa. 
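The interior figures quoted in the internal-structure caption (radius 1560 kilometers, bulk density 3.04 grams per cubic centimeter, a 100 to 200 kilometer outer shell of water ice) can be combined in a simple two-layer sanity check. A minimal sketch, assuming a shell density of 1.0 g/cm^3 and a 150 km shell thickness (illustrative values, not from the text):

```python
# Sketch: a two-layer consistency check for Europa's interior. Assumes
# (for illustration) a uniform water/ice shell of density 1.0 g/cm^3 and
# thickness 150 km -- the midpoint of the 100-200 km range quoted in the text.
R_km = 1560.0        # Europa's radius (from the text)
shell_km = 150.0     # assumed shell thickness
rho_bulk = 3.04      # bulk density, g/cm^3 (from the text)
rho_shell = 1.0      # assumed water/ice density, g/cm^3

f_interior = ((R_km - shell_km) / R_km) ** 3   # interior volume fraction
f_shell = 1.0 - f_interior
rho_interior = (rho_bulk - f_shell * rho_shell) / f_interior

# The implied interior density (roughly 3.8 g/cm^3) is typical of rock plus
# metal, consistent with the rock-rich mantle and metal-rich core described above.
print(f"Implied rock/metal interior density: {rho_interior:.2f} g/cm^3")
```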
Jupiter is surrounded by very strong radiation belts that are dangerous to spacecraft. The Galileo spacecraft dipped deep into the radiation belts for only a few days every few months. In contrast, the Europa Orbiter would be exposed to strong radiation for a much longer period of time. Because of the high cost of designing a spacecraft to endure such radiation (perhaps one billion dollars), the proposed mission is currently on hold. Titan 7. Voyager Image of Titan Titan is the largest of Saturn’s moons. With a radius of 2575 kilometers, it is the second largest moon in the entire Solar System and is larger than the planets Mercury and Pluto. Titan is the only satellite in the Solar System to have a significant atmosphere. At the surface, the atmospheric pressure is 1.6 bars (60% higher than on Earth) and the temperature is a frigid 94 Kelvin. The atmosphere is composed primarily of nitrogen, as on Earth, and also includes some methane and possibly argon. Trace amounts of hydrogen and many organic molecules are also present. Some of these compounds form a thick haze layer in the upper atmosphere of Titan. At visible wavelengths, this haze makes it impossible to see down to the surface of Titan. Ultraviolet radiation from the Sun can break up methane molecules, and the resulting hydrogen atoms can be lost to space. The remnants of the methane can form heavier organic compounds, such as ethane and acetylene. Even at Titan’s cold temperature, ethane is a liquid and might form an ocean on Titan’s surface. Over the age of the Solar System, an ocean of ethane several hundred meters thick might have formed, probably with some methane dissolved in it. The actual distribution of ethane, whether in a surface ocean or in subsurface cavities, is not known at present. Infrared images of Titan obtained by the Hubble Space Telescope show a pattern of bright and dark regions that some scientists think might be related to oceans and continents. 
Radar observations of Titan also hint at the possibility of oceans and continents. 8. Cassini Probe at Titan The Cassini spacecraft was launched in October 1997 and will arrive at Saturn in July 2004. In early 2005, a probe will study the composition and physical properties of Titan’s atmosphere and surface. Should the probe land in an ethane ocean, it is designed to float. Cassini will also use radar to map parts of Titan’s surface and will study Saturn’s atmosphere, rings, magnetic field, and other satellites between 2004 and 2008.
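As a small arithmetic check on the atmospheric figures quoted above, assuming Earth's standard sea-level pressure of 1.013 bar (a value not stated in the text):

```python
# Sketch: comparing Titan's quoted surface pressure with Earth's.
titan_bar = 1.6    # Titan surface pressure, bar (from the text)
earth_bar = 1.013  # standard sea-level pressure, bar (assumed, not in the text)

excess_pct = (titan_bar / earth_bar - 1.0) * 100.0
# Comes out near 58%, consistent with the "60% higher" figure quoted above.
print(f"Titan surface pressure exceeds Earth's by {excess_pct:.0f}%")
```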
yes
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://explanet.info/Chapter04.htm
Chapter 4. The Moon
4.0 Introduction In many ways the Moon is a geologic Rosetta stone: an airless, waterless body untouched by erosion, containing clues to events that occurred in the early years of the solar system, which have revealed some of the details regarding its origin and provided new insight into the evolution of Earth. Although they also posed new questions, the thousands of satellite photographs brought back from the Moon have permitted us to map its surface with greater accuracy than Earth could be mapped a few decades ago. We now have over 380 kg of rocks from nine places on the Moon, rocks that have been analyzed by hundreds of scientists from many different countries. Data from a variety of experiments have revealed much about the Moon's deep interior. As it turns out, the Moon is truly a whole new world, with rocks and surface features that provide a record of events that occurred during the first billion years of the solar system. This record is not preserved on Earth because all rocks formed during the first 800 million years of Earth's history were recycled back into the interior. The importance of the Moon in studying the principles of geology is that it provides an insight into the basic mechanics of planetary evolution and events that occurred early in the solar system. Much of the knowledge we have of how planets are born and of the events that transpired during the early part of their histories has been gained from studies of the Moon. At the outset, it is important to note that we assume that the physical and chemical laws that govern nature are constant. For example, we use observations about how chemical reactions occur today, such as the combination of oxygen and hydrogen at specific temperatures and pressures to produce water, and infer that similar conditions produced the same results in the past. This is the basic assumption of all sciences.
Moreover, much of what we "know" about the planets, as in all science, is a mixture of observation and theory---a mixture that is always subject to change. Scientific knowledge is pieced together slowly by observation, experiment, and inference. The account of the origin and differentiation of planets we present is such a theory or model; it explains our current understanding of facts and observations. It will certainly be revised as we continue to explore the solar system and beyond, but the basic elements of the theory are firmly established. 4.1 Major Concepts The surface of the Moon can be divided into two major regions: (a) the relatively low, smooth, dark areas called maria (seas) and (b) the densely cratered, rugged highlands, originally called terrae (land). Most of the craters of the Moon resulted from the impact of meteorites, a process fundamental in planetary development. The geologic time scale for the Moon has been established using the principles of superposition and cross-cutting relations. Radiometric dating of rocks returned from the Moon has provided an absolute time scale. The lunar maria are vast plains of basaltic lava, extruded about 4.0 to 2.5 billion years ago. Other volcanic features on the Moon include sinuous rilles and low shield volcanoes. The major tectonic features on the Moon, mare ridges and linear rilles, are products of minor vertical movements. The Moon is a differentiated planetary body with a crust about 70 km thick. The lithosphere is approximately 1000 km thick. The deeper interior may consist of a partially molten asthenosphere and a small metallic core. The tectonic and thermal evolution of the Moon was very rapid and terminated more than two billion years ago. The Moon has no surface fluids, so that little surface modification has occurred since the termination of its tectonic activity. 
The major events in the Moon's history were: (a) accretion of material ejected from Earth after a massive collision with a Mars-sized object, (b) differentiation with the formation of the lunar crust by crystallization of a magma ocean, (c) intense meteoritic bombardment, (d) extrusion of the mare lavas, and (e) light bombardment. 4.2 The Moon as a Planet In July 1969, a human stood for the first time on the surface of another planet, seeing landscape features that were truly alien and returning with a priceless burden of Moon rocks and other information obtainable in no other way. Nonetheless, many of the facts listed in Table 1 were known long before we began to explore space; they represent years of diligent study. For example, it was discovered centuries ago that the Moon revolves about Earth and not the Sun and is thus a natural satellite (the largest in the inner solar system). Long ago the distance from Earth to the Moon was measured and the diameter of the Moon determined. Early astronomers realized that the Moon's rotation period and its period of revolution are the same; thus it keeps one hemisphere facing Earth at all times. Moreover, many of the Moon's surface features have become well known, especially since the days of Galileo, the first to study the Moon through a telescope. Even the density and gravitational field of the Moon had been determined long before our generation. But not until the 1960s---and the inception of space travel with its sophisticated satellites and probes and the eventual Moon landing---did man begin to appreciate the significance of the Moon as a planet. In spite of its small size and forbidding surface, the Moon has revealed secrets that pertain to the ultimate creation of our planet, Earth, and our neighbors beyond.
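The radiometric dating mentioned in the concepts above works by measuring how much of a radioactive parent isotope has decayed into its daughter isotope. A minimal sketch with an illustrative Rb-87 to Sr-87 example (the half-life and the isotope ratio below are assumed values for demonstration, not measurements from this chapter):

```python
import math

# Sketch of the radiometric-dating principle: for a parent isotope decaying
# into a daughter, age t = ln(1 + D/P) / lambda, with lambda = ln(2) / half-life.
def radiometric_age_gyr(daughter_to_parent, half_life_gyr):
    decay_const = math.log(2) / half_life_gyr
    return math.log(1.0 + daughter_to_parent) / decay_const

# Hypothetical Rb-87 -> Sr-87 measurement (half-life ~48.8 billion years):
age = radiometric_age_gyr(daughter_to_parent=0.065, half_life_gyr=48.8)
print(f"Inferred age: {age:.2f} billion years")  # ~4.4 Gyr for this assumed ratio
```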
no
Selenology
Is the moon geologically active?
yes_statement
the "moon" is "geologically" "active".. geologic "activity" occurs on the "moon".
https://mobile.arc.nasa.gov/public/iexplore/missions/pages/yss/april2012.html
Ice in the Solar System: Ice!
Ice in the Solar System: Ice! The Mars Express took this photo of a crater on Mars filled with water ice. Credit: ESA/DLR/FU Berlin (G. Neukum) Overview Ice is common in our solar system, from deposits at the poles of Mercury and the Moon to ice-covered moons and rings around distant Jupiter and Saturn, and comets made of ice and other materials streaming across the spaces between. And, of course, ice is present around our own world. The ice on continents contains about 75% of Earth's freshwater. Melting ice sheets on Greenland and Antarctica have the potential to raise global sea levels by 23 feet (Greenland) and 187 feet (Antarctica). While the most common type of ice in our solar system is water ice, there are also many other types of ice. Mars' poles have abundant amounts of frozen carbon dioxide (also called dry ice), and comets have frozen ammonia and methane in addition to frozen water and other ices. Saturn's moon Titan is famous for its methane, which can exist as a solid, a liquid and a gas at Titan's surface temperatures and pressures. Scientists are studying water ice both on Earth and on other planets; snow and glaciers are a critical source of freshwater for many regions on Earth, and ice deposits can be a source of water for future explorers in the solar system. Given the apparent need all life has for water, the presence of frozen water may also provide clues to the possibility of life! For more about water's presence and role on the planets, check out the YSS topic Water in the Solar System; check out Got Life for more about the search for life. All types of ice play an important role in the characteristics and planetary processes throughout the solar system. Glaciers have eroded parts of the Earth and Mars, creating new features. Uranus and Neptune are filled with "icy" materials like water, ammonia, and methane, under incredible heat and pressure. Some moons in the outer solar system have volcano-like geysers that erupt ice!
Examine this topic as we explore ice and its properties, where it is located, and what it tells us about the planets and moons in our solar system. The processes that formed our solar system a little over 4.5 billion years ago helped to distribute the ices. Close to the sun, it was too hot for water and other ices to condense. Instead, rocky materials and metals collected near the sun to form the smaller rocky planets. Farther out, beginning near the outer asteroid belt, ices were able to condense in the colder reaches of space, forming the cores of Jupiter, Saturn, Uranus, and Neptune -- the gas giants -- and their moons. Beyond the gas giants, the Kuiper belt and Oort cloud are host to the leftovers of solar system formation, small icy rocky bodies (yes, including Pluto!), and icy comets. Ice Exists on Our Nearby Neighbors If the inner, rocky planets formed in a part of the solar system that was too hot for ices to condense, where did all the ice come from? There are two primary sources: first, the planets themselves, and second, delivery by comets or icy asteroids (not unlike having pizza delivered to your home . . .). As Earth, Venus, Mars, and Mercury evolved, they released gases from their interiors through volcanic activity. Volcanos on Earth continue to release gases today, including a lot of water vapor. On the early planets, these gases formed the planetary atmospheres. Atmospheres are important for maintaining relatively constant surface temperatures. On planets or moons without atmospheres that are close to the sun, the surfaces in sunlight get very hot and the surfaces in darkness (nightsides) get very cold. On some of the terrestrial planets, water vapor in the early atmospheres eventually condensed and precipitated to form oceans once the planetary surfaces cooled. Each planet has a different history that influences whether or not it has ice. 
Mercury: Mercury's relatively small size likely did not provide sufficient gravitational attraction to "hold" an atmosphere. Because it was small, it cooled quickly, so volcanic processes may have stopped early in its history and did not replenish its atmosphere. In addition, Mercury is the closest to the sun. Solar wind weathered away its atmosphere and the sun continues to heat its surface to temperatures that are far too hot for water to condense or for ice to exist . . . except, possibly, in a few special places (foreshadowing!). Venus: Venus has a very dense atmosphere that contains ~97% carbon dioxide. Carbon dioxide is a greenhouse gas, a gas that can absorb solar radiation in the thermal infrared range of the spectrum. This thick blanket of gas traps the sun's radiation and heats the planet's surface to a whopping 872 degrees F (467 degrees C). The surface of Venus is the hottest in the solar system -- hotter even than Mercury, which is closer to the sun! Venus is too hot to have any type of ice on it. Earth: As Earth's surface cooled, water vapor in the early atmosphere condensed and precipitated, forming our oceans. Today Earth's atmosphere contains mostly nitrogen (78%), oxygen (21%), and minor quantities of other gases including carbon dioxide and water vapor. Our atmosphere has evolved; unlike Venus, a large amount of carbon dioxide has been removed from our atmosphere, dissolved in Earth's oceans, and precipitated as carbonate rocks. Over time, plants have contributed the oxygen through the process of photosynthesis. Earth's atmosphere, like any planetary atmosphere, helps to moderate our temperatures so that the sun's radiation does not cause the surface to get too hot on the daytime side or plunge to temperatures well below freezing on the nighttime side. The small amounts of greenhouse gases, such as water vapor and carbon dioxide, help to warm Earth even more, making it habitable. 
Earth's average temperature is about 59 degrees F (15 degrees C), but it ranges from -128 degrees F (-89 degrees C) to 136 degrees F (58 degrees C). Not surprisingly, the ice on Earth is water ice because we have an abundance of water. Water ice is found where the temperatures are below the freezing point of water and there is enough precipitation for snow or ice crystals to fall or there is water that can freeze. Permanent ice is found on Earth's high mountains and in its polar regions, and sometimes in protected areas such as caves. During the winter months, seasonal temperatures get cold enough to allow snow to temporarily accumulate farther from the poles. The freezing point of carbon dioxide is -108 degrees F (-78 degrees C); pure ammonia's freezing point is -107 degrees F (-77 degrees C). These ices could exist in the coldest places on Earth, but the substances do not exist naturally in sufficient amounts. Ice has not always been present on Earth's surface; during periods of geologic history Earth's climate has been warmer. Our climate also has been colder at times in the past, causing the ice to expand across the Earth's surface. Mars: Early Mars had a climate that was warmer and wetter than today; its atmosphere was thicker and water flowed across the surface. Mars may even have had oceans. As the interior of Mars cooled, volcanism declined and the atmosphere of Mars thinned. Today's atmosphere is made of 95% carbon dioxide, 3% nitrogen, and small amounts of other gases, including water, oxygen and methane. The atmospheric pressure on the surface of Mars is about 1/100 that of Earth's atmospheric pressure at sea level. Because of the thin atmosphere and Mars' distance from the sun, Mars is cold. Its temperatures range from -193 degrees F (-125 degrees C) to 23 degrees F (-5 degrees C), well under the freezing point of water and also cold enough to freeze carbon dioxide. 
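The freezing points and surface-temperature ranges quoted above can be combined into a crude stability check. A minimal sketch (it deliberately ignores pressure and how much of each substance is actually present; Venus's quoted 467 degrees C surface temperature stands in for its range):

```python
# Sketch: which ices *could* persist somewhere on each body, judged only by
# whether the coldest quoted surface temperature (degrees C) falls below each
# freezing point. Pressure and abundance are ignored.
freezing_c = {"water": 0, "carbon dioxide": -78, "ammonia": -77}
coldest_c = {"Venus": 467, "Earth": -89, "Mars": -125}

for body, t_min in coldest_c.items():
    stable = [ice for ice, t_f in freezing_c.items() if t_min < t_f]
    print(f"{body}: {', '.join(stable) if stable else 'too hot for any ice'}")
```

Consistent with the text: Venus is too hot for any ice, while the coldest spots on Earth and Mars are below all three freezing points (even though Earth lacks enough carbon dioxide and ammonia for those ices to occur naturally).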
Because of the low atmospheric pressures, liquid water at the surface of Mars would evaporate into water vapor. So what happened to all that water that used to be on the surface of Mars? Some did evaporate into space. But much is frozen under the surface and in the polar ice caps. Mars has water ice! Mars also has another type of ice -- carbon dioxide ice -- which is familiar to us as "dry ice." Because Mars is so cold, in the winter the carbon dioxide in its atmosphere condenses and falls to the ground as carbon dioxide ice. In the summer, much of this changes from the solid form back into gas (sublimates). Mars has ice caps at both its poles. The north pole ice cap is about 600 miles (1000 km) across -- about the width of Montana! The southern ice cap is about 1/3 this size. Both ice caps are made mostly of water ice, but the southern ice cap has a permanent cover of carbon dioxide ice. The ice caps grow each winter as carbon dioxide ice is added to them, and decrease each summer as the carbon dioxide sublimates back to the atmosphere. Like Earth, Mars' climate has fluctuated through geologic time, sometimes getting warmer and sometimes getting colder. During colder times, its ice caps expanded and glaciers extended farther across the Martian landscape. Asteroids: Some asteroids may also contain ice and water. The Dawn mission is exploring Vesta, a very dry asteroid, although some scientists believe that ice could exist beneath Vesta's surface. A possible location for surface ice is near the north pole, which has been in darkness for two Earth years. The gamma ray and neutron detector should be able to detect this if a significant amount of water ice is present. The Dawn mission will also investigate the largest asteroid, Ceres, which scientists believe, from its density and shape, to contain a vast store of water and water ice.
The Moon and Mercury Are Surprising Places to Expect Water Our Moon has no atmosphere; as it spins on its axis, its surface experiences temperatures ranging from 225 degrees F (107 degrees C) in sunlight to -243 degrees F (-153 degrees C) in the dark. Ice and water cannot exist under these conditions; they would evaporate. Why, then, is NASA exploring the Moon's surface to see if water ice exists? The Moon's poles have areas of permanent light and permanent darkness. Sunlight reaches the north and south polar regions at low angles of incidence. Because the Moon's axis of spin is tilted at a very small 1.5 degrees to its orbit around the sun, this low angle of incidence does not change during the year (as it does on Earth, causing seasons). Deep craters at the poles never receive sunlight. They are permanently shadowed and permanently cold! These are the cold-storage pits of the lunar surface. They are cold enough to trap volatiles -- elements that evaporate readily at standard temperature and pressure -- like water. Radar and spectroscopic data collected by several spacecraft, including the Lunar Reconnaissance Orbiter, suggest that large amounts of water ice, perhaps mixed with dust and rocks, exist at the lunar south pole. NASA's Lunar Crater Observation and Sensing Satellite (LCROSS) mission impacted the lunar surface in a permanently shadowed crater. The resulting plume has been analyzed for water ice and vapor and other materials by instruments on the LCROSS shepherding spacecraft and LRO, and by telescopes on Earth. Cold temperatures and thermal dynamics of the plume suggest it is possible for water ice, delivered by comets, to exist near and within some of the Moon's polar craters. Mercury is too hot to have any form of ice . . . or is it? Mercury also lacks an atmosphere, and it is very close to the sun. Like the Moon, however, Mercury's axis is tilted only a small amount; at 0.1 degrees, it is tilted even less than the Moon. 
And like the Moon, Mercury has deep craters at its poles that are permanently shadowed -- and permanently cold. These cold dark craters could trap water and store it as ice. NASA's MESSENGER mission, currently orbiting Mercury, has found radar-bright areas at Mercury's poles that suggest water ice exists there as well. The Gas Giants and Their Moons Are Rich in Ice Based on the scientific models of how our solar system formed, it is no surprise that the moons of the gas giants are rich in ice! (Additional information, activities, and resources about the rings and moons are available in the YSS topic Moons and Rings: Our Favorite Things.) Jupiter's Moons: Europa's crust of water ice floats on top of a saltwater ocean. The crust may be many miles (kilometers) thick and its surface is absent of high topography, but it is crossed by ridges and covered by pits and domes and features called "chaos terrain." It does not have many craters, suggesting that the surface is relatively young and active; the processes that cover or remove craters are continuing to happen. Europa is far from the sun and its surface temperature is a chilling -260 degrees F (-160 degrees C) at the equator and -370 degrees F (-220 degrees C) at its poles. At these temperatures the water ice is very hard and rock-like. The ocean under the ice blanket is kept heated by the constant tidal forces: Europa gets pulled and stretched in different directions by the gravitational attraction of Jupiter and its moons, generating heat. The presence of liquid water could mean that life is supported in the sea of Europa, especially if its rocky core is similarly heated, producing hydrothermal vents. (Check out the YSS topic Got Life? for more information.) Ganymede is the largest moon in the solar system, actually larger than Mercury, and is made mostly of water ice with a rocky core. Scientists suggest that it has a water ocean beneath its crust, sandwiched between thick layers of ice.
The surface of Ganymede is older than Europa's, but still has evidence for active geology in its past, and maybe its present. This frozen moon even has polar ice caps! The most interesting thing about Ganymede is probably that it has its own magnetic field, meaning its core is likely still molten. Callisto is composed mainly of rock and water ice, although other ices like ammonia ice and carbon dioxide ice may be present. Water ice occurs at the surface of Callisto. Like Europa and Ganymede, Callisto may have a salty ocean under its crust; some scientists hypothesize that a small amount of ammonia in the water may keep it from freezing. Interestingly, Callisto does not seem to be differentiated. Saturn's Rings and Moons: Saturn's rings are one of the most remarkable features in the solar system. They are 155,000 miles (250,000 km) or more in diameter and less than half a mile (one kilometer) thick! The rings are composed of particles ranging from the size of dust specks to large boulders, and they are more than 90% water ice! Saturn has over 60 moons, most of which appear to be composed primarily of water ice with varying amounts of rocky material: Mimas and Tethys are composed almost completely of water ice; Iapetus and Rhea each appear to be about 25% rocky material; and Dione, Enceladus, and Titan are each about 50% rocky material. All these bodies are heavily cratered. Most have surface temperatures less than -274 degrees F (-170 degrees C), well below the freezing point of water and other ices. Water ice at the surfaces of these moons is rock-hard. Enceladus caught the attention of scientists and the world with its spectacular ice geysers. The Cassini spacecraft flew through a plume and sampled water vapor, ice particles, and minor amounts of other molecules. The material vented by Enceladus makes up an entire band of Saturn's rings (called the E ring)! Titan, the largest moon of Saturn, is a geologically complex body with a thick nitrogen-rich atmosphere.
Far from the sun, its temperatures remain at a chilly -290 degrees F (-179 degrees C). Titan has lakes of liquid hydrocarbons at its surface and a terrain that contains mountainous features and dunes composed of ice. Deposits of water ice and hydrocarbon ice occur at its surface. This moon probably has a methane cycle that forms clouds and even methane rain! Titan is similar to Ganymede, indicating it may have an "ice-water sandwich" under its surface. Moons of Uranus and Neptune: Uranus' moons are ice-rock conglomerates made of about half-ice and half-rock; in addition to frozen water, the ice may include ammonia and dry ice. Some of the moons appear to have younger surfaces that may have gone through recent geologic activity. Miranda's surface is bizarre, with 20-km deep canyons and valleys and ridges. The surfaces of some of these moons may have been covered with flows of frozen water, perhaps from heating from gravitational tidal interactions, or from the heat of impacts. Neptune's moons are also thought to contain large amounts of ice. Triton, Neptune's largest moon, is the only large moon in our solar system that orbits in the opposite direction of its planet's rotation -- a retrograde orbit. Triton is one of the coolest objects in our solar system; it is so cold that most of Triton's nitrogen atmosphere is condensed as frost, giving its surface an icy sheen that reflects 70 percent of the sunlight that hits it. Triton is geologically active. Nitrogen has erupted from its surface in geysers, and there is volcanic activity from water ice rather than from liquid rock. It has a sparsely cratered surface with smooth volcanic plains, mounds and round pits formed by icy lava flows of frozen water. Its crust is frozen nitrogen, water and dry ice; frozen water makes up much of this cold moon's interior. Comets Comets have been called the "dirty snowballs" of our solar system! Every comet is made of the same basic ingredients -- ice and dust. 
However, comets vary in how much of the ice is water ice and how much is ice made of other substances, including methane, ammonia, carbon dioxide, carbon monoxide, sulfur, and hydrogen sulfide. Comets also vary in the different types of trace elements and hydrocarbons that are present. Most comets have long elliptical orbits that carry them from the chilly outer reaches well beyond Neptune to nearer our sun. As comets approach the sun (within about 280 million miles or 450 million km), they heat up and their ice begins to sublimate -- change from a solid directly to a gas. The gas and dust form an "atmosphere" around the nucleus called a "coma." Material from the coma gets swept into tails that are millions of miles long. Recently, several small asteroids in the main asteroid belt have been observed to produce comet-like tails when they approach the sun; these are now known as main belt comets. For over 4.6 billion years, since the formation of our solar system, comets have been colliding with planets, moons, and asteroids, delivering their water ice to these bodies. Comets may be the source of the water ice on the Moon and Mercury, and they certainly have added water to other celestial bodies, including Earth. For further information about comets, please check out the YSS topic Small Bodies Big Impacts. Scientists Can Look for Ice in the Solar System without Leaving Earth The above discussion about where water ice might be found in our solar system reveals some of the ways that scientists are testing for its presence. If scientists cannot go to a planet to explore or send a lander that will return samples, they can examine the surface using a variety of detectors onboard spacecraft or on Earth-based telescopes. One of the primary ways of detecting water is to analyze the spectrum of light reflected from a planetary surface. Spacecraft detectors may probe surfaces using the sun's reflected light, or they may use radar to bounce radio waves off the surfaces.
Different materials reflect and absorb different -- and characteristic -- wavelengths of light. Some of these wavelengths are visible to our eyes (red, orange, yellow, green, blue, purple) and some are invisible to us (for example, infrared and ultraviolet wavelengths). Scientists can compare the spectra from the surface of a planet to spectra of known substances to determine what materials occur on the planet. Water has a characteristic spectral "fingerprint," especially in the infrared. Other substances have their own unique spectral fingerprints. Spectra can be collected by spectrometers onboard orbiting spacecraft or by telescopes viewing the planet or planetary body from Earth. Other wavelengths of light, such as radio waves and gamma rays, can provide additional clues. Different surfaces reflect radio waves in different ways. Radar can detect the characteristic signatures of ice and of soil mixed with ice. Other instruments onboard spacecraft, such as gamma-ray spectrometers, can detect the abundance of hydrogen (and other elements), which is a component of water molecules. The presence of hydrogen may be interpreted to indicate the existence of water on a planet. Scientists have interpreted water ice to be present in deep craters near the Moon's poles based on radar and gamma-ray spectrometer data. After successfully completing the first in-depth, up-close study of Saturn and its realm from orbit, Cassini is on an extended mission to follow up on the many discoveries made during its primary 4-year mission. Among the most surprising discoveries were geysers erupting on Enceladus and their effects on the Saturnian environment, giant ice ridges on Iapetus, ice dunes on Titan, and dynamics between mini-ice moons and Saturn's rings. The instruments on the Lunar Reconnaissance Orbiter are studying and mapping the Moon globally; one of its key objectives is to map the locations and amounts of water on the Moon.
It has discovered polar cold traps with very low temperatures and returned radar data consistent with the presence of water ice in permanently shadowed regions of polar craters. Mars Reconnaissance Orbiter is designed to track changes in the water and dust in Mars' atmosphere, look for more evidence of ancient seas and hot springs, and peer into past Martian climate changes by studying surface minerals and layering. Among the mission's major findings are evidence that the action of water on and near the surface of Mars occurred for hundreds of millions of years, and observations of layering within Mars' polar ice caps and of polar processes across its surface. New Horizons is on its way to study Pluto and its environment in the Kuiper Belt. Pluto is now classified as a dwarf planet, and appears to be primarily composed of rock and ice. New Horizons will be the first mission to explore this distant world.
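The spectral "fingerprint" matching described above can be sketched in a few lines: compare an observed reflectance spectrum against laboratory reference spectra and report the closest match by root-mean-square difference. Note that the material names and band values below are invented placeholders for illustration, not real laboratory data.

```python
# Illustrative sketch of spectral matching. All numbers are made up:
# each "spectrum" is reflectance sampled at a few wavelength bands.
REFERENCE_SPECTRA = {
    "water ice":          [0.9, 0.8, 0.3, 0.2],  # strong infrared absorption
    "dry rock":           [0.4, 0.4, 0.5, 0.5],
    "carbon dioxide ice": [0.8, 0.3, 0.7, 0.4],
}

def rms_difference(a, b):
    """Root-mean-square difference between two sampled spectra."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def best_match(observed):
    """Return the reference material whose spectrum is closest."""
    return min(REFERENCE_SPECTRA,
               key=lambda m: rms_difference(observed, REFERENCE_SPECTRA[m]))

observed = [0.85, 0.75, 0.35, 0.25]   # hypothetical surface spectrum
print(best_match(observed))           # closest to the water-ice fingerprint
```

Real pipelines compare hundreds of wavelength bands against spectral libraries, but the principle -- match the observed fingerprint to known ones -- is the same.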
Titan, the largest moon of Saturn, is a geologically complex body with a thick nitrogen-rich atmosphere. Far from the sun, its temperatures remain at a chilly -290 degrees F (-179 degrees C). Titan has lakes of liquid hydrocarbons at its surface and a terrain that contains mountainous features and dunes composed of ice. Deposits of water ice and hydrocarbon ice occur at its surface. This moon probably has a methane cycle that forms clouds and even methane rain! Titan is similar to Ganymede, indicating it may have an "ice-water sandwich" under its surface. Moons of Uranus and Neptune: Uranus' moons are ice-rock conglomerates made of about half-ice and half-rock; in addition to frozen water, the ice may include ammonia and dry ice. Some of the moons appear to have younger surfaces that may have gone through recent geologic activity. Miranda's surface is bizarre, with 20-km deep canyons and valleys and ridges. The surfaces of some of these moons may have been covered with flows of frozen water, perhaps from heating from gravitational tidal interactions, or from the heat of impacts. Neptune's moons are also thought to contain large amounts of ice. Triton, Neptune's largest moon, is the only large moon in our solar system that orbits in the opposite direction of its planet's rotation -- a retrograde orbit. Triton is one of the coolest objects in our solar system; it is so cold that most of Triton's nitrogen atmosphere is condensed as frost, giving its surface an icy sheen that reflects 70 percent of the sunlight that hits it. Triton is geologically active. Nitrogen has erupted from its surface in geysers, and there is volcanic activity from water ice rather than from liquid rock. It has a sparsely cratered surface with smooth volcanic plains, mounds and round pits formed by icy lava flows of frozen water.
yes
Selenology
Is the moon geologically active?
no_statement
the "moon" is not "geologically" "active".. geologic "activity" does not occur on the "moon".
https://www.pas.rochester.edu/~blackman/ast104/moon_interior.html
Interior and Geological Activity
Before the Apollo missions we knew almost nothing about the interior of the Moon. The Apollo missions left seismometers on the lunar surface that have allowed us to deduce the general features of the Lunar interior by studying the seismic waves generated by "moonquakes" and occasional meteor impacts. The Structure of the Interior Our present picture of the Moon's interior is that it has a crust about 65 km thick, a mantle about 1000 km thick, and a core that is about 500 km in radius. A limited amount of seismic data suggests that the outer core may be molten. There does appear to be some amount of differentiation, but not on the scale of that of the Earth. The Moon has no magnetic field to speak of, but magnetization of Lunar rocks suggests that it may have had a stronger field earlier in its history. Although there is a small amount of geological activity on the Moon, it is largely dead geologically (the energy associated with the Earth's seismic activity is about 10^14 times larger than that of the Moon). Most Lunar seismic activity appears to be triggered by tidal forces induced in the Moon by the Earth. Geological History of the Moon The weight of the evidence is that the Moon was active geologically in its early history, but the general evidence suggests that the Moon has been essentially dead geologically for more than 3 billion years. Based on that evidence, we believe the chronology of Lunar geology was as follows: The Moon was formed about 4.6 billion years ago; maybe hot or maybe cold. The surface was subjected continuously to an intense meteor bombardment associated with debris left over from the formation of the Solar System. By about 4.4 billion years ago the top 100 km was molten, from original heat of formation and from heat generated by the meteor bombardment. By 4.2 billion years ago the surface was solid again.
As the intense meteor bombardment associated with debris left over from the formation of the Solar System continued, most of the craters that we now see on the surface of the Moon were formed by meteor impact. The fracturing and heating of the surface and subsurface by the meteor bombardment led to a period of intense volcanic activity in the period 3.8-3.1 billion years ago. Meanwhile, the meteor bombardment had tapered off because by this time much of the debris of the early Solar System had already been captured by the planets. The lava flows associated with the volcanism filled the low areas and many craters. These flows solidified to become the flat and dark maria, which have little cratering because most of the original craters were covered by lava flows and only a few meteors of significant size have struck the surface since the period of volcanic activity. The regions that were not covered by the lava flows are the present Highlands; thus, they are heavily cratered, and formed from different rocks than the seas. The volcanism stopped about 3.1 billion years ago: the Moon has been largely dead geologically since then except for the occasional meteor impact or small moonquake, and micro-meteorite erosion of the surface. Thus, Lunar surface features, particularly in the Highlands, tend to be older than those of the Earth, which remains to this day a geologically active body.
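Using the layer dimensions quoted above (crust about 65 km thick, mantle about 1000 km thick, core about 500 km in radius), a short sketch shows how small a volume fraction the core occupies. Note these rough thicknesses sum to about 1565 km, only approximately the Moon's actual mean radius, so the result is a ballpark figure.

```python
from math import pi

# Approximate layer dimensions quoted above (km).
core_r   = 500.0              # core radius
mantle_r = core_r + 1000.0    # outer radius of the mantle
total_r  = mantle_r + 65.0    # outer radius including the crust

def sphere_volume(r):
    return 4.0 / 3.0 * pi * r ** 3

core_fraction = sphere_volume(core_r) / sphere_volume(total_r)
print(f"core: about {core_fraction:.1%} of total volume")  # only a few percent
```

A core occupying only a few percent of the Moon's volume is consistent with the limited differentiation described above.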
Most Lunar seismic activity appears to be triggered by tidal forces induced in the Moon by the Earth. Geological History of the Moon The weight of the evidence is that the Moon was active geologically in its early history, but the general evidence suggests that the Moon has been essentially dead geologically for more than 3 billion years. Based on that evidence, we believe the chronology of Lunar geology was as follows: The Moon was formed about 4.6 billion years ago; maybe hot or maybe cold. The surface was subjected continuously to an intense meteor bombardment associated with debris left over from the formation of the Solar System. By about 4.4 billion years ago the top 100 km was molten, from original heat of formation and from heat generated by the meteor bombardment. By 4.2 billion years ago the surface was solid again. As the intense meteor bombardment associated with debris left over from the formation of the Solar System continued, most of the craters that we now see on the surface of the Moon were formed by meteor impact. The fracturing and heating of the surface and subsurface by the meteor bombardment led to a period of intense volcanic activity in the period 3.8-3.1 billion years ago. Meanwhile, the meteor bombardment had tapered off because by this time much of the debris of the early Solar System had already been captured by the planets. The lava flows associated with the volcanism filled the low areas and many craters. These flows solidified to become the flat and dark maria, which have little cratering because most of the original craters were covered by lava flows and only a few meteors of significant size have struck the surface since the period of volcanic activity. The regions that were not covered by the lava flows are the present Highlands; thus, they are heavily cratered, and formed from different rocks than the seas.
no
Selenology
Is the moon geologically active?
no_statement
the "moon" is not "geologically" "active".. geologic "activity" does not occur on the "moon".
https://www2.jpl.nasa.gov/galileo/sepo/education/plansurf/all.html
Planetary Surfaces: Top
PLANETARY CURRICULUM MODULE SURFACES: FOCUS ON IMPACTS Most physical science units introduce students to the processes of volcanism, tectonism, and gradation (the effects of water, ice, wind and gravity on a surface); few, however, introduce the process of impact cratering. Impact cratering plays a large role in forming and modifying planetary and satellite surfaces in our solar system. The following activities are designed to introduce this important surface process. This module includes all the necessary background information on the topic of impact cratering as well as activities designed to introduce important concepts related to impact crater formation and subsequent modification. Several sections also include a link called Teacher Feature, a teacher designed section overview with ready-made worksheets for review, wrap-up, or concept activities. The teacher feature also has suggestions for modifying the activity for other grade levels. Scientists have studied the surfaces of the planets and satellites in our solar system for hundreds of years. Geologists first studied the surface of the Earth, and when telescopes were invented, astronomers began to look at the surfaces of the Moon and Mars, and later the other planets. Today, we have sent unmanned space probes to all the planets except Pluto, we have Earth-observing satellites, and we have even walked on the Moon and brought back samples to study in our labs. All these explorations have allowed us to compare surface features on other worlds to the terrestrial ones with which we are the most familiar. Such comparisons are crucial for understanding more about how the Earth and other planets formed, and how they may change in the future. This series of activities will describe how scientists study the surfaces of our own and other planets. First, we will discuss how to locate features on the surface of a spherical planet. 
Then, we'll talk about cratering, one of the most important surface processes in the solar system. Analysis of craters on the surfaces of planets can help scientists estimate how old the surface is, what its composition is, and what agents of change are important on that body. We'll discuss how craters are made, and what can remove them from the surface of a planet. Then, we'll look in detail at some of the planets in the solar system, and what we can tell about the history of a planet by examining its craters. Finally, we'll apply our understanding of the inner solar system to try to interpret some of the new images from the Galileo spacecraft currently orbiting Jupiter. Teacher Feature Student Objectives: Direct student attention to various planet surface features Identify student's knowledge as it relates to Earth's features Provide overview of how knowledge is collected Background Information: Lesson Format: Teacher lecture, Group work, Discussion Introduction: Have each student list at least 5 surface features in his/her state Class Activity: Here, There, & Everywhere? Type: Group discussion Materials Needed: Reference materials appropriate to age (State maps) Procedures: Here Let volunteers share state features orally. Be sure students only include geological surface features (i.e. not forests). List on board or have several ready on cards. Let tables select a feature & do a fast fact find on the discovery (i.e. who found, year, etc.) using references. Let students share info. There Extend discussion on how distant features are discovered (explorers, observations, etc.). Extend to moon, planets Everywhere Relate discussion to how we learn about processes forming the features (theory, experiment, comparisons, simulations) Evaluations: Have students summarize various surface features discussed, how info. was attained, and how we learn about other planetary features. Let students answer orally or record in log format. Other Activities, Misc. 
Information, etc.: Given state maps and blank state outline, have students identify, draw, & label surface features Possibly include addition of longitude/latitude link as intro. to next lesson Research & report on the 7 Natural Wonders of the world. Illustrate them as postcards. Research earliest observations/stories of the planets I. LATITUDE AND LONGITUDE Concepts: Geography Coordinate systems Origin Angles 2-d and 3-d geometry In order to discuss features on the surfaces of planets, we first need a way to describe where they're located. On the Earth, we use a system of latitude and longitude, lines which divide up the spherical surface of the Earth into a grid, based on angles. Latitude represents how far north or south of the equator a point is. The latitude of a point is the number of degrees in an angle made by the equator, the center of the Earth, and the point. The latitude of Tucson [or insert your city here], for example, is 32 degrees north, meaning that Tucson is 32 degrees north of the equator. The latitude of points on the equator is 0 degrees (north or south). While with latitude, the obvious place on Earth from which to measure north or south is the equator, there is no similarly obvious choice from which to measure east or west. An imaginary line passing through Greenwich, England was arbitrarily defined, and longitude represents how far east or west a point is from this line. So Tucson, at longitude 111 degrees W, is therefore 111 degrees west of Greenwich, England. The origin of a coordinate system is the starting point for measurements. The origin for the latitude-longitude coordinate system on Earth is where the equator and the line through Greenwich, England, meet, at zero degrees latitude, zero degrees longitude. Interpretation: Given a map or globe, find the origin (0 degrees,0 degrees). Given a list of cities and their latitude and longitude, find them on a globe. What is the latitude of the north pole? the south pole? 
What is the longitude of the north pole? A: It is undefined -- all lines of longitude meet at the poles. Trace a circle of constant latitude on a globe. Q: Where is such a circle the largest? A: It is largest at the equator. Q: What is the radius of a circle of constant latitude at the equator? A: It is the radius of the Earth. Trace a circle of constant longitude. Q: Where is such a circle largest? A: They are all the same size. Q: What points do all such circles go through? A: They go through the north and south poles. Such a circle on the surface of the Earth, which goes through the equator or the north and south poles, is called a great circle. Scientific context: A latitude-longitude system is fundamental for the reliable location of surface features. With an origin, any place on a planet can be located with only two numbers: how many degrees north or south of the origin, and how many degrees east or west of the origin. The hard part is choosing an origin. On other planets, the zero of longitude is chosen once the surface of the planet has been mapped in enough detail to pick one. On some planets, the origin is defined as the point on the equator known as the "sub-Earth" point, which is the point on the surface that faces the Earth at the time when the two planets are closest in their orbits. On other planets, the choice is much more arbitrary. The most important criterion is that the origin must be a point that is easy for everyone to find, whether it's the center of a crater or some other obvious permanent geologic feature. Introduction: Using ABC/123 grid, have students draw their own maps, listing 5 features by name on cross lines. You may need to give a list of formations from which to choose: mountains, plateaus, bodies of water by name, canyons, volcanoes, etc. Class Activity: Find the Feature Type: Mapping skills Materials Needed: Maps with longitude/latitude divisions appropriate for age (globe & world maps) Procedures: Using ABC/123 grid maps from intro.
activity, have students exchange and give locations for each feature, using the grid. Discuss. Relate to Earth's coordinate system Have students locate cities by their longitude & latitude Continue with interpretation questions as given using globe if possible Let students suggest ways to decide a point of origin on planets II. CRATER FORMATION, MODIFICATION, AND REMOVAL Though fairly rare on the Earth, impact craters are one of the most common, and therefore important, types of surface features in the solar system. Craters are found on almost all the solid planets of the solar system, but not on gas giants like Jupiter and Saturn since there is no solid surface to preserve a record of the impact. All such impacts are governed by a set of physical principles based on properties of the impacting body, the target body, and the speed and angle of the impact. Craters are also affected by the presence (or absence) of an atmosphere on a planetary body. A thick atmosphere can cause smaller impacting bodies to burn up before impact, thus screening the surface of craters caused by these smaller impactors. A surface which is completely covered with craters is called saturated. New craters on a saturated surface tend to cover older craters, so once a surface becomes saturated with craters, the number of craters remains approximately the same. Saturated surfaces are very old. Only geologically inactive planetary bodies can become saturated, since on an active planet such as the Earth, craters are quickly erased by agents of change such as tectonics, volcanism, and erosion. Thus a saturated surface such as the Moon's is a sign that the Moon is no longer geologically active, and regions with a lower crater density are younger than those with a higher crater density. The study of craters can provide much information about the history of planetary bodies in our solar system. 
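The idea of saturation -- new craters erase old ones, so the visible count levels off -- can be illustrated with a toy simulation (an invented sketch, not a physical model): drop impacts at random on a unit square and erase any older crater whose center lies within one crater radius of each new impact.

```python
import random

def visible_craters(n_impacts, radius=0.05, seed=1):
    """Toy cratering model: drop n_impacts craters at random points
    on a unit square; each new impact erases any older crater whose
    center lies within one radius of it. Returns the visible count."""
    rng = random.Random(seed)
    craters = []
    for _ in range(n_impacts):
        x, y = rng.random(), rng.random()
        craters = [c for c in craters
                   if (c[0] - x) ** 2 + (c[1] - y) ** 2 > radius ** 2]
        craters.append((x, y))
    return len(craters)

# The visible count levels off: ten times more impacts does not mean
# ten times more visible craters -- the surface has saturated.
for n in (100, 1000, 10000):
    print(n, visible_craters(n))
```

This is why a saturated surface records only that it is very old: once the count plateaus, additional impacts leave the total roughly unchanged.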
This section will discuss in greater detail how craters are made, how they are removed, and what can be learned from images of craters on the Earth and on other planets. IIA. HOW ARE CRATERS MADE? Impact craters are made when an object or bolide impacts the surface of a planet or satellite. A bolide is any falling body such as a comet or meteorite. Many hands on impact simulation activities are available, and all involve dropping a series of bolides, with different masses, onto a simulated planetary surface. The planetary surface can be dry, for example made out of flour with a dusting of cocoa powder, or wet, such as a muddy composite of dirt or sand. The composition of a planetary body and various factors relating to an impact have a strong effect on the appearance of craters on the surface of a planet. Even more important, however, are the many different agents of change which serve to either remove craters from the surface of a planet, or preserve them, over geologic time. Scientific context: The most important factor in predicting how the surface of a planet will look is the degree of geologic activity, or how effective various agents of change are. The appearance of craters on the surfaces of the terrestrial planets is an indication of how geologically active the planets are. Since bolides impact all bodies in the solar system, a lack of craters must be explained by past or current geologic activity. There are many possible agents of change which could be responsible. Craters can be removed by having other craters form over them, as on a saturated surface. They can have lava flows bury them, or tectonic activity fracture them. They can be filled in by dust, blown away by wind, or washed away by water. They can even be obscured by vegetation. Using this indicator, we can rank the planets from old, inactive bodies to young, geologically active worlds. Oldest on this scale are bodies like Mercury and the Moon. 
These are relatively small objects, with old, heavily cratered surfaces and little evidence of subsequent activity which could have covered or partially obscured their craters. Some of the large craters on the Moon, however, are filled in with lava flows, evidence that the Moon was once active. Mercury and the Moon are sometimes called "dead" bodies, because there is no evidence of current geologic activity. The in-between cases are Mars and Venus. Mars has some craters on its surface, but also has other features like volcanoes and giant rift valleys, evidence that the planet was once much more active than it is today. Some craters on the surface seem to have been filled in with dust or eroded away, evidence that while Mars' thin atmosphere is not very efficient, it does affect surface features. There are relatively few craters on the surface of Venus, and most seem to be preserved in pristine condition. There is also evidence of volcanism and tectonic activity. Scientists have interpreted surface images as indicating that Venus underwent a period of great geologic activity about 500 million years ago, which removed all older craters. Since then, however, the planet has been relatively inactive, meaning that any craters that have accumulated in the last 500 million years have been preserved in relatively pristine form. The other extreme is the Earth, which is a large, very active planet with very few craters preserved on its surface (Meteor crater, in Northern AZ, is one of the best-preserved.). Tectonics and volcanism are important processes on Earth, but even more important are erosional processes caused by wind and water. Earth is the only body with liquid surface water, which quickly washes away most craters. It is no accident that one of the most well-preserved craters on Earth is in a desert! Much can be determined about the state of geologic activity on a planet merely by examining craters and other features on its surface. 
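The ranking described above amounts to sorting bodies by the crater density of their surfaces, from most cratered (oldest, least active) to least cratered (youngest, most active). A sketch, using made-up illustrative densities rather than measured values:

```python
# Illustrative sketch: rank bodies from oldest (most cratered, least
# active) surface to youngest (least cratered, most active) surface.
# The density numbers are placeholders, not measured values.
crater_density = {   # visible craters per unit area (made up)
    "Moon":    0.95,
    "Mercury": 0.90,
    "Mars":    0.40,
    "Venus":   0.10,
    "Earth":   0.01,
}

oldest_first = sorted(crater_density, key=crater_density.get, reverse=True)
print(" > ".join(oldest_first))   # most cratered / oldest surface first
```

The same comparison, applied to spacecraft images instead of invented numbers, is how relative surface ages are estimated for newly explored worlds.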
A heavily cratered surface (Mercury, Moon) indicates that the planet is not currently active, and has not been active for (perhaps) billions of years. A non-saturated surface is an indicator of past (Mars, Venus) or current (Earth) geologic or atmospheric activity of some sort. Whatever the crater removal process, it is important to understand that if the surface of a planet is not covered with craters, there must be a reason! Other Activities, Misc. Information, etc.: - have students draw a cratered surface or various surfaces based on atmospheric densities. Switch and let observer determine atmospheric condition. - teacher can prepare drawings noted above for younger children to match with atmospheres CRATER ACROSTIC Directions: Enter the letters of the appropriate word in the spaces provided in the clue section. When all the clue words are done, enter the indicated letter from each clue in the numbered space in the mystery word. This mystery word may determine whether small craters will exist on a planet or moon. 
Answer Key -- Moon 1: presence of an atmosphere may limit the number of small craters; probably has a moderate number of craters (not saturated, because of geologic activity, volcanoes, and wind erosion); some craters may be filled in by lava as well as by sand/dust carried by wind; crater edges may be weathered by wind; ice can be interpreted as part of the composition of the surface -- lower strength means shallower craters, or may indicate seasonal surface water (erosion). Moon 2: thick atmosphere probably limited the number of craters, especially small ones (not saturated due to atmosphere); craters should be fairly pristine due to lack of geologic activity and weathering agents (no wind, water, or ice). Moon 3: moderate atmosphere probably limited the number of small craters (not saturated due to atmosphere and agents of change present); extensive geologic activity in the form of volcanoes and erosion (wind and lots of water) fills, erodes, and maybe erases craters. Bonus Question: some ideas may include proximity to the asteroid belt (lots of impactors), life-form modification, mass of the moon (extremes of gravity, landslides). Evaluations: Collect and check or use as a discussion activity. Other Activities, Misc. Information, etc.: - students may be allowed to use the charts constructed earlier - students may work in groups, then present at the end of the period in a "science symposium" - pictures may be drawn to illustrate conclusions - have students defend others' answers based on knowledge from previous lessons LUNAR LINE UP In the year 2056, one of NASA's deep space missions returned the following information about three satellites orbiting a gas giant planet in a distant solar system. From the information provided, predict how impact craters on the surface would look. List the overall appearance of the surface (few, some, or many craters, or saturated) and the appearance of individual craters (how modified are they?). Justify your predictions (use 'because of' or 'from' statements).
             Data Set 1        Data Set 2         Data Set 3
composition  rocky             rocky              rocky
atmosphere   thin, with wind   thick, no wind     moderate, some wind
water        some ice          no water/no ice    lots of water
volcanoes    some              no                 lots

Open-Ended Bonus Question: What other conditions might affect cratering on these bodies? IIC. FOCUS ON THE GALILEAN SATELLITES Concepts: Geologic interpretation of images Observing and recording data Making comparisons Drawing conclusions The four largest satellites of Jupiter are referred to as the Galilean satellites, since they were first discovered by Galileo in 1610. These satellites are too small and too far from the Earth to study their surfaces in detail. The surfaces of Io, Europa, Ganymede and Callisto were first imaged at high resolution by the Voyager spacecraft in 1979 and 1980, and are being observed in greater detail by the Galileo spacecraft which will be in orbit around Jupiter until December of 1997. Using the techniques discussed in the previous section, we can examine geologic processes important on these bodies, and determine the relative ages of their surfaces. This technique of comparing newly-explored worlds to those with which we are more familiar is a common one in science. Activity: Look at Galileo images of Io, Europa, Ganymede and Callisto. Compare them to the terrestrial planets you looked at in the previous activity. Using the appearance of craters on the surfaces, and any other features you can identify, try to rank the Galilean satellites in terms of geologic activity. Did you see any examples of weathering or erosion of craters? Why or why not? No. Weathering and erosion require wind and water, but the Galilean satellites have no atmospheres and no liquid water on the surface. What are the main agents of change you see modifying craters on the Galilean satellites? Volcanism: Io has sulfur volcanism, and Europa has possible water outflow. Tectonism: Europa and Ganymede have cracks and grooves etc. which imply tectonism.
Are there other differences between the craters in the Jovian system, and those in the inner solar system? Ganymede and Callisto have palimpsests (relaxed craters). These satellites are made up of a large fraction of ice. The viscosity of this ice is such that (over long periods of time) the ice flows and fills in craters instead of preserving the hole. (Relate this to the cratering exercise in section IIA with respect to the composition of the target body.) Background Information: The four largest satellites of Jupiter are referred to as the Galilean satellites, since they were first discovered by Galileo in 1610. These satellites are too small and too far from the Earth to study their surfaces in detail. The surfaces of Io, Europa, Ganymede and Callisto were first imaged at high resolution by the Voyager spacecraft in 1979 and 1980, and are being observed in greater detail by the Galileo spacecraft which will be in orbit around Jupiter until December of 1999. Using the techniques discussed in previous sections, we can examine geologic processes important on these bodies, and determine the relative ages of their surfaces. This technique of comparing newly-explored worlds to those with which we are more familiar is a common one in science. A. Have groups look at Galileo images of Io, Europa, Ganymede, and Callisto. B. Compare them to the terrestrial planets looked at previously. C. Using the appearance of craters on the surfaces, identify any geologic activity. D. Have a recorder keep group notes (see worksheet). E. Have groups report their findings to class with supportive statements. Discussion Points: Observations and answers: Callisto is heavily cratered and looks like our Moon and Mercury, it appears inactive. Ganymede has heavily cratered regions, regions with fewer craters which also have cracks and grooves (implying tectonic activity), and regions which appear to have experienced resurfacing (like our Moon). 
This means some areas were more recently active than others. Europa has few craters, a smooth surface and cracks (implying tectonics). All indicate that it was recently active. Io has very few craters and clear evidence of surface activity (volcanism); it is active today. 1. No. Weathering and erosion require wind and water, but the Galilean satellites have no atmospheres and no liquid water on the surface. 2. The main agents of change are volcanism (Io has sulfur volcanism, and Europa has possible water outflows) and tectonism (Europa and Ganymede have cracks and grooves). 3. Ganymede and Callisto have large relaxed craters (palimpsests). These satellites are made up of a large fraction of ice. The viscosity of this ice is such that (over long periods of time) the ice flows and fills in craters instead of preserving the hole. (Relate this to the cratering exercise in section IIA with respect to the composition of the target body.) Evaluations: Students are observed taking part and can contribute when called on Other Activities, Misc. Information, etc.: Create drawings of each satellite Include drawings in a "Jupiter's Family" scrapbook for younger children Write a myth about why each satellite is so different GALILEAN SATELLITES - DO YOU SEE WHAT I SEE? Group Recorder Worksheet Directions: Select a recorder for your group. Using the pictures make observations of each satellite. Then answer the questions. Observations: Callisto Ganymede Europa Io Questions: 1. Did you see examples of weathering or erosion of craters? Why or why not? 2. What are the main agents of change you see modifying craters on Galilean satellites? 3. Are there other differences between the craters in the Jovian system, and those in the inner solar system? (HINT - think about temperature) EXTRA CREDIT: Find out about PALIMPSESTS III. CRATER IMAGE INTERPRETATION In the previous section, we examined images of the Galilean satellites taken by the Galileo spacecraft.
Such images will be returned until the end of 1999, and scientists will be busy for years to come analyzing them. These data can be compared to what we already know about the Earth and the terrestrial planets, and we can use them to learn about the physical properties and geologic history of the satellites of Jupiter. We have previously examined the agents of change present on the Galilean satellites. However, there is much more we can tell about these worlds merely by carefully inspecting images of them. Initial Activity: Make a list of all the things you think we can measure or determine by looking at images of craters on the surfaces of the Galilean satellites (or any planets), and what such measurements indicate. Teacher Feature Student Objectives: Students will be able to relate shape and size of crater to size and speed of impactor, and surface composition Background Information: In previous sections, we examined images of the Galilean satellites taken by the Galileo spacecraft. Such images will be returned until the end of 1999, and scientists will be busy for years to come analyzing them. These data can be compared to what we already know about the Earth and the terrestrial planets, and we can use them to learn about the physical properties and geologic history of the satellites of Jupiter. We have previously examined the agents of change present on the Galilean satellites. However, there is much more we can tell about these worlds merely by carefully inspecting images of them. There are numerous things we can measure or determine by looking at images of craters on the surfaces of the Galilean satellites (or any planet). Crater depth and diameter provide information about the strength of surface materials, the impactor size and speed; crater shape relates to the structure of the surface material and its composition. The size distribution of craters within an image allows for estimations of the age of the surface.
Procedures: Have students complete worksheet (either individually or in groups). Answers are then shared and defended. Evaluations: All students complete worksheet and can defend their answers Other Activities, Misc. Information, etc.: For younger students, move words to bottom and have students cut and paste Create an 8-frame comic of de-Terminator showing the results of each type of determiner. de"TERMINATOR" - CRATERS ARE BACK Directions: Fill each crater characteristic with items that would help determine it. Choose from the determiner list. You may use an answer more than once. An example has been done for you. Be prepared to de-Fend your selections. IIIA. CRATER SIZE Note: This activity is written for two levels. Level one is appropriate for pre-algebra students, and level two involves more sophisticated algebra skills including the manipulation of an equation with two variables. Crater size is related to the mass and velocity of the impacting body. Mass and velocity can be combined to find the kinetic energy of an impactor. Increasing either the mass or the velocity of the impactor increases the kinetic energy of the impact. Review the results of your crater experiments in section IIA. The size of the crater increased with the mass of the bolide, and also with the height from which it was dropped (which determines the speed of impact). This fundamental physical relationship allows an estimate of impactor mass to be made from crater diameter. Activity: Reexamine your results from activity IIA. Graph the mass of the bolide against the diameter of the resulting crater (for bodies dropped from the same height). What relationship do you get? As the mass of the bolide increases, the diameter of the crater increases. What are you assuming is constant, by graphing mass vs. crater diameter for bodies dropped from the same height? Height or velocity is being held approximately constant. Graph the height from which the bolide is dropped vs.
the crater diameter for objects of the same mass. What can you observe from your graph? As the impact velocity increases, the crater diameter increases. What conclusion can you draw from this? Crater size is proportional to the mass of the impactor, and the velocity of impact. Given that the kinetic energy of the impact is related to mass and velocity, we can see that the experimental results support the theory that crater diameter is related to the kinetic energy of the impact. Crater size is related to the size and velocity of the impacting body. These two quantities can be combined to find the kinetic energy of an impactor, defined as K = 1/2 m v^2, where K is the kinetic energy, m is the mass of the impacting body, and v is the velocity of the impactor. Review the results of your crater experiments in section IIA. The size of the crater increased with the mass of the bolide, and also with the height from which it was dropped (which determines the speed of impact). This fundamental physical relationship allows an estimate of bolide mass to be made from crater diameter. Activity: Reexamine your results from activity IIA. Graph the mass of the bolide, m, against the cube of the diameter, D^3, of the resulting crater (for bodies dropped from the same height). Describe and explain the relationship. The results should follow an approximately straight line, due to the relationship below. The total amount of energy, K, required to form a crater is proportional to the volume, V, of material excavated in the impact. Since a crater is basically a hemisphere (half a sphere), its volume, V, is proportional to the cube of the diameter, D, of the crater. (V = (2/3) π (D/2)^3) So the energy, K, needed is proportional to the diameter cubed, D^3. The energy of an impact is the kinetic energy, as defined above: K = 1/2 m v^2.
Since the energy, K, is proportional to D^3, we can predict that D^3 is proportional to 1/2 m v^2. Measuring the diameters of craters allows us to estimate the size of the impacting bolide. The diameter depends on both the mass of the bolide and its impact velocity. We can measure the diameter of the crater, but unless we know either the mass or the velocity of the bolide, we can't solve for the other. By assuming a constant impact velocity, however, we can predict relative bolide masses for different crater diameters. Assuming a constant impact velocity, we know that the mass of an impactor is proportional to the diameter of the crater cubed. (m is proportional to D^3) In order to produce a crater twice as large as another, how much larger must a bolide's mass be? What about a crater 10 times larger? 100 times larger? A: 8 times; 1000 times; 1000000 times. Look at some images of planetary surfaces. (Example images 1, 2) Measure the crater diameters and estimate how much more massive the bolide which formed the largest crater in the picture was than the bolides which formed the smaller craters. IIIB. CRATER DEPTH Crater depths are indicative of both the strength of the surface material, and the impactor size and speed. A variety of crater depth exercises are available, such as "Long Distance Detective" in Craters! The depth of a crater can be determined from the length of the shadow cast by the crater rim and the angle of the incoming light source. If the angle of incoming light and image scale are provided along with each image, students can measure shadow lengths and calculate crater depths from the relations below. As shown in the diagram above, using the geometry of triangles, if we know Ø, the angle of incoming light, and can measure L, the length of the shadow, we can calculate d, the crater depth.
The tangent function relates d, L, and Ø as follows: tan Ø = d / L Given L and Ø, multiply both sides of the equation above by L to get: L * tan Ø = d Activity: Given a Galileo image, its scale (either in km/pixel, or the size of a feature), and the angle of incoming light, determine the depths of the craters in the image. Here is one possible image. For each crater, measure: its diameter (in pixels or centimeters, then convert to km) the length of shadow it casts Calculate the depth of the crater, given the angle of incoming light, Ø, and the shadow length, L, using L * tan Ø = d Graph crater depth vs. diameter. Interpretation: Are there any relationships between crater diameter and depth? Crater diameter is related to the mass and speed of the impactor, as discussed in section IIIA. What other factors might influence crater depth? later tectonic or volcanic activity (crater could be filled with lava) relaxation of the crater (surface composition) Scientific context: Scientists often have to do considerable amounts of detective work when analyzing images taken of other worlds. The exercises above demonstrate how scientists start from a single picture and extract valuable information not only about the appearance of the surface, but also the approximate sizes of impacting bodies and the depths of craters on the surface. Crater depths provide clues to surface composition. A crater formed in a firm material such as rock can last much longer than a crater formed in a softer material, such as ice. This distinction is especially important in the outer solar system, when examining craters on such bodies as Europa, Ganymede, and Callisto. These bodies are part-rock, part-ice. While ice behaves almost like rock at the very cold temperatures near Jupiter, its properties are still different enough to let large craters flow slowly over time, eventually resulting in large flat circular areas with almost no topography at all, called palimpsests.
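The two relations used in sections IIIA and IIIB — impactor mass scaling as the cube of crater diameter, and crater depth from shadow length via d = L * tan Ø — can be checked with a short calculation. Here is a minimal Python sketch; the function names and sample values are illustrative, not part of the activity:

```python
import math

# Relative impactor mass from crater diameter, assuming a constant
# impact velocity so that mass scales as diameter cubed (m ~ D^3).
def relative_mass(diameter_ratio):
    return diameter_ratio ** 3

# Crater depth from the rim shadow: d = L * tan(angle), where the
# angle of incoming light is measured up from the horizontal.
def crater_depth(shadow_length_km, light_angle_deg):
    return shadow_length_km * math.tan(math.radians(light_angle_deg))

# A crater twice as large needs an 8x more massive bolide;
# 10x larger needs 1000x; 100x larger needs 1,000,000x.
print(relative_mass(2), relative_mass(10), relative_mass(100))

# With light coming in at 45 degrees, depth equals shadow length.
print(round(crater_depth(2.0, 45.0), 6))
```

In a real image, the shadow length would first be converted from pixels to kilometers using the image scale (km/pixel) before applying the formula.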
(Example image) Crater depths are also important in understanding what events might have modified the crater since its formation. For example, a broad shallow crater on a rocky planet could have been filled in with lava at some point after its formation, either immediately afterwards if the impact was energetic enough to melt the surrounding material, or long afterwards if the planet underwent a period of volcanic activity. If part of a crater floor is higher than another part, it's possible that some sort of fault or other tectonic activity took place nearby, thus disrupting the crater. The simple technique of shadow measurement discussed above also has other applications. On Earth, it can even be used to measure the height of far-off mountains or trees! Teacher Feature Student Objectives: Using images the students will measure diameter and shadow length, convert to kilometers, calculate crater depth and graph results Background Information: Crater depths are indicative of both the strength of the surface material, and the impactor size and speed. A variety of crater depth exercises are available, such as "Long Distance Detective" in Craters! The depth of a crater can be determined from the length of the shadow cast by the crater rim and the angle of the incoming light source. If the angle of incoming light and image scale are provided along with each image, students can measure shadow lengths and calculate crater depths from the relations below. As shown in the diagram above, using the geometry of triangles, if we know Ø, the angle of the incoming light, and can measure L, the length of the shadow, we can calculate d, the crater depth. The tangent function relates d, L, and Ø as follows: tan Ø = d / L (or L * tan Ø = d ). Crater depths provide clues to surface composition. A crater formed in a firm material such as rock can last much longer than a crater formed in a softer material, such as ice.
This distinction is especially important in the outer solar system, when examining craters on such bodies as Europa, Ganymede, and Callisto. These bodies are part-rock, part-ice. While ice behaves almost like rock at the very cold temperatures near Jupiter, its properties are still different enough to let large craters flow slowly over time, eventually resulting in large flat circular areas with no topography at all, called palimpsests [have students brainstorm other modifications here] Crater depths are also important in understanding what events might have modified the crater since its formation. For example, a broad shallow crater on a rocky planet could have been filled in with lava at some point after its formation, either immediately afterwards if the impact was energetic enough to melt the surrounding material, or long afterwards if the planet underwent a period of volcanic activity. If part of a crater floor is higher than another part, it's possible that some sort of fault or other tectonic activity took place nearby, thus disrupting the crater. The simple technique of shadow measurement discussed above also has other applications. On Earth, it can even be used to measure the height of far-off mountains or trees. Lesson Format: lecture, individual work, class discussion Introduction: Have students go outside and measure heights and shadow lengths for group. Complete the sheet and answer questions, then apply formula to crater measurements. Procedures: Work introductory part of activity, then present background information in form of lecture. Work through formula with class, possibly even using data collected to demonstrate formula and calculation. Hand out crater images and continue with activity. Demonstrate conversion to kilometers (scale of photos) as necessary. Other Activities, Misc. Information, etc.: If available, play the intro. to the classic radio show "the Shadow". With younger children, just use the intro. 
activity (provide a structured data table for collecting and recording measurements) and relate to other items [flagpole, building, teacher]. Then discuss the craters. THE SHADOW KNOWS Worksheet Introductory activity Directions: A. Go outside and measure the heights of everyone in the group, record data in the form of a chart along with each student's name. B. Measure the length of each person's shadow, record in the chart with height information. C. Record time of day. Questions: 1. Explain why time of day would be important in shadow length. 2. At the same time of day, would all shadows be in proportion to the height of the objects being measured? Explain. 3. As the angle of the sun changes throughout the day, would the shadows change in proportion? 4. What are the two determining factors in shadow length? The Shadow Knows Activity Directions: A. For each crater, measure the shadow length (L) and record in the space provided. B. Assuming the angle of incoming light, Ø, is 34 degrees, use your measurements and the formula L * tan Ø = d to calculate the depth, d, of all the craters. Show your work. Crater 1 Crater 3 Crater 2 Crater 4 IIIC. CRATER SIZE DISTRIBUTION AND SURFACE AGES Concepts Measurement Data recording and organization Bar graphs Data interpretation Drawing conclusions The size distribution of craters on a planetary body can be used to estimate relative, and even absolute, surface ages. As discussed in part II, in the absence of agents of change such as erosion, tectonics, and volcanism, a planetary surface tends to become saturated with craters. Since even currently inactive bodies like the Moon were active at some point in their past, surface processes have removed craters at some point in every planet's history. Thus regions on a body with higher crater densities tend to be older than regions with lower crater densities.
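The relative-dating rule just stated — higher crater density means an older surface — can be sketched numerically. In the Python sketch below, the region names, crater counts, and areas are invented for illustration only:

```python
# Rank surfaces from oldest to youngest by crater density, assuming
# bolides strike all regions at approximately the same rate.
# All numbers below are made up for illustration.
regions = {
    "heavily cratered highlands": (420, 1.0e6),  # (crater count, area in km^2)
    "resurfaced plains":          (60,  1.0e6),
    "active volcanic surface":    (0,   2.0e6),
}

def crater_density(craters, area_km2):
    return craters / area_km2

oldest_first = sorted(
    regions,
    key=lambda name: crater_density(*regions[name]),
    reverse=True,
)
# Highest density first, i.e. oldest surface first.
print(oldest_first)
```

A surface with zero observable craters, like the "active volcanic surface" here, sorts as the youngest, mirroring Io in the discussion below.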
This activity will investigate crater densities and size distributions, and interpret those in terms of relative and absolute surface ages. Decide on appropriate size intervals and plot your results in a bar graph. Interpret results: Small craters are much more common than large ones. Repeat exercise for different regions on each satellite. Compare results. Some regions are older (have more craters) than others. Compare crater density (craters per area) on different satellites. Callisto has the highest crater density, Ganymede the next. Europa has very few craters, and Io has almost no recognizable craters. Estimate the relative ages of individual craters and other features in the images. New craters cover older ones. Fresh sharp craters are younger than old, smooth, relaxed ones. Scientific context: The first part of the activity above revealed that small craters are much more common than large ones. This is because small bolides are more common than large ones, and, as seen in activity IIIA, crater diameter is proportional to bolide size. Smaller bolides would be more common than larger ones if most craters are caused by impacts of asteroid fragments. The asteroid belt is a region of small planetesimals, most of which orbit the sun between the orbits of Mars and Jupiter. Asteroids come in sizes from very small to very large, but the size distribution is not random. Rather, it is governed by the fact that asteroids often collide with each other. In such a collision, both asteroids break into smaller pieces. The size of these pieces follows a predictable distribution (which can be simulated in a laboratory) made up of many small fragments and a few larger ones. Over geologic time, therefore, asteroids continue to collide with each other and produce many small fragments. These bodies may eventually collide with the planets, producing a crater size distribution dominated by smaller craters.
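The size-interval counts behind the bar graph in this activity can be tabulated with a few lines of code. In this hedged Python sketch, the crater diameters are made-up sample data, not real measurements:

```python
from collections import Counter

# Sample crater diameters in km (invented data for illustration).
diameters = [0.5, 0.8, 1.2, 1.9, 2.5, 3.1, 0.7, 1.1, 0.9, 6.0, 1.4]

def size_bin(d, width=1.0):
    """Label the size interval a diameter falls into, e.g. '1-2 km'."""
    lo = int(d // width) * width
    return f"{lo:g}-{lo + width:g} km"

counts = Counter(size_bin(d) for d in diameters)

# Print a text bar graph: small craters outnumber large ones.
for interval in sorted(counts, key=lambda s: float(s.split("-")[0])):
    print(f"{interval:>8} {'#' * counts[interval]}")
```

Even this toy data set shows the expected pattern: the smallest size intervals hold the most craters, with only a few large outliers.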
The activity above also involved measurements of crater densities, which can be used for relative dating of planetary surfaces. Assuming that bolides strike all regions of a planet at approximately the same rate, all areas of the surface should have the same crater density unless agents of change have removed some of them during the planet's geologic history. A common way for large numbers of craters to be removed is through volcanism. Early in the Moon's history, for example, large lava flows flooded portions of the surface, completely burying any craters which were there at the time. These areas were essentially wiped clean of craters about 3.5 billion years ago and thus their crater density dates back to those lava flows. The lunar highlands, in contrast, were not flooded by lava flows. Their crater density dates back to when they were formed, about 4.1 billion years ago, and thus is much higher. Io, in contrast to the ancient Moon, is one of the most geologically active bodies in the solar system today. Its surface has no recognizable impact craters, and is continually being resurfaced by volcanic eruptions which cover any craters which might form. Regions of a surface with a higher crater density are older than regions with a lower crater density, and a surface like Io's, with no observable craters, is extremely young and implies current geologic activity. Crater size distributions can also be used to estimate the absolute age of a surface. The cratering rate decreased with time in the early solar system, beginning when the planets finished forming about 4.5 billion years ago and there was much leftover interplanetary debris to cause impacts. The amount of debris decreased over time as collisions and impacts swept it up, thus decreasing the frequency of impacts.
By measuring the crater density on different areas of the Moon, and measuring the actual ages of rocks returned from different regions by Apollo astronauts, scientists can calibrate cratering rate to actual surface ages. This relationship of cratering rate vs. time can then be extrapolated to other places in the solar system, though it must be adjusted for distance from the asteroid belt (Mars, located nearer to the asteroid belt than the Earth, may have a rate of crater formation roughly twice that at Earth) and proximity to large planets such as Jupiter, whose gravitational field attracts impactors. Materials Needed: 'answers' preprinted (see list below), mounted on colored paper and placed around the room in plain sight; copy of the worksheet for each student Procedures: A. Have students read through worksheet with blanks. B. Explain that the words (answers) that fit in the blanks are in the room. C. Allow students to walk around the room to gather information (words) - tell the students not to move the words or point them out to others. There is no talking. D. After sufficient time looking, students should be able to find and place most words in the proper places in the worksheet. Answer Words (make sure to eliminate corresponding number when placing around room) SATELLITE SIGHT Worksheet In order to gather information, satellites orbit various objects. Using this sheet, circulate around the room and observe the various answers for these questions. You may not move the answers or point them out to others. Talking is not permitted. Once you have your answers, complete this sheet. Look for context clues to help you. 1. _____________ craters are much more common than 2. ______________ ones. This is because 3. _______________ bolides are more common than 4. ________________ ones, and crater diameter is proportional to bolide size. Smaller bolides would be more common than larger ones if most craters were caused by impacts of 5. ________________ fragments. The 6.
________________ is a region of small planetesimals, most of which orbit the sun between the orbits of 7. __________________ and 8. ________________. Asteroids come in sizes from very small to very large, but the size distribution is not random. Rather, it is governed by the fact that asteroids often 9. _________________ with each other. In such a 10. __________________, both asteroids break into smaller pieces. The size of the pieces follows a 11. __________________________________ (which can be simulated in a laboratory) made up of many small fragments and a few larger ones. Over 12. _________________ time, therefore, asteroids continue to collide and produce many small fragments. These bodies may continue to collide with the 13. ____________________, producing a crater size distribution dominated by smaller craters. Measurements of crater densities can be used for 14. ____________________ of planetary surfaces. Assuming that bolides strike all regions of a planet at approximately the same rate, all areas of the surface should have the same crater density unless 15. _______________________ have removed them during the planet's geologic history. A common way for large numbers of craters to be removed is through 16. ____________. 17. _________________, in contrast to the ancient Moon, is one of the most geologically active bodies in the solar system today. Its surface has no recognizable impact craters, and is continually being 18. __________________ by volcanic eruptions which cover any craters which might form. Regions of a surface with a higher crater density are 19. ______________ than regions with a lower crater density, and a surface like Io's, with no observable craters, is extremely young and implies current geologic activity. Crater size distributions can also be used to estimate the absolute age of a surface. The cratering rate decreased with time in the early solar system, beginning when the 20. ________________ finished forming about 21.
____________________ ago and there was much left over interplanetary 22. ________________ to cause impacts. The amount of debris decreased over time as collisions and impacts swept it up, thus decreasing the 23. ___________________ of impacts. By measuring the crater density of different areas of the Moon, and measuring the actual ages of rocks returned from different regions by 24. ______________________________, scientists can calibrate cratering rate to actual 25. _____________________ ages. Conclusion Images of the surfaces of the Galilean satellites, or of any planets in the solar system, reveal a general history of the planet, and the state of the planet's interior through time. For example, volcanoes require a hot interior. If we know from looking at images of a planetary surface that the last period of volcanism was hundreds of millions of years ago, we know that the interior of the planet must have been hot until that time. This type of information helps to constrain models of the interior structure and evolution of the satellites, as well as provide information about their formation. This set of activities has shown the wealth of information obtainable from the simplest black and white images of a planetary surface. Much of the first wave of reconnaissance of the solar system was done in just this way, with scientists working to understand the little information they had from the early planetary spacecraft. The Galileo spacecraft, which will remain in orbit in the Jovian system until late 1997, not only has a camera capable of taking black and white images of the surfaces of the satellites, but also has a wealth of other instruments to augment this information. The camera has filters in six different colors, allowing color images to be taken and analyzed. This can yield valuable information about the chemical composition of surface materials. 
Other instruments on Galileo allow it to measure properties of Jupiter and its satellites at a variety of near infrared wavelengths, investigate the radiation and magnetic environments, and obtain more precise measurements of the sizes and densities of the satellites. Galileo's two-year tour through the Jovian system should provide information for scientists to study for years to come. Sources: Some information in this module was adapted from Craters: A Multi-Science Approach to Cratering and Impacts, by W.K. Hartmann and J. Cain. A Joint Project of the National Science Teachers Association, The Planetary Society, and NASA. Published by the National Science Teachers Association, 1995. Procedures: Have students work in groups to: A. Share own 5 facts from lessons B. Tally or record all facts -- As groups decide on a way to demonstrate the various collections (graphs, prioritizations, continuums, info webs, etc). C. Have groups display info on large paper, including any questions they would like to have answered. Evaluations: Visual check of charts/papers. All students should be able to explain all points on the collection paper Other Activities, Misc. Information, etc.: Materials can be notebooked so each student has a collection of unit work. This module was written by Cynthia Phillips, Dept. of Planetary Sciences, University of Arizona, Tucson AZ, and funded in part by the NASA Spacegrant program. The SSI Education and Public Outreach webpages were originally created and managed by Matthew Fishburn and Elizabeth Alvarez with significant assistance from Kelly Bender, Ross Beyer, Detrick Branston, Stephanie Lyons, Eileen Ryan, and Nalin Samarasinha.
Oldest on this scale are bodies like Mercury and the Moon. These are relatively small objects, with old, heavily cratered surfaces and little evidence of subsequent activity which could have covered or partially obscured their craters. Some of the large craters on the Moon, however, are filled in with lava flows, evidence that the Moon was once active. Mercury and the Moon are sometimes called "dead" bodies, because there is no evidence of current geologic activity. The in-between cases are Mars and Venus. Mars has some craters on its surface, but also has other features like volcanoes and giant rift valleys, evidence that the planet was once much more active than it is today. Some craters on the surface seem to have been filled in with dust or eroded away, evidence that while Mars' thin atmosphere is not very efficient, it does affect surface features. There are relatively few craters on the surface of Venus, and most seem to be preserved in pristine condition. There is also evidence of volcanism and tectonic activity. Scientists have interpreted surface images as indicating that Venus underwent a period of great geologic activity about 500 million years ago, which removed all older craters. Since then, however, the planet has been relatively inactive, meaning that any craters that have accumulated in the last 500 million years have been preserved in relatively pristine form. The other extreme is the Earth, which is a large, very active planet with very few craters preserved on its surface (Meteor Crater, in northern Arizona, is one of the best-preserved). Tectonics and volcanism are important processes on Earth, but even more important are erosional processes caused by wind and water. Earth is the only body with liquid surface water, which quickly washes away most craters. It is no accident that one of the most well-preserved craters on Earth is in a desert!
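The crater-counting logic above can be reduced to a toy calculation. Under a deliberately oversimplified assumption of a constant cratering rate, relative ages scale linearly with crater density; real calibrations, as the module notes, must account for the early spike in impacts and are anchored to radiometric ages of returned lunar samples. The calibration numbers below are hypothetical, not measured values:

```python
# Toy relative-age estimate from crater counts.
# Assumes a CONSTANT cratering rate, which (as the text notes) is wrong
# for the early solar system -- real work calibrates against the
# radiometric ages of rocks returned by lunar sample missions.
# Both calibration values below are illustrative only.

CAL_DENSITY = 100.0   # craters per 1e6 km^2 on a calibration surface
CAL_AGE_GYR = 3.5     # assumed radiometric age of that surface (Gyr)

def relative_age(density):
    """Age implied by crater density, under the constant-rate assumption."""
    return CAL_AGE_GYR * density / CAL_DENSITY

print(relative_age(50.0))  # a surface half as cratered -> 1.75 Gyr
```

In practice the density-age relation is strongly nonlinear for surfaces older than about 3.5 billion years, which is exactly why calibration against dated samples matters.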
no
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
yes_statement
the "trash" "island" in the pacific ocean is as "large" as texas.. the size of the "trash" "island" in the pacific ocean is equivalent to that of texas.
https://theoceancleanup.com/great-pacific-garbage-patch/
The Great Pacific Garbage Patch • The Ocean Cleanup
The Great Pacific Garbage Patch The Great Pacific Garbage Patch is the largest accumulation of ocean plastic in the world and is located between Hawaii and California. Scientists of The Ocean Cleanup have conducted the most extensive analysis ever of this area. What is the great pacific garbage patch? The Great Pacific Garbage Patch (GPGP) is the largest of the five offshore plastic accumulation zones in the world’s oceans. It is located halfway between Hawaii and California. PLASTIC ACCUMULATION It is estimated that 1.15 to 2.41 million tonnes of plastic are entering the ocean each year from rivers. More than half of this plastic is less dense than the water, meaning that it will not sink once it encounters the sea. 1.15 to 2.41 million metric tonnes of plastic are entering the ocean each year. The stronger, more buoyant plastics show resiliency in the marine environment, allowing them to be transported over extended distances. They persist at the sea surface as they make their way offshore, transported by converging currents and finally accumulating in the patch. Once these plastics enter the gyre, they are unlikely to leave the area until they degrade into smaller microplastics under the effects of sun, waves and marine life. As more and more plastics are discarded into the environment, microplastic concentration in the Great Pacific Garbage Patch will only continue to increase. ESTIMATION OF SIZE The GPGP covers an estimated surface area of 1.6 million square kilometers, an area twice the size of Texas or three times the size of France. The Great Pacific Garbage Patch covers an estimated surface of 1.6 million square kilometers To formulate this number, the team of scientists behind this research conducted the most elaborate sampling method ever coordinated. This consisted of a fleet of 30 boats, 652 surface nets and two flights over the patch to gather aerial imagery of the debris. 
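The area comparisons can be checked with one line of arithmetic. The reference areas below (roughly 696,000 km² for Texas and 552,000 km² for metropolitan France) are common approximate figures, not values taken from this article:

```python
# Rough sanity check of the GPGP size comparisons.
# Reference areas are approximate, assumed values (not from the article):
GPGP_KM2 = 1.6e6        # estimated patch surface area
TEXAS_KM2 = 696_000     # approx. area of Texas
FRANCE_KM2 = 552_000    # approx. area of metropolitan France

print(f"vs Texas:  {GPGP_KM2 / TEXAS_KM2:.1f}x")   # ~2.3, i.e. "twice Texas"
print(f"vs France: {GPGP_KM2 / FRANCE_KM2:.1f}x")  # ~2.9, i.e. "three times France"
```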
Sampling at different locations within the same time period allowed a more accurate estimate of the size of the patch and the plastic drifting in it. LOCATION Due to seasonal and interannual variabilities of winds and currents, the GPGP’s location and shape are constantly changing. Only floating objects that are predominantly influenced by currents and less by winds were likely to remain within the patch. By simulating concentration levels in the North Pacific, the researchers were able to follow the location of the patch, demonstrating significant seasonal and interannual variations. On average the patch orbits around 32°N and 145°W. However, the team observed a seasonal shift from west to east and substantial variations in latitude (North to South) depending on the year. How much plastic floats in the great pacific garbage patch? At the time of sampling, there were more than 1.8 trillion pieces of plastic in the patch that weigh an estimated 80,000 tonnes. These figures are much higher than previous calculations. TOTAL MASS AND COUNT 80,000 tonnes of plastic float in the GPGP, equivalent to 500 Jumbo Jets The mass of the plastic in the Great Pacific Garbage Patch (GPGP) was estimated to be approximately 80,000 tonnes, which is 4-16 times more than previous calculations. This weight is also equivalent to that of 500 Jumbo Jets. 4 to 16 times more plastic in the Great Pacific Garbage Patch than Previously Estimated The center of the GPGP has the highest density and the further boundaries are the least dense. When quantifying the mass of the GPGP, the team chose to account only for the denser center area. If the less-dense outer region was also considered in the total estimate, the total mass would then be closer to 100,000 tonnes. 
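The jumbo-jet equivalence is easy to verify: dividing the estimated mass evenly gives 160 tonnes per aircraft, which is in the right range for a Boeing 747's empty weight (roughly 160-180 tonnes, an outside approximation not stated in the article):

```python
# Check the "500 Jumbo Jets" mass equivalence.
TOTAL_MASS_T = 80_000   # estimated plastic mass in the GPGP (tonnes)
JUMBO_JETS = 500        # equivalence used in the article

per_jet = TOTAL_MASS_T / JUMBO_JETS
print(per_jet)  # 160.0 tonnes per aircraft
```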
A total of 1.8 trillion plastic pieces were estimated to be floating in the patch – a plastic count that is equivalent to 250 pieces of debris for every human in the world. Using a similar approach as when estimating the mass, the team chose to employ conservative estimations of the plastic count. While 1.8 trillion is a mid-range value for the total count, their calculations estimated that it may range from 1.1 to 3.6 trillion pieces. Concentration Using data from multiple reconnaissance missions, a mass concentration model was produced to visualize the plastic distribution in the patch. The mass concentration model shows how the concentration levels gradually decrease by orders of magnitude towards the outside boundaries of the GPGP. The center concentration levels contain the highest density, reaching hundreds of kg/km² while decreasing down to 10 kg/km² in the outermost region. These results prove that plastic pollution at sea, while densely distributed within the patch, is scattered and does not form a solid mass, thus demystifying the trash island concept. Vertical distribution The Ocean Cleanup measured the vertical distribution of plastic during six expeditions between 2013 and 2015. Results from these expeditions proved that the buoyant plastic mass is distributed within the top few meters of the ocean. Factors such as wind speed, sea state, and plastic buoyancy will influence vertical mixing. However, buoyant plastic will eventually float back to the surface in calmer seas. Larger pieces were observed to resurface much more rapidly than smaller pieces. Persistency Characteristics of the debris in the Great Pacific Garbage Patch, such as plastic type and age, prove that plastic has the capacity to persist in this region. 
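The per-capita figure is self-consistent: dividing the mid-range count by 250 pieces per person implies a population of about 7.2 billion, close to the world population around the time of the study:

```python
# Check the "250 pieces of debris for every human" equivalence.
PIECES = 1.8e12    # mid-range estimate of floating plastic pieces
PER_PERSON = 250   # equivalence quoted in the article

implied_population = PIECES / PER_PERSON
print(f"{implied_population:.1e}")  # ~7.2e9 people
```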
Plastic in the patch has also been measured since the 1970s, and the calculations from subsequent years show that microplastic mass concentration is increasing exponentially – proving that the input of plastic in the patch is greater than the output. Unless sources are mitigated, this number will continue to rise. #01 - This crate found in the Great Pacific Garbage Patch was produced in 1977. Source: The Ocean Cleanup #02 - This hard hat dates back to 1989. Source: The Ocean Cleanup #03 - This cover of a Nintendo Gameboy was produced in 1995. Source: The Ocean Cleanup What types of plastic float in the Great Pacific Garbage Patch? The vast majority of plastics retrieved were made of rigid or hard polyethylene (PE) or polypropylene (PP), or derelict fishing gear (nets and ropes particularly), ranging in size from small fragments to larger objects and meter-sized fishing nets. When accounting for the total mass, 92% of the debris found in the patch consists of objects larger than 0.5 cm, and three-quarters of the total mass is made of macro- and megaplastics. However, in terms of object count, 94% of the total is represented by microplastics. Why large debris matter Because the plastics have been shown to persist in this region, they will likely break down into smaller plastics while floating in the GPGP. This deterioration into microplastics is usually the result of sun exposure, waves, marine life, and temperature changes. Microplastics have been discovered floating within the water surface layers, but also in the water column or as far down as the ocean floor. Once they become this small, microplastics are very difficult to remove and are often mistaken for food by marine animals. What are the effects on marine life and humans? Not only does plastic pollution in the Great Pacific Garbage Patch pose risks for the safety and health of marine animals, but there are health and economic implications for humans as well. 
Impact on wildlife Plastic has increasingly become a ubiquitous substance in the ocean. Due to its size and color, animals mistake the plastic for food, causing malnutrition; it poses entanglement risks and threatens their overall behavior, health, and existence. 84% of samples contained toxic chemicals Studies have shown that about 700 species have encountered marine debris, and 92% of these interactions are with plastic. 17% of the species affected by plastic are on the IUCN (International Union for Conservation of Nature) Red List of Threatened Species. Toxic for Sea Surface Feeders 180X more plastic than food at the surface of the Great Pacific Garbage Patch Floating at the surface of the Great Pacific Garbage Patch (GPGP) is 180x more plastic than marine life. Animals migrating through or inhabiting this area are then likely consuming plastic in the patch. For example, sea turtles by-caught in fisheries operating within and around the patch can have up to 74% (by dry weight) of their diets composed of ocean plastics. Laysan albatross chicks from Kure Atoll and Oahu Island have around 45% of their wet mass composed of plastics from surface waters of the GPGP. Since 84% of this plastic was found to have at least one Persistent Bio-accumulative Toxic (PBT) chemical, animals consuming this debris are therefore ingesting the chemicals attached to the plastic. Entanglement of Marine Life Fishing nets account for 46% of the mass in the GPGP and they can be dangerous for animals that swim into or collide with them and cannot extract themselves from the net. Interaction with these discarded nets, also known as ghost nets, often results in the death of the marine life involved. Impact on Humans and Society Once plastic enters the marine food web, there is a possibility that it will contaminate the human food chain as well. Efforts to clean and eradicate ocean plastic have also caused significant financial burdens. 
Plastic pollution costs 13B dollars per year. Affects the Human Food Chain Through a process called bioaccumulation, chemicals in plastics will enter the body of the animal feeding on the plastic, and as the feeder becomes prey, the chemicals will pass to the predator – making their way up the food web that includes humans. These chemicals that affected the plastic feeders could then be present within humans as well. Affects the Economy According to a study conducted in collaboration with Deloitte, yearly economic costs due to marine plastic are estimated to be between $6bn and $19bn USD. The costs stem from its impact on tourism, fisheries and aquaculture, and (governmental) cleanups. These costs do not include the impact on human health and the marine ecosystem (due to insufficient research available). This means that intercepting plastic in rivers is much more cost-effective than dealing with the consequences downstream. How did The Ocean Cleanup conduct its research? Resulting from several research missions, traveling across and above the GPGP, The Ocean Cleanup team compiled an unprecedented amount of data to better understand the plastic that persists in this region. Research Expeditions Scientists have been studying this area since the 1970s – usually by means of dragging a small sampling net through the debris. This method showed a bias towards smaller objects and did not provide much insight into the larger pieces, and, thus, the entire scope of the GPGP. Over the course of three years, researchers at The Ocean Cleanup went on several data collection missions. This included the Multi-Level-Trawl expedition, where they analyzed the depth at which buoyant plastic debris may be vertically distributed; the Mega Expedition, using vessels to cross the patch with many trawls at once; and the Aerial Expedition, which involved the use of a plane flying at low altitude to observe the debris from above. 
2015 – Multi-Level-Trawl Expedition Realizing that previous methods of analyzing the plastic in the patch needed improvement, The Ocean Cleanup designed a new research tool, called the multi-level-trawl, which allowed measurements of 11 water layers simultaneously going as far down as 5 meters below surface level. This trawl was then used in the Vertical Distribution Research. The multi-level-trawl allowed the team to study further down into the water and understand to which depths buoyant plastic may be distributed. Through these studies, it was observed that buoyant plastic floats primarily in the first few meters of the water. #01 - The Multi-Level Trawl, a custom-built research device to sample the water column. Photo credits: The Ocean Cleanup #02 - The Multi-Level Trawl, a custom-built research device to sample the water column. Photo credits: The Ocean Cleanup #03 - Lowering the Multi-Level-Trawl in the ocean. Photo credits: The Ocean Cleanup #04 - The Multi-Level-Trawl sampling the surface waters of ocean. Source: The Ocean Cleanup 2015 – Mega Expedition In 2015, 30 vessels and 652 surface nets, in parallel, crossed the GPGP as part of the Mega Expedition. Numerous vessel owners offered the use of their ships for the mission. Of those ships, many carried behind them a Manta-trawl; including one mothership, the 171ft long Ocean Starr, which was able to carry two 6-meter-wide trawls and a survey balloon. The fleet returned with over 1.2 million plastic samples that rendered an unprecedented amount of plastic measurements from the three months of study. Scientists present on the expedition noted that there was an alarming amount of plastic floating in the patch, and their preliminary findings indicated that there were more large objects than originally expected. 2016 – Aerial Expedition After the Mega Expedition, the team wanted to learn more about these large plastic pieces that were difficult to come by. 
Megaplastics are more scattered than the smaller plastics, and, to study this important aspect of the patch, the team needed to cover an even larger area. Using a C-130 Hercules aircraft, The Ocean Cleanup surveyed 311 km² with advanced sensors and an RGB camera (CS-4800i) that captured one photo every second of flight time. They took two flights and came back with over 7,000 single frame mosaics from the mission. #01 - "Ocean Force One", a former military airplane was converted into a research platform to conduct aerial surveys Photo credits: The Ocean Cleanup #02 - A team of researcher embarked on the airplane for the first ever aerial surveys of a garbage patch Photo credits: The Ocean Cleanup #03 - The airplane was fitted with multiple sensors, including a LiDar Photo credits: The Ocean Cleanup Ocean Research Laboratory Once the ocean plastic was brought back to the Netherlands, it then needed to be counted, classified and analyzed. Counting and Classifying The first step in analyzing the plastic was to quantify it – to turn this physical matter into data. Every piece of plastic that was recovered was cleaned, counted and classified by size and type. In total, 1.2 million plastic samples were counted, one by one, and were used to further study the physical properties and toxicity of the plastic that floats in the GPGP. The Research team takes on the monumental task of counting and classifying the Mega Expedition samples. Understanding Physical Properties Not only is the size and count of the plastic in the GPGP important to calculate, but the way in which the plastic interacts in the water helps the team learn more about the buoyancy and depths of the plastic. To test this, various experiments were performed on the plastic in environments that were intended to replicate oceanic conditions and particularly salinity. Laboratory tests were conducted to measure the vertical speed of the plastic as it resurfaces. 
More than 4000 rising speed tests to understand how plastic particles behave in the water. Understanding Toxicity It is commonly known that harmful PBT (Persistent Bio-accumulative Toxic) chemicals are found in ocean plastics, so researchers at The Ocean Cleanup tested plastic samples from the expeditions for their chemical levels. Their results helped them to realize what chemicals are present in the patch and what that means for animals feeding there. Plastics of various types and sizes were analyzed by placing them in mixtures that would allow the various chemicals to be identified, a process known as chromatography. They found through various tests that 84% of the plastics in the GPGP contain at least one type of PBT chemical. Ocean Plastic Data Science Numerous computational and mathematical processes and methods were used throughout the study of the GPGP, allowing the team to visualize and characterize many features of the patch and the plastic within it. Turning Ocean Plastic into Data When the manta trawl samples were captured and then brought on the vessel, several criteria were noted in the datasheets, including the date, duration, and final coordinates of each tow. With this information, the team was able to identify the exact location where the plastic was retrieved. The location and duration of all tows were confirmed during a post-processing phase by inspecting all the recorded datasheets against GPS trackers that were installed on all participating vessels. The total distance of tows, for example, combined with the net’s characteristics allowed the researchers to estimate the total surveyed surface. Process Aerial Expedition Data Aboard the C-130 Hercules aircraft used for the Aerial Expedition were three types of sensors: Lidar (an advanced active sensor similar to that used on Google’s autonomous cars), a SWIR imager (an infrared camera to detect ocean plastic) and an RGB camera. Post-processing of aerial images. 
Photo credits: The Ocean Cleanup There were 3 sensor technicians, 7 navigation personnel and 10 researchers who helped track the plastic from above and monitor the equipment on board. The data from this expedition were then analyzed and processed, resulting in multispectral and geo-referenced imagery that was used to screen the surface area for plastic by trained observers and a machine-learning algorithm, providing the spatial distribution of larger debris (>0.5m). A Key to Convert Pixels into Kilograms A swimming pool was used to find the correlation factor between top surface area and dry weight of large debris. Photo credits: The Ocean Cleanup The mass of the plastic debris in the GPGP was calculated using imagery from the Aerial Expedition. By comparing the top-view surface against the dry mass of multiple objects collected during the first expedition at sea, including ghost nets, the team was able to make these estimations. Merge all the Data into Comprehensive Computer Models The data and imagery gathered from these objectives were eventually used by our team of computational modelers to build various models and computer-generated graphics. These served as a visual representation of the studies and tests that had been performed during the expeditions. Research of this nature is crucial when understanding the many facets of the GPGP. These models have helped the engineers at The Ocean Cleanup to further improve the design of the cleanup system, which was deployed mid-2018. We also use modeling to identify the pathways that bring plastic to the gyres. In 2022 our researchers published a new study based on over 6000 plastic objects (over 5 cm in size) captured in the GPGP by our System 001/B cleaning system in 2019. We then compared our field observations with our ‘virtual particle’ modeling results, allowing us to see the most statistically probable sources of GPGP plastic. 
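The "pixels into kilograms" step described above amounts to a calibration curve: measure both top-view area and dry weight for a set of recovered objects, fit a conversion factor, then apply it to areas detected in aerial imagery. A minimal sketch, assuming a simple proportional (through-the-origin least-squares) model and entirely made-up calibration pairs:

```python
# Sketch of an area-to-mass calibration like the one described:
# fit mass = k * area on calibration objects, then apply k to areas
# measured in aerial imagery. All numbers below are hypothetical,
# not the expedition's data.

areas_m2 = [0.5, 1.2, 3.0, 5.5]     # top-view areas of calibration objects
masses_kg = [2.1, 5.0, 12.4, 23.0]  # their measured dry weights

# Least-squares slope through the origin: k = sum(a*m) / sum(a^2)
k = sum(a * m for a, m in zip(areas_m2, masses_kg)) / sum(a * a for a in areas_m2)

def mass_from_area(area_m2):
    """Estimated dry mass (kg) for a debris area seen from above."""
    return k * area_m2

print(round(k, 2))  # conversion factor in kg per m^2 for these made-up pairs
```

The real study additionally had to handle object classes with very different area-to-mass ratios (ghost nets versus rigid fragments), so a single global factor is only a first approximation.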
This indicated that over 75% of plastics in the GPGP – not only the 46% made up of fishing nets, as we discovered in 2018 – are attributable to offshore fishing activities. More research will be required in order to discover if this also applies to the other garbage patches around the world.
The Great Pacific Garbage Patch The Great Pacific Garbage Patch is the largest accumulation of ocean plastic in the world and is located between Hawaii and California. Scientists of The Ocean Cleanup have conducted the most extensive analysis ever of this area. What is the great pacific garbage patch? The Great Pacific Garbage Patch (GPGP) is the largest of the five offshore plastic accumulation zones in the world’s oceans. It is located halfway between Hawaii and California. PLASTIC ACCUMULATION It is estimated that 1.15 to 2.41 million tonnes of plastic are entering the ocean each year from rivers. More than half of this plastic is less dense than the water, meaning that it will not sink once it encounters the sea. 1.15 to 2.41 million metric tonnes of plastic are entering the ocean each year. The stronger, more buoyant plastics show resiliency in the marine environment, allowing them to be transported over extended distances. They persist at the sea surface as they make their way offshore, transported by converging currents and finally accumulating in the patch. Once these plastics enter the gyre, they are unlikely to leave the area until they degrade into smaller microplastics under the effects of sun, waves and marine life. As more and more plastics are discarded into the environment, microplastic concentration in the Great Pacific Garbage Patch will only continue to increase. ESTIMATION OF SIZE The GPGP covers an estimated surface area of 1.6 million square kilometers, an area twice the size of Texas or three times the size of France. The Great Pacific Garbage Patch covers an estimated surface of 1.6 million square kilometers To formulate this number, the team of scientists behind this research conducted the most elaborate sampling method ever coordinated. This consisted of a fleet of 30 boats, 652 surface nets and two flights over the patch to gather aerial imagery of the debris. 
Sampling at different locations within the same time period allowed a more accurate estimate of the size of the patch and the plastic drifting in it. LOCATION
yes
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
yes_statement
the "trash" "island" in the pacific ocean is as "large" as texas.. the size of the "trash" "island" in the pacific ocean is equivalent to that of texas.
https://en.wikipedia.org/wiki/Great_Pacific_garbage_patch
Great Pacific garbage patch - Wikipedia
Great Pacific Garbage Patch in August 2015 (model). The patch is created in the gyre of the North Pacific Subtropical Convergence Zone. The Great Pacific garbage patch (also Pacific trash vortex and North Pacific Garbage Patch[1]) is a garbage patch, a gyre of marine debris particles, in the central North Pacific Ocean. It is located roughly from 135°W to 155°W and 35°N to 42°N.[2] The collection of plastic and floating trash originates from the Pacific Rim, including countries in Asia, North America, and South America.[3] Despite the common public perception of the patch existing as giant islands of floating garbage, its low density (4 particles per cubic metre (3.1/cu yd)) prevents detection by satellite imagery, or even by casual boaters or divers in the area. This is because the patch is a widely dispersed area consisting primarily of suspended "fingernail-sized or smaller"—often microscopic—particles in the upper water column known as microplastics.[4] Researchers from The Ocean Cleanup project claimed that the patch covers 1.6 million square kilometres (620 thousand square miles)[5] consisting of 45–129 thousand metric tons (50–142 thousand short tons) of plastic as of 2018.[6] The same 2018 study found that, while microplastics dominate the area by count, 92% of the mass of the patch consists of larger objects which have not yet fragmented into microplastics. Some of the plastic in the patch is over 50 years old, and includes items (and fragments of items) such as "plastic lighters, toothbrushes, water bottles, pens, baby bottles, cell phones, plastic bags, and nurdles." 
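The parenthetical unit conversion quoted here is internally consistent: with 1 yard defined as exactly 0.9144 m, 4 particles per cubic metre comes out to about 3.1 particles per cubic yard:

```python
# Check the quoted density conversion: 4 particles/m^3 vs 3.1/cu yd.
M3_PER_CUBIC_YARD = 0.9144 ** 3  # 1 yd = 0.9144 m exactly

per_yd3 = 4 * M3_PER_CUBIC_YARD
print(round(per_yd3, 1))  # 3.1
```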
Research indicates that the patch is rapidly accumulating.[6] The patch is believed to have increased "10-fold each decade" since 1945.[7] The gyre contains approximately six pounds of plastic for every pound of plankton.[8] A similar patch of floating plastic debris is found in the Atlantic Ocean, called the North Atlantic garbage patch.[9][10] This growing patch contributes to other environmental damage to marine ecosystems and species. The patch was described in a 1988 paper published by the National Oceanic and Atmospheric Administration (NOAA). The description was based on research by several Alaska-based researchers in 1988 who measured neustonic plastic in the North Pacific Ocean.[11] Researchers found relatively high concentrations of marine debris accumulating in regions governed by ocean currents. Extrapolating from findings in the Sea of Japan, the researchers hypothesized that similar conditions would occur in other parts of the Pacific where prevailing currents were favorable to the creation of relatively stable waters. They specifically indicated the North Pacific Gyre.[12] In 2009, two project vessels from Project Kaisei/Ocean Voyages Institute, the New Horizon and the Kaisei, embarked on a voyage to research the patch and determine the feasibility of commercial scale collection and recycling.[18] The Scripps Institute of Oceanography's 2009 SEAPLEX expedition, in part funded by Ocean Voyages Institute/Project Kaisei,[19] also researched the patch. 
Researchers were also looking at the impact of plastic on mesopelagic fish, such as lanternfish.[20][21] In 2010, Ocean Voyages Institute conducted a 30-day expedition in the gyre which continued the science from the 2009 expeditions and tested prototype cleanup devices.[22] In July/August 2012, Ocean Voyages Institute conducted a voyage from San Francisco to the eastern limits of the North Pacific Gyre (ultimately ending in Richmond, British Columbia) and then made a return voyage which also visited the gyre. The focus of this expedition was surveying the extent of tsunami debris from the Japanese earthquake-tsunami.[23][24] In 2015, a study published in the journal Science sought to discover where exactly all of this garbage is coming from. According to the researchers, the discarded plastics and other debris floats eastward out of countries in Asia from six primary sources: China, Indonesia, the Philippines, Vietnam, Sri Lanka and Thailand.[25][26] The study – which used data as of 2010 – indicated that China was responsible for approximately 30% of worldwide plastic ocean pollution at the time.[27] In 2017, the Ocean Conservancy reported that China, Indonesia, the Philippines, Thailand, and Vietnam dump more plastic in the sea than all other countries combined.[28] Efforts to slow land-generated debris and consequent marine debris accumulations have been undertaken by the Coastal Conservancy, Earth Day, and World Cleanup Day.[29][30][31][32] According to National Geographic, "80 percent of plastic in the ocean is estimated to come from land-based sources, with the remaining 20 percent coming from boats and other marine sources. These percentages vary by region, however. 
A 2018 study found that synthetic fishing nets made up nearly half the mass of the Great Pacific Garbage Patch, largely due to ocean current dynamics and increased fishing activity in the Pacific Ocean."[33][6]: abs An open-access study published in 2022 concluded that between 75% and 86% of the plastic pollution is from fishing and agriculture, with most identified emissions originating from Japan, China, South Korea, the US and Taiwan.[1] The study analysed 6,093 debris items greater than 5 cm found in the North Pacific Garbage Patch (NPGP); 99% of the rigid items by count were plastics, representing 90% of the total debris mass (514 kg).[1] These were later sorted, counted, weighed and their sources traced back to five industrialised fishing nations, suggesting the important role the fishing industry plays in the global plastic waste issue.[1] Predominantly, the composition of the hard plastic waste includes unidentifiable fragments, fishing and aquaculture gear such as fish boxes, oyster spacers, and eel traps, and other plastic items associated with food, drinks and household items. They also represent a substantial amount of accumulated floating plastic mass.[1] Among the items analysed, 201 plastic objects carried writing, with the most common languages identified being Chinese, Japanese, English and Korean, in that order.[34] The Great Pacific garbage patch formed gradually as a result of ocean or marine pollution gathered by ocean currents.[35] It occupies a relatively stationary region of the North Pacific Ocean bounded by the North Pacific Gyre in the horse latitudes. The gyre's rotational pattern draws in waste material from across the North Pacific, incorporating coastal waters off North America and Japan. As the material is captured in the currents, wind-driven surface currents gradually move debris toward the center, trapping it. 
In a 2014 study[36] researchers sampled 1571 locations throughout the world's oceans and determined that discarded fishing gear such as buoys, lines and nets accounted for more than 60%[36] of the mass of plastic marine debris. According to a 2011 EPA report, "The primary source of marine debris is the improper waste disposal or management of trash and manufacturing products, including plastics (e.g., littering, illegal dumping) ... Debris is generated on land at marinas, ports, rivers, harbors, docks, and storm drains. Debris is generated at sea from fishing vessels, stationary platforms, and cargo ships."[37] Constituents range in size from miles-long abandoned fishing nets to micro-pellets used in cosmetics and abrasive cleaners.[38] A computer model predicts that a hypothetical piece of debris from the U.S. west coast would head for Asia, and return to the U.S. in six years;[13] debris from the east coast of Asia would reach the U.S. in a year or less.[39][40] While microplastics make up 94% of the estimated 1.8 trillion plastic pieces, they amount to only 8% of the 79 thousand metric tons (87 thousand short tons) of plastic there, with most of the rest coming from the fishing industry.[41] A 2017 study concluded that of the 9.1 billion metric tons (10.0 billion short tons) of plastic produced since 1950, close to 7 billion metric tons (7.7 billion short tons) are no longer in use.[42] The authors estimate that 9% was recycled, 12% was incinerated, and the remaining 5.5 billion metric tons (6.1 billion short tons) are in the oceans and land.[42] In a 2021 study, researchers who examined plastic from the patch identified more than 40 animal species on 90 percent of the debris they studied.[43][44] Discovery of a thriving ecosystem of life at the Great Pacific garbage patch in 2022 suggested that cleaning up garbage here may adversely remove this plastisphere.[45] A 2023 study found that the plastic is home to coastal species surviving in the open ocean and 
reproducing.[46] These coastal species, including jellyfish and sponges, are commonly found along the western Pacific coast and are surviving alongside open-ocean species on the plastic.[46] Some scientists are concerned that this mix of coastal and open-ocean species may result in unnatural "neopelagic communities," in which coastal creatures compete with or even consume open-ocean species.[46] The size of the patch is indefinite, as is the precise distribution of debris, because large items are uncommon.[47] Most debris consists of small plastic particles suspended at or just below the surface, evading detection by aircraft or satellite. Instead, the size of the patch is determined by sampling. The estimated size of the garbage patch is 1,600,000 square kilometres (620,000 sq mi), about twice the size of Texas or three times the size of France.[48] Such estimates, however, are conjectural given the complexities of sampling and the need to assess findings against other areas. Further, although the size of the patch is determined by a higher-than-normal concentration of pelagic debris, there is no standard for the boundary between "normal" and "elevated" levels of pollutants that would allow a firm estimate of the affected area. Net-based surveys are less subjective than direct observations but are limited in the area they can sample (net apertures are 1–2 metres (3 ft 3 in – 6 ft 7 in), and ships typically have to slow down to deploy nets, requiring dedicated ship's time). The plastic debris sampled is determined by net mesh size, so similar mesh sizes are required to make meaningful comparisons among studies. Floating debris typically is sampled with a neuston or manta trawl net lined with 0.33 mm (0.013 in) mesh. Given the very high level of spatial clumping in marine litter, large numbers of net tows are required to adequately characterize the average abundance of litter at sea.
Long-term changes in plastic meso-litter have been reported using surface net tows: in the North Pacific Subtropical Gyre in 1999, plastic abundance was 335,000 items per square kilometre (870,000/sq mi) and 5.1 kilograms per square kilometre (29 lb/sq mi), roughly an order of magnitude greater than in samples collected in the 1980s. Similar dramatic increases in plastic debris have been reported off Japan. However, caution is needed in interpreting such findings because of extreme spatial heterogeneity and the need to compare samples from equivalent water masses: examining the same parcel of water a week apart can yield an order-of-magnitude change in measured plastic concentration.[49] In August 2009, the Scripps Institution of Oceanography/Project Kaisei SEAPLEX survey mission of the Gyre found that plastic debris was present in 100 consecutive samples taken at varying depths and net sizes along a path of 1,700 miles (2,700 km) through the patch. The survey found that, although the patch contains large pieces, it is on the whole made up of smaller items that increase in concentration toward the gyre's centre; these 'confetti-like' pieces, visible just beneath the surface, suggest the affected area may be much smaller.[49][51][52] Data collected from Pacific albatross populations in 2009 suggest the presence of two distinct debris zones.[53] In March 2018, The Ocean Cleanup published a paper summarizing their findings from the Mega (2015) and Aerial (2016) Expeditions. In 2015, the organization crossed the Great Pacific garbage patch with 30 vessels to make observations and take samples with 652 survey nets. They collected a total of 1.2 million pieces of plastic, which they counted and sorted into size classes. To also account for the larger but rarer debris, they overflew the patch in 2016 with a C-130 Hercules aircraft equipped with LiDAR sensors.
The two expeditions found that the patch covers 1.6 million square kilometres (0.62 million square miles) with a concentration of 10–100 kilograms per square kilometre (57–571 lb/sq mi). They estimated that the patch contains 80,000 metric tons (88,000 short tons) of plastic in 1.8 trillion pieces, with 92% of the mass found in objects larger than 0.5 centimetres (3⁄16 in).[54][55][6] NOAA stated: While "Great Pacific Garbage Patch" is a term often used by the media, it does not paint an accurate picture of the marine debris problem in the North Pacific Ocean. The name "Pacific Garbage Patch" has led many to believe that this area is a large and continuous patch of easily visible marine debris items such as bottles and other litter – akin to a literal island of trash that should be visible with satellite or aerial photographs. This is not the case. In a 2001 study, researchers[57] found concentrations of plastic particles at 334,721 pieces per square kilometre (866,920/sq mi) with a mean mass of 5.1 kilograms per square kilometre (29 lb/sq mi) in the neuston. The overall concentration of plastics was seven times greater than the concentration of zooplankton in many of the sampled areas.
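The survey figures above can be cross-checked with simple arithmetic: area times the measured concentration range should bracket the estimated total mass, and mass divided by piece count gives the mean piece size. A minimal sketch, using only the numbers quoted above:

```python
# Cross-check of the 2018 Ocean Cleanup survey figures quoted above.
area_km2 = 1.6e6                        # patch area, km^2
conc_low_kg, conc_high_kg = 10, 100     # concentration range, kg/km^2

# Area x concentration, converted from kg to metric tons:
mass_low_t = area_km2 * conc_low_kg / 1000    # 16,000 t
mass_high_t = area_km2 * conc_high_kg / 1000  # 160,000 t

# The published estimate of ~80,000 t sits inside this bracket.
estimate_t = 80_000
assert mass_low_t <= estimate_t <= mass_high_t

# Mean piece mass implied by 80,000 t spread over 1.8 trillion pieces:
pieces = 1.8e12
mean_piece_g = estimate_t * 1e6 / pieces
print(f"{mean_piece_g:.3f} g per piece")
```

The implied mean of roughly 0.04 g per piece is consistent with the finding that nearly all pieces are tiny by count, even though most of the mass sits in the rarer large objects.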
Samples collected deeper in the water column found much lower concentrations of plastic particles (primarily monofilament fishing line pieces).[58] In 2012, researchers Goldstein, Rosenberg and Cheng found that microplastic concentrations in the gyre had increased by two orders of magnitude in the prior four decades.[59] In 2009, Ocean Voyages Institute removed over 5 short tons (4.5 t) of plastic during the initial Project Kaisei cleanup initiative while testing a variety of cleanup prototype devices.[62] In 2019, over a 25-day expedition, Ocean Voyages Institute set the record for the largest cleanup in the Garbage Patch, removing over 40 metric tons (44 short tons) of plastic from the ocean.[63] In 2020, over the course of two expeditions, Ocean Voyages Institute again set the record, removing 170 short tons (150 t) of plastic from the ocean. The first, 45-day expedition removed 103 short tons (93 t) of plastic,[64] and the second removed 67 short tons (61 t).[65] In 2022, over two summer expeditions, Ocean Voyages Institute removed 148 short tons (134 t) of plastic ghost nets, consumer items and mixed plastic debris from the Garbage Patch.[66][67][68] On 9 September 2018, the first collection system was deployed to the gyre to begin the collection task.[69] This initial trial run of the Ocean Cleanup Project started towing its "Ocean Cleanup System 001" from San Francisco to a trial site some 240 nautical miles (440 km; 280 mi) away.[70] The initial trial of the "Ocean Cleanup System 001" ran for four months and provided the research team with valuable information for the design of "System 001/B".[71] In 2021, The Ocean Cleanup collected 63,182 pounds (28,659 kg) of plastic using their "System 002".
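The "two orders of magnitude in four decades" finding above can be restated as an annual growth rate. The calculation below is purely illustrative arithmetic, not taken from the Goldstein, Rosenberg and Cheng paper itself:

```python
import math

# "Two orders of magnitude" = a 100-fold increase; over 40 years this
# corresponds to the following compound annual growth rate.
factor, years = 100, 40
annual_growth = factor ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")

# Equivalent doubling time for microplastic concentration:
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"doubling every {doubling_years:.1f} years")
```

On these assumptions the reported trend amounts to about 12% growth per year, i.e. concentration doubling roughly every six years.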
The mission started in July 2021 and concluded on October 14, 2021.[72] In July 2022, The Ocean Cleanup announced that it had reached the milestone of removing the first 100,000 kilograms (220,000 lb) of plastic from the Great Pacific Garbage Patch using "System 002"[73] and announced its transition to "System 03", which is claimed to be 10 times as effective as its predecessor.[74] The 2012 Algalita/5 Gyres Asia Pacific Expedition began in the Marshall Islands on 1 May and investigated the patch, collecting samples for the 5 Gyres Institute, Algalita Marine Research Foundation, and several other institutions, including NOAA, Scripps, IPRC and the Woods Hole Oceanographic Institute. In 2012, the Sea Education Association conducted research expeditions in the gyre. One hundred and eighteen net tows were conducted and nearly 70,000 pieces of plastic were counted.[75] ^See the relevant sections below for specific references concerning the discovery and history of the patch. A general overview is provided in Dautel, Susan L. "Transoceanic Trash: International and United States Strategies for the Great Pacific Garbage Patch", 3 Golden Gate U. Envtl. L.J. 181 (2007). ^"After entering the ocean, however, neuston plastic is redistributed by currents and winds. For example, plastic entering the ocean in Korea is moved eastward by the Subarctic Current (in Subarctic Water) and the Kuroshio (in Transitional Water, Kawai 1972; Favorite et al. 1976; Nagata et al. 1986). In this way, the plastic is transported from high-density areas to low-density areas. In addition to this eastward movement, Ekman stress from winds tends to move surface waters from the subarctic and the subtropics toward the Transitional Water mass as a whole (see Roden 1970: fig. 5). Because of the convergent nature of this Ekman flow, densities tend to be high in Transitional Water.
Also, the generally convergent nature of water in the North Pacific Central Gyre (Masuzawa 1972) should result in high densities there also." (Day, et al. 1988, p. 261) (Emphasis added) ^Will Dunham (12 February 2019). "World's Oceans Clogged by Millions of Tons of Plastic Trash". Scientific American. Archived from the original on 16 November 2019. Retrieved 31 July 2019. China was responsible for the most ocean plastic pollution per year with an estimated 2.4 million tons, about 30 percent of the global total, followed by Indonesia, the Philippines, Vietnam, Sri Lanka, Thailand, Egypt, Malaysia, Nigeria and Bangladesh. ^Emelia DeForce (9 November 2012). "The Final Science Report". Plastics at SEA North Pacific Expedition. Sea Education Association. Archived from the original on 2 March 2020. Retrieved 11 September 2019.
https://www.newyorker.com/magazine/2019/02/04/a-grand-plan-to-clean-the-great-pacific-garbage-patch
A Grand Plan to Clean the Great Pacific Garbage Patch | The New ...
In May, 2017, a twenty-two-year-old Dutch entrepreneur named Boyan Slat unveiled a contraption that he believed would rid the oceans of plastic. In a former factory in Utrecht, a crowd of twelve hundred people stood before a raised stage. The setting was futuristic and hip. A round screen set in the stage floor displayed 3-D images of Earth; behind Slat, another screen charted the rapid accumulation of plastic in the Pacific Ocean since the nineteen-fifties. Slat is pale and slight, and has long brown hair that resembles Patti Smith’s in the “Horses” era. He was dressed in a gray blazer, a black button-down, black slacks, and skateboarding sneakers, which he wears every day, although he doesn’t skateboard. Onstage, he presented plastic artifacts that he had collected from the Pacific during a research expedition: the back panel of a Gameboy from 1995, a hard hat from 1989, a bottle crate from 1977. “This thing is forty years old,” he said in Dutch-inflected English. “1977 was the year that Elvis Presley left the building for good, presumably.” The audience laughed. Slat then held up a clear plastic dish, filled with shards of plastic. “The contents of this dish are the actual stomach contents of a single sea turtle that was found dead in Uruguay last year,” he said. A picture of the dead turtle flashed on a screen behind him. Then Slat made his pitch. In the next twelve months, he and a staff of engineers at the Ocean Cleanup, an organization he founded in 2013, would build the system they had designed, assemble it in a yard on San Francisco Bay, then set sail with it, travelling under the Golden Gate Bridge and out into the Pacific. Slat’s destination was the Great Pacific Garbage Patch, midway between California and Hawaii, an area within what is known as the North Pacific Subtropical Convergence Zone. 
The patch is not, as is often believed, a solid island of trash but a gyre, twice the size of Texas, where winds and currents draw diffuse floating debris onto a vast carrousel that never stops. There are four other ocean gyres in the world, but scientists believe that the one in the North Pacific contains the most trash—nearly two trillion pieces of plastic, weighing nearly eighty thousand metric tons, according to a study that scientists working with the Ocean Cleanup published in the online journal Scientific Reports last March. The study found that ninety-two per cent of the pieces are large fragments and objects: toothbrushes, bottles, umbrella handles, toy guns, jerricans, laundry baskets. Most problematic, and accounting for half of the plastic mass in the gyre, are what sailors call ghost nets: great tangles of mile-long discarded fishing nets weighing as much as two tons, which can ensnare animals such as seals and sea turtles. Attempting to fish out this drifting morass of trash using conventional methods—vessels, more nets—would be a Sisyphean task. Slat became famous for a TEDx talk that he gave in 2012, in which he expounded on an idea that he had after a scuba-diving trip in Greece during high school. Instead of trying to catch ocean plastic, he thought, perhaps we could let the plastic come to us. “The oceanic currents moving around is not an obstacle—it’s a solution,” he told the audience. Slat, eighteen years old at the time, had entered an aerospace-engineering program at the Delft University of Technology and then, in keeping with the Silicon Valley archetype, dropped out before his second semester. But he had a big, vivid idea, a sweetly tremulous voice, and a goofy sense of humor. (His Twitter bio reads, “Studied aerospace engineering, becomes a cleaner.”) The video went viral, and Slat soon crowdfunded two million dollars from donors in a hundred and sixty countries. 
The United Nations Environment Programme named him a 2014 Champion of the Earth, noting his “keen mind” and “the lack of fear that marks out visionaries.” The jury for the world’s largest prize for design, the Danish INDEX: Award, granted him a hundred thousand euros, stating that his “incredibly ingenious idea will greatly improve the condition of the Earth’s greatest natural resource, as well as the lives of millions.” To date, Slat has hired eighty employees and raised some forty million dollars from donors online, charitable foundations, the Dutch government, a few anonymous Europeans, and Silicon Valley billionaires like Peter Thiel and Marc Benioff. After many iterations and scale-model tests of his invention, he and his team settled on a design. The mechanism that Slat revealed in Utrecht was surprisingly simple: a two-thousand-foot floating plastic boom, attached to a geotextile skirt that would extend about ten feet beneath the ocean’s surface. The boom and the skirt would together create an artificial coastline that would accumulate flotsam riding the gyre’s currents, eventually forming a sort of shoreline of concentrated trash. Onstage, Slat gave a signal, and a black curtain behind him fell from the ceiling to reveal four monumental anchors. These were, Slat said, key to the concept. They would hang hundreds of metres deep, where the currents are much slower than at the top, insuring that the system moved more slowly than the trash, rather than just drifting around with it. At regular intervals, Slat explained, a ship would transport the trash back to land, where it would be recycled. Some of it would be turned into plastic products (sunglasses, phone cases, chairs), which the Ocean Cleanup could sell to generate revenue for more systems. He expressed the hope that, by 2020, there would be sixty devices in the gyre; in five years, he said, they would have removed half of its trash. 
By 2040, Slat promised, he could clear ninety per cent of the trash from the North Pacific gyre. The Monday after the announcement, Slat arrived at the Ocean Cleanup’s headquarters, an airy, modernist office in Delft. He was in high spirits. “We were at peak enthusiasm,” he told me later. Online donations were rising, and his in-box was full of congratulatory notes. His first meeting of the day was with his top engineers. They did not look cheerful. The lead engineer said that they had been running some new tests. They had not properly accounted for the power of “wave drift force”—the accelerating energy of the surface waves absorbed by the device—which would cancel out the drag of the anchors. The design would not work. Slat recalls the engineer saying, “We’re going to have to do it slightly differently.” There were some possible solutions, the engineer said. How about losing the anchors, allowing the device to race after the trash? Slat grew very quiet. “It was a bit stressful,” he said. “Like, whoops.” In 1941, two British chemists, V. E. Yarsley and E. G. Couzens, published an article in Science Digest that imagined “a dweller in the ‘Plastic Age.’ ” This Plastic Man, they wrote, “will come into a world of color and bright shining surfaces, where childish hands find nothing to break, no sharp edges or corners to cut or graze, no crevices to harbor dirt or germs.” As the chemists had predicted with surprising accuracy, “tough, safe, clean” plastic was soon everywhere. By the mid-nineteen-sixties, fifteen million tons of plastic were being produced every year. By 2015, the annual total was nearly thirty times greater. Of all the plastic waste ever created, only about nine per cent has been recycled. Seventy-nine per cent rests, forgotten, in landfills, dumps, forests, rivers, and the ocean.
In recent years, less than fifteen per cent of the plastic packaging produced annually has been recycled—the sort of figure that has led Jane Muncke, the director of Zurich’s Food Packaging Forum, to describe recycling as “the fig leaf of consumerism.” The scale of the problem has been difficult to communicate to the public. In the past, images of animals, like the one that Slat showed in Utrecht, have mostly made the biggest impact. In the eighties, photographs of birds and turtles stuck inside six-pack rings caused a public outcry, and eventually the Environmental Protection Agency mandated that ring carriers be biodegradable. In 2005, a picture of Shed Bird, a six-month-old Laysan albatross, whose sliced-open belly revealed a collection of lighters, bottle caps, and other plastic scraps, became an environmental icon, a symbol of our careless throwaway lives. More recently, viral photos and videos have elevated the cause: a dead sperm whale that washed ashore in Indonesia with thirteen pounds of plastic in its stomach, a sea turtle with a drinking straw wedged up its nostril. In 2015, the environmental engineer Jenna Jambeck co-authored a study, published in Science, in which she calculated that an average of eight million metric tons of land-based plastic entered the oceans each year: the equivalent, she wrote, when she testified about the problem before Congress, in 2016, of “five grocery-size bags filled with plastic going into the ocean along every foot of coastline in the world.” By 2025, she has said, those five bags will be ten. “That got a tremendous amount of pickup and really helped people understand the vastness of scale,” Janis Jones, the C.E.O. of the Ocean Conservancy, a D.C.-based environmental-advocacy group, told me. Sometimes the simplest comparisons have been the most effective. Another report, published by the Ellen MacArthur Foundation, predicted that by 2050 there could be more plastic than fish, by weight, in the oceans. 
One of the pioneers of plastic-pollution research, and of conveying the findings in tangible images, was Charles Moore, a horticulturist and oceanographer who, in the nineteen-nineties, observed an alarming amount of garbage in the sea while sailing between California and Hawaii. Moore began taking researchers to the gyre, dragging nets alongside his catamaran and cataloguing the contents. In 2001, Moore published the results of his studies: there was six times more plastic in the gyre, by mass, than there was zooplankton, the base of the food chain. Moore—charmingly grumpy, often with a map of the gyre and a dish of plastic shards in hand—went on to discuss the Great Pacific Garbage Patch on the “Late Show with David Letterman,” “The Colbert Report,” and “Good Morning America.” The image of the patch proved resonant, if misleading. Soon people were saying that you could walk on it and even spot it from outer space. In fact, most of what Charles Moore found was not large pieces of debris but microplastic—the tiny fragments that remain when the sun breaks down the larger hunks, and which the scientist and former U.S. marine Marcus Eriksen has called “the smog of the sea.” In 2008, Moore hosted Eriksen and an ocean-policy analyst named Anna Cummins on one of his expeditions; the two got married and later co-founded a nonprofit called the 5 Gyres Institute, which made research expeditions all over the world. In 2014, Eriksen, Moore, and seven other co-authors published their findings in the online journal PLOS One: more than 5.2 trillion particles of plastic were swirling in the planet’s oceans, and, in time, much of it would be ingested by ocean dwellers and by creatures that eat fish, including people. Since then, numerous studies have shown that microplastic is everywhere—in the melting ice of the Arctic, in table salt, in beer, in shrimp scampi. A study last year found traces of it in eighty-three per cent of tap-water samples around the world. 
(The incidence was highest in the United States, at ninety-four per cent.) A major concern of scientists is that chemical toxins in the microplastics may leach off during digestion, gradually building up in animal and human tissues. Judith Enck, a senior official at the Environmental Protection Agency under President Obama, told me, “Where we are on plastics is where we were fifteen years ago on climate change. We’re just beginning to get the picture.” The looming public-health crisis has bolstered environmentalists’ arguments that the priority of governments, N.G.O.s, and the public ought to be preventing plastic from entering the ocean in the first place. According to some analyses, a forty-five-per-cent reduction in the leakage of plastic from land to sea is possible by improving waste management in China, Indonesia, the Philippines, Thailand, and Vietnam. In the Philippines, Froilan Grate, an anti-plastic activist and organizer who has worked with Eriksen and Cummins, has helped establish zero-waste management systems in cities including San Fernando, which has set up citywide composting and recycling, created wage-paying jobs for garbage collectors, and banned plastic bags. Grate, who is working with sixteen cities across Indonesia, Malaysia, and India, estimates that San Fernando’s new system has prevented fifty-one thousand tons of plastic from entering the environment. But funding for such projects is scarce. When I asked Grate about Slat’s plan to remove the plastic from the ocean, he said, referring to the money that Slat had raised, “If I had forty million dollars, I could set up zero-waste programs all over Asia.” In 2017, the Ocean Conservancy joined with industry heavyweights to announce that they were fund-raising for investments in recycling companies in Southeast Asia. 
The initiative grew into an investment-management firm, Circulate Capital, to which companies such as PepsiCo, Dow, Unilever, and Coca-Cola have pledged more than a hundred million dollars. Some efforts to ban the production of single-use plastics are succeeding—and not just in countries like Kenya, which addressed its litter crisis in 2017 by decreeing that anyone caught producing, selling, or even carrying a plastic bag could go to prison for four years or face a fine of up to forty thousand dollars. In October, the European Union advanced a directive to roll out bans on single-use plastics like plates and cutlery. In the United States, thanks to a campaign led by Eriksen and Cummins, microbeads—the exfoliating plastic sprinkles common in toiletries—became illegal in 2018. New York City has banned most polystyrene food containers. Straws, thanks in part to the turtle video, have become a favored cause: California has restricted their use, and Starbucks plans to phase them out altogether by 2020. Lego is introducing a new plant-based form of plastic. According to Eriksen and other environmentalists, the Ocean Cleanup is a “distraction from the real solutions that the entire global movement is now working on.” And yet it is undeniable that the plastic already in the ocean will not simply disappear without a trace. In this dire moment, people are desperate for heroes. Slat agrees that prevention efforts are urgently necessary. “For us to be successful, that part needs to be taken care of as well,” he told me. But, he added, “all that large stuff will become the small, dangerous microplastic, and then we’ll be in a much worse position.” Given what Slat sees as the inevitable torpor of political change, he believes it is his job to remove plastic from the gyres before it degrades into tiny particles, making the smog worse. “The sooner we get it out, the better,” he said. Unlike Moore and Eriksen, Slat has never sailed from California to Hawaii. 
“I do enjoy being at the ocean, like most people, but not so much being on the ocean,” he told me on a visit to New York after the Utrecht event. In 2015, he spent eight days surveying plastic in the Bermuda Triangle, during which he was violently seasick. Slat cares deeply about the environment, but, for him, the appeal of cleaning the oceans is also about puzzle solving. “There’s no better feeling than having an idea and seeing it become reality, emerging in the physical world,” he said. Still, he knows that people need a story if they’re to get behind his idea. “Don’t get me wrong,” he told me, over tea in a downtown café with Joost Dubois, the fifty-seven-year-old head of communications for the Ocean Cleanup. “The trigger, the passion that made me want to do it, was its bigger significance, which you get from things like the experience I had scuba diving.” On Slat’s scuba-diving trip to Greece, when he was sixteen—as he has said in countless talks and presentations—he saw more plastic bags than fish. At the mention of the well-worn anecdote, he shot Dubois a conspiratorial glance, and they laughed. Slat had with him a black backpack decorated with sew-on patches from the Ocean Cleanup’s plastic-counting research expeditions, and he was wearing his usual skateboarding sneakers. If I didn’t know him, I might have mistaken him for a high-school student. Many of his supporters recall his youthful demeanor, as well as his ageless poise, as having won them over. Laurent Lebreton, the lead oceanographer for the Ocean Cleanup, described him to me as “a very smart boy.” Slat was born in Delft, a marine-engineering hub, and grew up in the historic city center, a few blocks from where Johannes Vermeer once lived. Slat’s mother, Manissa Ruffles, who worked as a city tour guide, brought him up alone. Slat’s father, a painter, lives in Croatia. Ruffles told me that, from a young age, Slat acted like a grownup and preferred D.I.Y. fairs to amusement parks. 
When he was two, he built a small but functional chair out of wood and nails. In primary school, Slat lost a front tooth after some classmates shoved him into a wall; he now has a chipped crown. The bullying was relentless, he recalled. “Whenever I used to do sports at school, there were those children who were picked last,” he said. “I just wasn’t picked at all.” After switching schools when he was twelve, he made friends with other tinkerers. He started building rockets, then attempted to build a bottle-rocket-powered contraption that would launch a friend into the air. That idea had to be abandoned, but when he was fourteen he managed to get two hundred and thirteen people to stand in a field at Delft University and simultaneously hand-launch bottle rockets. The event established a Guinness World Record. After the TEDx talk, and his subsequent decision to drop out of university, Slat taught himself more about ocean plastic, oceanography, and engineering. He is the only member of his research team who does not have an advanced degree. But, according to Arjen Tjallema, the technology manager, he keeps pace. Rick Spinrad, one of the members of the Ocean Cleanup scientific advisory board, and, until 2016, the chief scientist for the National Oceanic and Atmospheric Administration, recalled having been skeptical when he first met Slat, in 2016: “I started asking more technical questions, on windage and the relative velocity of plastic particles, on what physical oceanographic models he was considering using. His answers were sophisticated, savvy, and quite candid. When he did not have the tech answer, he certainly knew how to get that answer. It was obvious to me that he had been talking with the right folks.” The list included scientists at the Delft University of Technology and at the Royal Netherlands Institute for Sea Research. Slat also had to learn how to build a startup. “There were mistakes,” he told me.
He hired one man who he thought had an excellent résumé, with forty years of offshore engineering experience. After a few months, the man told Slat that what they were trying to do was impossible. He did not remain on the job. “We don’t have any glass-half-empty people,” Dubois said. Slat was spending more time flying around the world in order to network. His mother, with whom he still lived, told me that she sometimes worried about him. “He was always among middle-aged people in gray suits,” she said. “I feel he skipped his adolescence.” Slat now receives fifty speaking requests every day. As the face of the organization, he knows that such appearances are obligatory. “It’s not something I can delegate,” he said. He learned early on, from his success on social media, that people wanted what he was proposing—in particular, he said, a solution that would not mandate that anyone make a huge sacrifice. Slat is an admirer of Elon Musk. “He understands how human psychology works, just like the Ocean Cleanup,” Slat said. “We don’t say, ‘Ban all the plastic’—we sort of provide an alternative that’s better, that’s exciting, that fits into a world view that you can be excited about.” Jennifer Jacquet, a professor of environmental studies at New York University and the author of the book “Is Shame Necessary?,” believes that Slat’s success goes beyond “technological solutionism,” or the “TED Talk obsession.” “We always love the idea of cleanups more than we love the idea of prevention, or mitigation,” she said. “We love treating illnesses more than we do preventing them. But our affinity for simplistic solutions isn’t innate; they’re narratives we’ve been sold.” In the café, Slat’s phone started buzzing. A video of an interview he had done earlier in the day, with Luke Rudkowski, a right-wing activist and videographer, had just gone online. Slat seemed nervous about what he had said.
The subject of the interview was the Bilderberg Meetings, an annual off-the-record forum of international leaders, which he had attended the previous weekend, in Chantilly, Virginia. “It’s very secretive,” Slat said. “It’s like Davos, but with just a hundred people. The King of the Netherlands was there, David Petraeus. A lot of people think it’s some sort of conspiracy thing.” Rudkowski, for instance, referring to the Bilderberg participants, had asked Slat how his mission to clean the oceans would “work with their world-domination plan.” It was sunny and warm, and Slat suggested that we take a walk. “When you walk, your brain is working better,” he said. “More blood flow.” We headed west to the Hudson River. Passing a small marina, we stopped, leaning over the railing to look at the dark, oily water. Dubois pointed at a drifting cigarette butt—cigarette filters are made of cellulose acetate that leaches toxins into waterways. “Ocean plastic,” he said. “Not ocean plastic yet,” Slat noted. “Will be soon,” Dubois said. Just that morning, Nature Communications had published a paper by Slat, the oceanographer Lebreton, and four other scientists which estimated that as much as 2.4 million metric tons of plastic could be entering the ocean from rivers each year. Slat is often asked whether he will develop a cleanup system for river mouths, catching the plastic at its source. A few river-mouth systems have already been successfully deployed, including three in Baltimore known as Mr. Trash Wheel, Professor Trash Wheel, and Captain Trash Wheel. Universally celebrated by scientists, and citizens, they are arguably the most beloved and sensible anti-plastic-pollution mechanisms in the country. Slat has a few ideas for river projects down the road, but, he said, “if you do everything at the same time, you’ll succeed at nothing.” We walked around the southern tip of Manhattan, then cut back inland to a park in Chinatown, the spot Slat had chosen for his next meeting. 
A tall man wearing a tuxedo and a woman in an evening gown appeared from between two parked cars and crossed the street toward us. The man was Hugh Welsh, the head of DSM North America, an arm of a Dutch multinational company that manufactures products including resins and plastics for the building and automobile industries, electronics, medical equipment, and food packaging. Dubois greeted Welsh warmly; he used to work as a public-relations director for DSM, and Welsh has become a major donor to the Ocean Cleanup. Welsh and his colleague apologized for their attire; they were on their way to a black-tie event. In recent years, environmental groups such as Upstream and the international movement #BreakFreeFromPlastic, which Froilan Grate helps lead, have argued for what’s become known as “extended producer responsibility”—the idea that the manufacturers of products that become waste must bear the burden of cleaning it up, especially when they send those products to developing countries that have little solid-waste disposal or recycling infrastructure. Slat’s model, which relies on voluntary donations, might seem a good place to start, or, conversely, a compromise that will make plastic producers feel better about doing little to address the problem at its source. An entire fleet of sixty of Slat’s systems could cost around three hundred and sixty million dollars, and Slat hopes that much of this will come from corporations that have a stake in the production of plastic. Slat politely greeted Welsh, who seemed amused and intrigued by the young Dutchman, as if he were meeting a child celebrity. “Is this your first time to New York?” he asked. Slat, who had been to the city many times before, said, “No, but it’s even more interesting every time.” Dubois and Welsh walked ahead, and Slat bid me farewell. It would be a private meeting. Slat later told me that he hoped companies like DSM, “or anyone who wanted to help the ocean,” would start sponsoring systems. 
“They would have plenty of space for logos, if there is any company out there that wants to be smart,” he said. I met Slat again ten months later, in April, at the bleak waterfront assembly yard in Alameda, across the bay from downtown San Francisco. A tower housing Marc Benioff’s company, Salesforce, stood high on the skyline. Across from a row of ancient school buses awaiting retrofits, a turquoise Ocean Cleanup sign announced the presence of the prototype: “Home of System 001.” Resting on head-high risers was an enormous black plastic pipe, the first segment of the two-thousand-foot device’s boom. Workers were about to start the next fusion weld. Slat’s original idea—using the ocean’s currents to do the work of collecting trash—has remained the foundation of his design, but almost everything else has changed. His first blueprint, presented at the TEDx talk in 2012, owed more to science fiction than to reality: a chain of manta-ray-shaped stations that would passively funnel trash into their bellies. In this model, an underwater system of mooring lines would anchor the entire structure to the seabed, fifteen thousand feet below. In the summer of 2016, Slat launched a prototype called Boomy McBoomface (a suggestion from social-media followers) into the North Sea. Within two months, the ocean had torn it apart. Although Slat’s engineers were increasingly convinced that mooring a structure to the seabed would not work, Slat was reluctant to let go of the idea. “As an inventor, your inventions are your babies,” he said. Besides, he added, “it can be very risky if you leave an old idea and switch to a new idea too soon. It’s like a new girlfriend. You don’t see the flaws.” As a banner at the assembly yard showed, the final blueprint had no anchors. Instead, it consisted of a free-floating boom, bent into the shape of a horseshoe, with a skirt secured to its underside. 
The new idea was that the device, driven by the forces of the wind and waves from outside the horseshoe, would act like a sweeper, reorienting itself when the wind changed direction. Models and tests suggested that the sweeper would travel about fifteen centimetres per second faster than the plastic and collect 2.2 metric tons of trash a week. G.P.S. trackers, cameras, and sensors positioned every hundred metres along the length of the boom would communicate the system’s progress to the team onshore, as well as indicate its presence to passing marine vessels and monitor for wildlife. Longtime critics of the Ocean Cleanup, such as Miriam Goldstein, the director of ocean policy at the liberal think tank the Center for American Progress, have repeatedly pointed to the potential of Slat’s system to hurt the “ecological community” on the ocean’s surface, including jellyfish, water striders, and tiny creatures known as blue sea dragons. In 2014, Goldstein and another oceanographer, Kim Martini, expressed this and other concerns—such as the boom’s ability to withstand harsh offshore conditions—in a critique online. In response, Slat countered by saying that they were not engineers. But, he told me, he did not want to dismiss critics, because their contributions had helped transform the design of the system from his original concept. “All the details are different now, and it’s in part thanks to unsolicited feedback,” he said. Goldstein and Martini remained skeptical. Stefan Llewellyn Smith, who teaches fluid mechanics at the Scripps Institution of Oceanography, also warned that successes in testing pools and in computer models were no guarantee that the system would behave the same way once it was at full size, out at sea. “I’d class the difference in velocity between the plastic and the structure as the main issue,” he told me. Gradually, Slat seemed to be absorbing such concerns, and, at the assembly yard, was more cautious in his promises than he had been earlier. 
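The projections in the passage above invite a quick back-of-the-envelope check. A minimal sketch in Python, using figures quoted elsewhere in this piece (the sixty-system fleet, the 2018 study's roughly 79,000-metric-ton estimate for the gyre); the constant, loss-free collection rate is my simplifying assumption:

```python
# Naive estimate of how long a full fleet would need to clear the gyre,
# using figures quoted in the article. Assumes a constant collection
# rate with no downtime or losses -- a deliberate simplification.

RATE_PER_SYSTEM = 2.2    # metric tons of plastic per week, per system
FLEET_SIZE = 60          # planned fleet of systems
GYRE_PLASTIC = 79_000    # metric tons in the North Pacific gyre (2018 study)

fleet_rate = RATE_PER_SYSTEM * FLEET_SIZE   # tons per week for the fleet
weeks = GYRE_PLASTIC / fleet_rate           # weeks to clear the gyre
years = weeks / 52

print(f"Fleet rate: {fleet_rate:.0f} t/week")
print(f"Time to clear gyre: {weeks:.0f} weeks (~{years:.1f} years)")
```

Even under these generous assumptions, the arithmetic gives a timescale of about a decade, which is why the per-system rate and the relative velocity questions raised by the critics matter so much.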
“It still is very much an experimental system,” he said. Research equipment and tools would be sent out to the gyre with the device, and these would, in any case, benefit the scientific community, he stressed: “Always good to better understand the problem.” Early on the morning of September 8th, the day of the launch, Slat dreamed that the pipe had been sent out into the seawater and had begun to melt. He woke up and could not get back to sleep, so he started preparing for the dozens of interviews he would give that day. The sky was blue, with light winds and warm air. When he arrived at the pier in San Francisco Bay, the top deck of the media boat, a ferry, was already packed with cameramen and reporters. The Ocean Cleanup’s publicity crew, seven strong, wore turquoise “Ocean Cleanup” shirts, which matched the turquoise-painted Maersk tug ship that would tow the system out to the gyre. Maersk, the largest shipping line in the world, was providing the ship and its crew free of charge. A Maersk spokeswoman named Stephanie Gillespie was aboard, and she told me, “Our seafarers sail through this garbage patch and see this plastic everywhere. So it made sense for our company to invest in cleaning it up.” The team had given Slat’s invention a name, Wilson, for the volleyball that Tom Hanks, lost at sea, befriends in “Cast Away.” Slat held a brief press conference, concluding, “For sixty years, mankind has been putting plastic into the oceans. From this day onward, we’re taking it back out again.” The ferry set off, and a few minutes later Wilson came into view, gliding behind the Maersk ship like a long black tail. It was already speckled with bird droppings. At the moment when the media ferry was behind the ship, as it approached the Golden Gate Bridge, Slat jumped up to stand on some benches at the bow for pictures, his hair blowing in the wind. 
Then a deckhand emerged from the bridge and yelled, “Please get down!” Slat, looking embarrassed, stepped off the bench. The journey back was fast and, once ashore, Slat collapsed on a flower planter along the boardwalk. “I’m dead,” he said. Dubois and a few other Ocean Cleanup staff were staring at their phones. Dubois started cheering: “We just got to a hundred thousand Instagram followers!” A few days later, Goldstein, who had seen news of the Ocean Cleanup’s big day online, ran into Slat at a party hosted by Benioff at the Salesforce tower, during San Francisco’s Global Climate Action Summit. “Congratulations on the launch,” she said. “I hope it succeeds.” “You do?” Slat replied. Martini tweeted about the launch, posing three “critical questions” about the structure’s strength, efficacy, and impact on marine ecology. She said of Slat and his team, “I think they have overexaggerated what they can do, and that sells.” But despite all her doubts, she said, “at this point I kind of hope it works. Maybe they are going to prove me wrong. That’s my secret hope.” In mid-September, Eriksen sent an e-mail blast to a Listserv called Marine Debris, in which he called Slat’s mission “a misdirected activity” that “makes it harder for those working to focus the narrative to prevention.” Eriksen reminded me, by phone, that only one per cent of the plastic entering the ocean is on the surface of the North Pacific gyre. Scientists still don’t know where, exactly, the rest goes. Eriksen explained that it might be on the seafloor, or suspended as nanoplastic, or have washed back onto the shore. A recent study on debris from the 2011 tsunami in Japan found that, of a thousand boats that the wave carried out to sea, only a hundred were estimated still to be offshore, travelling along the currents. “There are natural mechanisms that eject trash rapidly,” he wrote to the Listserv. 
Five weeks after the launch, on October 16th, Wilson, towed behind the Maersk tug, arrived at the North Pacific gyre, a few dozen degrees of longitude east of the International Date Line. An Ocean Cleanup engineer had given Wilson its own Twitter feed, and its coördinates and pictures of its surroundings were posted almost daily. Each picture showed a portion of the pipe, and, when the sun was shining, sparkling blue water. There was no plastic in sight. A week later, Slat tweeted a picture: “First plastic.” Two bobbing white baskets and a scattering of fragments were visible in the waves. As the weeks passed, Wilson behaved as the engineers hoped it would—reorienting itself when the wind changed direction, catching and concentrating plastic in its arms. But then much of the plastic would float away, back out to the infinite, or drift around Wilson and collect along its back. Slat’s engineers developed twenty-seven hypotheses to explain why this happened. Wilson was moving too slowly, sometimes slower than the plastic. Perhaps the surface plastic was more affected by the wave drift force than they had calculated, or perhaps an inertial current—one much smaller and more localized than the main one they had accounted for—was slowing Wilson’s motion. “This is, of course, something we never saw in the computer models, which underlines why it’s so important that we’re out there,” Slat said, when I met him in New York again, just before Christmas. “Another fun one is the jellyfish hypothesis,” he said. Scientists noticed that Wilson’s arms were oscillating, creating a propulsion that resembles the way a jellyfish swims, and causing Wilson to “swim” away from the plastic. Slat had a cold, and hadn’t slept much for several weeks. Headlines had blared the system’s failure, and critics on Twitter scoffed at Slat’s boondoggle. “What I would be afraid of is that, if it fails, we let down the world,” he said. 
“People will point at it and say, ‘Look, fancy gizmos, silver bullet, panacea—basically, that proves that that is not the way to solve a problem.’ ” But his team of engineers had helped him “come to peace,” he said, as they all returned to work. He was already considering new iterations, possibly incorporating sails or wave gliders that would make Wilson go faster. When I described Wilson’s plight to Chris Garrett, a physical oceanographer and an emeritus professor at the University of Victoria, he told me that he found the entire premise of the device “rather strange and puzzling.” He and other scientists pointed to additional phenomena—Stokes drift, Langmuir circulation, Ekman spiral—that could cause the plastic to move faster than Wilson. Spinrad described the Ocean Cleanup system as “medium-level technology,” with a high risk, and a high payoff. “The sea is an unrelenting trial judge,” he said. “The probability of failure was at least as high as the probability of success.” He added, “I see nothing that cannot be remediated with good physics and engineering and money.” The team has since stated that Wilson collected two metric tons of plastic. A week and a half after Slat and I met in New York, crew members stationed in the gyre were doing a maintenance check of Wilson when they noticed that one end of the boom, an eighteen-metre segment, had snapped off and floated away, owing, apparently, to “material fatigue” on a small section. Perversely, Wilson’s plastic wasn’t durable enough. The segment was retrieved, and the two parts were towed to Hawaii, arriving just off the coast on January 17th. Dubois had been in Honolulu, trying to negotiate a way to bring Wilson into the harbor and onto a loading dock, either to be repaired in Hawaii or, if necessary, shipped back to the assembly yard in Alameda. When we spoke, he sounded determined, if weary. “The I-told-you-so’s have been pretty abundant,” he said. 
“But we’re not just a bunch of cowboys going out there to play with a big tube thing out in the ocean, trying to prove that we’re right. We want to show that this is an important research platform.” One of Slat’s funders, the grant foundation of the Swiss bank Julius Baer, recently gave the Ocean Cleanup an additional six hundred thousand dollars for research into safely recycling ocean plastic, which is brittle and often toxic. None of Slat’s other donors had indicated that they would stop supporting the Ocean Cleanup. Hugh Welsh told me that, despite the “unwarranted criticism that the project has faced,” he and DSM “remain steadfast supporters.” (In spite of repeated requests for an interview, Marc Benioff, Slat’s top donor, said that he was unavailable for comment.) The last time I spoke to Slat, on January 7th, he was undeterred. “This is just a first act,” he said. “It’s not a system failure. It’s a component failure.” His words reminded me of an episode soon after Wilson’s launch, in Pacifica, a beach town south of San Francisco. Slat had learned that I surfed, and, despite being a beginner, he suggested that we go together. On the beach, he told me about a small rip in Wilson’s skirt that had occurred during the trip to the gyre. The skirt was Wilson’s most expensive component; it needed to be made by hand, and had required forty-eight people, working for sixty days, in a warehouse in the United Arab Emirates, to assemble. “That material seemed impenetrable, so it was a little hard to believe,” Slat said. The wind was blowing onshore, turning the waves into a mess of white foam, and it was a long, exhausting paddle beyond the impact zone to calm water. Once there, Slat and I watched other surfers for a while, then Slat spotted a small ridge. He turned, paddled away, and disappeared. His board bobbed up to the surface without him, then he got tossed far back toward shore by the waves that followed. 
Some twenty minutes later, after another fight through breaking waves, he reappeared beside me. “Now I know how Wilson feels,” he said, gasping. “I feel sorry for him. So much power under there.” He sat on his board. Striving to catch his breath, he returned to Wilson’s journey to the gyre. “The skirt weighs twenty-seven thousand kilograms,” he said. “But when the swells are big enough it can flip out of the water like a flag in a hurricane.” As the afternoon passed, Slat stayed in the water, growing bolder, trying and failing to get up on his feet and ride a wave, washing to shore instead, then fighting the endless white water to get back out. ♦ Published in the print edition of the February 4, 2019, issue, with the headline “The Widening Gyre.”
“1977 was the year that Elvis Presley left the building for good, presumably.” The audience laughed. Slat then held up a clear plastic dish, filled with shards of plastic. “The contents of this dish are the actual stomach contents of a single sea turtle that was found dead in Uruguay last year,” he said. A picture of the dead turtle flashed on a screen behind him. Then Slat made his pitch. In the next twelve months, he and a staff of engineers at the Ocean Cleanup, an organization he founded in 2013, would build the system they had designed, assemble it in a yard on San Francisco Bay, then set sail with it, travelling under the Golden Gate Bridge and out into the Pacific. Slat’s destination was the Great Pacific Garbage Patch, midway between California and Hawaii, an area within what is known as the North Pacific Subtropical Convergence Zone. The patch is not, as is often believed, a solid island of trash but a gyre, twice the size of Texas, where winds and currents draw diffuse floating debris onto a vast carrousel that never stops. There are four other ocean gyres in the world, but scientists believe that the one in the North Pacific contains the most trash—nearly two trillion pieces of plastic, weighing nearly eighty thousand metric tons, according to a study that scientists working with the Ocean Cleanup published in the online journal Scientific Reports last March. The study found that ninety-two per cent of the mass consists of large fragments and objects: toothbrushes, bottles, umbrella handles, toy guns, jerricans, laundry baskets. Most problematic, and accounting for half of the plastic mass in the gyre, are what sailors call ghost nets: great tangles of mile-long discarded fishing nets weighing as much as two tons, which can ensnare animals such as seals and sea turtles. Attempting to fish out this drifting morass of trash using conventional methods—vessels, more nets—would be a Sisyphean task.
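The study's two headline figures imply an average mass per piece, which makes the size distribution concrete: the mass sits in large objects, yet the average piece is tiny, so small fragments dominate the count. A quick sketch (rounded inputs from the figures above; the unit conversion is mine):

```python
# Average mass per piece of plastic in the gyre, computed from the
# study's rounded headline figures. A tiny average implies the count
# is dominated by small fragments even though large objects carry
# most of the mass.

PIECES = 1.8e12        # "nearly two trillion" pieces
MASS_TONS = 79_000     # "nearly eighty thousand" metric tons

avg_mg = MASS_TONS * 1e9 / PIECES   # metric tons -> milligrams
print(f"Average piece: {avg_mg:.0f} mg")
```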
yes
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
yes_statement
the "trash" "island" in the pacific ocean is as "large" as texas.. the size of the "trash" "island" in the pacific ocean is equivalent to that of texas.
https://nypost.com/2018/03/23/great-pacific-garbage-patch-is-now-twice-the-size-of-texas/
Great Pacific Garbage Patch is now twice the size of Texas
The Pacific Ocean is being treated like a giant dumpster — and it’s starting to look like one, too. A “floating” island of trash dubbed the Great Pacific Garbage Patch (GPGP) now stretches 600,000 square miles, according to a study published Thursday in Scientific Reports. It’s more than twice the size of Texas (three times the size of France), and it’s growing every day. “[It’s a] ticking time bomb because the big stuff will crumble down to micro-plastics over the next few decades if we don’t act,” Boyan Slat, founder of Ocean Cleanup, a nonprofit that helps remove pollution from the world’s oceans, told Newser at the time. The trash pile has nearly doubled in size since then, containing at least 79,000 tons of plastic — “a figure four to sixteen times higher than previously reported,” Scientific Reports said. Researchers gathered 1.2 million samples during a multi-vessel expedition in October 2017, exactly one year after their previous test. They used large nets to scoop the debris and took several aerial images to examine the extent of the GPGP. Large items such as bottles, ropes, plastic bags and buoys were the most common objects spotted in the pile. Fishing nets had an overwhelming presence, accounting for nearly half of the weight of debris picked up by research vessels. Microscopic particles made up less than 10 percent of the mass collected by researchers. “We were surprised by the amount of large plastic objects we encountered,” Dr. Julia Reisser, the chief scientist of the expeditions, said in a statement. “We used to think most of the debris consists of small fragments, but this new analysis shines a new light on the scope of the debris.” Data from the nets proved more plastic is coming into the ocean than being cleaned up. But scientists didn’t realize how fast garbage was piling up. 
“Historical data from surface net tows indicate that plastic pollution levels are increasing exponentially inside the GPGP, and at a faster rate than in surrounding waters,” the report said. The findings were “depressing to see,” Laurent Lebreton, an oceanographer and lead author of the study, told The Guardian. “There were things you just wondered how they made it into the ocean,” Lebreton said, adding that the group even found a toilet seat discarded into the sea. “There’s clearly an increasing influx of plastic into the garbage patch.” Pollution is problematic for the environment and humans, but it’s especially troubling for marine life. “Floating plastic litter can be ingested or entangle marine life, and carry invasive organisms across oceanic basins,” Matthew Cole, a research scientist with the Plymouth Marine Laboratory in the UK, told New Scientist. Lebreton hopes to find a way to curb plastic waste. “We need a coordinated international effort to rethink and redesign the way we use plastics,” he said. “The numbers speak for themselves. Things are getting worse and we need to act now.”
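The area comparisons in the story are easy to sanity-check. A short Python sketch; the reference areas for Texas and metropolitan France are rounded figures I have supplied, not from the article:

```python
# Sanity-check the "more than twice Texas, three times France"
# comparison against the study's 600,000-square-mile figure.
# Reference land areas (square miles) are approximate, supplied by me.

GPGP_AREA = 600_000      # square miles, per the Scientific Reports study
TEXAS_AREA = 268_600     # approximate area of Texas
FRANCE_AREA = 213_000    # approximate area of metropolitan France

print(f"vs. Texas:  {GPGP_AREA / TEXAS_AREA:.1f}x")
print(f"vs. France: {GPGP_AREA / FRANCE_AREA:.1f}x")
```

The ratios come out to roughly 2.2 and 2.8, so both comparisons in the article hold up as loose approximations.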
yes
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
yes_statement
the "trash" "island" in the pacific ocean is as "large" as texas.. the size of the "trash" "island" in the pacific ocean is equivalent to that of texas.
https://today.oregonstate.edu/archives/2011/jan/oceanic-%E2%80%9Cgarbage-patch%E2%80%9D-not-nearly-big-portrayed-media
Oceanic "garbage patch" not nearly as big as portrayed in media ...
Oceanic "garbage patch" not nearly as big as portrayed in media CORVALLIS, Ore. - There is a lot of plastic trash floating in the Pacific Ocean, but claims that the "Great Garbage Patch" between California and Japan is twice the size of Texas are grossly exaggerated, according to an analysis by an Oregon State University scientist. Further claims that the oceans are filled with more plastic than plankton, and that the patch has been growing tenfold each decade since the 1950s are equally misleading, pointed out Angelicque "Angel" White, an assistant professor of oceanography at Oregon State. "There is no doubt that the amount of plastic in the world's oceans is troubling, but this kind of exaggeration undermines the credibility of scientists," White said. "We have data that allow us to make reasonable estimates; we don't need the hyperbole. Given the observed concentration of plastic in the North Pacific, it is simply inaccurate to state that plastic outweighs plankton, or that we have observed an exponential increase in plastic." White has pored over published literature and participated in one of the few expeditions solely aimed at understanding the abundance of plastic debris and the associated impact of plastic on microbial communities. That expedition was part of research funded by the National Science Foundation through C-MORE, the Center for Microbial Oceanography: Research and Education. What the studies have shown is that if you look at the actual area of the plastic itself, rather than the entire North Pacific subtropical gyre, the hypothetically "cohesive" plastic patch is actually less than 1 percent of the geographic size of Texas. "The amount of plastic out there isn't trivial," White said. "But using the highest concentrations ever reported by scientists produces a patch that is a small fraction of the state of Texas, not twice the size." 
Another way to look at it, White said, is to compare the amount of plastic found to the amount of water in which it was found. "If we were to filter the surface area of the ocean equivalent to a football field in waters having the highest concentration (of plastic) ever recorded," she said, "the amount of plastic recovered would not even extend to the 1-inch line." Recent research by scientists at the Woods Hole Oceanographic Institution found that the amount of plastic, at least in the Atlantic Ocean, hasn't increased since the mid-1980s - despite greater production and consumption of materials made from plastic, she pointed out. "Are we doing a better job of preventing plastics from getting into the ocean?" White said. "Is more plastic sinking out of the surface waters? Or is it being more efficiently broken down? We just don't know. But the data on hand simply do not suggest that 'plastic patches' have increased in size. This is certainly an unexpected conclusion, but it may in part reflect the high spatial and temporal variability of plastic concentrations in the ocean and the limited number of samples that have been collected." The hyperbole about plastic patches saturating the media rankles White, who says such exaggeration can drive a wedge between the public and the scientific community. One recent claim that the garbage patch is as deep as the Golden Gate Bridge is tall is completely unfounded, she said. "Most plastics either sink or float," White pointed out. "Plastic isn't likely to be evenly distributed through the top 100 feet of the water column." White says there is growing interest in removing plastic from the ocean, but such efforts will be costly, inefficient, and may have unforeseen consequences. It would be difficult, for example, to "corral" and remove plastic particles from ocean waters without inadvertently removing phytoplankton, zooplankton, and small surface-dwelling aquatic creatures. 
"These small organisms are the heartbeat of the ocean," she said. "They are the foundation of healthy ocean food chains and immensely more abundant than plastic debris." The relationship between microbes and plastic is what drew White and her C-MORE colleagues to their analysis in the first place. During a recent expedition, they discovered that photosynthetic microbes were thriving on many plastic particles, in essence confirming that plastic is prime real estate for certain microbes. White also noted that while plastic may be beneficial to some organisms, it can also be toxic. Specifically, it is well-known that plastic debris can adsorb toxins such as PCB. "On one hand, these plastics may help remove toxins from the water," she said. "On the other hand, these same toxin-laden particles may be ingested by fish and seabirds. Plastic clearly does not belong in the ocean." Among other findings, which White believes should be part of the public dialogue on ocean trash: Calculations show that the amount of energy it would take to remove plastics from the ocean is roughly 250 times the mass of the plastic itself; Plastic also covers the ocean floor, particularly offshore of large population centers. A recent survey from the state of California found that 3 percent of the southern California Bight's ocean floor was covered with plastic - roughly half the amount of ocean floor covered by lost fishing gear in the same location. But little, overall, is known about how much plastic has accumulated at the bottom of the ocean, and how far offshore this debris field extends; It is a common misperception that you can see or quantify plastic from space. There are no tropical plastic islands out there and, in fact, most of the plastic isn't even visible from the deck of a boat; There are areas of the ocean largely unpolluted by plastic. A recent trawl White conducted in a remote section of water between Easter Island and Chile pulled in no plastic at all. 
There are other issues with plastic, White said, including the possibility that floating debris may act as a vector for introducing invasive species into sensitive habitats. "If there is a takeaway message, it's that we should consider it good news that the 'garbage patch' doesn't seem to be as bad as advertised," White said, "but since it would be prohibitively costly to remove the plastic, we need to focus our efforts on preventing more trash from fouling our oceans in the first place."
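White's "less than 1 percent of Texas" estimate and the popular "twice the size of Texas" claim differ by at least a factor of two hundred, which is simple to make concrete (the Texas area is an approximate figure I have supplied):

```python
# Compare the media claim with White's upper bound on the area of a
# hypothetically cohesive plastic patch. Texas area is approximate,
# supplied by me; the ratio itself is independent of that value.

TEXAS_AREA = 268_600                   # square miles, approximate
media_claim = 2 * TEXAS_AREA           # "twice the size of Texas"
white_upper_bound = 0.01 * TEXAS_AREA  # "less than 1 percent of Texas"

factor = media_claim / white_upper_bound
print(f"Media claim exceeds White's upper bound by at least {factor:.0f}x")
```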
Oceanic "garbage patch" not nearly as big as portrayed in media CORVALLIS, Ore. - There is a lot of plastic trash floating in the Pacific Ocean, but claims that the "Great Garbage Patch" between California and Japan is twice the size of Texas are grossly exaggerated, according to an analysis by an Oregon State University scientist. Further claims that the oceans are filled with more plastic than plankton, and that the patch has been growing tenfold each decade since the 1950s are equally misleading, pointed out Angelicque "Angel" White, an assistant professor of oceanography at Oregon State. "There is no doubt that the amount of plastic in the world's oceans is troubling, but this kind of exaggeration undermines the credibility of scientists," White said. "We have data that allow us to make reasonable estimates; we don't need the hyperbole. Given the observed concentration of plastic in the North Pacific, it is simply inaccurate to state that plastic outweighs plankton, or that we have observed an exponential increase in plastic." White has pored over published literature and participated in one of the few expeditions solely aimed at understanding the abundance of plastic debris and the associated impact of plastic on microbial communities. That expedition was part of research funded by the National Science Foundation through C-MORE, the Center for Microbial Oceanography: Research and Education. The studies have shown that if you look at the actual area of the plastic itself, rather than the entire North Pacific subtropical gyre, the hypothetically "cohesive" plastic patch is actually less than 1 percent of the geographic size of Texas. "The amount of plastic out there isn't trivial," White said. "But using the highest concentrations ever reported by scientists produces a patch that is a small fraction of the state of Texas, not twice the size. 
" Another way to look at it, White said, is to compare the amount of plastic found to the amount of water in which it was found.
no
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
yes_statement
the "trash" "island" in the pacific ocean is as "large" as texas.. the size of the "trash" "island" in the pacific ocean is equivalent to that of texas.
https://www.nationalgeographic.com/science/article/photos-giant-ocean-trash-vortex-documented-a-first
Photos: Giant Ocean-Trash Vortex Documented—A First
Tangled with plastic, rope, and various aquatic animals, a "ghost net" drifts in August 2009 in the Eastern Pacific Garbage Patch, a loose, free-floating "dump" twice the size of Texas. SEAPLEX (the Scripps Environmental Accumulation of Plastic Expedition) recently became the first dedicated research trip to study the science of the remote plastic vortex in the ocean between California and Hawaii. (See "Giant Ocean-Trash Vortex Attracts Explorers.") While large pieces are common, the garbage patch is not an island of plastic, the team found on their 19-day expedition in August. Much of the debris is in the form of countless thumbnail-size scraps. "I think the plastic-confetti metaphor is probably closest to the reality," said expedition member Jesse Powell, a doctoral student in biological oceanography at the Scripps Institution of Oceanography in California. 
Photograph courtesy Scripps Institution of Oceanography Photo Gallery Photos: Giant Ocean-Trash Vortex Documented—A First Take a look at pieces of the Eastern Pacific Garbage Patch, a loose, free-floating "dump" twice the size of Texas.
Tangled with plastic, rope, and various aquatic animals, a "ghost net" drifts in August 2009 in the Eastern Pacific Garbage Patch, a loose, free-floating "dump" twice the size of Texas. SEAPLEX (the Scripps Environmental Accumulation of Plastic Expedition) recently became the first dedicated research trip to study the science of the remote plastic vortex in the ocean between California and Hawaii. (See "Giant Ocean-Trash Vortex Attracts Explorers.") While large pieces are common, the garbage patch is not an island of plastic, the team found on their 19-day expedition in August. Much of the debris is in the form of countless thumbnail-size scraps. "I think the plastic-confetti metaphor is probably closest to the reality," said expedition member Jesse Powell, a doctoral student in biological oceanography at the Scripps Institution of Oceanography in California.
yes
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
no_statement
the "trash" "island" in the pacific ocean is not as "large" as texas.. the size of the "trash" "island" in the pacific ocean is smaller than that of texas.
https://science.howstuffworks.com/environmental/earth/oceanography/great-pacific-garbage-patch.htm
Why is the world's biggest landfill in the Pacific Ocean ...
Why is the world's biggest landfill in the Pacific Ocean? In the vast area of the Great Pacific Garbage Patch, jellyfish and other filter feeders frequently consume or become tangled in floating trash. See more ocean conservation pictures. Image courtesy Algalita Marine Research Foundation In the broad expanse of the northern Pacific Ocean, there exists the North Pacific Subtropical Gyre, a slowly moving, clockwise spiral of currents created by a high-pressure system of air currents. The area is an oceanic desert, filled with tiny phytoplankton but few big fish or mammals. Due to its lack of large fish and gentle breezes, fishermen and sailors rarely travel through the gyre. But the area is filled with something besides plankton: trash, millions of pounds of it, most of it plastic. It's the largest landfill in the world, and it floats in the middle of the ocean. The gyre has actually given birth to two large masses of ever-accumulating trash, known as the Western and Eastern Pacific Garbage Patches, sometimes collectively called the Great Pacific Garbage Patch. The Eastern Garbage Patch floats between Hawaii and California; scientists estimate its size as two times bigger than Texas [source: LA Times]. The Western Garbage Patch forms east of Japan and west of Hawaii. Each swirling mass of refuse is massive and collects trash from all over the world. The patches are connected by a thin 6,000-mile long current called the Subtropical Convergence Zone. Research flights showed that significant amounts of trash also accumulate in the Convergence Zone. The garbage patches present numerous hazards to marine life, fishing and tourism. But before we discuss those, it's important to look at the role of plastic. Plastic constitutes 90 percent of all trash floating in the world's oceans [source: LA Times]. 
The United Nations Environment Program estimated in 2006 that every square mile of ocean hosts 46,000 pieces of floating plastic [source: UN Environment Program]. In some areas, the amount of plastic outweighs the amount of plankton by a ratio of six to one. Of the more than 200 billion pounds of plastic the world produces each year, about 10 percent ends up in the ocean [source: Greenpeace]. Seventy percent of that eventually sinks, damaging life on the ocean floor [source: Greenpeace]. The rest floats; much of it ends up in gyres and the massive garbage patches that form there, with some plastic eventually washing up on a distant shore. The Problem with Plastic The main problem with plastic -- besides there being so much of it -- is that it doesn't biodegrade. No natural process can break it down. (Experts point out that the durability that makes plastic so useful to humans also makes it quite harmful to nature.) Instead, plastic photodegrades. A plastic cigarette lighter cast out to sea will fragment into smaller and smaller pieces of plastic without breaking into simpler compounds, which scientists estimate could take hundreds of years. The small bits of plastic produced by photodegradation are called mermaid tears or nurdles. These tiny plastic particles can get sucked up by filter feeders and damage their bodies. Other marine animals eat the plastic, which can poison them or lead to deadly blockages. Nurdles also have the insidious property of soaking up toxic chemicals. Over time, even chemicals or poisons that are widely diffused in water can become highly concentrated as they're mopped up by nurdles. These poison-filled masses threaten the entire food chain, especially when eaten by filter feeders that are then consumed by large creatures. Plastic has acutely affected albatrosses, which roam a wide swath of the northern Pacific Ocean. 
Albatrosses frequently grab food wherever they can find it, which leads to many of the birds ingesting -- and dying from -- plastic and other trash. On Midway Island, which comes into contact with parts of the Eastern Garbage Patch, albatrosses give birth to 500,000 chicks every year. Two hundred thousand of them die, many of them by consuming plastic fed to them by their parents, who confuse it for food [source: LA Times]. In total, more than a million birds and marine animals die each year from consuming or becoming caught in plastic and other debris. Effects of Plastic and the Great Pacific Garbage Patch Besides killing wildlife, plastic and other debris damage boat and submarine equipment, litter beaches, discourage swimming and harm commercial and local fisheries. The problem of plastic and other accumulated trash affects beaches and oceans all over the world, including at both poles. Land masses that end up in the path of the rotating gyres receive particularly large amounts of trash. The 19 islands of the Hawaiian archipelago, including Midway, receive massive quantities of trash shot out from the gyres. Some of the trash is decades old. Some beaches are buried under five to 10 feet of trash, while other beaches are riddled with "plastic sand," millions of grain-like pieces of plastic that are practically impossible to clean up. Most of this trash doesn't come from seafaring vessels dumping junk -- 80 percent of ocean trash originates on land [source: LA Times]. The rest comes from private and commercial ships, fishing equipment, oil platforms and spilled shipping containers (the contents of which frequently wash up on faraway shores years later). Some efforts can help to stem the tide of refuse. International treaties prohibiting dumping at sea must be enforced. Untreated sewage shouldn't be allowed to flow into the ocean. 
Many communities and even some small island nations have eliminated the use of plastic bags. These bags are generally recyclable, but billions of them are thrown away every year. On the Hawaiian Islands, cleanup programs bring volunteers to the beaches to pick up trash, but some beaches, even those subjected to regular cleanings, are still covered in layers of trash several feet thick. Scientists who have studied the issue say that trawling the ocean for all of its trash is simply impossible and would harm plankton and other marine life. In some areas, big fragments can be collected, but it's simply not possible to thoroughly clean a section of ocean that spans the area of a continent and extends 100 feet below the surface [source: UN Environment Program]. Nearly all experts who speak about the subject raise the same point: It comes down to managing waste on land, where most of the trash originates. They recommend lobbying companies to find alternatives to plastic, especially environmentally safe, reusable packaging. Recycling programs should be expanded to accommodate more types of plastic, and the public must be educated about their value. In October 2006, the U.S. government established the Northwestern Hawaiian Islands Marine Monument. This long string of islands, located northwest of Hawaii, frequently comes into contact with the Eastern Garbage Patch. After the creation of the monument, Congress passed legislation to increase funding for cleanup efforts and ordered several government agencies to expand their cleanup work. It may be an important step, especially if it leads to more government attention to a problem that, while dire, has only received serious scientific attention since the early 1990s. For more information about the Great Pacific Garbage Patch, including how to volunteer for cleanup efforts, please check out the links on the next page. Frequently Answered Questions What is the biggest landfill in the ocean? 
The Great Pacific Garbage Patch is the largest landfill in the ocean. It is a floating mass of trash that is twice the size of Texas and is located in the Pacific Ocean between California and Hawaii.
Why is the world's biggest landfill in the Pacific Ocean? In the vast area of the Great Pacific Garbage Patch, jellyfish and other filter feeders frequently consume or become tangled in floating trash. See more ocean conservation pictures. Image courtesy Algalita Marine Research Foundation In the broad expanse of the northern Pacific Ocean, there exists the North Pacific Subtropical Gyre, a slowly moving, clockwise spiral of currents created by a high-pressure system of air currents. The area is an oceanic desert, filled with tiny phytoplankton but few big fish or mammals. Due to its lack of large fish and gentle breezes, fishermen and sailors rarely travel through the gyre. But the area is filled with something besides plankton: trash, millions of pounds of it, most of it plastic. It's the largest landfill in the world, and it floats in the middle of the ocean. The gyre has actually given birth to two large masses of ever-accumulating trash, known as the Western and Eastern Pacific Garbage Patches, sometimes collectively called the Great Pacific Garbage Patch. The Eastern Garbage Patch floats between Hawaii and California; scientists estimate its size as two times bigger than Texas [source: LA Times]. The Western Garbage Patch forms east of Japan and west of Hawaii. Each swirling mass of refuse is massive and collects trash from all over the world. The patches are connected by a thin 6,000-mile long current called the Subtropical Convergence Zone. Research flights showed that significant amounts of trash also accumulate in the Convergence Zone. The garbage patches present numerous hazards to marine life, fishing and tourism. But before we discuss those, it's important to look at the role of plastic. Plastic constitutes 90 percent of all trash floating in the world's oceans [source: LA Times]. 
The United Nations Environment Program estimated in 2006 that every square mile of ocean hosts 46,000 pieces of floating plastic [source: UN Environment Program].
yes
Oceanography
Is the trash island in the Pacific Ocean as large as Texas?
no_statement
the "trash" "island" in the pacific ocean is not as "large" as texas.. the size of the "trash" "island" in the pacific ocean is smaller than that of texas.
https://www.verifythis.com/article/news/verify/environment-verify/no-you-cant-see-the-great-pacific-garbage-patch-from-space/536-c5b4906b-3a69-48ce-8708-016fb90990c5
Great Pacific garbage patch not visible from space | verifythis.com
Did you know there’s a gigantic body of floating plastic particles in the Pacific Ocean? According to The Ocean Cleanup, a nonprofit organization dedicated to cleaning up the world’s oceans, the Great Pacific Garbage Patch (GPGP) is the largest of the five offshore plastic accumulation zones in the world. The patch is located halfway between Hawaii and California. The patch covers an estimated surface area of just over 617,762 square miles. While it is difficult to measure the exact size of the garbage patch because the boundaries are constantly shifting, that surface area is roughly twice the size of Texas or three times the size of France, The Ocean Cleanup reported. According to Google Trends, the top question from people searching about the garbage patch was whether it is visible from space. Across Twitter, some have even claimed the accumulation problem is so huge, it’s viewable from the International Space Station. This is the massive garbage patch in the ocean as seen from the International Space Station. Still think pollution isn’t a huge problem? pic.twitter.com/xpZmjiXifM THE ANSWER WHAT WE FOUND The Great Pacific Garbage Patch developed because of the North Pacific Gyre, which is a system of circulating currents. The currents pick up large amounts of microplastic debris, or extremely small pieces of plastic, and some larger pieces of floating trash. The flotsam then circulates together within a vortex, slowly growing as more and more debris is gathered. To better understand the Great Pacific Garbage Patch, it’s simpler to think of it like a soup, Sarah Jeanne-Royer, with the Scripps Institution of Oceanography, told VERIFY. “I guess we can see it as a soup of microplastics … It's where you have floating debris of plastic basically moving at the surface and slightly under the surface of the water. So there's a lot of movement, and you get the microplastics going up and down in a couple of meters of water at the surface,” Jeanne-Royer said. 
Even though the GPGP is very large, it can’t be seen from space because it isn't one giant mass of trash, nor is it a floating island, according to Oceana. Oceana is an ocean conservation group that in 2019 posted a blog dispelling three myths about the patch. “Because microplastics are smaller than a pencil eraser, they are not immediately noticeable to the naked eye,” says the National Oceanic and Atmospheric Administration (NOAA) website. “It’s more like pepper flakes swirling in a soup than something you can skim off the surface.” Credit: NOAA Plastics are the most common form of marine debris. They can come from a variety of land and ocean-based sources; enter the water in many ways; and impact the ocean and Great Lakes. Once in the water, plastic debris never fully biodegrades. Mallos told VERIFY the patch can’t be seen from space for that very reason - because most of the vortex is made up of microscopic particles. “Some places you do see large ropes and fishing nets floating up the surface. You'll see large pieces of plastic that may have fallen overboard – laundry baskets, car bumpers, anything you can imagine you might find out there. But in general, what you actually find is a lot of very small pieces of plastics that have broken up into small pieces, like from the larger box they were as they circulate in the ocean over days, months, years,” Mallos said. Nancy Wallace, with NOAA, told VERIFY the garbage patch isn’t something you could even see from a satellite because the trash particles are so small. Credit: Google Maps This screenshot was taken on Jan. 11, 2022, showing the area of the Pacific Ocean where the Great Pacific garbage patch would be located. It is not visible from a satellite. 
But just because you can’t see the garbage patch from space doesn’t mean it’s not a problem for the environment. The misconception that the garbage patch is a floating island has also made conservation more difficult, Mallos said. He said it’s just not true that someone can just take a net and scoop up physical garbage. “The small stuff is even more concerning, because it is incredibly hard, if not impossible to clean up. And we know the small stuff can be eaten by pretty much every animal and organism in the ocean. Whereas the larger stuff, it's a much smaller selection of marine wildlife that can eat it. So just because it's smaller does not mean it's a smaller problem. And in fact, it may just be the opposite. The smaller plastics become the much greater wider risks they pose to the entire marine ecosystem,” Mallos said.
Did you know there’s a gigantic body of floating plastic particles in the Pacific Ocean? According to The Ocean Cleanup, a nonprofit organization dedicated to cleaning up the world’s oceans, the Great Pacific Garbage Patch (GPGP) is the largest of the five offshore plastic accumulation zones in the world. The patch is located halfway between Hawaii and California. The patch covers an estimated surface area of just over 617,762 square miles. While it is difficult to measure the exact size of the garbage patch because the boundaries are constantly shifting, that surface area is roughly twice the size of Texas or three times the size of France, The Ocean Cleanup reported. According to Google Trends, the top question from people searching about the garbage patch was whether it is visible from space. Across Twitter, some have even claimed the accumulation problem is so huge, it’s viewable from the International Space Station. This is the massive garbage patch in the ocean as seen from the International Space Station. Still think pollution isn’t a huge problem? pic.twitter.com/xpZmjiXifM THE ANSWER WHAT WE FOUND The Great Pacific Garbage Patch developed because of the North Pacific Gyre, which is a system of circulating currents. The currents pick up large amounts of microplastic debris, or extremely small pieces of plastic, and some larger pieces of floating trash. The flotsam then circulates together within a vortex, slowly growing as more and more debris is gathered. To better understand the Great Pacific Garbage Patch, it’s simpler to think of it like a soup, Sarah Jeanne-Royer, with the Scripps Institution of Oceanography, told VERIFY. “I guess we can see it as a soup of microplastics … It's where you have floating debris of plastic basically moving at the surface and slightly under the surface of the water. So there's a lot of movement, and you get the microplastics going up and down in a couple of meters of water at the surface,” Jeanne-Royer said.
yes
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://www.scientificamerican.com/article/strange-but-true-largest-organism-is-fungus/
Strange but True: The Largest Organism on Earth Is a Fungus ...
Next time you purchase white button mushrooms at the grocery store, just remember, they may be cute and bite-size but they have a relative out west that occupies some 2,384 acres (965 hectares) of soil in Oregon's Blue Mountains. Put another way, this humongous fungus would encompass 1,665 football fields, or nearly four square miles (10 square kilometers) of turf. The discovery of this giant Armillaria ostoyae in 1998 heralded a new record holder for the title of the world's largest known organism, believed by most to be the 110-foot- (33.5-meter-) long, 200-ton blue whale. Based on its current growth rate, the fungus is estimated to be 2,400 years old but could be as ancient as 8,650 years, which would earn it a place among the oldest living organisms as well. A team of forestry scientists discovered the giant after setting out to map the population of this pathogenic fungus in eastern Oregon. The team paired fungal samples in petri dishes to see if they fused (see photo below), a sign that they were from the same genetic individual, and used DNA fingerprinting to determine where one individual fungus ended. This one, A. ostoyae, causes Armillaria root disease, which kills swaths of conifers in many parts of the U.S. and Canada. The fungus primarily grows along tree roots via hyphae, fine filaments that mat together and excrete digestive enzymes. But Armillaria has the unique ability to extend rhizomorphs, flat shoestringlike structures, that bridge gaps between food sources and expand the fungus's sweeping perimeter ever more. A combination of good genes and a stable environment has allowed this particularly ginormous fungus to continue its creeping existence over the past millennia. "These are very strange organisms to our anthropocentric way of thinking," says biochemist Myron Smith of Carleton University in Ottawa, Ontario. An Armillaria individual consists of a network of hyphae, he explains. 
"Collectively, this network is called the mycelium and is of an indefinite shape and size." All fungi in the Armillaria genus are known as honey mushrooms, for the yellow-capped and sweet fruiting bodies they produce. Some varieties share this penchant for monstrosity but are more benign in nature. In fact the very first massive fungus discovered in 1992—a 37-acre (15-hectare) Armillaria bulbosa, which was later renamed Armillaria gallica—is annually celebrated at a "fungus fest" in the nearby town of Crystal Falls, Mich. Myron Smith was a PhD candidate in botany at the University of Toronto when he and colleagues discovered this exclusive fungus in the hardwood forests near Crystal Falls. "This was kind of a side project," Smith recalls. "We were looking at the boundaries of [fungal] individuals using genetic tests and the first year we didn't find the edge." Next, the microbiologists developed a new way to tell an individual apart from a group of closely related siblings using a battery of molecular genetic techniques. The major test compared fungal genes for telltale signs of inbreeding, where heterozygous strips of DNA become homozygous. That's when they realized they had struck it big. The individual Armillaria bulbosa they found weighed over 100 tons (90.7 metric tons) and was roughly 1,500 years old. "People had ideas that maybe they were big but nobody had any idea they were that big," says Tom Volk, a biology professor at the University of Wisconsin–La Crosse. "Well it's certainly the biggest publicity that mycology is going to get—maybe ever." Soon afterward, the discovery of an even bigger fungus in southwestern Washington was announced by Terry Shaw, then in Colorado with the U.S. Forest Service (USFS), and Ken Russell, a forest pathologist at Washington State Department of Natural Resources, in 1992. Their fungus, a specimen of Armillaria ostoyae, covered about 1,500 acres (600 hectares) or 2.5 square miles (6.5 square kilometers). 
And in 2003 Catherine Parks of the USFS in Oregon and her colleagues published their discovery of the current behemoth 2,384-acre Armillaria ostoyae. Ironically, the discovery of such huge fungi specimens rekindled the debate of what constitutes an individual organism. "It's one set of genetically identical cells that are in communication with one another that have a sort of common purpose or at least can coordinate themselves to do something," Volk explains. Both the giant blue whale and the humongous fungus fit comfortably within this definition. So does the 6,615-ton (six-million-kilogram) colony of a male quaking aspen tree and his clones that covers 107 acres (43 hectares) of a Utah mountainside. And, at second glance, even those button mushrooms aren't so tiny. A large mushroom farm can produce as much as one million pounds (454 metric tons) of them in a year. "The mushrooms that people grow in the mushroom houses … they're nearly genetically identical from one grower to another," Smith says. "So in a large mushroom-growing facility that would be a genetic individual—and it's massive!" In fact, humongous may be in the nature of things for a fungus. "We think that these things are not very rare," Volk says. "We think that they're in fact normal."
Next time you purchase white button mushrooms at the grocery store, just remember, they may be cute and bite-size but they have a relative out west that occupies some 2,384 acres (965 hectares) of soil in Oregon's Blue Mountains. Put another way, this humongous fungus would encompass 1,665 football fields, or nearly four square miles (10 square kilometers) of turf. The discovery of this giant Armillaria ostoyae in 1998 heralded a new record holder for the title of the world's largest known organism, believed by most to be the 110-foot- (33.5-meter-) long, 200-ton blue whale. Based on its current growth rate, the fungus is estimated to be 2,400 years old but could be as ancient as 8,650 years, which would earn it a place among the oldest living organisms as well. A team of forestry scientists discovered the giant after setting out to map the population of this pathogenic fungus in eastern Oregon. The team paired fungal samples in petri dishes to see if they fused (see photo below), a sign that they were from the same genetic individual, and used DNA fingerprinting to determine where one individual fungus ended. This one, A. ostoyae, causes Armillaria root disease, which kills swaths of conifers in many parts of the U.S. and Canada. The fungus primarily grows along tree roots via hyphae, fine filaments that mat together and excrete digestive enzymes. But Armillaria has the unique ability to extend rhizomorphs, flat shoestringlike structures, that bridge gaps between food sources and expand the fungus's sweeping perimeter ever more. A combination of good genes and a stable environment has allowed this particularly ginormous fungus to continue its creeping existence over the past millennia. "These are very strange organisms to our anthropocentric way of thinking," says biochemist Myron Smith of Carleton University in Ottawa, Ontario. An Armillaria individual consists of a network of hyphae, he explains. "Collectively,
yes
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://www.nytimes.com/1992/12/21/opinion/hail-to-the-world-s-largest-organism.html
Opinion | Hail to the World's Largest Organism - The New York Times
Hail to the World's Largest Organism This is a digitized version of an article from The Times’s print archive, before the start of online publication in 1996. To preserve these articles as they originally appeared, The Times does not alter, edit or update them. Occasionally the digitization process introduces transcription errors or other problems; we are continuing to work to improve these archived versions. What a relief! The title of world's largest organism may no longer belong to that creepy giant fungus lying under the ground in northern Michigan. A new, more comely candidate from Utah has been nominated. It was a shock last April to learn that the dainty mushrooms found on the forest floor in Michigan's Upper Peninsula were but the visible manifestations of a single genetically uniform fungus. It had been extending its tentacles beneath the ground for more than 1,500 years, maybe even 10,000 years, before butting into other underground giants. That mega-fungus now covers more than 30 acres and weighs 100 tons. And scientists think even bigger fungi may lie undetected elsewhere. Who can blame this page for fretting, at the time, that the fungi might inherit the earth? Now comes welcome news that an even larger organism has been identified in the Wasatch Mountains of Utah. It's a huge stand of 47,000 quaking aspen trees and stems, growing from a single root system, that covers 106 acres, is genetically uniform and acts as a single organism. When the trees change color in the fall, they do so in unison, like the card section at halftime of a football game. Surely this is a more fitting champion. It weighs 6,000 tons, 60 times as much as the fungus. Its root system is alleged to be intact, whereas the fungus was acknowledged to have many tiny breaks in its network of tentacles and mushrooms. And it can march over hill and dale, as it has for thousands of years. 
Most important, it is beautiful and non-threatening, an inspiration to artists rather than a subject for horror movies. Identifying the largest organisms was once a simple task. You could see them whole, as living entities bounded by skin or an outer covering. The largest were thought to be whales, elephants and giant sequoia trees; the giants of yore were the dinosaurs. But now, armed with the fancy new tools of genetic analysis, scientists are tempted to define as an organism anything that is genetically uniform and clumped more or less together. Even if no one can see the whole thing, or even be sure it is intact. Give record-conscious scientists enough time and they will surely find an even bigger organism to crown as champion. Can something be lurking, as yet undetected, beneath the quaking aspens? A version of this article appears in print on , Section A, Page 16 of the National edition with the headline: Hail to the World's Largest Organism.
Hail to the World's Largest Organism What a relief! The title of world's largest organism may no longer belong to that creepy giant fungus lying under the ground in northern Michigan. A new, more comely candidate from Utah has been nominated. It was a shock last April to learn that the dainty mushrooms found on the forest floor in Michigan's Upper Peninsula were but the visible manifestations of a single genetically uniform fungus. It had been extending its tentacles beneath the ground for more than 1,500 years, maybe even 10,000 years, before butting into other underground giants. That mega-fungus now covers more than 30 acres and weighs 100 tons. And scientists think even bigger fungi may lie undetected elsewhere. Who can blame this page for fretting, at the time, that the fungi might inherit the earth? Now comes welcome news that an even larger organism has been identified in the Wasatch Mountains of Utah. It's a huge stand of 47,000 quaking aspen trees and stems, growing from a single root system, that covers 106 acres, is genetically uniform and acts as a single organism. When the trees change color in the fall, they do so in unison, like the card section at halftime of a football game. Surely this is a more fitting champion. It weighs 6,000 tons, 60 times as much as the fungus. Its root system is alleged to be intact, whereas the fungus was acknowledged to have many tiny breaks in its network of tentacles and mushrooms. And it can march over hill and dale, as it has for thousands of years.
no
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://frontenacarchbiosphere.ca/worlds-largest-organism/
World's Largest Organism
World’s Largest Organism Have you ever wondered what the world’s largest organism is? When thinking of the largest organism on earth, you tend to think of massive blue whales or giant redwood trees. Although those are both great guesses, they are in fact not the largest organism. In fact, they are not even close. Surprisingly, this title is held by a fungus known as Armillaria solidipes (Honey fungus). Scientists found a network of this fungus in the Pacific Northwest which spans 5.5 kilometres across. This is equal to roughly 2,384 acres, and it is estimated to be over 2,000 years old. This fungus consists of visible above-ground mushrooms and a large underground network of mycelia. The fungus gets so large through genetic clones joining together and creating this extensive mass. Are all fungi good? Some fungi can have symbiotic relationships with trees, but some, including the honey fungus, are destructive. The honey fungus and its network are known to infect live trees, which can lead to their death. On several occasions, swaths of trees in the forest have been seen to die off for seemingly no reason. This happens because the fungus wraps its rhizomorphs around the roots of the tree and emits digestive enzymes. The fungus also creeps up underneath the bark of the tree. This process is not instant, though. Depending on the scale of the fungal infection, the tree may still survive for 50 years. There are some worthy contenders for the world’s largest organism, with the aspen tree coming in a close second. An aspen colony known as Pando in Utah stretches 8 km in length. It is also heavier in total mass when compared to the honey fungus. However, the honey fungus covers a larger area overall, giving it the title of the world’s largest organism.
World’s Largest Organism Have you ever wondered what the world’s largest organism is? When thinking of the largest organism on earth, you tend to think of massive blue whales or giant redwood trees. Although those are both great guesses, they are in fact not the largest organism. In fact, they are not even close. Surprisingly, this title is held by a fungus known as Armillaria solidipes (Honey fungus). Scientists found a network of this fungus in the Pacific Northwest which spans 5.5 kilometres across. This is equal to roughly 2,384 acres, and it is estimated to be over 2,000 years old. This fungus consists of visible above-ground mushrooms and a large underground network of mycelia. The fungus gets so large through genetic clones joining together and creating this extensive mass. Are all fungi good? Some fungi can have symbiotic relationships with trees, but some, including the honey fungus, are destructive. The honey fungus and its network are known to infect live trees, which can lead to their death. On several occasions, swaths of trees in the forest have been seen to die off for seemingly no reason. This happens because the fungus wraps its rhizomorphs around the roots of the tree and emits digestive enzymes. The fungus also creeps up underneath the bark of the tree. This process is not instant, though. Depending on the scale of the fungal infection, the tree may still survive for 50 years. There are some worthy contenders for the world’s largest organism, with the aspen tree coming in a close second. An aspen colony known as Pando in Utah stretches 8 km in length. It is also heavier in total mass when compared to the honey fungus. However, the honey fungus covers a larger area overall, giving it the title of the world’s largest organism.
yes
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://en.wikipedia.org/wiki/Armillaria_ostoyae
Armillaria ostoyae - Wikipedia
Armillaria ostoyae (synonym Armillaria solidipes) is a species of fungus (mushroom), pathogenic to trees, in the family Physalacriaceae. In the western United States, it is the most common variant of the group of species under the name Armillaria mellea. A. ostoyae is common on both hardwood and conifer wood in forests west of the Cascade Range in Oregon, United States. It has decurrent gills and the stipe has a ring.[1] The mycelium invades the sapwood and is able to disseminate over great distances under the bark or between trees in the form of black rhizomorphs ("shoestrings").[2] In most areas of North America, Armillaria ostoyae can be separated from other species by its physical features: cream-brown colors, prominent cap scales, and a well-developed stem ring distinguish it from other Armillaria. Armillaria ostoyae grows and spreads primarily underground, such that the bulk of the organism is not visible from the surface. In the autumn, the subterranean parts of the organism bloom "honey mushrooms" as surface fruits.[2] Low competition for land and nutrients often allow this fungus to grow to huge proportions, and it possibly covers more total geographical area than any other single living organism.[2][3][4] A spatial genetic analysis estimated that an individual specimen of A. 
ostoyae growing over 91 acres (37 ha) in northern Michigan, United States, weighs 440 tons (4 × 10^5 kg).[5][6] Another specimen in northeastern Oregon's Malheur National Forest is possibly the largest living organism on Earth by mass, area, and volume – this contiguous specimen covers 3.7 square miles (2,400 acres; 9.6 km2) and is colloquially called the "Humongous fungus".[2] Approximations of the land area occupied by the "Humongous fungus" are 3.5 square miles (9.1 km2) (2,240 acres (910 ha)), and it possibly weighs as much as 35,000 tons (approximately 31,500 tonnes), making it the world's most massive living organism.[7] The species was long known as Armillaria ostoyae Romagn., until a 2008 publication revealed that the species had been described under the earlier name Armillaria solidipes by Charles Horton Peck in 1900,[8] long before Henri Romagnesi had described it in 1970.[9] Subsequently, a proposal to conserve the name Armillaria ostoyae was published in 2011 and has been approved by the Nomenclature Committee for Fungi.[10] This fungus, like most parasitic fungi, reproduces sexually. The fungi begin life as spores, released into the environment by a mature mushroom. Armillaria ostoyae has a white spore print. There are two mating types for spores (not male and female but similar in effect). Spores can be dispersed by environmental factors such as wind, or they can be redeposited by an animal. Once the spores are in a resting state, the single spore must come in contact with a spore of a complementary mating type and of the same species. If the single spore isolates are from different species, the colonies will not fuse together and they will remain separate. When two isolates of the same species but different mating types fuse together, they soon form coalesced colonies which become dark brown and flat.
With this particular fungus it will produce mycelial cords – the shoestrings[2] – also known as rhizomorphs.[11] These rhizomorphs allow the fungus to obtain nutrients over distances. These are also the main factors to its pathogenicity. As the fruiting body continues to grow and obtain nutrients, it forms into a mature mushroom. Armillaria ostoyae in particular grows wide and thin sheet-like plates radiating from the stem which is known as its gills. The gills hold the spores of a mature mushroom. This is stained white when seen as a spore print. Once spore formation is complete, this signifies a mature mushroom and now is able to spread its spores to start a new generation. Using genotyping and clonal analysis, scientists determined that a 2,500-year-old specimen of Armillaria ostoyae in northern Michigan, United States originated from spores of a parent fungus in Ontario, Canada, then grew over millennia into the 21st century to a mass of 440 tons (4 × 10^5 kg), making it the equivalent in weight of 3 blue whales.[5][6] By comparison of acreage, the Michigan A. ostoyae covers only 38% of the estimated land area of the Oregon "humongous fungus" at 3.5 square miles (9.1 km2),[2][5][6] (2,240 acres (910 ha) which may weigh as much as 35,000 tons. It is currently the world's largest single living organism.[7][12][13] The disease is of particular interest to forest managers, as the species is highly pathogenic to a number of commercial softwoods, notably Douglas-fir (Pseudotsuga menziesii), true firs (Abies spp.), pine trees (Pinus), and Western Hemlock (Tsuga heterophylla).[7] A commonly prescribed treatment is the clear cutting of an infected stand followed by planting with more resistant species such as Western redcedar (Thuja plicata) or deciduous seedlings. Pathogenicity is seen to differ among trees of varying age and location.
Younger conifer trees at age 10 and below are more susceptible to infection leading to mortality; the chance of survival increases with age, and mortality becomes rare by age 20.[14] While mortality among older conifers is less likely to occur, it does happen in forests with drier climates.[15] The pathogenicity of Armillaria ostoyae appears to be more common in interior stands, but its virulence is seen to be greater in coastal conifers. Although conifers along the coastal regions show a lower rate of mortality against the root disease, infections can be much worse. Despite differences in how infections occur between these two regions, infections are generally established by rhizomorph strands, and pathogenicity is correlated to rhizomorph production. Armillaria ostoyae is most common in the cooler regions of the northern hemisphere. In North America, this fungus is found on host coniferous trees in the forests of British Columbia and the Pacific Northwest.[2] It also grows in parts of Asia.[1] While Armillaria ostoyae is distributed throughout the different biogeoclimatic zones of British Columbia, the root disease causes the greatest problem in the interior parts of the region in the Interior Cedar Hemlock biogeoclimatic zone.[16] It is present both in the interior, where it is more common, and along the coast.
A mushroom of this type in the Malheur National Forest in the Strawberry Mountains of eastern Oregon was found to be the largest fungal colony in the world, spanning an area of 3.5 square miles (2,200 acres; 9.1 km2).[2][7] This organism is estimated to be some 8,000 years old[7][17] and may weigh as much as 35,000 tons.[7] If this colony is considered a single organism, it is one of the largest known organisms in the world by area, knowingly rivalled only by a colony of Posidonia australis on the Australian seabed that measures 200 square kilometres (77 sq mi; 49,000 acres), and rivals the aspen grove "Pando" as the known organism with the highest living biomass. Another "humongous fungus" – a specimen of Armillaria gallica found at a site near Crystal Falls, Michigan – covers 91 acres (0.37 km2; 0.142 sq mi) and was found to have originated from a parent fungus in Ontario, Canada.[5][18] A tree is diagnosed with this parasitic fungus once the following characteristics are identified: resin flow from the tree base; crown thinning or changing color to yellow or red; a distress crop of cones; a white mycelial fan under the bark; black rhizomorphs penetrating root surfaces; and honey-colored mushrooms near the base of the tree in fall. Affected trees are often in groups or patches on the east side of the Cascades; trees are usually killed singly on the west side. A. ostoyae may be confused with Mottled rot (Pholiota limonella), which has similar mushrooms, but only if mycelial fans are not present. Dead and diseased trees usually occur in disease centers, which appear as openings in the canopy. GPS tracking can aid in the monitoring of these areas. However, sometimes distinct centers will be absent and diseased trees are scattered throughout the stand.[19] Armillaria can remain viable in stumps for 50 years. Chemical treatments do not eradicate the fungus entirely, and they are not cost-effective.
The most frequent and effective approach to managing root disease problems is to attempt to control them at final harvest by replanting site-suited tree species that are disease tolerant. In eastern Washington that typically means replacing Douglas-fir or true fir stands with ponderosa pine, western larch, western white pine, lodgepole pine, western red cedar, alder, or spruce. Species susceptibility varies somewhat from location to location. All trees in the disease center as well as uninfected trees within 50 feet (15 m) should be cut. No tree from a highly susceptible species should be planted within 100 feet (30 m) of a disease center. The use of another fungus, Hypholoma fasciculare, has been shown in early experiments to competitively exclude Armillaria ostoyae in both field and laboratory conditions, but further experimentation is required to establish the efficacy of this treatment. Another more expensive alternative to changing species is to remove diseased stumps and trees from the site by pushing them out with a bulldozer. The air will dry and kill the fungus. Any small roots left underground will decay before they can reinfect the new seedlings, so it is not necessary to burn the stumps. After stump removal, any species may be planted. The removal of stumps (stumping) has been used to prevent contact between infected stumps and newer growth, resulting in lower infection rates. It is unknown if the lower infection rates will persist as roots of young trees extend closer to the original inoculum from the preceding stand. The most important control measure after planting is to manage for reduced tree stress. This includes regulating species composition, maintaining biological diversity, and reducing the chances for insect pest buildup. Mixed-species forests are more resistant to insect defoliation, and also slow the spread of species-specific pests such as dwarf mistletoe, which are both predisposing agents for Armillaria.[20]
Low competition for land and nutrients often allow this fungus to grow to huge proportions, and it possibly covers more total geographical area than any other single living organism.[2][3][4] A spatial genetic analysis estimated that an individual specimen of A. ostoyae growing over 91 acres (37 ha) in northern Michigan, United States, weighs 440 tons (4 x 105 kg).[5][6] Another specimen in northeastern Oregon's Malheur National Forest is possibly the largest living organism on Earth by mass, area, and volume – this contiguous specimen covers 3.7 square miles (2,400 acres; 9.6 km2) and is colloquially called the "Humongous fungus".[2] Approximations of the land area occupied by the "Humongous fungus" are 3.5 square miles (9.1 km2) (2,240 acres (910 ha)), and it possibly weighs as much as 35,000 tons (approximately 31,500 tonnes), making it the world's most massive living organism.[7] The species was long known as Armillaria ostoyae Romagn., until a 2008 publication revealed that the species had been described under the earlier name Armillaria solidipes by Charles Horton Peck in 1900,[8] long before Henri Romagnesi had described it in 1970.[9] Subsequently, a proposal to conserve the name Armillaria ostoyae was published in 2011 and has been approved by the Nomenclature Committee for Fungi.[10] This fungus, like most parasitic fungi, reproduces sexually. The fungi begin life as spores, released into the environment by a mature mushroom. Armillaria ostoyae has a white spore print. There are two mating types for spores (not male and female but similar in effect). Spores can be dispersed by environmental factors such as wind, or they can be redeposited by an animal.
yes
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://www.guinnessworldrecords.com/world-records/606952-largest-living-organism
Largest living organism | Guinness World Records
Largest living organism The largest single living organism based on area is a specimen of Posidonia australis seagrass (aka Poseidon’s ribbon weed) located in Shark Bay off Western Australia, covering approximately 200 square kilometres (77 square miles) – equivalent to around 28,000 soccer fields or more than 450 times bigger than Vatican City, the world's smallest country. The superlative seagrass meadow was described in a paper published in Proceedings of the Royal Society B on 1 June 2022. It claims the title of largest organism from a specimen of Armillaria ostoyae honey mushroom – known colloquially as the "Humongous Fungus" – growing in the Malheur National Forest of Oregon, USA, which occupies 965 ha (2,385 acres). It retains its title as the world's largest fungus. Also the largest plant by area, the seagrass meadow spans a distance of c. 180 km (112 mi) from White Island in Shark Bay's western gulf to the Faure Sill in the eastern gulf. Spawned from a single seed, based on this species' growth rate (15–35 cm/6 in–1 ft 1.8 in per year), it is estimated to be around 4,500 years old. It has spread over the millennia via underground clonal shoots known as rhizomes, meaning that the entire plant is connected and shares the same DNA, though the authors of the study note that certain patches (ramets) in a clonal plant do become separated over time so gaps can emerge. The meadow's extent is based on data preceding 2010–11 due to an unprecedented marine heatwave (MHW) that occurred in the austral summer of 2010–11 across the Western Australian coastline that led to a record seasonal reduction of 1,310 km2 (506 sq mi) of Shark Bay's seagrass. The MHW raised the water temperature 2–5°C above the average. Another record-breaking clonal plant is Pando, a network of quaking aspen (Populus tremuloides) growing in the Wasatch Mountains of Utah, USA, which is considered the world's most massive plant.
The clonal forest, comprised of around 47,000 individual stems, was confirmed in December 1992 to be a single root system, covering 43 ha (106 acres) and weighing an estimated 6,000 tonnes (6,600 US tons). The clonal system is genetically uniform and acts as a single organism, with all the component trees (part of the willow family) changing colour or shedding leaves in unison. The research was a collaborative study undertaken by the University of Western Australia and Flinders University (both Australia).
Largest living organism The largest single living organism based on area is a specimen of Posidonia australis seagrass (aka Poseidon’s ribbon weed) located in Shark Bay off Western Australia, covering approximately 200 square kilometres (77 square miles) – equivalent to around 28,000 soccer fields or more than 450 times bigger than Vatican City, the world's smallest country. The superlative seagrass meadow was described in a paper published in Proceedings of the Royal Society B on 1 June 2022. It claims the title of largest organism from a specimen of Armillaria ostoyae honey mushroom – known colloquially as the "Humongous Fungus" – growing in the Malheur National Forest of Oregon, USA, which occupies 965 ha (2,385 acres). It retains its title as the world's largest fungus. Also the largest plant by area, the seagrass meadow spans a distance of c. 180 km (112 mi) from White Island in Shark Bay's western gulf to the Faure Sill in the eastern gulf. Spawned from a single seed, based on this species' growth rate (15–35 cm/6 in–1 ft 1.8 in per year), it is estimated to be around 4,500 years old. It has spread over the millennia via underground clonal shoots known as rhizomes, meaning that the entire plant is connected and shares the same DNA, though the authors of the study note that certain patches (ramets) in a clonal plant do become separated over time so gaps can emerge. The meadow's extent is based on data preceding 2010–11 due to an unprecedented marine heatwave (MHW) that occurred in the austral summer of 2010–11 across the Western Australian coastline that led to a record seasonal reduction of 1,310 km2 (506 sq mi) of Shark Bay's seagrass. The MHW raised the water temperature 2–5°C above the average.
no
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://en.wikipedia.org/wiki/Largest_organisms
Largest organisms - Wikipedia
Although it appears to be multiple trees, Pando is a clonal colony of an individual quaking aspen with an interconnected root system. It is widely held to be the world's most massive single organism. This article lists the largest organisms for various types of life and mostly considers extant species,[a] which can be ranked according to various aspects of an organism's size, such as mass, volume, area, length, height, or even genome size. Some organisms group together to form a superorganism (such as ants or bees), but such are not classed as single large organisms. The Great Barrier Reef is the world's largest structure composed of living entities, stretching 2,000 km (1,200 mi), but it contains many organisms of many types of species. If considered individual entities, the largest organisms are clonal colonies which can spread over large areas. Pando, a clonal colony of the quaking aspen tree, is widely considered to be the largest such organism by mass.[1] Even if such colonies are excluded, trees retain their dominance of this listing, with the giant sequoia being the most massive tree.[2] In 2006 a huge clonal colony of the seagrass Posidonia oceanica was discovered south of the island of Ibiza. At 8 kilometres (5 mi) across, and estimated at around 100,000 years old,[3] it may be one of the largest and oldest clonal colonies on Earth.[4][5][6] The largest single-stem tree by wood volume and mass is the giant sequoia (Sequoiadendron giganteum), native to the Sierra Nevada of California; it typically grows to a height of 70–85 m (230–280 ft) and 5–7 m (16–23 ft) in diameter. The largest organism in the world, according to mass, is the aspen tree, whose colonies of clones can grow up to 8 kilometres (5 mi) long. The largest such colony is Pando, in the Fishlake National Forest in Utah.
A form of flowering plant that far exceeds Pando as the largest organism on Earth in area, and probably also mass, is the giant marine plant Posidonia australis, living in Shark Bay, Australia. Its length is about 180 km (112 mi) and it covers an area of 200 km2 (77 sq mi).[7][8] It is among the oldest known clonal plants too. Another giant marine plant of the genus Posidonia, Posidonia oceanica, discovered in the Mediterranean near the Balearic Islands, Spain, may be the oldest living organism in the world, with an estimated age of 100,000 years.[9] Green algae are photosynthetic unicellular and multicellular protists that are related to land plants. The thallus of the unicellular mermaid's wineglass, Acetabularia, can grow to several inches (perhaps 0.1 to 0.2 m) in length. The fronds of the similarly unicellular and invasive Caulerpa taxifolia can grow up to a foot (0.3 m) long.[citation needed] In 2023, paleontologists estimated that the extinct whale Perucetus, discovered in Peru, may have outweighed the blue whale, with a mass of 85–340 t (84–335 long tons; 94–375 short tons).[16] While controversial, estimates for the weight of the sauropod Bruhathkayosaurus suggest it was around 110–170 tons, with the highest estimate being 240 tons, if scaled with Patagotitan.[17] The upper estimates of weight for these two prehistoric animals would have easily rivaled or exceeded the blue whale. The African bush elephant (Loxodonta africana) is the largest living land animal. A native of various open habitats in sub-Saharan Africa, males weigh about 6.0 tonnes (13,200 lb) on average.[18] The largest elephant ever recorded was shot in Angola in 1974. It was a male measuring 10.67 metres (35.0 ft) from trunk to tail and 4.17 metres (13.7 ft) lying on its side in a projected line from the highest point of the shoulder, to the base of the forefoot, indicating a standing shoulder height of 3.96 metres (13.0 ft).
This male had a computed weight of 10.4 tonnes.[10] A spatial genetic analysis estimated that a specimen of Armillaria ostoyae growing over 91 acres (37 ha) in northern Michigan, United States weighs 440 tons (4 × 10^5 kg).[23][24] Approximations of the land area of the Oregon "humongous fungus" are 3.5 square miles (9.1 km2) (2,240 acres (910 ha), possibly weighing as much as 35,000 tons as the world's most massive living organism.[25] In Armillaria ostoyae, each individual mushroom (the fruiting body, similar to a flower on a plant) has only a 5 cm (2.0 in) stipe, and a pileus up to 12.5 cm (4.9 in) across. There are many other fungi which produce a larger individual size mushroom. The largest known fruiting body of a fungus is a specimen of Phellinus ellipsoideus (formerly Fomitiporia ellipsoidea) found on Hainan Island.[26] The fruiting body masses up to 500 kg (1,100 lb).[27][28] Until P. ellipsoideus replaced it, the largest individual fruit body came from Rigidoporus ulmarius. R. ulmarius can grow up to 284 kg (626 lb), 1.66 m (5.4 ft) tall, 1.46 m (4.8 ft) across, and has a circumference of up to 4.9 m (16 ft). The largest known species of bacterium is named Thiomargarita magnifica, which grows to 1 cm (0.39 in) in length,[37] making it visible to the naked eye and also about five thousand times the size of more typical bacteria.[38] BBC News described it as possessing the "size and shape of a human eyelash."[39] Science published a new paper on the bacterium on June 23, 2022.[40] According to a study coauthored by Jean-Marie Volland, a marine biologist and scientist at California's Laboratory for Research in Complex Systems, and an affiliate at the US Department of Energy Joint Genome Institute, T. magnifica can grow up to 2 centimeters long.[41] The largest virus on record is the Pithovirus sibericum with a length of 1.5 micrometres, comparable to the typical size of a bacterium and large enough to be seen in light microscopes.
It was discovered in March 2014 in an ice core sample collected from a permafrost in Siberia. Prior to this discovery, the largest virus was the peculiar virus genus Pandoravirus, which have a size of approximately 1 micrometer and whose genome contains 1,900,000 to 2,500,000 base pairs of DNA.[43]
17 metres (13.7 ft) lying on its side in a projected line from the highest point of the shoulder, to the base of the forefoot, indicating a standing shoulder height of 3.96 metres (13.0 ft). This male had a computed weight of 10.4 tonnes.[10] A spatial genetic analysis estimated that a specimen of Armillaria ostoyae growing over 91 acres (37 ha) in northern Michigan, United States weighs 440 tons (4 × 10^5 kg).[23][24] Approximations of the land area of the Oregon "humongous fungus" are 3.5 square miles (9.1 km2) (2,240 acres (910 ha), possibly weighing as much as 35,000 tons as the world's most massive living organism.[25] In Armillaria ostoyae, each individual mushroom (the fruiting body, similar to a flower on a plant) has only a 5 cm (2.0 in) stipe, and a pileus up to 12.5 cm (4.9 in) across. There are many other fungi which produce a larger individual size mushroom. The largest known fruiting body of a fungus is a specimen of Phellinus ellipsoideus (formerly Fomitiporia ellipsoidea) found on Hainan Island.[26] The fruiting body masses up to 500 kg (1,100 lb).[27][28] Until P. ellipsoideus replaced it, the largest individual fruit body came from Rigidoporus ulmarius. R. ulmarius can grow up to 284 kg (626 lb), 1.66 m (5.4 ft) tall, 1.46 m (4.8 ft) across, and has a circumference of up to 4.9 m (16 ft).
yes
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://wildaboututah.org/pando-the-worlds-largest-organism/
Pando-The World's Largest Organism - Wild About Utah
What we consider to be the world’s largest organism has changed over time. At one point, the largest animal crown went to a 150-ton female blue whale. And General Sherman, a 275-foot-tall Giant Sequoia, was the largest plant. In 1992, scientists discovered a fungus in northern Michigan and proclaimed it to be the world’s largest organism. Not nearly as visually stunning as a Giant Sequoia, this type of fungus is a filigree of mushrooms and rootlike tentacles spawned by a single fertilized spore. Over time it had grown to cover 37 acres, most of this below ground. Subsequent mushroom hunts uncovered even larger specimens elsewhere. I often hear the Great Barrier Reef, stretching over 1,600 miles and visible from space, called the world’s largest organism. But the reef is not a single organism. It is created from the limestone secretions of a great number of different reef-producing coral species. Fungi, reefs and giant trees are all very worthy biological wonders, but the thing that gets my largest organism vote is right here in Utah. Like the Great Barrier Reef, it’s so vast you really need to see it from a plane or even satellite. Like General Sherman, it has its own name—Pando—meaning “I spread” in Latin. Pando can be seen spreading itself in Fishlake National Forest in south central Utah. So what is Pando? And why is it so remarkable? Pando is a clonal aspen colony. Each “tree” that we see in an aspen forest is not an individual tree at all but a genetically identical stem connected underground to its parent clone. More trees arise from lateral roots, creating a group of genetically identical trees. But, biologically speaking, the colony is just one individual plant. Recent genetic testing by Dr. Karen Mock of Utah State University confirms Pando’s enormous size: it covers over 106 acres and contains around 47,000 aboveground stems or suckers.
When you consider the volume represented by the trees and root system, Pando easily wins the title of world’s largest organism. So far, anyway. Thanks to Dr. Karen Mock of Utah State University’s College of Natural Resources for her help in developing this piece. For pictures and sources of the remarkable Pando, see www.wildaboututah.org
no
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://www.vice.com/en/article/epz5gn/the-worlds-largest-organism-is-breaking-up-scientists-warn
The World's Largest Organism Is 'Breaking Up,' Study Warns
The World's Largest Organism Is ‘Breaking Up,’ Study Warns “The pando” has become a casual way to refer to the COVID-19 pandemic, but deep in the woods of central Utah, the Pando is the name for what scientists regard as the largest living organism in the world: over 40,000 massive aspen trees that are actually a single organism thought to stem from the same root. And despite thriving for several centuries, perhaps even millennia, this 106-acre beast is "breaking up" due to human influence. “To the untrained eye, it looks like deer and cattle are the villains here, but both of those species are highly manipulated by humans,” Paul Rogers, an ecologist at Utah State University who published a recent study on the disintegrating Pando in the journal Conservation Science and Practice, told Motherboard. Although it appears as if these hungry herbivores have been eating away at the Pando’s “world's largest organism” title for decades, after analyzing 64 plots of it, Rogers found that these animals are not to blame for depleting the Pando. Rather, government efforts to remove predators like bears and wolves from states like Utah, Montana and Wyoming have thrown the natural system off balance. Such interventions, including killing off wolf populations through poisoning, aren’t new, and occurred throughout the early 1900s. Since states make money from selling hunting licenses, and more deer means better hunting conditions and more profit, the issue remains politically and economically controversial, and more difficult to solve. “We took away the predators and elevated the numbers [of prey animals] so that people who like to hunt or see animals will be more successful,” Rogers says. Unfortunately, this has resulted in greater numbers of deer that are more domesticated and sedentary than wild deer left to their own devices. “Too many deer is a big problem,” Rogers warns.
Instead of reintegrating wolves back into the environment, or taking other steps to reduce the population of deer, Utah has opted to use fencing as a way to keep animals away and preserve the Pando. While this may appear to be an effective short-term solution, due to the unique way these aspen grow, fencing can hinder the natural regrowth that used to occur when the trees die off. “It's like putting a bandaid on a really big wound,” he says. “We still have a bleeding problem.” If the Pando is strained to the point of no longer being the world’s largest living organism, it may not be the end of the world. But because the Pando problem is such a microcosm, it is an example of how to address the deleterious effects of human activity on natural systems and could hold lessons for addressing other issues like global warming. Still, its shrinkage, and the mishandling of measures to stop it, is a bad sign for the future of humanity. “We have the ability to change course, that’s what the data and information point out,” Rogers said. If not, another contender for the world’s largest living organism is the “Humongous Fungus,” a honey mushroom that has swelled throughout Oregon, again, as a result of humans manipulating the environment. In the end, Rogers’ research concludes that if the Humongous Fungus dethrones the Pando one day, it’s not the deer’s fault. “The finger points back at us, pretty obviously.”
no
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://www.smithsonianmag.com/smart-news/mushroom-massive-three-blue-whales-180970549/
This Humongous Fungus Is as Massive as Three Blue Whales ...
This Humongous Fungus Is as Massive as Three Blue Whales The blue whale gets a lot of ink for being the largest animal to ever live, beating out even the biggest dinosaurs. But it turns out the largest organisms on Earth aren’t in the oceans; they are beneath our feet. By weight and area, honey mushrooms in the genus Armillaria beat whales many times over. Now, reports Matthew Taub at Atlas Obscura, a new analysis of the original “humongous fungus” in Michigan’s Upper Peninsula shows the massive mushroom is much bigger and much older than researchers first believed. About 25 years ago, researchers discovered that an Armillaria gallica mushroom near Crystal Falls, Michigan, covered about 91 acres, weighed 110 tons and was about 1,500 years old, setting a new record for the largest organism at the time. For a new study published on the preprint service bioRxiv, James Anderson, a biologist at the University of Toronto and one of the original discoverers of the fungus, returned to the site and took 245 samples from the mushroom and examined its genome. The team confirmed that indeed, the entire fungus is just one individual. The DNA also showed a very slow mutation rate, meaning that the honey mushroom isn’t evolving very quickly. The visit also led them to revise the fungus’s age to 2,500 years and determine that it is four times as massive as the original estimate, or about 440 tons, the equivalent of three blue whales. How can a mushroom be that big? What we think of as mushrooms are just the fruiting bodies of the organisms. The main part of a mushroom is a mass of underground tendrils called mycelium. Depending on the species, these tendrils can feed on soil, decaying plant matter or wood. In the case of the massive honey mushrooms, they have particularly thick black tendrils called rhizomorphs, reports Sarah Zhang at The Atlantic. The rhizomorphs can spread to acre upon acre in search of wood to consume.
While other mushrooms prefer already decaying wood, the honey mushroom infects living trees, often killing them over the course of several decades, then continues eating them after they are dead. While it’s possible to find the underground mass by the honey mushrooms that it occasionally sends up, the telltale sign that the fungus is underfoot is the grove of dying trees above it. The Crystal Falls humongous fungus was the original humongous fungus that showed these organisms can reach massive size. But since its discovery it has been eclipsed by other honey mushrooms. An Armillaria found in eastern Oregon’s Blue Mountains covers three square miles and may be over 8,000 years old, holding the current title for humongous-est of the funguses. The size and huge distribution of these mushrooms underground is difficult to imagine. “I wish all of the substrate [soil, wood and other matter the fungus grows on] would be transparent for five minutes, so I could see where it is and what it’s doing,” Anderson tells Zhang. “We would learn so much from a five-minute glimpse.”
yes
Mycology
Is the world's largest organism a fungus?
yes_statement
the "world"'s "largest" "organism" is a "fungus".. a "fungus" holds the title for being the "world"'s "largest" "organism".
https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism
Australian Seagrass Meadow Claims The Title Of The World's ...
Australian Seagrass Meadow Claims The Title Of The World's Largest Organism A flowering seagrass meadow in Australia is the "world's largest single living organism" (Credit: Angela Rossen/UWA) A sprawling seagrass meadow has just been declared the "world's largest single living organism" based on area. The Poseidon’s ribbon weed seagrass (Posidonia australis) is located in Shark Bay, a protected body of shallow water in Western Australia. It covers an impressive 77 square miles (200 square kilometers) — the equivalent of about 280 soccer fields! The record was previously held by a 3.7-square-mile mushroom, dubbed "Humongous Fungus," in the Malheur National Forest in Oregon, USA. Based on its size and growth rate, the researchers estimate that the meadow is 4,500 years old. While that is ancient, it is not record-breaking. A Posidonia oceanica seagrass plant in the western Mediterranean, which covers about 9.3 miles (15 kilometers), is believed to be over 100,000 years old! Researchers from the University of Western Australia and Flinders University stumbled upon the plant accidentally, while investigating the genetic diversity of Shark Bay's ribbon weed seagrass. The team analyzed seagrass specimens from ten meadows across Shark Bay, where the salt levels ranged from normal ocean salinity to almost twice as salty. To their surprise, they found the samples were genetically identical—meaning they all belonged to one plant. Further analysis showed that the seagrass originated from a single seedling. It grew by copying, or cloning, itself through an underground network of branching roots. Shark Bay is a protected body of shallow water in Western Australia (Credit: Angela Rossen/UWA) The massive plant also has another unique quality. Most seagrasses inherit half of each parent's DNA. However, the Shark Bay seagrass is a polyploid — a plant that carries the entire genome of each parent.
"Polyploid plants often reside in places with extreme environmental conditions, are often sterile, but can continue to grow if left undisturbed, and this giant seagrass has done just that," Dr. Elizabeth Sinclair, the study's senior author, explained. The scientists, who published their findings in the journal Proceedings of the Royal Society B on May 31, 2022, now plan to conduct experiments to determine how the seagrass continues to flourish in Shark Bay's harsh and varied environment. Generate citations in MLA, APA, & Chicago formats APA Ahmed, S. (2022, August 5). Australian Seagrass Meadow Claims The Title Of The World's Largest Organism. Retrieved 2023, August 15, from https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism Generate citations in MLA, APA, & Chicago formats Chicago Ahmed, Shariqua. “Australian Seagrass Meadow Claims The Title Of The World's Largest Organism.” DOGOnews. August 5, 2022. Accessed August 15, 2023. https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism. MLA7 Chicago Ahmed, Shariqua. “Australian Seagrass Meadow Claims The Title Of The World's Largest Organism.” DOGOnews. August 5, 2022. Accessed August 15, 2023. https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism. APA Ahmed, S. (2022, August 5). Australian Seagrass Meadow Claims The Title Of The World's Largest Organism. Retrieved 2023, August 15, from https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism
"Polyploid plants often reside in places with extreme environmental conditions, are often sterile, but can continue to grow if left undisturbed, and this giant seagrass has done just that," Dr. Elizabeth Sinclair, the study's senior author, explained. The scientists, who published their findings in the journal Proceedings of the Royal Society B on May 31, 2022, now plan to conduct experiments to determine how the seagrass continues to flourish in Shark Bay's harsh and varied environment. Generate citations in MLA, APA, & Chicago formats APA Ahmed, S. (2022, August 5). Australian Seagrass Meadow Claims The Title Of The World's Largest Organism. Retrieved 2023, August 15, from https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism Generate citations in MLA, APA, & Chicago formats Chicago Ahmed, Shariqua. “Australian Seagrass Meadow Claims The Title Of The World's Largest Organism.” DOGOnews. August 5, 2022. Accessed August 15, 2023. https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism. MLA7 Chicago Ahmed, Shariqua. “Australian Seagrass Meadow Claims The Title Of The World's Largest Organism.” DOGOnews. August 5, 2022. Accessed August 15, 2023. https://www.dogonews.com/2022/8/5/australian-seagrass-meadow-claims-the-title-of-the-worlds-largest-organism. APA Ahmed, S. (2022, August 5).
no
Mycology
Is the world's largest organism a fungus?
no_statement
the "world"'s "largest" "organism" is not a "fungus".. a "fungus" is not the "world"'s "largest" "organism".
https://www.smithsonianmag.com/smart-news/pano-one-worlds-largest-organisms-dying-180970579/
Pando, One of the World's Largest Organisms, Is Dying | Smart ...
Pando, One of the World’s Largest Organisms, Is Dying Sweeping across 107 acres of Utah’s Fishlake National Forest is one of the world’s largest organisms: a forest of some 47,000 genetically identical quaking aspen trees, which all stem from a single root system. Pando, as the organism is known (its name is Latin for “I spread”), has been growing for at least 80,000 years. But according to Yasmin Tayag of Inverse, the grove’s health has declined dramatically over the past few decades. Pando, a recent study has found, is dying. Weighing 13 million pounds, Pando is the world’s largest organism by mass (Oregon’s “humungous fungus” spans a greater distance). Quaking aspens can reproduce by disseminating seeds, but more frequently, they send up sprouts from their roots and form a mass of trees aptly known as a “clone.” The new study, published in PLOS One, shows that Pando isn’t regenerating in the way that it should. Researchers assessed 65 plots that had been subjected to varying degrees of human efforts to protect the grove: some plots had been surrounded by a fence, some had been fenced in and regulated through interventions—like shrub removal and selective tree cutting—and some were untouched. The team tracked the number of living and dead trees, along with the number of new stems. Researchers also examined animal feces to determine how species that graze in Fishlake National Forest might be impacting Pando’s health. Their findings were pretty grim. In most areas of the grove, there are no “young or middle-aged trees at all,” lead study author Paul Rogers, an ecologist at Utah State University, tells Yasemin Saplakoglu of Live Science. Pando, he adds, is made up almost entirely of “very elderly senior citizens.” Mule deer and cattle appear to be the primary cause of Pando’s decline. The animals are chomping off the tops of saplings at alarming rates, leaving the grove with few opportunities to regenerate. But really, it isn’t the animals that are to blame.
Under a U.S. Forest Service grazing allotment, ranchers are allowed to let their cattle graze at Pando for about two weeks every year, according to the study. Another major problem is the lack of apex predators in the area; in the early 1900s, humans aggressively hunted animals like wolves, mountain lions and grizzly bears, which help keep mule deer in check. And much of the fencing that was erected to protect Pando isn’t working; mule deer, it seems, are able to jump over the fences. “People are at the center of [the] failure,” Rogers tells Yessenia Funes of Earther. As part of the new study, the team also analyzed aerial photographs of Pando taken over the past 72 years. The images drive home the grove’s dire state. In the late 1930s, the crowns of the trees were touching. But over the past 30 to 40 years, gaps began to appear within the forest, indicating that new trees aren’t cropping up to replace the ones that have died. And that isn’t great news for the animals and plants that depend on the trees to survive, Rogers says in a statement. Fortunately, all is not lost. There are ways that humans can intervene to give Pando the time it needs to get back on track, among them culling voracious deer and putting up better fencing to keep the animals away from saplings. As Rogers says, “It would be a shame to witness the significant reduction of this iconic forest when reversing this decline is realizable should we demonstrate the will to do so.”
no
Neuroscience
Is there a connection between brain size and intelligence?
yes_statement
"brain" "size" is "connected" to "intelligence".. "intelligence" is linked to "brain" "size".
https://www.technologyreview.com/2019/04/10/136131/chinese-scientists-have-put-human-brain-genes-in-monkeysand-yes-they-may-be-smarter/
Chinese scientists have put human brain genes in monkeys—and ...
Chinese scientists have put human brain genes in monkeys—and yes, they may be smarter Human intelligence is one of evolution’s most consequential inventions. It is the result of a sprint that started millions of years ago, leading to ever bigger brains and new abilities. Eventually, humans stood upright, took up the plow, and created civilization, while our primate cousins stayed in the trees. Now scientists in southern China report that they’ve tried to narrow the evolutionary gap, creating several transgenic macaque monkeys with extra copies of a human gene suspected of playing a role in shaping human intelligence. “This was the first attempt to understand the evolution of human cognition using a transgenic monkey model,” says Bing Su, the geneticist at the Kunming Institute of Zoology who led the effort. According to their findings, the modified monkeys did better on a memory test involving colors and block pictures, and their brains also took longer to develop—as those of human children do. There wasn’t a difference in brain size. The experiments, described on March 27 in a Beijing journal, National Science Review, and first reported by Chinese media, remain far from pinpointing the secrets of the human mind or leading to an uprising of brainy primates. Instead, several Western scientists, including one who collaborated on the effort, called the experiments reckless and said they questioned the ethics of genetically modifying primates, an area where China has seized a technological edge. “The use of transgenic monkeys to study human genes linked to brain evolution is a very risky road to take,” says James Sikela, a geneticist who carries out comparative studies among primates at the University of Colorado. He is concerned that the experiment shows disregard for the animals and will soon lead to more extreme modifications. “It is a classic slippery slope issue and one that we can expect to recur as this type of research is pursued,” he says. 
“It is troubling that the field is steamrolling along in this manner,” says Sikela. Evolution story Su, a researcher at the Kunming Institute of Zoology, specializes in searching for signs of “Darwinian selection”—that is, genes that have been spreading because they’re successful. His quest has spanned such topics as Himalayan yaks’ adaptation to high altitude and the evolution of human skin color in response to cold winters. The biggest riddle of all, though, is intelligence. What we know is that our humanlike ancestors’ brains rapidly grew in size and power. To find the genes that caused the change, scientists have sought out differences between humans and chimpanzees, whose genes are about 98% similar to ours. The objective, says Sikela, was to locate “the jewels of our genome”—that is, the DNA that makes us uniquely human. For instance, one popular candidate gene called FOXP2—the “language gene” in press reports—became famous for its potential link to human speech. (A British family whose members inherited an abnormal version had trouble speaking.) Scientists from Tokyo to Berlin were soon mutating the gene in mice and listening with ultrasonic microphones to see if their squeaks changed. Su was fascinated by a different gene: MCPH1, or microcephalin. Not only did the gene’s sequence differ between humans and apes, but babies with damage to microcephalin are born with tiny heads, providing a link to brain size. With his students, Su once used calipers and head spanners to measure the heads of 867 Chinese men and women to see if the results could be explained by differences in the gene. By 2010, though, Su saw a chance to carry out a potentially more definitive experiment—adding the human microcephalin gene to a monkey.
China by then had begun pairing its sizable breeding facilities for monkeys (the country exports more than 30,000 a year) with the newest genetic tools, an effort that has turned it into a mecca for foreign scientists who need monkeys to experiment on. To create the animals, Su and collaborators at the Yunnan Key Laboratory of Primate Biomedical Research exposed monkey embryos to a virus carrying the human version of microcephalin. They generated 11 monkeys, five of which survived to take part in a battery of brain measurements. Those monkeys each have between two and nine copies of the human gene in their bodies. “You just go to the Planet of the Apes immediately in the popular imagination,” says Jacqueline Glover, a University of Colorado bioethicist who was one of the authors. “To humanize them is to cause harm. Where would they live and what would they do? Do not create a being that can’t have a meaningful life in any context.” The authors concluded, however, that it might be acceptable to make such changes to monkeys. In an e-mail, Su says he agrees that apes are so close to humans that their brains shouldn’t be changed. But monkeys and humans last shared an ancestor 25 million years ago. To Su, that alleviates the ethical concerns. “Although their genome is close to ours, there are also tens of millions of differences,” he says. He doesn’t think the monkeys will become anything more than monkeys. “Impossible by introducing only a few human genes,” he says. Smart monkey? Judging by their experiments, the Chinese team did expect that their transgenic monkeys could end up with increased intelligence and brain size. That is why they put the creatures inside MRI machines to measure their white matter and gave them computerized memory tests. According to their report, the transgenic monkeys didn’t have larger brains, but they did better on a short-term memory quiz, a finding the team considers remarkable. 
Several scientists think the Chinese experiment didn’t yield much new information. One of them is Martin Styner, a University of North Carolina computer scientist and specialist in MRI who is listed among the coauthors of the Chinese report. Styner says his role was limited to training Chinese students to extract brain volume data from MRI images, and that he considered removing his name from the paper, which he says was not able to find a publisher in the West. “There are a bunch of aspects of this study that you could not do in the US,” says Styner. “It raised issues about the type of research and whether the animals were properly cared for.” After what he’s seen, Styner says he’s not looking forward to more evolution research on transgenic monkeys. “I don’t think that is a good direction,” he says. “Now we have created this animal which is different than it is supposed to be. When we do experiments, we have to have a good understanding of what we are trying to learn, to help society, and that is not the case here.” One issue is that genetically modified monkeys are expensive to create and care for. With just five modified monkeys, it’s hard to reach firm conclusions about whether they really differ from normal monkeys in terms of brain size or memory skills. “They are trying to understand brain development. And I don’t think they are getting there,” says Styner. In an e-mail, Su agreed that the small number of animals was a limitation. He says he has a solution, though. He is making more of the monkeys and is also testing new brain evolution genes. One that he has his eye on is SRGAP2C, a DNA variant that arose about two million years ago, just when Australopithecus was ceding the African savannah to early humans. That gene has been dubbed the “humanity switch” and the “missing genetic link” for its likely role in the emergence of human intelligence. Su says he’s been adding it to monkeys, but that it’s too soon to say what the results are. 
His quest has spanned such topics as Himalayan yaks’ adaptation to high altitude and the evolution of human skin color in response to cold winters. The biggest riddle of all, though, is intelligence. What we know is that our humanlike ancestors’ brains rapidly grew in size and power. To find the genes that caused the change, scientists have sought out differences between humans and chimpanzees, whose genes are about 98% similar to ours. The objective, says Sikela, was to locate “the jewels of our genome”—that is, the DNA that makes us uniquely human. For instance, one popular candidate gene called FOXP2—the “language gene” in press reports—became famous for its potential link to human speech. (A British family whose members inherited an abnormal version had trouble speaking.) Scientists from Tokyo to Berlin were soon mutating the gene in mice and listening with ultrasonic microphones to see if their squeaks changed. Su was fascinated by a different gene: MCPH1, or microcephalin. Not only did the gene’s sequence differ between humans and apes, but babies with damage to microcephalin are born with tiny heads, providing a link to brain size. With his students, Su once used calipers and head spanners to measure the heads of 867 Chinese men and women to see if the results could be explained by differences in the gene. By 2010, though, Su saw a chance to carry out a potentially more definitive experiment—adding the human microcephalin gene to a monkey. China by then had begun pairing its sizable breeding facilities for monkeys (the country exports more than 30,000 a year) with the newest genetic tools, an effort that has turned it into a mecca for foreign scientists who need monkeys to experiment on. To create the animals, Su and collaborators at the Yunnan Key Laboratory of Primate Biomedical Research exposed monkey embryos to a virus carrying the human version of microcephalin. 
They generated 11 monkeys, five of which survived to take part in a battery of brain measurements.
yes
Neuroscience
Is there a connection between brain size and intelligence?
yes_statement
"brain" "size" is "connected" to "intelligence".. "intelligence" is linked to "brain" "size".
https://www.earth.com/news/human-brain-size/
Accidental intelligence: human brain size may be due to DNA 'typo ...
Accidental intelligence: human brain size may be due to DNA 'typo' Our brains are big and complex, able to take in and process a massive amount of information – and human brain size and complexity may be due in part to a ‘typo’ in our DNA, according to a new study. The genetic mutation is likely to go back millions of years. Our closest cousins, the extinct Neanderthals and Denisovans, were brainiacs, too. It’s not present in our closest living relatives, chimpanzees, however. Their brains are about a third the size of our own. That means the mutation probably cropped up no earlier than 5 or 6 million years ago, when the ancestors of humans and chimpanzees first started down separate evolutionary paths, the researchers said. During that time period, between 2 and 6 million years ago, human ancestors began walking upright and using tools. At the same time, human brain size began expanding. Eventually, early humans began traveling from Africa to other parts of the world. About 800,000 years ago, their brains began growing rapidly, around the same time early humans were learning how to adapt to new habitats and environments. Last year, researchers found a gene that appeared to be linked to growth of the neocortex, the part of the brain connected with vision, hearing, spatial reasoning, conscious thought, and language. In humans, about 76 percent of the brain is the neocortex. Now, researchers have figured out why that gene works the way it does. The tiny mutation to human DNA – called a point mutation – allows for certain stem cells to grow in a way that leads to a larger neocortex. All living humans appear to have this genetic mutation, the researchers said. Scientists aren’t sure what caused the mutation, or if it was the only genetic change that played a role in developing modern humans’ intelligence. Larger brains have been mostly beneficial to humanity. They’ve allowed for the myriad inventions and innovations that make modern life more comfortable. 
However, human brain size has also made childbirth more dangerous for mothers and infants. Human infants have larger heads for their body size compared to other mammals, making childbirth painful for their mothers. Their skulls are also not yet fully developed – the “soft spots” allow room for their head and brain to grow extensively in the first years after birth, but also make them vulnerable.
Accidental intelligence: human brain size may be due to DNA 'typo' Our brains are big and complex, able to take in and process a massive amount of information – and human brain size and complexity may be due in part to a ‘typo’ in our DNA, according to a new study. The genetic mutation is likely to go back millions of years. Our closest cousins, the extinct Neanderthals and Denisovans, were brainiacs, too. It’s not present in our closest living relatives, chimpanzees, however. Their brains are about a third the size of our own. That means the mutation probably cropped up no earlier than 5 or 6 million years ago, when the ancestors of humans and chimpanzees first started down separate evolutionary paths, the researchers said. During that time period, between 2 and 6 million years ago, human ancestors began walking upright and using tools. At the same time, human brain size began expanding. Eventually, early humans began traveling from Africa to other parts of the world. About 800,000 years ago, their brains began growing rapidly, around the same time early humans were learning how to adapt to new habitats and environments. Last year, researchers found a gene that appeared to be linked to growth of the neocortex, the part of the brain connected with vision, hearing, spatial reasoning, conscious thought, and language. In humans, about 76 percent of the brain is the neocortex. Now, researchers have figured out why that gene works the way it does. The tiny mutation to human DNA – called a point mutation – allows for certain stem cells to grow in a way that leads to a larger neocortex. All living humans appear to have this genetic mutation, the researchers said. Scientists aren’t sure what caused the mutation, or if it was the only genetic change that played a role in developing modern humans’ intelligence. Larger brains have been mostly beneficial to humanity. They’ve allowed for the myriad inventions and innovations that make modern life more comfortable. 
However, human brain size has also made childbirth more dangerous for mothers and infants.
yes
Neuroscience
Is there a connection between brain size and intelligence?
yes_statement
"brain" "size" is "connected" to "intelligence".. "intelligence" is linked to "brain" "size".
https://www.frontiersin.org/articles/10.3389/fnhum.2019.00044
Genes, Cells and Brain Areas of Intelligence - Frontiers
What is the neurobiological basis of human intelligence? The brains of some people seem to be more efficient than those of others. Understanding the biological foundations of these differences is of great interest to basic and applied neuroscience. Somehow, the secret must lie in the cells in our brain with which we think. However, at present, research into the neurobiology of intelligence is divided between two main strategies: brain imaging studies investigate macroscopic brain structure and function to identify brain areas involved in intelligence, while genetic association studies aim to pinpoint genes and genetic loci associated with intelligence. Nothing is known about how properties of brain cells relate to intelligence. The emergence of transcriptomics and cellular neuroscience of intelligence might, however, provide a third strategy and bridge the gap between identified genes for intelligence and brain function and structure. Here, we discuss the latest developments in the search for the biological basis of intelligence. In particular, the recent availability of very large cohorts with hundreds of thousands of individuals has propelled exciting developments in the genetics of intelligence. Furthermore, we discuss the first studies that show that specific populations of brain cells associate with intelligence. Finally, we highlight how specific genes that have been identified generate cellular properties associated with intelligence and may ultimately explain structure and function of the brain areas involved. Thereby, the road is paved for a cellular understanding of intelligence, which will provide a conceptual scaffold for understanding how the constellation of identified genes benefits cellular functions that support intelligence. What Is Intelligence? Intuitively we all know what it is to be intelligent, although definitions of intelligence can be very diverse. 
It is something that helps us plan, reason, solve problems, quickly learn, think on our feet, make decisions and, ultimately, survive in the fast, modern world. To capture this elusive trait, cognitive tests have been designed to measure performance in different cognitive domains, such as processing speed and language. Very soon it became clear that the results of different cognitive tests are highly correlated and generate a strong general factor that underlies different capabilities—general intelligence or Spearman’s g (Spearman, 1904). One of the most widely used tests today for estimating Spearman’s g is the Wechsler Adult Intelligence Scale (WAIS). This test combines the results of multiple cognitive tests into one measurement, the full-scale IQ score. Are the tests able to measure human intelligence, and does expressing it in a single number—an IQ score—make sense? Despite critiques of this reductionist approach to intelligence, the tests have proven their validity and relevance. First, results of IQ tests strongly correlate with life outcomes, including socioeconomic status and cognitive ability, even when measured early in life (Foverskov et al., 2017). An increasingly complex and technology-dependent society imposes ever-growing cognitive demands on individuals in almost every aspect of everyday life, such as banking, using maps and transportation schedules, reading and understanding forms, and interpreting news articles. Higher intelligence offers many seemingly small advantages, but they accumulate to affect individuals’ overall chances in life (Gottfredson, 1997). These are beneficial to socioeconomic status, education, social mobility, job performance, and even lifestyle choices and longevity (Lam et al., 2017). Second, intelligence turns out to be a very stable trait from young to old age in the same individual. 
In a large longitudinal study of English children, a correlation of 0.81 was observed between intelligence at 11 years of age and scores on national tests of educational achievement 5 years later. This contribution of intelligence was evident in all 25 academic disciplines (Deary et al., 2007). Even at much later age, intelligence remains stable: a single test of general intelligence taken at age 11 correlated highly with the results of the test at the age of 90 (Deary et al., 2013). Finally, one of the most remarkable findings of twin studies is that heritability of intelligence is extraordinarily large, in the range 50%–80% even reaching 86% for verbal IQ (Posthuma et al., 2001). This makes human intelligence one of the most heritable behavioral traits (Plomin and Deary, 2015). Moreover, with every generation, assortative mating infuses additive genetic variance into the population, contributing to this high heritability (Plomin and Deary, 2015). Thus, despite its elusiveness in definition, intelligence lies at the core of individual differences among humans. It can be measured by cognitive tests and the results of such tests have proven their validity and relevance: intelligence measures are stable overtime, show high heritability and predict major life outcomes. Biological Basis of Intelligence: A Whole-Brain Perspective Are Bigger Brains Smarter? A question that has puzzled scientists for centuries is that of the origin of human intelligence. What makes some people smarter than others? The quest to answer these questions has started as early as 1830s in Europe and Russia where the brains of deceased elite scientists and artists were systematically collected and meticulously studied (Vein and Maat-Schieman, 2008). However, all the attempts to dissect the exceptional ability and talent did not reveal much at that time. The reigning hypothesis of the past century was that smarter people have bigger brains. 
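The general factor g described above is conventionally estimated as the first factor (or first principal component) of the correlation matrix of scores on several cognitive tests. A minimal sketch with synthetic data — the sample size, factor loadings, and noise level here are invented for illustration, not taken from any study cited above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic scores of 200 people on 5 cognitive tests: one shared
# latent factor ("g") plus test-specific noise (all numbers invented).
g = rng.normal(size=(200, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, 5))
scores = g * loadings + rng.normal(scale=0.5, size=(200, 5))

# Spearman-style general factor: share of variance carried by the
# first principal component of the inter-test correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)  # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()
print(f"first factor explains {share:.0%} of test variance")
```

Because every test loads on the same latent variable, the first component absorbs most of the shared variance, which is the statistical pattern Spearman originally observed.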
With the advances in neuroimaging techniques this hypothesis was put to test in many studies. Indeed, a meta-analysis of 37 studies with over 1,500 individuals of the relationship between in vivo brain volume and intelligence found a moderate, yet significant positive correlation of 0.33 (McDaniel, 2005). A more recent meta-study of 88 studies with over 8,000 individuals again reported a significant, positive, slightly smaller correlation coefficient of 0.24. One of the conclusions of this study was that the strength of the association of brain volume and IQ seems to be overestimated in the literature but remains robust after accounting for publication bias (Pietschnig et al., 2015). Thus, overall bigger brain volume, when analyzed across multiple studies, is associated with higher intelligence. Which Brain Areas Are Important for Intelligence? Brain function is distributed across various areas that harbor specific functions. Can intelligence be attributed to one or several of these areas? Structural and functional brain imaging studies focused on locating general intelligence within the brain and linking specific types of cognition to specific brain areas (Deary et al., 2010). Early imaging studies associating intelligence to brain structure showed that full-scale IQ scores, a measure of general intelligence, showed a widely distributed pattern of correlations with brain structures: IQ scores correlated with intracranial, cerebral, temporal lobe, hippocampal, and cerebellar volumes (Andreasen et al., 1993), that together encompass almost all brain areas. Voxel-based morphometry (VBM), a neuroimaging analysis technique that allows estimation of focal differences in brain structure, makes it possible to test whether any such areas are clustered together or distributed throughout the brain. 
Application of VBM to brain imaging data revealed that positive correlations between intelligence and cortical thickness are located primarily in multiple association areas of frontal and temporal lobes (Hulshoff Pol et al., 2006; Narr et al., 2007; Choi et al., 2008; Karama et al., 2009). Based on 37 neuroimaging studies, Jung and Haier (2007) put forward that in particular the structure of frontal Brodmann areas 10, 45–47, parietal areas 39 and 40, and temporal area 21 positively contribute to IQ scores (Jung and Haier, 2007). This model was extended by later studies to frontal eye field, orbitofrontal area, as well as a large number of areas in temporal lobe—inferior and middle temporal gyrus, parahippocampal cortex and auditory association cortex (Narr et al., 2007; Choi et al., 2008; Colom et al., 2009; Figure 1). Brain Structure Changes Brain structure is not fixed at one particular developmental time point and then remains unaltered for the rest of our lives. Gray matter volume changes throughout childhood as well as adulthood (Gogtay et al., 2004) and is influenced by learning, hormonal differences, experience and age. Gray matter changes may reflect rearrangements of dendrites and synapses between neurons (Gogtay et al., 2004). When people acquire a new skill, for instance juggling, transient and selective structural changes are observed in brain areas that are associated with the processing and storage of complex visual motion (Draganski et al., 2004). Similarly, sex differences and age differences are important factors that influence brain structure and can affect which cortical areas associate with intelligence. Substantial sex differences were reported in the pattern of correlations between intelligence and regional gray and white matter volumes (Haier et al., 2005; Narr et al., 2007; Yang et al., 2014; Ryman et al., 2016), but the reports do not fully agree on the brain areas showing sex differences or their association with cognitive performance. 
Haier et al. (2005) reported correlations of IQ with parietal and frontal regions in males, whereas women showed correlations mainly within the frontal lobe (Haier et al., 2005). Similar results were obtained by Ryman et al. (2016) in males—fronto-parietal gray matter was more significantly related to general cognitive ability. However, in females the results indicated associations with intelligence in white matter efficiency and total gray matter volume (Ryman et al., 2016). Yet different conclusions were drawn by Narr et al. (2007), where women showed significant associations in gray matter thickness in prefrontal and temporal association cortices, whereas men show associations primarily in temporal-occipital association cortices (Narr et al., 2007). Finally, in a recent study where surface-based morphometry (SBM) was applied instead of VBM, substantial group differences in brain structure were found between sexes but cognitive performance was unrelated to brain structural variation within and between sexes (Escorial et al., 2015). What the studies do agree on is that substantial sex differences exist in brain structure, but that these differences not always underlie variation in cognitive performance. For example, one of the well-established sex differences in brain structure is the increased cortical thickness of males compared to females (Lüders et al., 2002), but relationships between full-scale IQ score and brain tissue volumes do not differ between men and women (Narr et al., 2007; Escorial et al., 2015). Age Matters In addition to sex differences, gray matter volume shows dramatic changes during lifetime that are part of normal development (Gogtay et al., 2004). The initial increase at earlier ages is followed by sustained thinning around puberty. This developmental change is thought to be a result of overproduction of synapses in early childhood and increased synaptic pruning in adolescence and young adulthood (Bourgeois et al., 1994). 
Furthermore, different areas have their own timeline of maturation: higher-order association cortices mature only after lower-order somatosensory and visual cortices (Gogtay et al., 2004). Correlations with intelligence follow a similar developmental curve. The strongest correlations between gray matter volume and intelligence have been found for children around the age of 10 years (Shaw et al., 2006; Jung and Haier, 2007). However, at age 12, around the start of cortical thinning, a negative relationship emerges (Brouwer et al., 2014). Moreover, it seems that the whole pattern of cortical maturation unfolds differently in more intelligent children. Children with higher IQ demonstrate a particularly plastic cortex, with an initial accelerated and prolonged phase of cortical increase and equally vigorous cortical thinning by early adolescence (Shaw et al., 2006). Brain Specialization to Different Types of Intelligence In addition to associations of cortical structure with intelligence, imaging studies have revealed correlations of functional activation of cortical areas with intelligence. Psychology distinguishes between two types of intelligence that together comprise Spearman’s g: crystallized and fluid intelligence. Crystallized intelligence is based on prior knowledge and experience and reflects verbal cognition, while fluid intelligence requires adaptive reasoning in novel situations (Carroll, 1993; Engle et al., 1999). Multiple studies imply that fluid intelligence relies on more efficient function of distributed cortical areas (Duncan et al., 2000; Jung and Haier, 2007; Choi et al., 2008). In particular, lateral frontal cortex, with its well-established role in reasoning, attention and working memory, seems to support fluid intelligence, but also the parietal lobe is implicated. One of the earlier studies of fluid intelligence using Raven’s Advanced Progressive Matrices by Haier et al. 
(1988) demonstrated activation of several areas in the left-hemisphere, in particular posterior cortex. Cognitive performance showed significant negative correlations with cortical metabolic rates, suggesting more efficient neural circuits (Haier et al., 1988). In later studies, fluid intelligence was strongly linked to both function and structure of frontal lobe regions (Choi et al., 2008). When participants perform verbal and nonverbal versions of a challenging working-memory task, while their brain activity is measured using functional magnetic resonance imaging (fMRI), individuals with higher fluid intelligence are more accurate and have greater event-related neural activity in lateral prefrontal and parietal regions (Gray et al., 2003). Also in a PET-scan study, participants showed a selective recruitment of lateral frontal cortex during more complicated cognitive tasks compared to easier tasks (Duncan et al., 2000). In a more recent report, the measurements of gray matter volume of two frontal areas—orbito-frontal (OFC) and rostral anterior cingulate cortices (rACC)—were complemented by white matter connectivity between these regions. Together, left gray matter volume and white matter connectivity between left posterior OFC and rACC accounted for up to 50% of the variance in general intelligence. Thus, especially in prefrontal cortex, structure, function and connectivity all relate to general intelligence, specifically to reasoning ability and working memory (Ohtani et al., 2014). Crystallized intelligence that largely relies on verbal ability, on the other hand, depends more on the cortical structure and cortical thickness in lateral areas of temporal lobes and temporal pole (Choi et al., 2008; Colom et al., 2009). While parietal areas (Brodman area 40) show overlap in their involvement in crystallized and other types of intelligence, temporal Brodman area 38 is exclusively involved in crystallized intelligence. 
These findings harmonize well with the function of the temporal lobe—it is thought to be responsible for integrating diverse semantic information from distinct brain regions. Studies of patients with semantic dementia support the role of temporal lobe in semantic working memory as well as memory storage (Gainotti, 2006). Thus, subdividing Spearman’s g reveals distinct cortical distributions involved in subdomains of intelligence. It is likely that further subdividing fluid and crystallized intelligence, for instance in verbal comprehension, working memory, processing speed, and perceptual organization, may result in a more defined map of cortical regions on left and right hemisphere that relate to these subdomains of intelligence (Jung and Haier, 2007). White Matter and Intelligence Not only gray matter, but also white matter volumes show an association with intelligence that can be explained by common genetic origin (Posthuma et al., 2002). White matter consists of myelinated axons transferring information from one brain region to another and integrity of the white matter tracts is essential for normal cognitive function. Thus, specific patterns of white matter dysconnectivity are associated with heritable general cognitive and psychopathology factors (Alnæs et al., 2018). For example, Yu et al. (2008) found that mental retardation patients show extensive damage in the integrity of white matter tracts that was assessed by fractional anisotropy. IQ scores significantly correlated with the integrity of multiple white matter tracts in both healthy controls and mental retardation patients (Yu et al., 2008). This correlation was especially prominent in right uncinate fasciculus that connects parts of temporal lobe with the frontal lobe areas (Yu et al., 2008). 
These results support previous findings on the association of particularly temporal and frontal lobe gray matter volume and intelligence (Hulshoff Pol et al., 2006; Narr et al., 2007; Choi et al., 2008; Karama et al., 2009) and emphasize that intact connectivity between these areas is important for intelligence. Longitudinal studies that track changes in white matter across development and during aging also show that changes in white matter are accompanied by changes in intelligence. During brain maturation in children, white matter structure shows associations with intelligence. In a large sample (n = 778) of 6- to 10-year-old children, white matter microstructure was linked to non-verbal intelligence and to visuospatial ability, independent of age (Muetzel et al., 2015). In another study, where white matter was studied in typically-developing children vs. struggling learners, the white matter connectome efficiency was strongly associated with intelligence and educational attainment in both groups (Bathelt et al., 2018). Conclusions on Gross Brain Distribution of Intelligence Thus, both functional and structural neuroimaging studies show that general intelligence cannot be attributed to one specific region. Rather, intelligence is supported by a distributed network of brain regions in many, if not all, higher-order association cortices, also known as parietal-frontal network (Jung and Haier, 2007; Figure 1). This network includes a large number of regions—the dorsolateral prefrontal cortex, the parietal lobe, and the anterior cingulate, multiple regions within the temporal and occipital lobes and, finally, major white matter tracts. Some limited division of function can be observed, implicating frontal and parietal areas in fluid intelligence, temporal lobes in crystallized intelligence and white matter integrity in processing speed. 
Although brain imaging studies have identified anatomical and functional correlates of human intelligence, the actual correlation coefficients have consistently been modest, around 0.15–0.35 (Hulshoff Pol et al., 2006; Narr et al., 2007; Choi et al., 2008; Karama et al., 2009). There are most likely various reasons for this, but an important conclusion is that human intelligence can only partly be explained by brain structure and functional activation of cortical areas observed in MRI. There are other factors contributing to intelligence that have to be considered. To put it in an evolutionary perspective, the human brain has outstanding cognitive capabilities compared to other species, that include many specific human abilities—abstract thinking, language and creativity. However, human brain anatomy is not that distinct from other mammalian species and it cannot satisfactorily account for a marked evolutionary jump in intelligence. Both in its size and neuronal count, the human brain does not evolutionary stand out: elephants and whales have larger brains (Manger et al., 2013) and long-finned pilot whale cortex contains more neurons (37 billion) than that of humans (19–23 billion; Pakkenberg and Gundersen, 1997; Herculano-Houzel, 2012; Mortensen et al., 2014). Especially the brains of our closest neighbors on the evolutionary scale, non-human primates, show remarkable resemblance. In fact, the human brain is anatomically in every way a linearly scaled-up primate brain (Herculano-Houzel, 2012), and appears to have little exceptional or extraordinary features to which outstanding cognitive abilities can be attributed. Thus, answers to the origins of human intelligence and its variation between individuals most probably do not lie only in the gross anatomy of the brain, but rather should be sought at the level of its building blocks and computational units—neurons, synapses and their genetic make-up. 
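Such modest correlations translate into small shares of explained variance, since the variance explained is simply the square of the correlation coefficient. A quick sketch using the two meta-analytic brain-volume/IQ values quoted earlier (r = 0.33 and r = 0.24):

```python
def variance_explained(r: float) -> float:
    """Fraction of variance shared by two variables with correlation r (R^2 = r^2)."""
    return r ** 2

# Meta-analytic brain volume-IQ correlations cited in the text.
for label, r in [("McDaniel (2005)", 0.33), ("Pietschnig et al. (2015)", 0.24)]:
    print(f"{label}: r = {r:.2f}, variance explained = {variance_explained(r):.1%}")
```

Even the larger coefficient leaves roughly 90% of the variance in intelligence unaccounted for by brain volume, which is the point the surrounding paragraph makes.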
A Genetic Approach to Intelligence Given that intelligence is one of the most heritable traits, it follows that also its neurobiological correlates should be under strong genetic influence. Indeed, both cortical gray and white matter show a gradient of similarity in subjects with increasing genetic affinity (Thompson et al., 2001; Posthuma et al., 2002). This structural brain similarity is especially strong in frontal and lateral temporal regions, which show most significant heritability (Thompson et al., 2001). Hence, overall brain volume links to intelligence and to a large extent shares a common genetic origin. How and when during the development is genetic influence exerted by individual genes and what are the genes that determine human intelligence? Genes of Intelligence Over the last decade, genome-wide association studies (GWAS) evolved into a powerful tool for investigating the genes underlying variation in many human traits and diseases (Bush and Moore, 2012). GWAS studies test for associations between phenotypes and genetic variants—single-nucleotide polymorphisms (SNPs)—in large groups of unrelated individuals. Although the large majority of SNPs have a minimal impact on biological pathways, some SNPs can also have functional consequences, causing amino acid changes and thus lead to the identification of genetic underpinnings of a disease or a trait (Bush and Moore, 2012). After the first wave of GWAS of intelligence studies yielded mostly non-replicable results (Butcher et al., 2008; Davies et al., 2011, 2015, 2016; Trampush et al., 2017) it became evident that intelligence is a highly polygenic trait and much larger sample sizes are needed to reliably identify contributing genes (Plomin and von Stumm, 2018). Meta-analysis of the first 31 cohorts (N = 53,949) could only predict ~1.2% of the variance in general cognitive function in an independent sample and biological pathway analysis did not produce significant findings (Davies et al., 2015). 
Using educational attainment as a proxy phenotype of intelligence boosted both the sample size and the number of associated genes found. Educational attainment is the number of years spent in full-time education. Both phenotypically (Deary et al., 2010) and genetically (Trampush et al., 2017) it strongly correlates with IQ. Because the number of school years is one of the common, routinely gathered parameters, this approach increased sample sizes to ~400,000 individuals in the latest GWAS studies (Okbay et al., 2016). Even larger sample sizes were obtained by combining the GWAS for cognitive ability with educational attainment (Lam et al., 2017; Trampush et al., 2017) and by focusing on GWAS of intelligence in multiple cohorts (Savage et al., 2018; Zabaneh et al., 2018). What are the genes of intelligence identified by these studies? Intelligence Is a Polygenic Trait The latest and largest genetic association study of intelligence to date identified 206 genomic loci and implicated 1,041 genes, adding 191 novel loci and 963 novel genes to those previously associated with cognitive ability (Savage et al., 2018). These findings show that intelligence is a highly polygenic trait where many different genes would exert extremely small, if any, influence, most probably at different stages of development. Indeed, the reported effect sizes for each allele are extremely small (generally less than 0.1% for even the strongest effects), and the combined effects genome-wide explain only a small proportion of the total variance (Lam et al., 2017). For example, the strongest effect of identified alleles on educational attainment explains only 0.022% of phenotypic variance in the replication sample (Okbay et al., 2016), and the combined effects genome-wide predict only a small proportion of the total variance in hold-out samples (Lam et al., 2017). 
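The polygenic scores such GWAS enable are conceptually simple: an effect-size-weighted sum of allele counts across the associated SNPs. A toy sketch with synthetic genotypes and effect sizes — every number below is invented for illustration; real scores use published GWAS weights:

```python
import numpy as np

rng = np.random.default_rng(0)

n_individuals, n_snps = 1000, 500
# Synthetic genotypes: 0, 1 or 2 copies of the effect allele per SNP.
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))
# Per-SNP effect sizes: tiny, as the GWAS results described above suggest.
effects = rng.normal(loc=0.0, scale=0.01, size=n_snps)

# Polygenic score = effect-weighted sum of allele counts per individual.
pgs = genotypes @ effects
print(pgs.shape)
```

The point of the weighted sum is that hundreds of individually negligible effects can still accumulate into a score with measurable predictive value across individuals.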
At the same time, the overall SNP heritability reported in recent GWAS is around 20%–21% (Lam et al., 2017; Trampush et al., 2017; Savage et al., 2018; Coleman et al., 2019), less than half of the heritability estimates from twin studies (>50%; Plomin and von Stumm, 2018). However, small genetic effects at critical stages of development may have large consequences for brain development and function, and with them for cognitive ability. Thus, it is important to know not only what these identified genes are, but also when and where they are expressed in nervous tissue.
Most SNPs Found in Non-coding Regions
Non-coding regions comprise most of the human genome and harbor a significant fraction of risk alleles for neuropsychiatric disease and behavioral traits. Over the last decade, more than 1,200 GWAS have identified nearly 6,500 disease- or trait-predisposing SNPs, but only 7% of these are located in protein-coding regions (Pennisi, 2011). The remaining 93% are located within non-coding regions, suggesting that GWAS-associated SNPs regulate gene transcription levels rather than altering the protein-coding sequence or protein structure. A very similar picture emerges for GWAS of intelligence: SNPs significantly associated with intelligence are mostly located in intronic (51.3%) and intergenic (33.4%) areas, while only 1.4% are exonic (Savage et al., 2018; Figure 2). Similar distributions were also found in earlier association studies (Sniekers et al., 2017; Coleman et al., 2019). However, it is exactly these non-coding, gene-regulatory regions that make the genome responsive to changes in synaptic activity and constitute a major force behind the evolution of human cognitive ability (Hardingham et al., 2018). While the function of most intergenic regions in human DNA remains poorly defined, new insights are emerging from studies combining high-resolution mapping of non-coding elements, chromatin accessibility and gene expression profiles.
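Annotation shares like those quoted above come from tallying each associated SNP's genomic category. A minimal sketch, using hypothetical counts chosen to mirror the Savage et al. (2018) percentages (not their actual data):

```python
# Hypothetical annotation counts for 1,000 trait-associated SNPs; the
# categories (and resulting shares) mirror those reported by Savage et al. (2018).
counts = {"intronic": 513, "intergenic": 334, "exonic": 14, "other": 139}

total = sum(counts.values())
# Percentage share of each annotation category, to one decimal place.
shares = {category: round(100 * n / total, 1) for category, n in counts.items()}
print(shares)
```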
These studies link the regulatory elements to their target genes. Thus, neurogenesis and cortical expansion in humans are thought to be controlled by specific genetic regulatory elements—human-gained enhancers (HGEs)—that show increased activity in the human lineage (de la Torre-Ubieta et al., 2018). Moreover, genetic variants associated with educational attainment were shown to be enriched within the regulatory elements involved in cortical neurogenesis (de la Torre-Ubieta et al., 2018).
Figure 2. Most of the associated genetic variants of intelligence lie in non-coding DNA regions—only 1.4% of the associated single-nucleotide polymorphisms (SNPs) are exonic, non-synonymous variants lying in protein-coding genes. Gene-set analyses implicate pathways related to neurogenesis, neuron differentiation and synaptic structure. The figure is based on the results from the most recent and largest genome-wide association study (GWAS) of intelligence by Savage et al. (2018).
Thus, genetic effects on cognitive ability most probably do not operate independently of environmental factors, but rather reveal themselves through signal-regulated transcription driven by experience. This interplay between genetic make-up and epigenetic effects mediated by regulatory elements would also explain the increasing heritability of intelligence with age (Bergen et al., 2007; Davis et al., 2008; Plomin and Deary, 2015). The same regulatory genes require proper gene-environment interactions to reveal their role in cognitive ability. In other words, during development the same set of genes acquires an increasing impact on intelligence as early levels of cognitive ability become reinforced through the selection of environments and education consistent with those ability levels (Briley and Tucker-Drob, 2013; Plomin and von Stumm, 2018).
Combining the SNP data with transcriptome data showed that the candidate genes exhibit above-baseline expression in the brain throughout life, but show particularly high expression levels during prenatal brain development (Okbay et al., 2016). When genes were grouped into functional clusters, many clusters associated with educational attainment were primarily involved in different stages of neural development: the proliferation of neural progenitor cells and their specialization, the migration of new neurons to the different layers of the cortex, the projection of axons from neurons to their signaling targets, and dendritic sprouting (Okbay et al., 2016). For intelligence as well, gene-set analysis identifies neurogenesis, neuronal differentiation and regulation of nervous system development as major functions of the identified SNPs (Savage et al., 2018; Figure 2).
Genes Involved in Cell-Cell Interactions
Many of the identified genes that play a role in neurodevelopment might also contribute to synaptic function and plasticity. Brain function relies on highly dynamic, activity-dependent processes that switch genes on and off. These can lead to profound structural and functional changes involving the formation of new and the elimination of unused synapses, and changes in the cytoskeleton, receptor mobility and energy metabolism. Cognitive ability may depend on how efficiently neurons can regulate these processes. Interaction of cells with their direct environment is fundamental to both neurodevelopment and synaptic function. Many of the top protein-coding genes associated with cognitive ability encode membrane-anchored proteins responsible for cell-to-cell and cell-to-matrix communication. For example, the ITIH3 gene codes for a protein that stabilizes the extracellular matrix, and the LAMB2 gene codes for laminin, an extracellular matrix glycoprotein that is a major constituent of basement membranes.
Several cadherin genes involved in cell adhesion (PCDHA1 to PCDHA7, CDHR4) also associate with cognitive ability (NCBI Resource Coordinators, 2017; Savage et al., 2018). In addition, in an extremely high-IQ cohort, the gene most significantly enriched for association is ADAM12, which codes for a membrane-anchored protein involved in cell–cell and cell–matrix interactions (Zabaneh et al., 2018). Finally, some candidate genes that code for cell adhesion molecules (DCC and SEMA3F; Savage et al., 2018) are specifically involved in axon guidance during neuronal development. Remarkably, recent large-scale cellular-resolution gene profiling has identified species-specific differences in exactly the same functional categories of genes involved in intercellular communication (Zeng et al., 2012). When mouse and human gene expression profiles in neocortex were contrasted, the cross-species differences in gene expression included secreted protein (48%), extracellular matrix (50%), cell adhesion (36%), and peptide ligand (31%) genes. These results may highlight the importance of cell-to-environment interactions not only for human intelligence but also for human evolution in general.
Genes of Synaptic Function and Plasticity
Some findings of GWAS of intelligence point directly at genes with known functions in synaptic communication, plasticity and neuronal excitability. Several identified genes are primarily involved in presynaptic organization and vesicle release. One of these is TSNARE1, which codes for t-SNARE domain-containing protein 1 (Savage et al., 2018). The primary role of SNARE proteins is to mediate the docking of synaptic vesicles with the presynaptic membrane in neurons and subsequent vesicle fusion (NCBI Resource Coordinators, 2017). Furthermore, at least two other identified genes are also involved in vesicle trafficking: GBF1 mediates vesicular trafficking in the Golgi apparatus and ARHGAP27 plays a role in clathrin-mediated endocytosis.
Finally, the BSN gene codes for a scaffolding protein involved in organizing the presynaptic cytoskeleton. One of the transcriptional activators associated with intelligence is cAMP responsive element binding 3L4 (CREB3L4). This gene encodes a CREB, a nuclear protein that modulates the transcription of genes. It is an important component of intracellular signaling events and has widespread biological functions; in neurons, its best-documented role is the regulation of synaptic plasticity, learning and memory formation (Silva et al., 1998). Tapping into databases of drug targets and their gene annotations can shed new light on the associations of drug gene-sets with a phenotype (Gaspar and Breen, 2017). Such a drug pathway analysis combined with GWAS results for intelligence revealed significant enrichment of the gene targets of two drugs involved in synaptic regulation and neuronal excitability: a T-type calcium channel blocker and a potassium channel inhibitor (Lam et al., 2017). In a related analysis of drug classes, significant enrichment was also observed for voltage-gated calcium channel subunits (Lam et al., 2017). Genes involved in regulation of the voltage-gated calcium channel complex were also significantly linked to educational attainment in a previous study (Okbay et al., 2016). Both ion channel types play a critical role in synaptic communication and action potential firing: T-type calcium channels are involved in action potential initiation and switching between distinct modes of firing (Cain and Snutch, 2010), while potassium channels are crucial for rapid repolarization during action potential generation and for maintenance of the resting membrane potential (Hodgkin and Huxley, 1952).
Genes With Supporting Functions
The human brain accounts for at least 20% of the entire body's energy consumption, and most of this energy demand goes to generating postsynaptic potentials (Attwell and Laughlin, 2001; Magistretti and Allaman, 2015).
Notably, the emergence of higher cognitive functions in humans during evolution is also associated with increased expression of energy metabolism genes (Magistretti and Allaman, 2015). Genes involved in energy supply and metabolism could thus have an impact on the maintenance of high-frequency firing during cognitive tasks. Indeed, cognitive ability associates with genetic variation in several genes that code for regulators of mitochondrial function—GPD2, NDUFS3, MTCH2 (NCBI Resource Coordinators, 2017; Savage et al., 2018). Mitochondria are central to various cellular processes including energy metabolism, intracellular calcium signaling and the generation of reactive oxygen species. By adapting their function to the demands of neuronal activity, they play an essential role in the complex behavior of neurons (Kann and Kovács, 2007). In addition, genes involved in lipid metabolism (BTN2A1 and BTN1A1) and in glucose and amino acid metabolism (GPT) are among the candidate genes of intelligence. Another remarkable cluster of protein-coding genes implicated in intelligence is that coding for microtubule-associated proteins. Microtubules are an essential part of the cytoskeleton and are involved in maintaining cell structure throughout development. At the same time, microtubules are important highways of intracellular transport, and thereby affect the recycling of synaptic receptors and neurotransmitter release in neurons (Hernández and Ávila, 2017). The MAPT gene, coding for microtubule-associated protein tau, was linked to intelligence by several studies (Sniekers et al., 2017; Trampush et al., 2017; Savage et al., 2018; Coleman et al., 2019). MAPT is also altered in many brain diseases—Alzheimer's disease, Parkinson's disease and Huntington's disease (Hernández and Ávila, 2017).
Apart from MAPT, other genes coding for microtubule-associated proteins were found to be significantly associated with intelligence: microtubule-associated serine/threonine kinase 3 (MAST3); ALMS1, which functions in microtubule organization; and SAXO2 (FAM154B), a microtubule-stabilizing protein (NCBI Resource Coordinators, 2017; Savage et al., 2018).
Conclusions From Genetic Studies
In conclusion, twin studies show that individual differences in human intelligence can largely (50%–80%) be explained by genetic influences, making intelligence one of the most heritable traits. However, present GWAS capture less than half of this heritability (21%–22%; Lam et al., 2017; Trampush et al., 2017; Savage et al., 2018; Coleman et al., 2019). Furthermore, the genetic influence is distributed over minuscule effects of a large number of genes. Ninety-five percent of these genetic variants are located in intronic and intergenic regions and might have a gene-regulatory function; only a very small proportion of associated SNPs (1.4%) are located in DNA fragments that are translated into protein. The majority of associated genes are implicated in early, most probably prenatal, development, with some genes essential for synaptic function and plasticity throughout the lifespan. The fact that traits such as birth length/weight and longevity show robust polygenic correlations with cognitive performance (Lam et al., 2017; Trampush et al., 2017) implies that overall healthy development is a prerequisite for optimal cognitive function. GWAS test possible associations between genes and phenotype. However, the availability of cell-type- and tissue-specific transcriptome data from post-mortem human brains (Ardlie et al., 2015) has opened a new horizon for GWAS. Linking GWAS hits to cell-type- and tissue-specific transcriptomic profiles (GTEx) may indicate in which brain regions, and even in which cell types, intelligence genes are potentially expressed.
This approach has obvious caveats: genes associated with intelligence do not have to be expressed at the same developmental time, and since brain loci involved in intelligence are widely distributed, not all genes need to be expressed in the same brain area or cell type. Nevertheless, using this approach, it was found that genes associated with educational attainment and intelligence are preferentially expressed together in nervous tissue (Okbay et al., 2016; Lam et al., 2017; Trampush et al., 2017; Savage et al., 2018; Coleman et al., 2019). Specifically, hippocampal, midbrain and, more generally, cortical and frontal cortical regions show the highest enrichment of expression of these genes (Savage et al., 2018; Coleman et al., 2019). With the exception of midbrain, these are brain regions previously implicated in intelligence by brain imaging studies. Cell-type-specific expression profiles of genes of intelligence highlight the role of neuronal cell types. Although glial cells are the most abundant cell type in the human brain (Vasile et al., 2017), no evidence for enrichment of candidate genes in oligodendrocytes or astrocytes was found (Lam et al., 2017; Trampush et al., 2017), leaving neurons as the main carriers of genetic variation. Further in-depth analysis of neuronal types revealed significant enrichment of associated genes within pyramidal neurons in hippocampal area CA1 and cortical somatosensory regions. In addition, significant associations were found in the principal cell type of the striatum—the medium spiny neurons (Savage et al., 2018; Coleman et al., 2019). Pyramidal neurons are the most abundant neuronal type in neocortex and hippocampus, structures associated with higher executive functions, decision-making, problem-solving and memory. Striatal medium spiny neurons constitute 95% of all neuronal types within the striatum, a structure responsible for motivation, reward, habit learning and behavioral output (Volkow et al., 2017).
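Cell-type enrichment of this kind is typically assessed by asking whether the overlap between the GWAS gene list and a cell type's marker genes is larger than expected by chance, for example under a hypergeometric null. A minimal stdlib sketch with toy numbers (not data from the cited studies):

```python
from math import comb

def hypergeom_pval(k, M, n, N):
    """P(X >= k): probability of drawing at least k gene-set members when
    N genes are sampled without replacement from a genome of M genes,
    n of which belong to the set (e.g., cell-type marker genes)."""
    return sum(comb(n, i) * comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / comb(M, N)

# Toy example: genome of 2,000 genes, 50 marker genes for a cell type,
# 100 GWAS-implicated genes, 12 of which fall in the marker set
# (expected overlap by chance: 100 * 50 / 2000 = 2.5).
p = hypergeom_pval(12, 2_000, 50, 100)
print(p)
```

An overlap of 12 where 2.5 is expected yields a very small p-value, i.e., a significant enrichment under this toy null.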
The results of the GWAS studies put forward the hypothesis that these neuron types play a role in supporting intelligence (Coleman et al., 2019). Is there evidence that particular properties of brain cells contribute to intelligence?
Cells of Intelligence
Ever since Ramón y Cajal postulated his neuron doctrine of information processing, calling neurons "butterflies of the soul" (Cajal, 1893), neuroscience has agreed that the basis of human intelligence must lie in neurons or networks of neurons. However, the neuroscientific search for the biological basis of intelligence has so far focused almost exclusively on the macroscopic brain level and the genetics of intelligence, leaving a large gap of knowledge at the cellular level. We assume that our mind functions through the activity of 86 billion neurons (Herculano-Houzel, 2012) and their connections, which form the principal building blocks for coding, processing and storage of information in the brain and ultimately give rise to cognition (Salinas and Sejnowski, 2001). Given the astronomical number of neuronal connections (Drachman, 2005), even the slightest change in the efficiency of information processing by neurons can translate into large differences in cognitive ability. Indeed, one of the most robust and replicable associations in behavioral psychology is that of intelligence with mental processing speed, measured as the reaction times of human test subjects (Vernon, 1983; Barrett et al., 1986). However, very few studies have attempted to answer the question of whether the activity and structure of single human neurons support human intelligence, and how faster mental processing can be brought about by properties of cells in our brain. This knowledge gap is not surprising: access to neurons in the living human brain is very limited, and most of what is known about the function of neurons comes from laboratory animal research.
During the past decades, the use of brain tissue resected during neurosurgical treatment of epilepsy or tumors has opened new avenues for studying the human brain at the cellular level (Molnár et al., 2008; Testa-Silva et al., 2010, 2014; Verhoog et al., 2013, 2016). To gain access to affected deep brain structures, neurosurgeons resect overlying non-pathological neocortex that can be transported to the lab for further investigation. In combination with cognitive testing prior to surgery, this approach offers a great opportunity to study neuronal function in relation to human intelligence. Such use of living human brain tissue from neurosurgery cannot be substituted by other techniques: post-mortem tissue is generally not suitable for physiological studies (but see Kramvis et al., 2018), while brain imaging studies lack the necessary cellular precision.
The Key Role of Pyramidal Neurons
Genetic studies indicate that the expression of genes associated with intelligence accumulates in cortical pyramidal neurons (Savage et al., 2018; Coleman et al., 2019). Comparisons of key cellular properties of pyramidal neurons across species may offer insights into the functional significance of such differences for human cognition. Human tissue used in research always comes from higher-order association areas, typically temporal cortex, in order to spare primary sensory and language functions of the patient; these are exactly the areas implicated in human intelligence by brain imaging. Which properties of pyramidal neurons from temporal cortex stand out when compared across species? First, the structure of pyramidal cells is different (Elston and Fujita, 2014): compared to rodents and macaques, human layer 2/3 pyramidal cells have threefold larger and more complex dendrites (Mohan et al., 2015). Moreover, these large dendrites also receive two times more synapses than those of rodent pyramidal neurons (DeFelipe et al., 2002).
Apart from structural differences, human pyramidal neurons display a number of unique functional properties: human excitatory synapses recover 3–4 times faster from depression than synapses in rodent cortex, have faster action potentials and transfer information at up to nine times higher rates than mouse synapses (Testa-Silva et al., 2014). In addition, adult human neurons can associate synaptic events within a much wider temporal window for plasticity (Testa-Silva et al., 2010; Verhoog et al., 2013). These differences across species may suggest evolutionary pressure on both dendritic structure and neuronal function in the temporal lobe, and emphasize specific adaptations of human pyramidal cells to the cognitive functions these brain areas perform. Recently, these differences in human pyramidal neuron function and structure were linked to the intelligence scores and anatomical structure of the temporal lobes of the same subjects (Goriounova et al., 2018; Figure 3). The results showed that high IQ scores associated with larger temporal cortical thickness in neurosurgery patients, as in healthy subjects (Choi et al., 2008). Furthermore, thicker temporal cortex was linked to larger, more complex dendrites of human pyramidal neurons. Incorporating these realistic dendritic morphologies into a computational model showed that larger model neurons were able to process synaptic inputs with higher temporal precision; this improved information transfer was due to faster action potentials in larger cells. Finally, as predicted by the model, experimental recordings of action potential spiking in human pyramidal neurons demonstrated that individuals with higher IQ scores were able to sustain fast action potentials during neuronal activity. These findings provide the first evidence that human intelligence is associated with larger and more complex neurons, faster action potentials and more efficient synaptic information transfer (Goriounova et al., 2018).
Figure 3.
A cellular basis of human intelligence: higher IQ scores associate with larger dendrites, faster action potentials during neuronal activity and more efficient information tracking in pyramidal neurons of temporal cortex. The figure is based on the results from Goriounova et al. (2018).
Overall, the larger dendritic length of human neurons compared to other species, and in particular the elongation of their basal dendritic terminals (Deitcher et al., 2017), would enable these cells to use branches of their dendritic tree as independent computational compartments. Recently, Eyal et al. (2016, 2018) provided new insights into the signal processing and computational capabilities of human pyramidal cells by testing detailed models that include excitatory synapses, dendritic spines, dendritic NMDA spikes and somatic spikes (Eyal et al., 2018). The results show that the particularly large number of basal dendrites in human pyramidal cells, and the elongation of their terminals compared to other species, result in electrical decoupling of the basal terminals from each other. Similar observations were also recently made in dendritic recordings from human layer 5 pyramidal neurons (Beaulieu-Laroche et al., 2018). In this way, human dendrites can function as multiple, semi-independent subunits and generate more dendritic NMDA spikes independently and simultaneously than those of rat temporal cortex (Eyal et al., 2014). Dendritic spikes through NMDA receptors are an essential component of behaviorally relevant computations in neurons: in mice, manipulation of these spikes led to decreased orientation selectivity of visual cortical neurons, linking the function of dendrites to visual information processing (Smith et al., 2013). Furthermore, larger dendrites have an impact on the excitability of cells (Vetter et al., 2001; Bekkers and Häusser, 2007) and determine the shape and rapidity of action potentials (Eyal et al., 2014).
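The intuition that "faster" membranes track rapidly varying input more faithfully can be caricatured with first-order RC filtering: a passive membrane with time constant τ attenuates a sinusoidal input at frequency f by a factor 1/√(1 + (2πfτ)²). This is a toy single-compartment abstraction, not the detailed multicompartment models of Eyal et al. or Goriounova et al.

```python
import math

def membrane_gain(f_hz, tau_s):
    """Relative voltage response of a passive RC membrane (time constant
    tau_s, in seconds) to a sinusoidal input current at f_hz hertz."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_hz * tau_s) ** 2)

# A cell whose effective time constant is halved follows a 50 Hz input
# modulation with noticeably less attenuation.
fast_cell = membrane_gain(50, 0.010)   # tau = 10 ms
slow_cell = membrane_gain(50, 0.020)   # tau = 20 ms
print(fast_cell, slow_cell)
```

In this caricature, any change that shortens the effective time constant improves the tracking of fast input fluctuations, which is the qualitative direction of the modeling results discussed above.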
Increasing the size of dendritic compartments in silico led to acceleration of action potential onset and increased the encoding capability of neurons (Eyal et al., 2014; Goriounova et al., 2018). In addition, compared to mouse, human pyramidal neurons in superficial layers show more hyperpolarization-activated currents that facilitate the excitability of these cells (Kalmbach et al., 2018). Thus, larger dendrites equip cells with many computational advantages necessary for rapid and efficient integration of large amounts of information. The fact that larger and faster human neurons in temporal cortex link to intelligence (Goriounova et al., 2018) provides evidence that there is a continuum of these cellular properties across the human population. At the high end of the IQ score distribution, pyramidal cells of individuals with high IQ receive more synaptic inputs and are able to achieve higher resolution of synaptic integration by processing these multiple synaptic inputs separately and simultaneously. As cells are constantly bombarded with a large load of incoming signals during cognitive activity, the neuron has to relay these multiple inputs into output. Neurons of individuals with higher IQ are able to translate these inputs into action potentials—the output signal of the cell—much more efficiently, transfer more information and sustain fast action potential firing compared to neurons of lower-IQ subjects. These findings harmonize well with genetic and imaging studies identifying metabolic rate as an important correlate of intelligence (Haier et al., 1988; Savage et al., 2018). How do these findings at the cellular and genetic levels translate to macroscale findings in brain imaging? One of the most robust findings in brain imaging is that cortical thickness and volume associate with intelligence (Haier et al., 2004; Colom et al., 2006, 2009; Narr et al., 2007; Choi et al., 2008; Karama et al., 2009).
Reconstruction of a cortical column at nanoscale resolution shows that cortical volume consists largely of dendritic and axonal processes, with a 7-fold greater number of axons than dendrites (Kasthuri et al., 2015); only a small proportion of this volume is occupied by cell bodies. Dendrites and axons are structures that mediate synaptic plasticity, store information and continue to grow and change during life. Indeed, during normal postnatal development cortical areas follow a similar pattern: dendrites show continuous growth accompanied by increased cortical volume and decreased neuronal densities (Huttenlocher, 1990). In addition, frontal cortical areas, which are more shaped by age and experience, show a slower time course of these changes than primary visual areas, which have an earlier critical period (Huttenlocher, 1990). In line with this prolonged development, dendritic trees in human temporal lobe continue to grow throughout maturity and into old age: in 80-year-olds, dendritic trees are more extensive than at the age of 50, with most of the difference resulting from increases in the number and average length of terminal segments of the dendritic tree. The link between dendritic size and cognition is emphasized by the fact that in senile dementia dendritic trees are less extensive, largely because their terminal segments are fewer and shorter (Buell and Coleman, 1979). Also, within human cortex a gradient of dendritic complexity exists across cortical areas: higher-order association areas that store and process more complex information contain neurons with larger and more complex dendrites than primary sensory areas, while neuronal cell body density is lower in association areas than in primary sensory areas (Buell and Coleman, 1979; DeFelipe et al., 2002; Elston, 2003). A recent study by Genç et al.
(2018) used multi-shell diffusion tensor imaging to estimate parieto-frontal cortical dendritic density in relation to human cognition. This study found that higher scores in cognitive tests correlated with lower values of neurite density (Genç et al., 2018). As decreases in neurite density go together with increases in dendrite length (Huttenlocher, 1990), the results obtained by Genç et al. (2018) may indicate that parieto-frontal cortical areas in individuals with higher intelligence have less densely packed neurons, and imply that these neurons have larger dendrites. Taking the results of Genç et al. (2018) and Goriounova et al. (2018) together suggests that the neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner: larger and more complex pyramidal neurons are more dispersed in cortical space and occupy a larger cortical volume.
Conclusions and Future Perspectives
Brain imaging has provided the basis for research on the neurobiology of intelligence by pointing out important functional and structural gross anatomical correlates of intelligence—overall gray matter volume and thickness, white matter integrity, and function in temporal, frontal and parietal cortices. However, it is clear that neuroimaging in its present form is unable to provide the temporal and spatial resolution needed to study the computational building blocks of the brain—neurons and synaptic contacts. GWAS, on the other hand, have focused on the other extreme of the spectrum—the genes of intelligence. Great progress was made by increasing sample sizes and combining multiple cohorts. The results show that 98% of the associated genetic variants are not coded into functional protein and probably have a regulatory function at different stages of neural development.
However, the small percentage of genes that do produce functional proteins are implicated in various neuronal functions, including synaptic function and plasticity, cell interactions and energy metabolism. Importantly, a growing database of gene expression profiles has pinpointed the expression of associated genes to principal neurons of the cortex and striatum—pyramidal and medium spiny neurons. What types of neurons are implicated in human intelligence? Recent advances in gene profiling of neurons at single-cell resolution indicate that there are around 50 transcriptomic cell types of pyramidal cells in mice, and different areas of the brain contain yet other sets of transcriptomic types (Tasic et al., 2018). The information contained in the transcriptomes links these types to their region-specific long-range target specificity. The same can be said of striatal medium spiny neurons, where a detailed map of projections from the entire cerebral cortex allowed the identification of 29 distinct functional domains (Hintiryan et al., 2016). Thus, both pyramidal and medium spiny neurons form very heterogeneous populations, in which different cell types have different functions and specific connectivity patterns with the rest of the brain. How do these mouse cell types correspond to human cell types? How do different cell types support general intelligence and specific cognitive abilities in the human brain? Answers will require large-scale efforts that allow analysis of large numbers not only of human cohorts, but also of cells and cell types. This may come within reach with the recent large-scale collaborative initiatives that have been started across the globe (Brose, 2016).
Author Contributions
NG and HM conceptualized the review and wrote the text. NG made the figures.
Thus, despite its elusiveness in definition, intelligence lies at the core of individual differences among humans. It can be measured by cognitive tests, and the results of such tests have proven their validity and relevance: intelligence measures are stable over time, show high heritability and predict major life outcomes.
Biological Basis of Intelligence: A Whole-Brain Perspective
Are Bigger Brains Smarter?
A question that has puzzled scientists for centuries is that of the origin of human intelligence. What makes some people smarter than others? The quest to answer these questions started as early as the 1830s in Europe and Russia, where the brains of deceased elite scientists and artists were systematically collected and meticulously studied (Vein and Maat-Schieman, 2008). However, all the attempts to dissect exceptional ability and talent did not reveal much at that time. The reigning hypothesis of the past century was that smarter people have bigger brains. With the advances in neuroimaging techniques, this hypothesis was put to the test in many studies. Indeed, a meta-analysis of 37 studies with over 1,500 individuals on the relationship between in vivo brain volume and intelligence found a moderate yet significant positive correlation of 0.33 (McDaniel, 2005). A more recent meta-study of 88 studies with over 8,000 individuals again reported a significant positive, though slightly smaller, correlation coefficient of 0.24. One of the conclusions of this study was that the strength of the association between brain volume and IQ seems to be overestimated in the literature, but remains robust after accounting for publication bias (Pietschnig et al., 2015). Thus, overall, bigger brain volume, when analyzed across multiple studies, is associated with higher intelligence.
Which Brain Areas Are Important for Intelligence?
Brain function is distributed across various areas that harbor specific functions. Can intelligence be attributed to one or several of these areas?
Structural and functional brain imaging studies focused on locating general intelligence within the brain and linking specific types of cognition to specific brain areas (Deary et al., 2010).
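The correlation coefficients cited above (0.33 and 0.24) are ordinary Pearson correlations. As a minimal sketch of what such a coefficient measures, with invented brain-volume and test-score data standing in for a real cohort:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical brain volumes (cm^3) and test scores, for illustration only:
volumes = [1150, 1280, 1320, 1400, 1190, 1350]
scores = [104, 96, 101, 112, 99, 107]
r = pearson_r(volumes, scores)
print(round(r, 2))
# r**2 is the share of score variance "explained" by volume:
print(round(r ** 2, 2))
```

Squaring r gives the share of variance explained, which is why even the larger meta-analytic correlation of 0.33 accounts for only about 11 percent of the differences in test scores.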
yes
Neuroscience
Is there a connection between brain size and intelligence?
yes_statement
"brain" "size" is "connected" to "intelligence".. "intelligence" is linked to "brain" "size".
https://www.scientificamerican.com/article/the-limits-of-intelligence/
The Limits of Intelligence - Scientific American
In Brief Human intelligence may be close to its evolutionary limit. Various lines of research suggest that most of the tweaks that could make us smarter would hit limits set by the laws of physics. Brain size, for instance, helps up to a point but carries diminishing returns: brains become energy-hungry and slow. Better “wiring” across the brain also would consume energy and take up a disproportionate amount of space. Making wires thinner would hit thermodynamic limitations similar to those that affect transistors in computer chips: communication would get noisy. Humans, however, might still achieve higher intelligence collectively. And technology, from writing to the Internet, enables us to expand our mind outside the confines of our body. Santiago Ramón y Cajal, the Spanish Nobel-winning biologist who mapped the neural anatomy of insects in the decades before World War I, likened the minute circuitry of their vision-processing neurons to an exquisite pocket watch. He likened that of mammals, by comparison, to a hollow-chested grandfather clock. Indeed, it is humbling to think that a honeybee, with its milligram-size brain, can perform tasks such as navigating mazes and landscapes on a par with mammals. A honeybee may be limited by having comparatively few neurons, but it surely seems to squeeze everything it can out of them. At the other extreme, an elephant, with its five-million-fold larger brain, suffers the inefficiencies of a sprawling Mesopotamian empire. Signals take more than 100 times longer to travel between opposite sides of its brain—and also from its brain to its foot, forcing the beast to rely less on reflexes, to move more slowly, and to squander precious brain resources on planning each step. We humans may not occupy the dimensional extremes of elephants or honeybees, but what few people realize is that the laws of physics place tough constraints on our mental faculties as well. 
Anthropologists have speculated about anatomic roadblocks to brain expansion—for instance, whether a larger brain could fit through the birth canal of a bipedal human. If we assume, though, that evolution can solve the birth canal problem, then we are led to the cusp of some even more profound questions. One might think, for example, that evolutionary processes could increase the number of neurons in our brain or boost the rate at which those neurons exchange information and that such changes would make us smarter. But several recent trends of investigation, if taken together and followed to their logical conclusion, seem to suggest that such tweaks would soon run into physical limits. Ultimately those limits trace back to the very nature of neurons and the statistically noisy chemical exchanges by which they communicate. “Information, noise and energy are inextricably linked,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge. “That connection exists at the thermodynamic level.” Do the laws of thermodynamics, then, impose a limit on neuron-based intelligence, one that applies universally, whether in birds, primates, porpoises or praying mantises? This question apparently has never been asked in such broad terms, but the scientists interviewed for this article generally agree that it is a question worth contemplating. “It’s a very interesting point,” says Vijay Balasubramanian, a physicist who studies neural coding of information at the University of Pennsylvania. “I’ve never even seen this point discussed in science fiction.” Intelligence is of course a loaded word: it is hard to measure and even to define. Still, it seems fair to say that by most metrics, humans are the most intelligent animals on earth. But as our brain has evolved, has it approached a hard limit to its ability to process information? 
Could there be some physical limit to the evolution of neuron-based intelligence—and not just for humans but for all of life as we know it? That Hungry Tapeworm in Your Head The most intuitively obvious way in which brains could get more powerful is by growing larger. And indeed, the possible connection between brain size and intelligence has fascinated scientists for more than 100 years. Biologists spent much of the late 19th century and the early 20th century exploring universal themes of life—mathematical laws related to body mass, and to brain mass in particular, that run across the animal kingdom. One advantage of size is that a larger brain can contain more neurons, which should enable it to grow in complexity as well. But it was clear even then that brain size alone did not determine intelligence: a cow carries a brain well over 100 times larger than a mouse’s, but the cow isn’t any smarter. Instead brains seem to expand with body size to carry out more trivial functions: bigger bodies might, for example, impose a larger workload of neural housekeeping chores unrelated to intelligence, such as monitoring more tactile nerves, processing signals from larger retinas and controlling more muscle fibers. Eugene Dubois, the Dutch anatomist who discovered the skull of Homo erectus in Java in 1892, wanted a way to estimate the intelligence of animals based on the size of their fossil skulls, so he worked to define a precise mathematical relation between the brain size and body size of animals—under the assumption that animals with disproportionately large brains would also be smarter. Dubois and others amassed an ever growing database of brain and body weights; one classic treatise reported the body, organ and gland weights of 3,690 animals, from wood roaches to yellow-billed egrets to two-toed and three-toed sloths. Dubois’s successors found that mammals’ brains expand more slowly than their bodies—to about the ¾ power of body mass. 
So a muskrat, with a body 16 times larger than a mouse’s, has a brain about eight times as big. From that insight came the tool that Dubois had sought: the encephalization quotient, which compares a species’ brain mass with what is predicted based on body mass. In other words, it indicates by what factor a species deviates from the ¾ power law. Humans have a quotient of 7.5 (our brain is 7.5 times larger than the law predicts); bottlenose dolphins sit at 5.3; monkeys hover as high as 4.8; and oxen—no surprise there—slink around at 0.5. In short, intelligence may depend on the amount of neural reserve that is left over after the brain’s menial chores, such as minding skin sensations, are accounted for. Or to boil it down even more: intelligence may depend on brain size in at least a superficial way. As brains expanded in mammals and birds, they almost certainly benefited from economies of scale. For example, the greater number of neural pathways that any one signal between neurons can travel means that each signal implicitly carries more information, implying that the neurons in larger brains can get away with firing fewer times per second. Meanwhile, however, another, competing trend may have kicked in. “I think it is very likely that there is a law of diminishing returns” to increasing intelligence indefinitely by adding new brain cells, Balasubramanian says. Size carries burdens with it, the most obvious one being added energy consumption. In humans, the brain is already the hungriest part of our body: at 2 percent of our body weight, this greedy little tapeworm of an organ wolfs down 20 percent of the calories that we expend at rest. In newborns, it’s an astounding 65 percent. Staying in Touch Much of the energetic burden of brain size comes from the organ’s communication networks: in the human cortex, communications account for 80 percent of energy consumption. 
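The ¾-power law and the encephalization quotient can be put in a few lines. A rough sketch using the article's own numbers; the constant k in the quotient is a fitted value that depends on the species data set, so it is left as a parameter here:

```python
def predicted_brain_ratio(body_mass_ratio, exponent=0.75):
    """Brain-mass ratio predicted from a body-mass ratio under the 3/4-power law."""
    return body_mass_ratio ** exponent

# The muskrat example above: a body 16x a mouse's predicts a brain 16**0.75 = 8x.
print(round(predicted_brain_ratio(16), 3))  # -> 8.0

def encephalization_quotient(brain_mass, body_mass, k):
    """Actual brain mass divided by the mass predicted by k * body_mass**0.75."""
    return brain_mass / (k * body_mass ** 0.75)
```

Plugging in human values with the right k would return the 7.5 quoted above; the quotient simply measures how far a species sits above or below the ¾-power curve.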
But it appears that as size increases, neuronal connectivity also becomes more challenging for subtler, structural reasons. In fact, even as biologists kept collecting data on brain mass in the early to mid-20th century, they delved into a more daunting enterprise: to define the “design principles” of brains and how these principles are maintained across brains of different sizes. A typical neuron has an elongated tail called the axon. At its end, the axon branches out, with the tips of the branches forming synapses, or contact points, with other cells. Axons, like telegraph wires, may connect different parts of the brain or may bundle up into nerves that extend from the central nervous system to the various parts of the body. In their pioneering efforts, biologists measured the diameter of axons under microscopes and counted the size and density of nerve cells and the number of synapses per cell. They surveyed hundreds, sometimes thousands, of cells per brain in dozens of species. Eager to refine their mathematical curves by extending them to ever larger beasts, they even found ways to extract intact brains from whale carcasses. The five-hour process, meticulously described in the 1880s by biologist Gustav Adolf Guldberg, involved the use of a two-man lumberjack saw, an ax, a chisel and plenty of strength to open the top of the skull like a can of beans. These studies revealed that as brains expand in size from species to species, several subtle but probably unsustainable changes happen. First, the average size of nerve cells increases. This phenomenon allows the neurons to connect to more and more of their compatriots as the overall number of neurons in the brain increases. But larger cells pack into the cerebral cortex less densely, so the distance between cells increases, as does the length of axons required to connect them. 
And because longer axons mean longer times for signals to travel between cells, these projections need to become thicker to maintain speed (thicker axons carry signals faster). Researchers have also found that as brains get bigger from species to species, they are divided into a larger and larger number of distinct areas. You can see those areas if you stain brain tissue and view it under a microscope: patches of the cortex turn different colors. These areas often correspond with specialized functions, say, speech comprehension or face recognition. And as brains get larger, the specialization unfolds in another dimension: equivalent areas in the left and right hemispheres take on separate functions—for example, spatial versus verbal reasoning. For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth, says Mark Changizi, a theoretical neurobiologist at 2AI Labs in Boise, Idaho: specialization compensates for the connectivity problem that arises as brains get bigger. As you go from a mouse brain to a cow brain with 100 times as many neurons, it is impossible for neurons to expand quickly enough to stay just as well connected. Brains solve this problem by segregating like-functioned neurons into highly interconnected modules, with far fewer long-distance connections between modules. The specialization between right and left hemispheres solves a similar problem; it reduces the amount of information that must flow between the hemispheres, which minimizes the number of long, interhemispheric axons that the brain needs to maintain. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger, Changizi argues. “It doesn’t tell us that the brain is smarter.” Jan Karbowski, a computational neuroscientist at the Polish Academy of Sciences in Warsaw, agrees. 
“Somehow brains have to optimize several parameters simultaneously, and there must be trade-offs,” he says. “If you want to improve one thing, you screw up something else.” What happens, for example, if you expand the corpus callosum (the bundle of axons connecting right and left hemispheres) quickly enough to maintain constant connectivity as brains expand? And what if you thicken those axons, so the transit delay for signals traveling between hemispheres does not increase as brains expand? The results would not be pretty. The corpus callosum would expand—and push the hemispheres apart—so quickly that any performance improvements would be neutralized. These trade-offs have been laid into stark relief by experiments showing the relation between axon width and conduction speed. At the end of the day, Karbowski says, neurons do get larger as brain size increases, but not quite quickly enough to stay equally well connected. And axons do get thicker as brains expand, but not quickly enough to make up for the longer conduction delays. Keeping axons from thickening too quickly saves not only space but energy as well, Balasubramanian says. Doubling the width of an axon doubles energy expenditure, while increasing the velocity of pulses by just 40 percent or so. Even with all of this corner cutting, the volume of white matter (the axons) still grows more quickly than the volume of gray matter (the main body of neurons containing the cell nucleus) as brains increase in size. To put it another way, as brains get bigger, more of their volume is devoted to wiring rather than to the parts of individual cells that do the actual computing, which again suggests that scaling size up is ultimately unsustainable. The Primacy of Primates It is easy, with this dire state of affairs, to see why a cow fails to squeeze any more smarts out of its grapefruit-size brain than a mouse does from its blueberry-size brain. 
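The axon trade-off Balasubramanian describes has a simple scaling form: in unmyelinated axons, conduction velocity grows roughly with the square root of diameter, while energy cost grows roughly in proportion to it. The exponents below are assumptions chosen to match the figures quoted above, not values from the article:

```python
import math

def velocity_gain(width_factor):
    """Relative conduction speed after scaling axon width (assumed sqrt scaling)."""
    return math.sqrt(width_factor)

def energy_cost(width_factor):
    """Relative energy expenditure after scaling axon width (assumed linear scaling)."""
    return width_factor

f = 2.0  # double the axon's width...
print(f"{(velocity_gain(f) - 1) * 100:.0f}% faster")  # -> 41% faster
print(f"{energy_cost(f):.0f}x the energy")            # -> 2x the energy
```

Under these assumptions, doubling width buys about a 40 percent speedup for twice the energy, which is the corner evolution apparently declines to cut.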
But evolution has also achieved impressive workarounds at the level of the brain’s building blocks. When Jon H. Kaas, a neuroscientist at Vanderbilt University, and his colleagues compared the morphology of brain cells across a spectrum of primates in 2007, they stumbled onto a game changer—one that has probably given humans an edge. Kaas found that unlike in most other mammals, cortical neurons in primates enlarge very little as the brain increases in size. A few neurons do increase in size, and these rare ones may shoulder the burden of keeping things well connected. But the majority do not get larger. Thus, as primate brains expand from species to species, their neurons still pack together almost as densely. So from the marmoset to the owl monkey—a doubling in brain mass—the number of neurons roughly doubles, whereas in rodents with a similar doubling of mass the number of neurons increases by just 60 percent. That difference has huge consequences. Humans pack 100 billion neurons into 1.4 kilograms of brain, but a rodent that had followed its usual neuron-size scaling law to reach that number of neurons would now have to drag around a brain weighing 45 kilograms. And metabolically speaking, all that brain matter would eat the varmint out of house and home. “That may be one of the factors in why the large rodents don’t seem to be [smarter] at all than the small rodents,” Kaas says. Having smaller, more densely packed neurons does seem to have a real impact on intelligence. In 2005 neurobiologists Gerhard Roth and Ursula Dicke, both at the University of Bremen in Germany, reviewed several traits that predict intelligence across species (as measured, roughly, by behavioral complexity) even more effectively than the encephalization quotient does. 
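Kaas's scaling contrast can be checked with rough arithmetic. If doubling brain mass adds only 60 percent more rodent neurons, neuron count scales as mass to the power log 1.6 / log 2, about 0.68. The mouse baseline below (about 0.4 g of brain and 70 million neurons) is an assumed figure for illustration, not from the article:

```python
import math

B_RODENT = math.log(1.6) / math.log(2)   # ~0.68: neurons ~ mass**B_RODENT
MOUSE_MASS_G, MOUSE_NEURONS = 0.4, 7e7   # assumed baseline values

def rodent_brain_mass_g(neurons):
    """Brain mass (g) a rodent would need for a given neuron count, under the
    doubling rule quoted above and the assumed mouse baseline."""
    return MOUSE_MASS_G * (neurons / MOUSE_NEURONS) ** (1 / B_RODENT)

# A rodent brain with our 100 billion neurons lands in the tens of kilograms,
# the same order of magnitude as the 45 kg quoted above:
print(round(rodent_brain_mass_g(1e11) / 1000, 1))
```

The exact figure depends on the baseline and the exponent; published allometric fits use a somewhat steeper exponent, which is presumably how the article arrives at 45 kg.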
“The only tight correlation with intelligence,” Roth says, “is in the number of neurons in the cortex, plus the speed of neuronal activity,” which decreases with the distance between neurons and increases with the degree of myelination of axons. Myelin is fatty insulation that lets axons transmit signals more quickly. If Roth is right, then primates’ small neurons have a double effect: first, they allow a greater increase in cortical cell number as brains enlarge; and second, they allow faster communication, because the cells pack more closely. Elephants and whales are reasonably smart, but their larger neurons and bigger brains lead to inefficiencies. “The packing density of neurons is much lower,” Roth says, “which means that the distance between neurons is larger and the velocity of nerve impulses is much lower.” In fact, neuroscientists have recently seen a similar pattern in variations within humans: people with the quickest lines of communication between their brain areas also seem to be the brightest. One study, led in 2009 by Martijn P. van den Heuvel of the University Medical Center Utrecht in the Netherlands, used functional magnetic resonance imaging to measure how directly different brain areas talk to one another—that is, whether they talk via a large or a small number of intermediary areas. Van den Heuvel found that shorter paths between brain areas correlated with higher IQ. Edward Bullmore, an imaging neuroscientist at the University of Cambridge, and his collaborators obtained similar results the same year using a different approach. They compared working memory (the ability to hold several numbers in one’s memory at once) among 29 healthy people. They then used magnetoencephalographic recordings from their subjects’ scalp to estimate how quickly communication flowed between brain areas. People with the most direct communication and the fastest neural chatter had the best working memory. It is a momentous insight. 
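The van den Heuvel finding is a statement about path length in a graph: how many intermediary areas a signal must cross. A toy sketch with a hypothetical ring of eight "areas", with and without one long-range shortcut:

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean BFS hop count over all ordered node pairs of a connected graph."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                      # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(adj) - 1
    return total / pairs

ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}  # 8 areas in a ring
shortcut = {i: list(ring[i]) for i in range(8)}
shortcut[0].append(4)                                      # one long-range
shortcut[4].append(0)                                      # connection

print(round(avg_shortest_path(ring), 2))      # -> 2.29
print(round(avg_shortest_path(shortcut), 2))  # -> 1.96
```

A single long-range edge cuts the average number of hops, which is the graph-theoretic reason the rare nonstop connections the article goes on to discuss carry so much weight.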
We know that as brains get larger, they save space and energy by limiting the number of direct connections between regions. The large human brain has relatively few of these long-distance connections. But Bullmore and van den Heuvel showed that these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse. “You pay a price for intelligence,” Bullmore concludes, “and the price is that you can’t simply minimize wiring.” Intelligence Design If communication between neurons, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable. Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel—a maneuver that causes it to open or close—the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. 
Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens. It sounds like a horrible evolutionary design flaw—but in fact, it is a compromise. “If you make the spring on the channel too loose, then the noise keeps on switching it,” Laughlin says—as happens in the biology experiment described earlier. “If you make the spring on the channel stronger, then you get less noise,” he says, “but now it’s more work to switch it,” which forces neurons to spend more energy to control the ion channel. In other words, neurons save energy by using hair-trigger ion channels, but as a side effect the channels can flip open or close accidentally. The trade-off means that ion channels are reliable only if you use large numbers of them to “vote” on whether or not a neuron will generate an impulse. But voting becomes problematic as neurons get smaller. “When you reduce the size of neurons, you reduce the number of channels that are available to carry the signal,” Laughlin says. “And that increases the noise.” In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes. This fundamental compromise between information, energy and noise is not unique to biology. 
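Laughlin's "voting" argument is statistical: if each channel opens independently with some probability, the fluctuation in the fraction that are open shrinks roughly as one over the square root of the channel count. A toy Monte Carlo, with a hypothetical open probability of 0.5:

```python
import math
import random

def open_fraction_std(n_channels, p=0.5, trials=600, seed=1):
    """Std. dev. of the fraction of channels open across repeated trials."""
    rng = random.Random(seed)
    fracs = [sum(rng.random() < p for _ in range(n_channels)) / n_channels
             for _ in range(trials)]
    mean = sum(fracs) / trials
    return math.sqrt(sum((f - mean) ** 2 for f in fracs) / trials)

small = open_fraction_std(100)    # a small axon's "electorate"
large = open_fraction_std(2500)   # 25x more channels
print(small / large)              # noise shrinks roughly sqrt(25) = 5x
```

With 25 times as many channels the relative noise drops about fivefold, which is why small axons, carrying few channels, start firing spuriously.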
It applies to everything from optical-fiber communications to ham radios and computer chips. Transistors act as gatekeepers of electrical signals, just like ion channels do. For five decades engineers have shrunk transistors steadily, cramming more and more onto chips to produce ever faster computers. Transistors in the latest chips are 22 nanometers. At those sizes, it becomes very challenging to “dope” silicon uniformly (doping is the addition of small quantities of other elements to adjust a semiconductor’s properties). By the time they reach about 10 nanometers, transistors will be so small that the random presence or absence of a single atom of boron will cause them to behave unpredictably. Engineers might circumvent the limitations of current transistors by going back to the drawing board and redesigning chips to use entirely new technologies. But evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years, explains Heinrich Reichert, a developmental neurobiologist at the University of Basel in Switzerland—like building a battleship with modified airplane parts. Moreover, there is another reason to doubt that a major evolutionary leap could lead to smarter brains. Biology may have had a wide range of options when neurons first evolved, but 600 million years later a peculiar thing has happened. The brains of the honeybee, the octopus, the crow and intelligent mammals, Roth points out, look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement.” Such evolutionary convergence usually suggests that a certain anatomical or physiological solution has reached maturity so that there may be little room left for improvement. Perhaps, then, life has arrived at an optimal neural blueprint. 
That blueprint is wired up through a step-by-step choreography in which cells in the growing embryo interact through signaling molecules and physical nudging, and it is evolutionarily entrenched. Bees Do It So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Laughlin doubts that there is any hard limit on brain function the way there is one on the speed of light. “It’s more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.” Our brain can pack in only so many neurons; our neurons can establish only so many connections among themselves; and those connections can carry only so many electrical impulses per second. Moreover, if our body and brain got much bigger, there would be costs in terms of energy consumption, dissipation of heat and the sheer time it takes for neural impulses to travel from one part of the brain to another. The human mind, however, may have better ways of expanding without the need for further biological evolution. After all, honeybees and other social insects do it: acting in concert with their hive sisters, they form a collective entity that is smarter than the sum of its parts. Through social interaction we, too, have learned to pool our intelligence with others. And then there is technology. For millennia written language has enabled us to store information outside our body, beyond the capacity of our brain to memorize. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence beyond our body. In a sense, it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts. ABOUT THE AUTHOR(S) Douglas Fox is a freelance writer living in San Francisco. 
He is a frequent contributor to New Scientist, Discover and the Christian Science Monitor, and a recipient of several awards, most recently an Award for Reporting on a Significant Topic from the American Society of Journalists and Authors.
yes
Neuroscience
Is there a connection between brain size and intelligence?
yes_statement
"brain" "size" is "connected" to "intelligence".. "intelligence" is linked to "brain" "size".
https://www.psychologicalscience.org/news/releases/brain-size-cognitive-performance.html
Bigger Brains Linked With Slightly Better Cognitive Performance ...
Bigger Brains Linked With Slightly Better Cognitive Performance For more than 200 years, scientists have looked for a link between brain size and cognitive performance, yet the connection has remained hazy and fraught. Research led by Gideon Nave of the University of Pennsylvania’s Wharton School and Philipp Koellinger of Vrije Universiteit Amsterdam has clarified the connection, with the largest study of its kind. Using MRI-derived information about brain size in connection with cognitive performance test results and a measure of educational attainment, obtained from more than 13,600 individuals, the researchers found that, as previous studies have suggested, there is a positive relationship between brain volume and performance on cognitive tests. But that finding comes with important caveats. The research is published in Psychological Science, a journal of the Association for Psychological Science. “The effect is there,” says Nave, an assistant professor of marketing at Wharton. “On average, a person with a larger brain will tend to perform better on tests of cognition than one with a smaller brain. But size is only a small part of the picture, explaining only about two percent of the variability in test performance. For educational attainment the effect was even smaller: an additional 100 cm³ cup full of brain would increase an average person’s years of schooling by less than 5 months.” Koellinger adds, “This implies that factors other than this one single variable that has received so much attention over the years account for 98% of the variation in cognitive test performance. Yet, the effect is strong enough that all future studies that will try to unravel the relationships between more fine-grained measures of brain anatomy and cognitive health should control for total brain volume. 
Thus, we see our study as a small, but important, contribution to better understanding differences in cognitive health among people.” Nave and Koellinger’s collaborators included Joseph Kable, Baird Term Professor in Penn’s Department of Psychology; Wi Hoon Jung, a former postdoctoral researcher in Kable’s lab; and Richard Karlsson Linnér, a postdoc in Koellinger’s group. From the outset, the researchers sought to minimize as much as possible the effects of bias and confounding factors in their research. They preregistered the study, meaning they published their methods and committed to publishing ahead of time so they couldn’t simply bury the results if the findings appeared to be insignificant. Their analyses also systematically controlled for sex, age, height, socioeconomic status, and population structure (measured from the participant’s genes). Height is correlated with better cognitive performance, for example, but also with bigger brain size, so their study attempted to zero in only on the contribution of brain size. Earlier studies had consistently identified a correlation between brain size and cognitive performance tests, but the relationship seemed to grow weaker as studies included more participants, so Nave and Koellinger hoped to pursue the question with a sample size that dwarfed previous efforts. The study relied on a recently amassed dataset, the UK Biobank, a repository of information from more than half a million people across the United Kingdom. The Biobank includes participants’ health and genetic information as well as brain scan images of a subset of roughly 20,000 people—a number that is growing by the month. “This gives us something that never existed before,” Koellinger says. 
“This sample size is gigantic—70 percent larger than all prior studies on this subject put together—and allows us to test the relationship between brain size and cognitive performance with much greater reliability.” Measuring cognitive performance is a difficult task, and the researchers note that even the evaluation used in this study has weaknesses. Participants took brief tests for logic, memory, and reaction time, but not acquired knowledge, yielding a relatively noisy measure of general cognitive performance. Using a model that incorporated a variety of variables, the team looked to see which were predictive of better cognitive performance and educational attainment. Even controlling for other factors, like height, socioeconomic status, and genetic ancestry, total brain volume was positively correlated with both. The findings are somewhat intuitive. “It’s a simplified analogy, but think of a computer,” Nave says. “If you have more transistors, you can compute faster and transmit more information. It may be similar in the brain to some extent. If you have more neurons, this may allow you to have a better memory, or complete more tasks in parallel. However, things are likely to be much more complex in reality. For example, consider the possibility that a bigger brain – which is highly heritable – is associated with being a better parent. In this case, the association between a bigger brain and test performance may simply reflect the influence of parenting on cognition. We won’t be able to get to the bottom of this without more research.” One of the notable findings related to differences between males and females. “Just like with height, there is a pretty substantial difference between males and females in brain volume, but this doesn’t translate into differences in cognitive performance,” Nave says. A more nuanced look at the brain scans may explain this result.
Other studies reported that in females, the cerebral cortex—the outer layer of the front part of the brain—tends to be thicker than in males. “This might account for the fact that, despite having relatively smaller brains on average, there is no effective difference in cognitive performance between males and females,” Nave says. “And of course, many other things could be going on.” Yet the authors underscore that the overarching correlation between brain volume and “braininess” was a weak one; no one should be measuring job candidates’ head sizes during the hiring process, Nave jokes. Indeed, what stands out from the analysis is how little total brain volume seems to explain. Factors such as parenting style, education, nutrition, personality traits, and others, are likely major contributors that were not specifically tested in the study. In follow-up work, the researchers plan to zoom in to determine whether certain regions of the brain, or connectivity between them, play an outsized role in contributing to braininess. They’re also hopeful that a deeper understanding of the biological underpinnings of cognitive performance can help shine a light on environmental factors that contribute, some of which can be influenced by our actions and our governments’ policies. “Suppose you have the necessary biology to become a fantastic golf or tennis player, but you never have the opportunity to play, so you’ll never realize your potential,” Nave says. And Koellinger adds, “We’re hopeful that if we can understand the biological factors that are linked to cognitive performance, it will allow us to identify the environmental circumstances under which people can best manifest their potential and remain cognitively healthy. We’ve just started to scratch the top of the iceberg here.” The research was supported by ERC Consolidator Grant to Koellinger, the Wharton Neuroscience Initiative, and the Wharton’s Dean Research fund. 
Comments I would have hoped the authors might have come up with a better title. From my perspective, the one above is too easily interpreted as “bigger is better,” which does not represent the findings of the study. It actually is appropriate to interpret as “bigger is better,” as long as it is across only one sex. Bigger is better in either females or males. It does not, however, work BETWEEN the sexes. As indicated in the study, one possible reason for this is a thicker cerebral cortex in the female brain.
Bigger Brains Linked With Slightly Better Cognitive Performance For more than 200 years, scientists have looked for a link between brain size and cognitive performance, yet the connection has remained hazy and fraught. Research led by Gideon Nave of the University of Pennsylvania’s Wharton School and Philipp Koellinger of Vrije Universiteit Amsterdam has clarified the connection, with the largest study of its kind. Using MRI-derived information about brain size in connection with cognitive performance test results and a measure of educational attainment, obtained from more than 13,600 individuals, the researchers found that, as previous studies have suggested, there is a positive relationship between brain volume and performance on cognitive tests. But that finding comes with important caveats. The research is published in Psychological Science, a journal of the Association for Psychological Science. “The effect is there,” says Nave, an assistant professor of marketing at Wharton. “On average, a person with a larger brain will tend to perform better on tests of cognition than one with a smaller brain. But size is only a small part of the picture, explaining only about two percent of the variability in test performance. For educational attainment the effect was even smaller: an additional 100cm3 cup full of brain would increase an average person’s years of schooling by less than 5 months.” Koellinger adds, “this implies that factors other than this one single variable that has received so much attention over the years account for 98% of the variation in cognitive test performance. Yet, the effect is strong enough that all future studies that will try to unravel the relationships between more fine-grained measures of brain anatomy and cognitive health should control for total brain volume. 
Thus, we see our study as a small, but important, contribution to better understanding differences in cognitive health among people.” Nave and Koellinger’s collaborators included Joseph Kable, Baird Term Professor in Penn’s Department of Psychology; Wi Hoon Jung, a former postdoctoral researcher in Kable’s lab; and Richard Karlsson Linnér, a postdoc in Koellinger’s group.
yes
Neuroscience
Is there a connection between brain size and intelligence?
yes_statement
"brain" "size" is "connected" to "intelligence".. "intelligence" is linked to "brain" "size".
https://www.nm.org/healthbeat/healthy-tips/battle-of-the-brain-men-vs-women-infographic
Battle of the Brain: Men Vs. Women [Infographic] | Northwestern ...
Battle of the Brain: Men Vs. Women [Infographic] When it comes to the brain, just how different are men and women? You may have heard that men’s brains are larger. It turns out, there are additional differences in grey matter and brain patterns as well. See what it all means for you and your health. (Not) All About Size Although the male brain is 10 percent larger than the female brain, it does not impact intelligence. Despite the size difference, men’s and women’s brains are more alike than they are different. One area in which they do differ is the inferior-parietal lobule, which tends to be larger in men. This part of the brain is linked with mathematical problems, estimating time and judging speed. Another area that was previously debated was the hippocampus, which is associated with memory, but recent studies have found no differences in the hippocampus between genders. Grey Matter Matters There is evidence that women have more grey matter in their brains. Grey matter contains cell bodies that help our bodies process information in the brain and is located within regions of the brain that are involved with muscle control and sensory perception. (Bad news for expecting moms: Grey matter decreases during pregnancy, which helps explain “pregnancy brain.”) That said, women have been found to use more white matter, which connects processing centers, while men use more grey matter. This could explain why men tend to excel at task-focused projects, while women are more likely to excel at language and multitasking. Brain Chemistry In studies examining connections within the brain, it has been found that women tend to have stronger connections side to side, which could lead to better intuitive thinking, analyzing, and drawing of conclusions. Men, on the other hand, tend to have stronger connections from front to back, which can result in heightened perception and stronger motor skills. 
Recent studies have also suggested that the cerebellum, historically thought only to be involved in the coordination of movement, may be slightly different between the sexes and actually have an effect on behavior and thinking as well. In addition to brain-processing patterns, men and women have different brain chemistry. While both process the same neurochemicals, they process them differently. For example, serotonin (which is connected to happiness and depression) does not process the same in women. This could help explain why women are more susceptible to anxiety and depression. What Does It Mean for Health Risks? Differences in genetic make-up may lead to a higher risk of developing certain health issues. Although this is a continuing field of research, some correlations may exist between hormones, such as testosterone or estrogen, and particular health conditions. Depression, stress and anxiety, all of which are more common in women, are other factors that can lead to additional health concerns such as stroke. See what this means for your health in the infographic below. (Plus, learn other surprising facts about your brain here.)
Battle of the Brain: Men Vs. Women [Infographic] When it comes to the brain, just how different are men and women? You may have heard that men’s brains are larger. It turns out, there are additional differences in grey matter and brain patterns as well. See what it all means for you and your health. (Not) All About Size Although the male brain is 10 percent larger than the female brain, it does not impact intelligence. Despite the size difference, men’s and women’s brains are more alike than they are different. One area in which they do differ is the inferior-parietal lobule, which tends to be larger in men. This part of the brain is linked with mathematical problems, estimating time and judging speed. Another area that was previously debated was the hippocampus, which is associated with memory, but recent studies have found no differences in the hippocampus between genders. Grey Matter Matters There is evidence that women have more grey matter in their brains. Grey matter contains cell bodies that help our bodies process information in the brain and is located within regions of the brain that are involved with muscle control and sensory perception. (Bad news for expecting moms: Grey matter decreases during pregnancy, which helps explain “pregnancy brain.”) That said, women have been found to use more white matter, which connects processing centers, while men use more grey matter. This could explain why men tend to excel at task-focused projects, while women are more likely to excel at language and multitasking. Brain Chemistry In studies examining connections within the brain, it has been found that women tend to have stronger connections side to side, which could lead to better intuitive thinking, analyzing, and drawing of conclusions. Men, on the other hand, tend to have stronger connections from front to back, which can result in heightened perception and stronger motor skills. 
Recent studies have also suggested that the cerebellum, historically thought only to be involved in the coordination of movement, may be slightly different between the sexes and actually have an effect on behavior and thinking as well. In addition to brain-processing patterns, men and women have different brain chemistry.
no
Neuroscience
Is there a connection between brain size and intelligence?
no_statement
there is no "connection" between "brain" "size" and "intelligence".. "intelligence" is not determined by "brain" "size".
https://www.criver.com/eureka/whats-so-special-about-einsteins-brain
What's So Special About Einstein's Brain? | Charles River
What’s So Special About Einstein’s Brain? After Albert Einstein’s death in 1955, scientists all over the world scrambled for the opportunity to get a piece of his brain. It was a pathologist by the name of Thomas Harvey who got to it first. After Harvey, a piece of Einstein’s brain went to a neuroanatomist named Marian Diamond, a PhD scientist from the University of California at Berkeley. Asking the question “What makes a genius a genius?” Diamond’s lab went looking for answers. Bigger and Better things? It’s only natural to assume that given Einstein’s high-powered cognition, his brain was simply, well, bigger, right? Myth #1: ‘Bigger is better.’ While this seems like a logical enough assumption, the fact is that brain size and cognitive power are not linked. Look at a whale, for example. A sperm whale’s brain is over 50 times bigger than a human’s, but you won’t see Shamu challenging Deep Blue to a chess match anytime soon. Weight also has little correlation with intelligence. In fact, a sperm whale’s brain weighs 17 pounds, an elephant’s a little over 10–whereas a human’s weighs about 3 pounds and accounts for approximately 2% of the total body weight. A 1999 study by a research team at the Faculty of Health Sciences at McMaster University actually showed that Einstein’s brain was smaller than average. While smaller overall, there were, however, certain areas of his brain that were above average. Based on photographs of his brain, this study showed that Einstein’s parietal lobes–the top, back parts of the brain–were actually 15% larger than average. Two structures, the left angular gyrus and supramarginal gyrus, were particularly enlarged. These areas, while known to have little to do with IQ, are linked to mathematical ability, visuospatial cognition and become highly active when making unusual associations on tests of creativity. 
The researchers concluded this was likely the reason Einstein could perform the conceptual gymnastics needed to think about time and space with such imagery and abstraction. More…the Merrier? So, ‘bigger’ isn’t necessarily ‘better.’ Surely though, Einstein’s brain must have had more nerve cells, that is, neurons–those hard-working, fast-firing micro-machines, which network together and make up everything from the memories we retain to the personalities we display? Myth #2: ‘More is better.’ Unlike the first myth, there is some truth to this one. Einstein’s brain did actually have more brain cells. Not neurons, but rather a type of brain cell most neuroscientists never really paid much attention to. And this brings us back to Marian Diamond’s research and the dissection of Einstein’s brain. What Diamond and her research team found was that Einstein’s brain had a higher percentage of brain cells, namely glial cells. In brain science, neurons get all the glory. But the real miracle workers in the brain are the glial cells, Greek for “glue,” which protect and maintain neurons and cellular networks. Once thought to simply nourish (provide oxygen and nutrients) and support nerve cells (hold in place and remove waste), these cells have been found to speed communication between neurons and thus affect overall cognitive capacity. Diamond and her team found that Einstein’s brain had more glial cells relative to neurons, especially (that is, statistically significant) in an area of the brain called the left inferior parietal area, a region responsible for synthesizing information from different areas of the brain. Making the Connection Enhanced cognitive ability isn’t just a function of the number of glial cells, but also the number of connections between them–another finding from Diamond’s lab. No more myths here; we’ve always known that more connections between brain cells facilitates faster and more sophisticated communication. 
Such increased connections were found in Einstein’s brain, particularly in the cortex–the outermost and newest layer of the brain, which includes the prefrontal cortex, the temporal lobes and hippocampus (structure responsible for memory). Seeking to understand what stimulated such enhanced connectivity, Diamond compared rats in an environment without environmental stimulation to rats in an enriched environment, also given a variety of learning tasks. Results showed that rats without any new challenges or learning tasks had less synaptic connections (measured by brain weight) than those rats that were challenged and forced to learn new information. Einstein showed this same extraordinary amount of connections in multiple brain regions, presumably due to his insatiable curiosity, determination to learn more and passion for solving the riddles physics had to offer. The Makings of a Genius When you think genius, you think of Albert Einstein. It’s no surprise then that after his death, every scientist under the sun wanted to see what made his supremely advanced organ tick. Keeping in mind that his is just one brain, with regard to size, the assumption that “bigger is better” was found to be false. Additionally, from studying his brain, the assumption that more cells makes for a better brain was also put to rest. It appears the ‘glue’ in our brains has much to do with genius. Glial cells, which for years were thought to be the mother hens of the human brain, speed communication between neurons and enhance cognitive capability, especially when found in high numbers and with increased connectivity. But, if you don’t believe any of this and want to see for yourself, Einstein’s brain can be found in all its glory at the Mutter Museum in Philadelphia as well as in the British Museum where two of the 140 slices of his brain are on loan.
Weight also has little correlation with intelligence. In fact, a sperm whale’s brain weighs 17 pounds, an elephant’s a little over 10–whereas a human’s weighs about 3 pounds and accounts for approximately 2% of the total body weight. A 1999 study by a research team at the Faculty of Health Sciences at McMaster University actually showed that Einstein’s brain was smaller than average. While smaller overall, there were, however, certain areas of his brain that were above average. Based on photographs of his brain, this study showed that Einstein’s parietal lobes–the top, back parts of the brain–were actually 15% larger than average. Two structures, the left angular gyrus and supramarginal gyrus, were particularly enlarged. These areas, while known to have little to do with IQ, are linked to mathematical ability, visuospatial cognition and become highly active when making unusual associations on tests of creativity. The researchers concluded this was likely the reason Einstein could perform the conceptual gymnastics needed to think about time and space with such imagery and abstraction. More…the Merrier? So, ‘bigger’ isn’t necessarily ‘better.’ Surely though, Einstein’s brain must have had more nerve cells, that is, neurons–those hard-working, fast-firing micro-machines, which network together and make up everything from the memories we retain to the personalities we display? Myth #2: ‘More is better.’ Unlike the first myth, there is some truth to this one. Einstein’s brain did actually have more brain cells. Not neurons, but rather a type of brain cell most neuroscientists never really paid much attention to. And this brings us back to Marian Diamond’s research and the dissection of Einstein’s brain. What Diamond and her research team found was that Einstein’s brain had a higher percentage of brain cells, namely glial cells. In brain science, neurons get all the glory. 
But the real miracle workers in the brain are the glial cells, Greek for “glue,” which protect and maintain neurons and cellular networks.
no
Otorhinolaryngology
Is there a cure for pulsatile tinnitus?
yes_statement
there is a "cure" for "pulsatile" "tinnitus".. "pulsatile" "tinnitus" can be "cured".
https://www.hear-it.org/pulsatile-tinnitus
Pulsatile Tinnitus - When you hear a thumping sound in your ear
What are the causes of pulsatile tinnitus? Pulsatile tinnitus, the thumping in the ear, is often related to the blood flow in the vessels (arteries and veins) near the ears or an increased awareness of the blood flow around the ears. There can be different possible causes for pulsatile tinnitus. Changes in the blood flow such as general increased blood flow, local increased blood flow or turbulent blood flow may be the cause. Also high blood pressure or a narrowing of a blood vessel near the ear may cause pulsatile tinnitus. Tinnitus and stress are often connected. With pulsatile tinnitus, the effect of stress is indirect as stress can exacerbate blood pressure. But there can also be other causes of pulsatile tinnitus. Some may only experience pulsatile tinnitus in one ear. Others only experience pulsatile tinnitus when they are lying down. Hear tinnitus Do you experience pulsatile tinnitus? Maybe you have another type of tinnitus as well. What do other types of tinnitus sound like? Listen to our examples of tinnitus. How can I stop the pulsing sound in my ear? You cannot stop or cure pulsatile tinnitus yourself. If you experience pulsatile tinnitus, it is advisable to see an ENT doctor or another medical specialist to have your ears, blood pressure and blood vessels near the ear and general health checked. If there is a specific cause for the problem, this may be solved. In other cases, no specific cause can be identified. Many have tried to find a way to cure tinnitus of any type, but so far no scientifically proven solution has been found. Pulsatile tinnitus can be alleviated as the patient learns to cope with the condition, e.g. through counselling, and live a normal life.
What are the causes of pulsatile tinnitus? Pulsatile tinnitus, the thumping in the ear, is often related to the blood flow in the vessels (arteries and veins) near the ears or an increased awareness of the blood flow around the ears. There can be different possible causes for pulsatile tinnitus. Changes in the blood flow such as general increased blood flow, local increased blood flow or turbulent blood flow may be the cause. Also high blood pressure or a narrowing of a blood vessel near the ear may cause pulsatile tinnitus. Tinnitus and stress are often connected. With pulsatile tinnitus, the effect of stress is indirect as stress can exacerbate blood pressure. But there can also be other causes of pulsatile tinnitus. Some may only experience pulsatile tinnitus in one ear. Others only experience pulsatile tinnitus when they are lying down. Hear tinnitus Do you experience pulsatile tinnitus? Maybe you have another type of tinnitus as well. What do other types of tinnitus sound like? Listen to our examples of tinnitus. How can I stop the pulsing sound in my ear? You cannot stop or cure pulsatile tinnitus yourself. If you experience pulsatile tinnitus, it is advisable to see an ENT doctor or another medical specialist to have your ears, blood pressure and blood vessels near the ear and general health checked. If there is a specific cause for the problem, this may be solved. In other cases, no specific cause can be identified. Many have tried to find a way to cure tinnitus of any type, but so far no scientifically proven solution has been found. Pulsatile tinnitus can be alleviated as the patient learns to cope with the condition, e.g. through counselling, and live a normal life.
no
Otorhinolaryngology
Is there a cure for pulsatile tinnitus?
yes_statement
there is a "cure" for "pulsatile" "tinnitus".. "pulsatile" "tinnitus" can be "cured".
https://www.nature.com/articles/srep36601
Dural arteriovenous fistula masquerading as pulsatile tinnitus ...
Abstract Pulsatile tinnitus (PT) is often an initial presenting symptom of dural arteriovenous fistula (dAVF), but it may be overlooked or diagnosed late if not suspected on initial diagnostic work-up. Here, we assess anatomical features, treatment outcomes, and clinical implications of patients with PT due to dAVF. Of 220 patients who were diagnosed with dAVF between 2003 and 2014, 30 (13.6%) presented with only PT as their initial symptom. The transverse-sigmoid sinus (70.0%) was the most common site, followed by the hypoglossal canal (10.0%) and the middle cranial fossa (6.7%) on radiologic evaluation. Regarding venous drainage patterns, sinus or meningeal venous drainage pattern was the most common type (73.3%), followed by sinus drainage with a cortical venous reflux (26.7%). PT disappeared completely in 21 (80.8%) of 26 patients who underwent therapeutic intervention with transarterial embolization of the fistula, improved markedly in 3 (11.5%), and remained the same in 2 (7.7%). In conclusion, considering that PT may be the only initial symptom in more than 10% of dAVF, not only otolaryngologists but also neurologists and neurosurgeons should meticulously evaluate patients with PT. In most cases, PT originating from dAVF can be cured with transarterial embolization regardless of location and venous drainage pattern. Introduction Tinnitus, defined as the phantom perception of sound in the absence of a corresponding external source, is a term used for many forms of the symptom with various characteristics and different causes. 
The classification of tinnitus into pulsatile or non-pulsatile based on the perceived quality of the sound is of help to clinicians, because heartbeat-synchronous pulsatile tinnitus (PT) is predominantly vascular in origin1,2,3,4. PT usually results from vibrations of turbulent blood flow in vessels inside or near the middle ear5. Although PT is uncommon and represents less than 10% of all tinnitus6, it is important to recognize this category of tinnitus because PT is surgically curable when the causative vascular abnormalities are determined and resolved. In the clinical setting for patients with tinnitus, the management of PT is somewhat challenging due to its infrequency and lack of standardized diagnostic and therapeutic protocols. Of known underlying diseases, intracranial dural arteriovenous fistula (dAVF) is one of the most common causes of arterial pulse synchronous PT7,8. dAVF indicates an abnormal direct connection between dural arteries and dural veins or a venous sinus, accounting for 10–15% of intracranial arteriovenous malformations9,10. Patients with dAVF can be asymptomatic or can experience symptoms, ranging from mild PT to fatal intracranial hemorrhage, depending on the anatomical location and venous drainage pattern11. PT is often the sole initial symptom of dAVF, but a high index of suspicion and an appropriate evaluation are essential to avoid misdiagnosis and potentially catastrophic consequences. Cross-sectional images using contrast-enhanced temporal bone computed tomography and brain magnetic resonance imaging with angiography (MRI/A) give useful information for the diagnosis of dAVF, but for a complete characterization and classification of dAVF, classic angiography should be performed12. dAVFs have been managed with conservative treatment, neurosurgical resection, venous clipping, endovascular embolization, radiation therapy, and combinations of these methods11,12. 
Although surgery still plays an important role in some complex cases, most patients with dAVF can be treated successfully with transarterial or transvenous selective embolization. Considering that most dAVFs are curable with presently available treatment modalities, accurate diagnosis of dAVF presenting with only PT by performing a meticulous physical examination and choosing appropriate neuroimaging modalities is essential. Many researchers have documented radiological findings and treatment outcomes of dAVF from a neurosurgical viewpoint, but there are few studies on practical guidance for patients presenting with only PT obviously due to intracranial dAVF. Thus, we sought to evaluate retrospectively the clinical features, anatomical details, and treatment results in dAVF patients presenting with only PT. We also investigated the effects of potential influencing factors on the treatment outcomes. Methods Subjects We conducted a retrospective review of the medical records, brain MRI/A, and transfemoral cerebral angiography (TFCA) findings of 220 patients who were diagnosed with dAVF at Seoul National University Bundang Hospital between January 2003 and December 2014. Of them, a total of 30 patients (13.6%) visited the department of otolaryngology or neurosurgery with only PT as their initial symptom. Patient ages ranged from 27 to 80 years (mean, 52.8 ± 11.7 years). Of the 30 patients, 8 were men and 22 were women; 26 underwent therapeutic intervention with transarterial embolization of the fistula during TFCA. Detailed patient characteristics are summarized in Table 1. The study was approved by the institutional review board of the Clinical Research Institute at the hospital (IRB#: B-1601-332-103) and informed consent was obtained from all subjects. The study procedures were carried out in accordance with the relevant guidelines and regulations. 
Table 1 Demographic and clinical characteristics of the 26 patients with dural arteriovenous fistula presenting only with pulsatile tinnitus. On physical examination, the head-and-neck area was fully examined, including changes in PT on digital compression of the ipsilateral internal jugular vein and head rotation to the ipsi- and contralateral sides, and auscultation to locate any possible source of the PT. Pure tone audiometry was conducted in nine patients who visited the department of otolaryngology at the initial visit. The pure tone average was determined at 0.5, 1, 2, and 4 kHz. Low-tone hearing asymmetry was defined as (1) a low-tone threshold discrepancy of 10 decibels (dB) or more on at least two consecutive frequencies (0.25 and 0.5 kHz) or a discrepancy of more than 20 dB on at least one frequency (0.25 or 0.5 kHz), as determined by pure-tone audiograms, and (2) pure tone thresholds at low frequencies in the tinnitus-affected ear greater than those of the unaffected ear13. In the last four patients of the current case series, as a recording system became available, the PT acoustic characteristics were analyzed via transcanal tinnitus sound recording with an inserted microphone (RODE Microphones, Sydney, Australia), real-time recording using the Cubase 5.0 software (Steinberg Media Technologies GmbH, Hamburg, Germany), and data analysis with the aid of MATLAB R2013a (MathWorks, Natick, MA, USA). The recording and analysis methods are further described in recent articles14,15. Diagnostic evaluation and endovascular interventions All patients underwent directed evaluation in terms of the side of the tinnitus, its duration, and other otologic symptoms. When the patient initially complained of heartbeat-synchronous PT, the head-and-neck area was fully examined, including changes in PT upon digital compression of the ipsilateral internal jugular vein and auscultation to locate any possible source of PT.
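The low-tone hearing-asymmetry criterion defined above can be expressed as a small check. This is only a sketch: the dict-of-thresholds interface and the frequency keys are assumptions for illustration, not part of the study's audiometric software.

```python
def low_tone_asymmetry(tinnitus_ear, other_ear):
    """Check the low-tone hearing-asymmetry criterion described in the text.

    `tinnitus_ear` / `other_ear`: dicts of pure-tone thresholds in dB HL,
    keyed by frequency in kHz; 0.25 and 0.5 kHz are the low tones used here.
    Criterion: (1) a discrepancy of >= 10 dB at both low frequencies, or
    > 20 dB at either one, AND (2) the tinnitus-affected ear is the worse ear
    at the low frequencies.
    """
    lows = (0.25, 0.5)
    diffs = [tinnitus_ear[f] - other_ear[f] for f in lows]
    worse_side = all(d > 0 for d in diffs)            # tinnitus ear is worse
    two_consecutive_10 = all(d >= 10 for d in diffs)  # >= 10 dB at both 0.25 and 0.5 kHz
    single_20 = any(d > 20 for d in diffs)            # > 20 dB at either frequency
    return worse_side and (two_consecutive_10 or single_20)
```

For example, thresholds of 35/30 dB at 0.25/0.5 kHz in the tinnitus ear against 15/15 dB in the other ear satisfy the first branch of the criterion.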
If dAVF was highly suspected on the basis of the patient’s history and physical findings, brain MRI/A was performed to detect intra- or extratemporal vascular etiologies including dAVF. So as not to miss a diagnosis of dAVF in patients with PT, a solid diagnostic algorithm is necessary. Figure 1 summarizes the suggested diagnostic algorithm for a patient with pulsatile tinnitus. If the dAVF was judged relevant to the PT, TFCA was recommended to identify the definite feeding arteries and draining veins and to classify the lesion on the basis of the Borden classification16. After confirming the exact vascular anatomy of the dAVF, endovascular embolization therapy was preferred as the primary treatment modality. N-butyl 2-cyano-acrylate was used as an embolic glue in most cases, and platinum coils were used in some cases. Embolization was terminated by confirming that the liquid adhesive embolic agent reached the proximal draining veins via the fistula nidus. If a patient’s symptom completely disappeared, we checked follow-up brain MRI/A 6 months after the embolization, and then yearly for 2 to 3 years. If there was any residual symptom after the embolization, surgical disconnection or radiosurgery was considered in cases with a definite residual target lesion. Outcome Measures Subjective improvements in PT were measured 6 months after embolization by determining the changes in global symptoms and were categorized into the following five types: (1) completely disappeared and ‘cured,’ (2) markedly improved, (3) slightly improved, (4) unchanged, and (5) worsened. For 6 patients whose post-treatment follow-up duration was less than 6 months, a telephone interview regarding subjective symptom improvements was performed. The treatment outcomes were compared between subgroups divided by the anatomical location and venous drainage pattern of the dAVF according to the TFCA findings.
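In outline, the diagnostic pathway just described (history and examination, then brain MRI/A, then TFCA with Borden grading, then embolization and imaging follow-up) can be sketched as a simple decision function. The function and field names below are illustrative assumptions and stand in for Figure 1, which is not reproduced here.

```python
# Borden classification as described in the text (types I-III).
BORDEN = {
    1: "drainage directly into a dural venous sinus or meningeal vein",
    2: "sinus drainage with cortical venous reflux",
    3: "cortical venous reflux only (subarachnoid veins)",
}

def pt_workup(findings):
    """Return the ordered work-up steps for a patient with pulsatile tinnitus.

    The `findings` flags are hypothetical: suspect_davf (history/physical),
    mra_davf (brain MRI/A positive), tfca_davf (TFCA confirms the fistula).
    """
    steps = ["history + physical exam (jugular compression, auscultation)"]
    if not findings.get("suspect_davf"):
        return steps + ["evaluate other intra- or extratemporal causes of PT"]
    steps.append("brain MRI/A for intra- or extratemporal vascular etiologies")
    if findings.get("mra_davf"):
        steps.append("TFCA: identify feeders/draining veins, Borden classification")
    if findings.get("tfca_davf"):
        steps.append("transarterial embolization (n-BCA glue, coils as needed)")
        steps.append("if cured: MRI/A at 6 months, then yearly for 2-3 years")
    return steps
```

Calling `pt_workup({"suspect_davf": True, "mra_davf": True, "tfca_davf": True})` walks the full pathway down to embolization and imaging follow-up.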
We examined the associations of the cure- and improvement rates with clinical parameters, such as age, gender, duration of symptoms, the side of PT, intracranial location of the lesion, and drainage pattern, according to the Borden classification. Results Patient Characteristics Of 30 patients with PT, 29 had unilateral tinnitus and one had bilateral tinnitus (Table 1). In patients with unilateral tinnitus, the right ear was affected in 9 patients and the left ear in 20. In all patients with unilateral tinnitus, the sides of intracranial dAVF corresponded with the direction of PT. In the only patient with bilateral tinnitus, the dAVF was located at the confluence of sinuses (i.e. torcula), the midline connecting point of the superior sagittal sinus, straight sinus, and occipital sinus. This patient and another three with dAVF at each jugular bulb and transverse-sigmoid sinus were observed with no angiographic management. The mean duration of tinnitus perception was 9.8 ± 22.3 (range, 1–120) months. The average follow-up duration was 24.2 ± 28.4 (range, 5–132) months. Audiometric profiles and spectro-temporal analysis results Among nine patients who underwent pure tone audiometry, one subject with dAVF at the midline torcula and bilateral PT was excluded from the audiometric profile analysis. Of eight patients included in the audiometric profile analysis, four showed audiological asymmetry: one patient showed a low-tone threshold discrepancy of 10 dB or more on more than two consecutive frequencies and three patients showed a discrepancy of more than 20 dB on at least one frequency on the ipsi-lesional side. Of four patients who underwent transcanal sound recording, three showed definite periodic, pulse-synchronous acoustic features (Fig. 2). Two of those patients displayed unique pulsatile bumps at ~1500 Hz that reached an audible SPL (see pulsatile bumps in solid line ellipse in Fig. 
2A,B), and the other exhibited large peak amplitudes and a periodic structure with a broadband nature (Fig. 2C). Figure 2 Three patients’ recorded signals represented by two-dimensional spectrograms and three-dimensional waterfall diagrams. Radiologic Findings The transverse-sigmoid sinus (21 cases, 70.0%) was the most common site of dAVF triggering PT, followed by the hypoglossal canal (3, 10.0%) and the middle cranial fossa (2, 6.7%; Fig. 3, Table 2). Venous drainage directly into the dural venous sinus or a meningeal vein (Borden type I; 22 cases, 73.3%) was more common than sinus drainage with cortical venous reflux (Borden type II; 8 cases, 26.7%; Table 3). Among the eight dAVFs with a Borden type II venous drainage pattern, seven were located at the transverse-sigmoid sinus and one at the confluence of sinuses (torcula). No patient in our series showed venous drainage directly into subarachnoid veins with cortical venous reflux only (Borden type III) on TFCA. Treatment Outcomes Of the 26 patients (M:F = 7:19) who were managed by endovascular embolization of the dAVF, PT disappeared completely in 21 (80.8%), abated substantially in 3 (11.5%), and remained the same in 2 (7.7%), resulting in a subjective symptom improvement rate of 92.3%. No patient reported slightly improved or worsened symptoms after definitive treatment of the dAVF. Four patients who were followed up without any surgical or interventional treatment showed no change in PT during the follow-up period. Of 21 patients with dAVF at the transverse-sigmoid sinus, 19 underwent endovascular embolization; of these, the symptom disappeared in 16 (84.2%), was markedly improved in 2, and was unchanged in 1. Of three patients with dAVF at the hypoglossal canal, PT disappeared in two (66.7%) and was unchanged in the other (Fig. 4).
Of 22 patients with a Borden type I drainage pattern, 19 were treated with endovascular embolization; of these, PT was cured in 14 (73.7%), improved substantially in 3, and remained the same in 2. Seven of eight patients with Borden type II drainage patterns underwent endovascular embolization and all presented with complete resolution of PT. One Borden type II patient was not treated by transarterial or transvenous embolization due to the patient’s poor general medical condition and small amount of cortical reflux. No major complications occurred after endovascular treatment. Figure 4 Comparison of treatment outcomes according to the anatomical location of the dAVF in patients who received transarterial embolization (N = 26). Discussion A few previous studies have described ear complaints and demonstrated therapeutic outcomes of endovascular intervention for PT due to dAVF as well as other vascular causes8,17,18,19. However, in this study, we focused on data from our large case series of dAVF presenting with only PT to provide our colleague clinicians helpful information on the diagnosis and treatment of this challenging disease entity. To sum up, PT was the only initial symptom in more than 10% of dAVF, and the transverse-sigmoid sinus was the most common site of dAVF triggering PT. PT improved in 92.3% of cases after endovascular embolization, and there were no significant differences in the cure rate according to anatomical location or drainage pattern. The transverse-sigmoid sinus (70.0%) was the most common site of dAVF that presented solely with PT, followed by the hypoglossal canal (10.0%) and the middle cranial fossa (6.7%). Anatomical proximity of the lesion to the inner ear results in relatively more frequent presentation with PT in dAVFs originating from the transverse and sigmoid sinus compared to dAVFs from other vascular structures11,12,18. In a previous study, PT was the chief complaint in 90% of patients with transverse/sigmoid sinus dAVF16. 
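The outcome figures summarized above can be reproduced with a few lines, including a two-sided Fisher exact test comparing the Borden type I and type II cure rates (14/19 vs. 7/7). The hand-rolled hypergeometric test below is a stdlib sketch for illustration, not the statistics software actually used in the study.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):  # probability of a table with x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Reported outcomes: overall 24/26 improved, 21/26 cured;
# Borden I cured in 14/19, Borden II cured in 7/7.
improvement_rate = 24 / 26 * 100   # 92.3%
cure_rate = 21 / 26 * 100          # 80.8%
cure_rate_borden_I = 14 / 19 * 100  # 73.7%
p = fisher_exact_2x2(14, 5, 7, 0)   # not significant (p > 0.05)
```

The non-significant p-value is consistent with the study's finding of no significant outcome differences by drainage pattern, though with only 26 treated patients the comparison is underpowered.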
Although the most common symptoms of transverse/sigmoid sinus dAVF are benign ones, such as PT or mild to moderate headache, lesions in this region are thought to be more closely associated with hemorrhagic and aggressive neurological symptoms than cavernous sinus dural AVFs11. In our series, the proportion of retrograde sinus drainage with cortical venous reflux (Borden type II) in transverse/sigmoid dAVFs (33.3%) was higher than that of dAVFs at other locations (11.1%). Considering that the Borden type II drainage pattern is more closely related to fatal complications, clinical suspicion and proper diagnosis of transverse/sigmoid dAVF are particularly important when it presents with only PT. Three patients presenting with only PT had dAVF at the hypoglossal canal. According to a recent systematic literature review on hypoglossal canal dAVF, ~75% of such patients present with PT, and PT is often the only symptom in hypoglossal canal dAVFs with solely anterograde venous drainage20. Our case series is consistent with those findings in that all three patients with hypoglossal canal dAVF presented solely with PT and all of them showed Borden type I drainage patterns on TFCA. Considering anatomical proximity, PT perception in patients with hypoglossal canal dAVFs may also be attributable to direct transmission of the venous bruit to the inner ear structure through the temporal bone. In addition, in the other seven patients, dAVFs were identified in areas such as the middle cranial fossa, cavernous sinus, occipital area, jugular bulb, and confluence of sinuses (torcula). Because more than 10% of dAVFs in our series presented with only PT, and because intracranial dAVFs can be located anywhere near the inner ear, the possibility of dAVF should always be considered when a patient presents to an outpatient clinic with a sole complaint of PT, whether the patient sees an otolaryngologist, a neurosurgeon, or a neurologist.
Of 26 patients who underwent endovascular embolization, 24 (92.3%) reported improvement of their PT; of these, 21 (80.8%) achieved complete resolution of PT and the symptoms abated greatly in the other 3 (11.5%). This cure rate is comparable to that of several case series with PT17,18,21, and to that of dAVF with other symptoms11,12. During the past two decades, endovascular management through transarterial, transvenous, or combined approaches has become the first-line treatment for dAVFs. While high-grade lesions with cortical venous reflux should be treated as soon as possible to avoid the risks of hemorrhage, low-grade dAVFs with severe debilitating PT imposing poor quality of life may also be candidates for prompt endovascular repair. In addition, if PT is the only presenting symptom in patients with dAVF, prompt management by endovascular embolization can prevent further neurological and neurosurgical complications. It is notable that there were no significant differences in treatment outcomes among the patients with regard to various anatomical features, including intracranial locations and drainage patterns. If total occlusion of the shunt is not achievable or considered too dangerous, selective disconnection of cortical venous reflux is recommended to prevent neurological morbidity22. Two patients had persistent PT after TFCA because numerous feeders of dAVF precluded complete embolization: both had Borden type I drainage patterns, one was located at the transverse-sigmoid sinus and the other was at the cavernous sinus. Nevertheless, endovascular embolization is recommendable as the initial treatment of choice for these benign types of dAVFs because TFCA carries a relatively low rate of morbidity, and the stable natural history of these dAVFs does not justify the risk of sinus sacrifice. General approaches for the management of dAVFs include conservative treatment, endovascular intervention, surgery, and radiosurgery. 
Due to the recent efficacy of endovascular therapy, microsurgical obliteration is often reserved for cases in which endovascular embolization has failed or is not feasible11,12,23. Surgical disconnection of dAVFs has shown excellent results as well, and cases involving dAVFs of the floor of the anterior cranial fossa and the superior sagittal sinus can often be treated more easily and safely with surgical approaches12,24. The cure rate of surgical elimination of the dAVF has been reported to be nearly 100%, but the risk of transient and permanent morbidity remains up to 10%12,25. Studies on stereotactic radiosurgery have reported relatively good outcomes, with complete occlusion rates of 44–87% without major complications11,12. Although we preferred embolization via a transarterial method, the optimal route of endovascular treatment remains controversial. The rates of complete ablation by transvenous embolization have been reported to be 71–100%11,12. Because the choice among transarterial, transvenous, and combined approaches depends mainly on dAVF architecture, venous drainage pattern, location, and clinical presentation, we believe that individualized endovascular treatment results in a higher cure rate with a lower complication rate. Nevertheless, in this study population we managed vascular PT in patients with dAVF reasonably well via the transarterial route, with total symptomatic resolution in 80.8%. Previous literature on vascular PT has indicated that ipsi-lesional hearing loss is observed in some patients with PT and this is probably due to the masking effect of the PT sound3,26,27. In the current case series as well, 50% of the patients evaluated by pure tone audiometry exhibited audiometric asymmetry. They showed either ipsi-lesional higher thresholds at one mid-frequency range or overall higher thresholds from low to middle frequencies.
These audiometric results are also consistent with the preliminary data of four patients who were evaluated by spectro-temporal analysis of the recorded transcanal signal. Their data exhibited either unique pulsatile bumps at the mid-frequency range that reached an audible SPL or large peak amplitudes and a periodic structure with a broadband nature (Fig. 2). These results are presumably due to the nature of blood flow through the fistula tract. That is, when the fistulous tract is narrow, it could be surmised that there is a relatively small volume of blood flow through the narrow fistulous tract and this may generate PT with relatively higher frequency but small amplitude. Then, if the fistulous tract becomes wider, a larger amount of blood flow may generate PT with a larger amplitude and a broadband nature. However, this should be confirmed by future studies on a larger number of patients with dAVF. To the best of our knowledge, this is the first report on the clinical characteristics and treatment results for dAVF patients presenting with only PT as their initial symptom. However, our case series was limited in several aspects. First, the number of subjects was relatively small and the mean follow-up period was relatively short; thus, we could not draw conclusions about the outcomes and influencing factors of Borden type III, in-depth audiological profiles, any correlation with the location and extent of dAVF, or long-term follow-up results. Further clinical experience with a larger patient group is required to further evaluate clinical characteristics and treatment results. Moreover, we could only indirectly assess symptom improvement using subjective scales because we cannot yet objectively compare pre- and post-treatment symptoms. In this regard, further work on objective measurements of PT is warranted. 
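One candidate objective measure along these lines is to quantify pulse-synchronous energy at the frequencies seen in the recordings. The sketch below is stdlib-only and stands in for the RODE/Cubase/MATLAB pipeline described in the Methods; the signal is synthetic (a ~1500 Hz "pulsatile bump" amplitude-modulated at a heartbeat-like rate), and the Goertzel recurrence measures power in a single frequency bin.

```python
import math

def goertzel_power(samples, fs, f):
    """Power of `samples` at frequency `f` (Hz) via the Goertzel recurrence."""
    n = len(samples)
    k = round(n * f / fs)          # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 8000                           # sampling rate in Hz (assumption)
t = [i / fs for i in range(fs * 2)]  # 2 s of signal
# Synthetic PT: a 1500 Hz tone amplitude-modulated at 1.2 Hz (~72 bpm).
sig = [(1 + 0.8 * math.sin(2 * math.pi * 1.2 * ti)) *
       math.sin(2 * math.pi * 1500 * ti) for ti in t]
print(goertzel_power(sig, fs, 1500) > goertzel_power(sig, fs, 500))  # True
```

A real implementation would instead compute a full spectrogram of the transcanal recording and look for pulse-synchronous bumps, as in Figure 2, but the same bin-power idea underlies both.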
In conclusion, given that PT can be the only initial symptom in more than 10% of dAVF, not only otolaryngologists but also neurologists and neurosurgeons should meticulously evaluate patients with PT to rule out the possibility of dAVF via a thorough history taking, physical examination, and audiological and psychoacoustic evaluations. When suspected, brain MRI/A and TFCA should be performed to diagnose and manage dAVF. In most cases, PT originating from dAVF can be cured by transarterial embolization regardless of the location and venous drainage pattern. Acknowledgements This work was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number HI14C-2264). Rights and permissions This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
https://www.everydayhealth.com/tinnitus/diagnosis-treatment/
How to Get Relief From Tinnitus: All About Diagnosis and Treatment ...
Sometimes wearing a hearing aid can make tinnitus less bothersome. When most people think about tinnitus, they associate it with “a ringing in the ears.” While that’s accurate in many cases, the condition can be far more complex. “It could be a ringing, but it could also be a buzzing, a whistling, a sound like ocean waves, leaves rustling — all those types of things,” says Brett Comer, MD, a surgeon and associate professor of otolaryngology at the University of Kentucky College of Medicine in Lexington. He says tinnitus can be classified as “pulsatile” or “nonpulsatile” — the former meaning it aligns with a person’s heartbeat, potentially sounding low and throbbing like a heartbeat. And in some very rare cases, a person’s tinnitus may even be audible to other people, which is known as “objective tinnitus,” because it’s a sound a doctor or someone other than the patient can hear. This form of tinnitus can be caused by “turbulent” blood flow or unusual muscle contractions near a person’s ear, according to previous research. (1) One important distinction to make is that tinnitus itself is not considered a disease; rather, it’s a symptom that signals something is wrong with the auditory system. Potential causes of tinnitus include age-related hearing loss, taking certain medicines, an underlying health problem, and other reasons, so the way to take care of the problem varies. But the first step is identifying that the symptom exists, Dr. Comer says. Though in some cases making a diagnosis isn’t as straightforward as some people might assume, he adds. How Doctors Diagnose Tinnitus After asking a patient what they’re hearing, Comer says the next step in making a diagnosis involves asking questions about the onset of tinnitus, such as: When did it start? How often is it noticeable? Your doctor will also try to measure the severity of the tinnitus.
There are several types of tests a doctor or health professional who specializes in ear health can use to gauge the strength and tone of someone’s tinnitus, including a machine that helps produce a sound that aligns with what the patient is hearing. A doctor might also use a noise-producing machine to measure the patient’s “minimum masking level,” or the amount of sound needed to cover up her tinnitus. (2) Early on, tinnitus patients will also undergo a hearing exam. “Most of the time, that test will show the classic pattern where you’re missing some high-pitched hearing,” Comer says. He says this pattern of hearing loss — and the tinnitus that results — is common among people who have been exposed to loud noises like firearms or live music concerts. In some cases — especially if the hearing in one ear is much worse than in the other — Comer says a patient will need an MRI or other form of imaging test. “We’re looking at specific structures near the ear,” he says. A benign tumor — including one type that grows on the nerve connecting the inner ear to the brain — may turn up, as well as cancerous tumors. Though Comer adds that tinnitus-causing cancerous tumors are rare. For patients with pulsatile tinnitus — the rhythmic kind that may mimic a heartbeat — imaging may show that the patient’s carotid artery or jugular is “high-riding,” and comes into contact with the cochlea or other parts of the ear, Comer explains. In these cases and for patients with tumors or other clear tinnitus triggers, an operation or some other procedure may relieve their tinnitus. “But in most cases, there’s not an easy explanation or remedy for the symptom,” he says. There Are Treatments That Can Help Minimize or Get Rid of Tinnitus Can tinnitus go away on its own? “In some cases, yes it can. Although we don’t understand why,” Comer says. Since the condition is almost always subjective, it’s possible that some patients just learn to ignore it. 
It’s also possible that their brains naturally readjust to hearing loss in ways that “squelch” the tinnitus, he says. And for some people, treating (or addressing) the underlying condition that’s causing the tinnitus can make the symptom go away. In cases when impacted earwax, a blood vessel condition, or a certain medication is causing tinnitus, fixing these treatable problems could make the tinnitus go away. (3) But for most people, tinnitus doesn’t go away by itself. Fortunately these patients have several options when it comes to managing the condition. Noise-Masking Can Make Tinnitus Less Noticeable Patients who experience the classic high-pitched, ringing type of tinnitus may only notice it in a quiet room — and especially at night while trying to sleep, Comer says. In these instances, turning on a fan or white noise machine — anything to break up the silence — can obscure the tinnitus ringing and therefore provide relief. “It may not make the tinnitus go away, but it makes it less noticeable to the brain,” he says. Some of these same masking methods — along with turning on a television or music — can also help during the daytime, too. There are also some “medical grade” sound machines that a health professional can tune specifically to block out a patient’s tinnitus. (4) A hearing aid can also help. Comer explains that some people who no longer hear well are more aware of their tinnitus because of the absence of ambient noise. “If they use a hearing aid and can pick up more background noises, that can help,” he says. Lifestyle and Behavioral Modifications Can Help With Tinnitus “Stress tends to make tinnitus worse,” explains Michael Kilgard, PhD, a professor of neuroscience at the University of Texas at Dallas.
Research has helped solidify this connection, and it also suggests that some hormones and neurotransmitters that peak when a person is tired or stressed may affect the brain in ways that heighten tinnitus symptoms, according to work published in Frontiers in Neuroscience. (5) For all these reasons, getting a good night’s sleep and keeping stress in check — both much easier said than done — may help some people rein in the severity of their tinnitus. There’s some evidence linking mindfulness practices to lower levels of stress and better quality of sleep, and that suggests mindfulness may also help people with tinnitus, too. (6) The same is true of relaxation techniques, including exercise and yoga. Cognitive Behavioral Therapy Approaches Can Help With the Most Debilitating Cases of Tinnitus For patients who are especially troubled or debilitated by their tinnitus, a handful of cognitive behavioral therapies (CBTs) can help. They are designed to help individuals learn to control their emotional reaction to tinnitus — and uncouple the symptom from the distressful response they have to it — rather than trying to cover it up or avoid it. CBT has proved effective in helping people overcome phobias and pain-related conditions. A meta-analysis of 28 studies concluded that CBT may be an effective tool to reduce the negative impact of tinnitus on quality of life and is safe, but added that further research is needed to look at long-term benefits. (7) Medication Won’t Cure Tinnitus, But It Might Help Manage Underlying or Resulting Distress There is no pill that will cure a person’s tinnitus. But for patients who have anxiety or depression as a result of their tinnitus, there are drug options. “We have patients who come in and say they’re going to do something really bad to themselves if they don’t get some relief, and antidepressants can help them cope,” Comer says. Anti-anxiety drugs may also help individuals manage their response to tinnitus, Dr. Kilgard adds.
https://austinpublishinggroup.com/cerebrovascular-disease-stroke/fulltext/ajcds-v1-id1010.php
Resolution of Pulsatile Tinnitus after Coil Embolization of Sigmoid ...
Abstract Venous sinus diverticulum is a rare vascular cause of pulsatile tinnitus characterized by an outpouching of the venous sinus into the calvarium, usually involving the sigmoid venous sinus. Sigmoid sinus diverticulum is often associated with upstream sinus stenosis. While the exact mechanism of sound generation from a sinus diverticulum is unclear, several case reports have suggested that pulsatile tinnitus can resolve after remodeling of venous blood flow such that the diverticulum is excluded from the circulation. Case reports have also suggested that treatment of both the sigmoid sinus diverticulum and the often-associated upstream sinus stenosis may ameliorate pulsatile tinnitus. We report a case of trans-venous coil embolization of a sigmoid sinus diverticulum without treatment of an additionally identified upstream sinus stenosis, resulting in cure of the patient’s pulsatile tinnitus. A review of endovascular and open surgical treatment of sinus diverticula in the treatment of pulsatile tinnitus is also presented. Case Presentation A 59-year-old postmenopausal woman presented with subjective right-sided pulse-synchronous pulsatile tinnitus (PT) increasing in intensity over the previous 18 months. Her PT became increasingly bothersome, resulting in difficulty falling asleep, awakening from sleep, and difficulty concentrating. Her PT abated with right neck compression and was exacerbated by strenuous activity, bending forward at the waist, and with Valsalva, suggestive of a venous etiology [1-4]. She denied changes in the pitch of the sound over time and also denied changes in hearing acuity, balance, swallowing, coordination, strength, or sensation. A broad workup for causes of PT was performed. A complete history and physical were performed (including otoscopy to evaluate for a middle ear mass, auscultation for a bruit, and ophthalmoscopy to search for papilledema) and were normal.
Audiologic assessment of auditory acuity and discrimination (understanding of words) was also unremarkable. Lumbar puncture demonstrated bland cerebrospinal fluid (CSF) with a normal opening pressure. Noninvasive imaging, including contrast-enhanced CT of the temporal bone, carotid ultrasound, and contrast-enhanced MRI of the brain with MR venography and MR angiography, was obtained. CT demonstrated a smoothly marginated scalloping of the inner table of the sigmoid plate with a narrow neck and extension of the sinus laterally into the bony defect, characteristic of a sigmoid sinus diverticulum [5-9] (Figure 1). No other vascular neoplasm, vascular malformation, or vascular anomaly was identified. A conventional digital subtraction angiogram (DSA) excluded a dural arteriovenous fistula (DAVF) or other vascular pathology that might have been occult on noninvasive imaging. Venous phase images from the DSA (Figure 2) demonstrated a lobulated 6.8 x 8.0 x 4.2 mm right sigmoid sinus diverticulum with a 3.4 mm neck. In addition, a stenosis in the sigmoid sinus upstream from the diverticulum was lobular in appearance, suggestive of indentations in the sinus caused by arachnoid granulations. Figure 2: Venous phase images from a DSA demonstrate a lobulated right sigmoid sinus diverticulum (white arrow) in the AP projection prior to treatment (A) measuring 6.8 x 8.0 x 4.2 mm with a 3.4 mm neck. 3-D reconstruction (B) outlines the narrow neck, as well as the upstream stenosis (black arrowhead) in the sigmoid sinus. Mid-coiling image (C) demonstrates the catheter in the diverticulum (black arrow), stenosis in the sigmoid sinus proximal to the diverticulum, and coils filling the diverticulum with partial filling of the neck of the diverticulum with contrast. Post-coiling AP image (D) of the diverticulum demonstrates the diverticulum is no longer filling with contrast and the sigmoid sinus remains patent.
3-D reconstructions in the venous phase post coiling (E and F) demonstrate the relationship of the coil mass to the sinus, as well as sinus patency. Due to the debilitating nature of the right-sided PT, the patient elected to undergo coil embolization of the sigmoid sinus diverticulum. Under general anesthesia, a 7F vascular sheath was placed in the common femoral vein and 5F vascular sheath in the contralateral common femoral artery. A 5000 unit Heparin bolus was given intravenously (appropriately increasing the activated clotting time from 134 seconds to 314 seconds). A coaxial 7F VBL guide catheter (Cordis Neurovascular, Miami Lakes, FL, USA) with a 5F Vert (Cook Medical, Bloomington, IN, USA) inner catheter was guided over a Bentson guide wire (Boston Scientific, Natick, MA, USA) from the common femoral vein, through the right atrium, and into the right internal jugular vein (IJ).
With the 7F guide catheter remaining in the high right IJ, and the 5F catheter positioned at the right IJ-sigmoid sinus junction, a 0.021-inch Prowler Select Plus (Cordis Neurovascular) microcatheter was guided over a Synchro 2 Standard (Stryker Neurovascular, Fremont, CA, USA) microwire into the sigmoid sinus diverticulum. A venous roadmap was obtained from the venous phase images of a DSA following injection of iodinated contrast into the contralateral left common carotid artery. Injection of the arteries on the side contralateral to the venous lesion allowed visualization of the venous sinuses without ipsilateral arteries to confound the images. The diverticulum was then embolized with 7 detachable coils. The upstream stenosis was not treated. Upon recovery from general anesthesia, the patient reported complete resolution of her PT. She was admitted to the intensive care unit for observation overnight and was discharged the following morning. She was placed on 81 mg ASA orally daily for two weeks to allow for endothelialization of the coil-sinus interface and minimize the risk of sinus thrombosis. At 6-month follow-up, the patient reported complete resolution of right-sided PT. Follow-up angiogram demonstrated complete obliteration of the sigmoid sinus diverticulum and preserved patency of the dominant right sigmoid sinus. Discussion/Conclusion Pulsatile tinnitus (PT) is the auditory perception of a rhythmic sound in the absence of an external source. The differential diagnosis of PT can be classified into vascular and nonvascular etiologies. Vascular etiologies can be divided into arterial causes (e.g. carotid artery dissection, fibromuscular dysplasia, aberrant internal carotid artery, glomus tumor, contralateral carotid artery stenosis resulting in an ipsilateral carotid high-flow state), and venous causes (e.g. stenosis, dural arteriovenous fistula, sinus diverticulum, high jugular bulb, intracranial hypertension).
Sigmoid sinus diverticulum, also referred to as sigmoid sinus “aneurysm”, is a rare vascular etiology for PT characterized by an outpouching of the venous sinus into the calvarium, usually involving the sigmoid sinus. The exact mechanism of sound generation from a sinus diverticulum is unclear. Previous authors suggest that sinus diverticulum-induced PT may be a result of vibration of the venous sinus wall (caused by turbulence in the sigmoid sinus diverticulum) that is sensed by the cochlea [6,7]. An upstream stenosis of the venous sinuses is often noted in association with a sigmoid sinus diverticulum, and it has been speculated to play an additive role in sound generation [6,10,11], or even a causative role in sinus diverticulum formation [6,7,10-13]. As such, several authors suggest treating both lesions simultaneously to address venous PT [10,11]. We report a case of coil embolization of a venous sinus diverticulum without treatment of an associated venous sinus stenosis, resulting in cure of PT without complication. Several authors have proposed the coincidence of venous sinus stenosis and sigmoid sinus diverticulum as a cause of PT [6,10,11]. Since each lesion individually, or the two together, may be the causative factor in a patient with PT, a diagnostic and therapeutic dilemma arises. Eisenman approached this dilemma by performing surgical reconstruction of the sinus to treat both lesions simultaneously [10]. While his method was successful in treating PT in 13 patients, he also reports two major postoperative complications, including one patient with venous sinus thrombosis and another with signs and symptoms of venous sinus thrombosis without “radiologic evidence of dural sinus thrombosis”. Signorelli and colleagues treated both conditions simultaneously by stent placement across both the stenosis and sigmoid sinus diverticulum with subsequent balloon dilatation of the stent in the area of stenosis.
Antiplatelet therapy was subsequently used to minimize the risk of sinus thrombosis [11]. The incidence of dural venous sinus stent thrombosis, which could result in complete dural venous sinus thrombosis, is unknown. A retrospective review of venous sinus stent placement for intracranial hypertension did not report a single case of in-stent stenosis or thrombosis out of 52 stents placed [14]. A recently published multicenter analysis of cerebral arterial stent placement to assist aneurysm coil embolization found a 0.7% incidence of in-stent thrombosis (1 in 142), which resolved with intraprocedural administration of abciximab [15]. Signorelli’s decision to treat the venous stenosis was attributed in large part to a measured 10 mmHg pressure gradient across the stenosis, suggesting turbulent flow downstream from the stenosis [11]. When approaching our case, we considered the large number of patients who have a stenosis in the dominant or non-dominant transverse or sigmoid sinus due to arachnoid granulations, with very few of these patients presenting with PT. As such, we hypothesized the stenosis may be an incidental finding, or possibly contributory in the pathogenesis of the sigmoid sinus diverticulum, and not the main factor in sound generation. In treating the diverticulum only, we cured the PT without having to remodel the venous sinus in the region of the stenosis, thereby decreasing the complexity of the procedure, reducing the amount of implanted hardware, and lowering the procedural complication risk. We suggest the use of a stent be reserved for cases in which the sigmoid sinus diverticulum cannot otherwise be occluded endovascularly, and treatment of sinus stenosis only in cases of PT refractory to treatment of the sigmoid sinus diverticulum. Endovascular treatment of sigmoid sinus diverticulum using detachable coils for treatment of PT was initially described by Houdart [5].
Since that publication, 4 additional endovascular case reports have been published using a combination of detachable coils, detachable coils in conjunction with vascular stenting, as well as vascular stenting alone [2,12,13,16,17]. All cases involved female subjects (age range 31 to 59) with involvement of the dominant sinus (either directly stated by the authors or ascertained from published images using previously proposed standards [6]), with resolution of symptoms following treatment and no reported complications. Many of the authors utilized medications with platelet inhibitory properties to mitigate the risk of thrombus formation on the implanted coils and stents, presumably allowing for endothelialization as occurs in intra-arterial stent placement. Several surgical methods to treat sigmoid sinus diverticula have also been described. In addition to Eisenman, Gologorsky et al reported surgical treatment of venous sinus diverticula using self-tying U clips, as an alternative to stent-assisted coiling, in a case where primary coil embolization was technically unsuccessful; this approach resulted in resolution of symptoms (3-month follow-up) [1]. A series by Otto et al described transmastoid sigmoid sinus reconstruction to treat PT in 3 patients with sigmoid sinus diverticula. In each of these cases, the wall of the sinus was reconstructed with extraluminal placement of bone wax or temporalis muscle and fascia, resulting in complete resolution of PT (average 16.3 months follow-up) [18]. Surgical remodeling of venous diverticula appears to carry a significant risk of morbidity, including venous bleeding, hemotympanum, visual loss, embolization of bone wax, intracranial hypertension, and sinus thrombosis [1,18,19]; however, the incidence of complications remains unknown. None of the published operative cases have reported the length of stay. The relative merits of open surgical sinus remodeling versus endovascular embolization require larger patient cohorts to assess.
The potential risks of endovascular treatment of sigmoid sinus diverticulum include coil migration, thrombus formation on coils and, in cases requiring stent placement, stent migration, and in-stent thrombosis [1,2,12]. In order to minimize procedural risk, stents should be limited to cases requiring their use to adequately position coils in the venous diverticulum. Similar to treatment of arterial aneurysms, fastidious technique, appropriate coil selection, and mid embolization angiography to identify potential coil encroachment on the sinus or thrombus formation on the coils are required to minimize procedural risks. Use of a stent will likely require extended (at least six months) use of dual platelet inhibitor therapy with aspirin and clopidogrel to minimize risks of stent thrombosis and subsequent sinus occlusion [2,11]. As per Mehanna et al, we chose to place our patient on 325 mg aspirin orally for two weeks to minimize the risks of thrombus formation on the coils [13]. In summary, we present a case of PT in a 59-year-old woman with two potential venous etiologies, sigmoid sinus diverticulum and upstream sinus stenosis. In contrast to previous case reports, we elected to coil embolize the diverticulum only. The patient’s PT immediately resolved without treatment of the ipsilateral sinus stenosis.
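For illustration only (not part of the case report), the differential diagnosis of PT as enumerated in the discussion above can be organized as a small data structure. The category names and causes are taken from the text; the `lookup()` helper is a hypothetical demonstration, not a clinical tool, and the nonvascular etiologies mentioned in the text are not enumerated there and so are omitted here.

```python
# Illustrative sketch: the vascular differential diagnosis of pulsatile
# tinnitus, grouped exactly as in the discussion above.
PT_DIFFERENTIAL = {
    "arterial": [
        "carotid artery dissection",
        "fibromuscular dysplasia",
        "aberrant internal carotid artery",
        "glomus tumor",
        "contralateral carotid artery stenosis (ipsilateral high-flow state)",
    ],
    "venous": [
        "venous sinus stenosis",
        "dural arteriovenous fistula",
        "sinus diverticulum",
        "high jugular bulb",
        "intracranial hypertension",
    ],
}

def lookup(cause: str) -> str:
    """Return the vascular subtype ('arterial' or 'venous') of a named
    cause, or 'unlisted' if it does not appear in the taxonomy."""
    for subtype, causes in PT_DIFFERENTIAL.items():
        if any(cause.lower() in c for c in causes):
            return subtype
    return "unlisted"
```

For example, `lookup("sinus diverticulum")` returns `"venous"`, matching the classification used throughout the report.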
Abstract Venous sinus diverticulum is a rare vascular cause of pulsatile tinnitus characterized by an outpouching of the venous sinus into the calvarium, usually involving the sigmoid venous sinus. Sigmoid sinus diverticulum is often associated with upstream sinus stenosis. While the exact mechanism of sound generation from a sinus diverticulum is unclear, several case reports have suggested that pulsatile tinnitus can resolve after remodeling of venous blood flow such that the diverticulum is excluded from the circulation. Case reports have also suggested that treatment of both the sigmoid sinus diverticulum and the often-associated upstream sinus stenosis may ameliorate pulsatile tinnitus. We report a case of trans-venous coil embolization of a sigmoid sinus diverticulum without treatment of an additionally identified upstream sinus stenosis, resulting in cure of the patient’s pulsatile tinnitus. A review of endovascular and open surgical treatment of sinus diverticula for pulsatile tinnitus is also presented. Case Presentation A 59-year-old postmenopausal woman presented with subjective right-sided pulse-synchronous pulsatile tinnitus (PT) increasing in intensity over the previous 18 months. Her PT became increasingly bothersome, resulting in difficulty falling asleep, awakening from sleep, and difficulty concentrating. Her PT abated with right neck compression and was exacerbated by strenuous activity, bending forward at the waist, and Valsalva, suggestive of a venous etiology [1-4]. She denied changes in the pitch of the sound over time, as well as changes in hearing acuity, balance, swallowing, coordination, strength, or sensation. A broad workup for causes of PT was performed. A complete history and physical examination were performed (including otoscopy to evaluate for a middle ear mass, auscultation for a bruit, and ophthalmoscopy to search for papilledema) and were normal.
yes
Otorhinolaryngology
Is there a cure for pulsatile tinnitus?
yes_statement
there is a "cure" for "pulsatile" "tinnitus".. "pulsatile" "tinnitus" can be "cured".
https://neuroangio.org/sample-page/diagnosis-and-treatment-of-pulsatile-tinnitus/
Diagnosis and Treatment of Pulsatile Tinnitus | neuroangio.org
Diagnosis and Treatment of Pulsatile Tinnitus This section is dedicated to the ancient and hallowed institution of Trial by Jury — and the Jury Duty service which yielded time to finally put this together. Pulsatile Tinnitus is one of the least understood and most frequently underdiagnosed vascular symptoms. The constant, pulse-synchronous sound can be both alarming and profoundly disturbing. Most patients correctly interpret it as a vascular issue rather than an ear problem. They believe that some kind of abnormal blood flow has begun and are afraid that something may burst inside. As usual, patients are right — it is almost always a blood flow problem. It is completely different from nonpulsatile, constant tinnitus, which is usually a high-pitched sound and is often associated with hearing loss. In contrast, pulsatile tinnitus patients usually have normal hearing. Many who are told they have hearing loss simply cannot hear well because the sound interferes with the hearing test. Once pulsatile tinnitus is gone, hearing is magically back to baseline. What is pulsatile tinnitus? It is a pulse-synchronous sound, more often than not unilateral. It is completely different from nonpulsatile tinnitus. Pulsatile tinnitus is really a bruit. It is a sound usually caused by some kind of abnormal, turbulent blood flow near the ear. There is usually nothing wrong with the ear, which is simply doing its job of hearing sounds. With pulsatile tinnitus, the sound comes from the inside. The challenge is to figure out the source of the sound. Various descriptions of the sound are given and in some cases recorded. The most common is a “whoosh” — it is a low-frequency sound similar to a “baby sonogram.” Some patients are able to record their sound by placing a sensitive microphone into the ear or onto areas of the head, or find a similar-sounding recording among the different sounds of cardiac murmurs (like aortic regurgitation).
Of course, most cases of PT have nothing to do with cardiac pathology. Here is a real-life quite severe PT auscultated with a stethoscope on the OTHER side of the sound (sound on right, stethoscope on left mastoid bone). Cause is venous sinus stenosis. The sound is so loud on the right that it drowns out this sound on the left. This is how loud venous stenosis can get. Periods of relative silence correspond to times when the jugular vein is compressed. When compression is released, the sound comes back. Here is the sound on the “bad” side in the same patient. Same idea — relative silence corresponds to times of jugular compression. Here are a few other sample sounds of Pulsatile Tinnitus, recorded by actual patients. Courtesy of www.whooshers.com The majority of sounds are unilateral. This is simply because vascular abnormalities which cause PT are usually lateralized. Bilateral sounds can also have a vascular etiology, but it is rare. Many patients report being able to change the volume or pitch of the sound by various maneuvers such as neck repositioning, Valsalva maneuver (holding breath and bearing down), or gentle pressure on the side of the neck. The last maneuver of gentle neck pressure, which occludes the ipsilateral jugular vein, is particularly important. If the sound stops, it is almost certainly due to venous sinus stenosis or another venous sinus cause, such as a dehiscent jugular plate or diverticulum. Venous sinus stenosis is by far the most common, and also most under-recognized, cause of pulsatile tinnitus in general and venous pulsatile tinnitus in particular. It is important to listen with a stethoscope over the ear and mastoid eminence to see if the sound can be heard. Hearing the sound is a near certainty that a cause will be found. However, it is important not to put too much emphasis on “objective” pulsatile tinnitus. “Objective” means that someone other than the patient can hear the sound. However, there are many pitfalls.
Most importantly, just because the sound is not heard does not mean it is not real or significant. Perhaps it is too faint to be heard. Perhaps you need a better stethoscope. Or listen in the right place. Finally, it is entirely possible that your own hearing is not what it used to be… So, when “objective” pulsatile tinnitus is present, that is helpful. Lack of objective pulsatile tinnitus — subjective pulsatile tinnitus — does not mean anything. It could still be a dural fistula, venous stenosis, etc. Whether objective or subjective, pulsatile tinnitus is significant and warrants a thorough evaluation. How do we approach pulsatile tinnitus? First, it is key to validate the patient’s likely already-formed conclusion — that their sound is different from a constant pitch. That it has to do with blood flow. Yes, it does. Is it dangerous? Maybe, but usually not. For most doctors, that’s a surprise. Yet, it is true. Most PT cases are benign. However, a large minority are not and need a prompt workup. So, what is the approach? Ask the patient to describe the sound. What makes it better or worse? Is it truly in sync with the heartbeat? Does it increase in frequency with exercise? Can they count the number of beats per minute? Is it the same as the pulse frequency? Can they stop the sound by neck compression (venous sinus stenosis)? Is there neck pain (dissection)? Do they feel dilated pulsing vessels behind the ear (dural fistula)? Was there recent major trauma (carotid fistula)? Are there headaches or vision changes (intracranial hypertension)? Any new medications? Particularly new or different oral contraceptives / hormone replacement (intracranial hypertension again). Is there a heart valve problem (major aortic regurgitation, for example)? A general hyperdynamic state (hyperthyroidism, major anemia, etc.)? Now, here is the disclaimer for the patient (and doctor). None of the above can be used to diagnose or treat any disease in isolation.
Just because someone has neck pain and pulsatile tinnitus does not mean they have carotid dissection. A full evaluation, and usually imaging, is required. What about imaging? Imaging is essential. We need to look inside for the source of sound. The approach is geared towards vascular causes. We start with contrast brain MRI (volumetric postcontrast T1 images are as good as an MRV), Time of Flight (TOF) brain MRA, and neck MRA (contrast is better). An MRV can also be obtained; however, high-quality contrast MRI is just as good. What about CTA? I prefer MRA. The problem with CTA in pulsatile tinnitus is that one of the main conditions we are thinking of is a dural fistula. A badly timed CTA (with venous contamination) makes dural fistula more difficult to diagnose. TOF MRA does not have this problem. Also, CTA comes with both radiation and nonionic contrast. MRA does not. Other studies? A temporal bone CT can also be useful (vascular variants such as aberrant carotid, persistent stapedial artery, etc.). What about catheter angiography? The truth is that most of the time, catheter angiography is not necessary. Most causes can be seen on a good set of MR imaging studies. Having said that, catheter angiography remains the gold standard for vascular imaging and is very useful in many circumstances. A well-known association between venous sinus stenosis and intracranial hypertension exists. Patients suspected of it should have an ophthalmology evaluation for papilledema and, frequently, a lumbar puncture to definitively prove or disprove intracranial hypertension. It is wrong, however, to think that all patients with sinus stenosis have IH. Most do not. And in a good number that do, pulsatile tinnitus is the only symptom — not headaches or vision issues. This may come as a surprise, but IH does not always have to present with headaches. In fact, that’s what makes it more difficult to diagnose.
One very useful way of thinking about PT is separating the uncertainty of what the sound represents (lack of diagnosis) from the impact of the sound per se. Is the patient more bothered by not knowing what causes the sound, or by the loud and disturbing nature of the sound itself? In most cases, once the cause is found, the patient can be reassured that the cause is not dangerous. Many patients are then able to cope with such sounds if cure is impractical or felt to be too hazardous. Others, particularly with high-volume, constant PT, look for a cure even if the cause is not hazardous. The disturbing and disruptive nature of the sound is enough. For example, venous sinus stenosis is usually benign but occasionally very loud. Cure is possible (stenting), and to those who understand the risks and benefits this approach is very reasonable. Over the years, we have seen hundreds of patients with pulsatile tinnitus. We have also seen something quite incredible — the definitive role of a patient support group in educating physicians about the true nature and proper workup of pulsatile tinnitus. The sad reality is that 10 years ago most physicians, including ENT specialists, had no idea of the difference between pulsatile and nonpulsatile tinnitus. Patients with obvious vascular conditions were (and sometimes still are) told that they have an ear-ringing problem and nothing could be done. Or they were told to wait six months before starting any diagnostic imaging, to essentially “wait and see” if the sound went away. Pulsatile tinnitus rarely goes away on its own. Many patients are mistakenly told to “live with it” prior to a thorough workup or even beginning one. Unfortunately, most of us did not learn about pulsatile tinnitus in medical school. Or about nonpulsatile tinnitus either. In the absence of education, care is usually anecdotal, heterogeneous, and inadequate.
About 10 years ago a pulsatile tinnitus sufferer started a web page, www.whooshers.com, which quickly became a magnet for fellow sufferers without answers and without support. Over the years, this matured into an online community with a robust Facebook presence, periodic meetings, advocacy to change CPT codes to distinguish between pulsatile and nonpulsatile tinnitus, contacts with medical societies across the globe, etc. Probably more than anyone else, “Whooshers” has served to appropriately educate pulsatile tinnitus sufferers to in turn educate medical professionals about the vascular nature of pulsatile tinnitus. Stories of patients being misdiagnosed for years are becoming rare. How can a physician systematically approach pulsatile tinnitus differential diagnosis? One way is to think in terms of vascular anatomy. Arterial causes include vascular stenoses such as carotid dissection, fibromuscular dysplasia, or atherosclerosis. Venous causes are venous sinus stenosis (the most common cause, with the sound usually on the side of the bigger sinus), and occasionally diverticula, high jugular bulbs, lateral wall dehiscence, and a few others. Intracranial hypertension also falls into the venous category since the sound is made by venous sinus stenosis associated with intracranial hypertension. Next come arteriovenous shunts — the famous dural fistula. It is perhaps the most well known but certainly not the most common cause of PT. Too many patients are told that since there is no dural fistula nothing more can be done to find the cause. This is not true. Unclear reasons — the reality is that even after an extensive workup and multiple expert consultations, a sizable minority of patients (up to 20% I believe, but this number is a reflection of practice and referral patterns) still has no identified cause. Many of these patients have some degree of real hearing loss.
Their sounds are more often bilateral and varied — not a simple whoosh, but periodic sounds that change in pitch, volume, and character. Some will be proven to be periodic but not pulse-synchronous. Others are truly pulsatile. Many are deeply frustrated and depressed. There is no set answer. Medications / supplements are usually ineffective. Management strategies include addressing consequences of the sound — for example, how to deal with lack of sleep. Over-the-counter melatonin is a good start. Other medications and strategies are possible in consultation with a sleep specialist. Pitfalls — rhythmic nonpulsatile tinnitus is one. It is a periodic sound that is not in sync with the heartbeat. It can certainly be profoundly disturbing and require treatment. However, it is not vascular, and this is key in that a vascular workup is going to be useless. How to differentiate between true PT and this one? Count the number of sounds per minute and compare with the pulse frequency. If the sound is off by more than a few beats, it should raise suspicion of periodic, nonpulsatile tinnitus. Most patients can tell if their sound is exactly in sync with the heartbeat — it varies with exertion just like the heart, occasionally skips a beat when the heart does, etc. The most well-known periodic, nonpulsatile tinnitus is “Middle Ear Myoclonus”. It is caused by myoclonic contractions of muscles related to the middle ear — tensor tympani or stapedius. These can be heard by another person and so it is also in the differential diagnosis of “objective tinnitus”. Palatal myoclonus is another cause. Treatment is with Botox injections of the responsible muscle, or surgery. It is impossible to fully describe the range of symptoms, conditions, and other nuances of how to diagnose and treat PT. However, it is important to know that in most cases an underlying cause can be identified. Below is a representative collection of different cases of pulsatile tinnitus.
If you are a patient, it is very important to understand that the conditions shown below may be very rare, often do not cause pulsatile tinnitus (though in the following instances they did), may have other symptoms as part of the problem, etc. The purpose of this page is not to encouarage self-diagnosis. It is to show, primarily to medical professionals, the range of conditions and associated imaging findings of patients with pulsatile tinnitus. Below is a list of some cases. It is by no means a complete list. It is being updated as time and new information allows. There is a lot of literature on PT — most of it is quite good. It is impossible to list everyone. Different authors / groups approach PT from different perspectives, which are in turn influence by local practice patterns and group specialty. This for example influences their perception of what are common and rare causes of PT. We are no exception to this. For example, we see a lesser cross-section of patients with well-known causes such as Dural Fistula. The reason is that many patients come to us for second, third, and Nth opinion, having seen a range of prior specialists. In this setting, the “usual suspects” have already been identified and so the overall population is enriched in lesser known causes. Which is why we emphasize venous sinus stenosis as the most under-diagnosed cause today, in our opinion. Aside from scientific literature, there are a number of useful links on the web, but the overall spectrum is very heterogeneous and genuine caution is advised when surfing the web without a healthy sense of doubt Useful Links www.whooshers.com — premier support and information center for pulsatile tinnitus. Check out their facebook page as well
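The beat-counting check described above (count the sounds per minute and compare with the pulse rate; a gap of more than a few beats per minute raises suspicion of periodic, nonpulsatile tinnitus rather than true PT) can be sketched as a tiny helper. This is an illustrative sketch only; the 3 beats/min default tolerance is an assumed value for demonstration, not a clinical threshold.

```python
def likely_pulse_synchronous(sound_rate_bpm: float,
                             pulse_rate_bpm: float,
                             tolerance_bpm: float = 3.0) -> bool:
    """Heuristic from the text: a perceived sound rate within a few beats
    per minute of the measured pulse rate is consistent with true pulsatile
    tinnitus; a larger discrepancy raises suspicion of periodic,
    nonpulsatile tinnitus (e.g. middle ear or palatal myoclonus).
    The default tolerance is an assumed value for illustration."""
    return abs(sound_rate_bpm - pulse_rate_bpm) <= tolerance_bpm
```

For example, a counted sound rate of 72/min against a pulse of 70/min would be flagged as pulse-synchronous, while 110/min against 70/min would not.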
Others, particularly with high-volume, constant PT, look for a cure even if the cause is not hazardous. The disturbing and disruptive nature of the sound is enough. For example, venous sinus stenosis is usually benign but occasionally very loud. Cure is possible (stenting), and to those who understand the risks and benefits this approach is very reasonable. Over the years, we have seen hundreds of patients with pulsatile tinnitus. We have also seen something quite incredible — the definitive role of a patient support group in educating physicians about the true nature and proper workup of pulsatile tinnitus. The sad reality is that 10 years ago most physicians, including ENT specialists, had no idea of the difference between pulsatile and nonpulsatile tinnitus. Patients with obvious vascular conditions were (and sometimes still are) told that they have an ear-ringing problem and nothing could be done. Or they were told to wait six months before starting any diagnostic imaging, to essentially “wait and see” if the sound went away. Pulsatile tinnitus rarely goes away on its own. Many patients are mistakenly told to “live with it” prior to a thorough workup or even beginning one. Unfortunately, most of us did not learn about pulsatile tinnitus in medical school. Or about nonpulsatile tinnitus either. In the absence of education, care is usually anecdotal, heterogeneous, and inadequate. About 10 years ago a pulsatile tinnitus sufferer started a web page, www.whooshers.com, which quickly became a magnet for fellow sufferers without answers and without support. Over the years, this matured into an online community with a robust Facebook presence, periodic meetings, advocacy to change CPT codes to distinguish between pulsatile and nonpulsatile tinnitus, contacts with medical societies across the globe, etc.
yes
Otorhinolaryngology
Is there a cure for pulsatile tinnitus?
no_statement
there is no "cure" for "pulsatile" "tinnitus".. "pulsatile" "tinnitus" cannot be "cured".
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9010140/
Transtemporal Venous Decompression for Idiopathic Venous ...
Abstract Objective To evaluate the clinical characteristics and present surgical outcomes of transtemporal venous decompression technique in the treatment of pulsatile tinnitus (PT). Study Design This is a prospective cohort study. Setting This study was done at a tertiary private neurotologic skull base clinic. Participants The primary author, between March 2012 and February 2013, evaluated 55 patients with the complaint of PT. Seven out of the 55 patients were diagnosed with severe, unrelenting idiopathic pulsatile tinnitus (IPT), and were placed into the study. These seven patients had temporal bone computed tomography, magnetic resonance imaging, arteriogram, videonystagmography, electrocochleography, and lumbar puncture based on the symptoms. All seven patients underwent transtemporal venous decompression surgery. Main Outcome Measure Resolution of PT was determined as the primary outcome measure. Results Six out of seven patients had complete resolution of their PT immediately after surgery and at 3 to 4 years follow-up. One patient developed intracranial hypertension after 3 months requiring ventriculoperitoneal shunt, which resolved PT as well. No complications occurred. Conclusion A significant subset of the PT patient population has known reversible causes. The more common include conductive hearing loss, superior canal dehiscence, benign intracranial hypertension, jugulosigmoid venous anomalies, stapedial myoclonus, etc. There exists a subset of patients who have IPT. Transtemporal venous decompression is a surgical technique that can be employed to give patients with IPT long-term relief. PT is described as a conscious and undesired perception of the heartbeat in the ear.
1 Patients with pulsatile or pulse-synchronous tinnitus comprise ∼4% of tinnitus cases. 23 PT is associated with many identifiable causes such as atherosclerotic carotid artery disease, glomus tumor, aneurysms, arteriovenous malformations, jugular bulb abnormalities, sigmoid sinus wall defects, diverticulum, superior semicircular canal dehiscence, and conductive hearing loss. These entities have been successfully treated in the past with various medical and surgical techniques. 45 Intracranial venous diverticulum as a cause of PT has been treated successfully with endovascular treatment. These include venous sinus stenting and coil embolization. 6 Lenck et al described stenting of stenosis of the lateral sinus for PT in patients with a stenotic lesion. These lesions are difficult to diagnose and require transvenous manometry, and should be included in the differential of patients with PT. 7 There exists a population of patients with PT where a definitive cause cannot be identified and for which symptoms are refractory to conventional treatment. This subset of patients is diagnosed with IPT. Cognitive therapy and vasoactive medications have shown unsatisfactory results in managing symptoms of patients with IPT. 8 Bae et al treated 10 patients with IPT using tinnitus-retraining therapy; 9 of the 10 patients experienced incomplete resolution of PT. 9 Internal jugular vein ligation had shown success in treating IPT initially, but is strongly discouraged due to high recurrence rate and serious complications such as florid intracranial hypertension. 10111213 PT also occurs in ∼60% of patients with idiopathic intracranial hypertension (IIH). 14 Although typical symptoms of IIH consist of headaches or visual disturbances, PT alone, or in association with hearing loss, dizziness, and aural fullness, has also been recognized as a manifestation of this condition.
151617 Acetazolamide and other medical treatments are successful in treating mild to moderate IIH, but PT is known to persist. 18 Cerebrospinal fluid diversion surgeries can treat PT in IIH patients, but these procedures are indicated in severe cases of advancing vision loss, debilitating headaches, and progressing neurological symptoms. 192021 Various surgical treatments have been used to treat PT secondary to venous anomalies, such as sigmoid sinus diverticulum, sigmoid sinus wall defects (dehiscence, thinning), and dominant sigmoid sinus. 22 Guo and Wang reported 24 patients treated by transmastoid venous compression of dominant sigmoid sinus in patients with PT. Most patients had significant resolution, but 8 of the 24 persisted with PT. 23 Contrary to the above-mentioned causes of PT, IPT has no successful medical or surgical treatment. The persistent nature and lack of effective treatment for IPT can significantly affect health and quality of life, leading to depression and even suicidal ideations in severe cases. 24 In this article, we address diagnosis and treatment of patients with IPT that is severe and intractable, despite multifarious attempts toward their management. We propose transtemporal venous decompression as a surgical technique for treatment of PT in patients with IPT. This surgical treatment is proposed to be successful, by removing the bone over the area of the fixed wall in the sigmoid sinus responsible for the turbulent sound. Materials and Methods A prospective cohort clinical study was performed in a tertiary neurotologic skull base clinic from March 2012 to February 2013. The study was approved by the Seton Institutional Review Board of the Seton Family of Hospitals (CR-10-059). Patient History and Physical Examination The primary author evaluated a total of 55 patients complaining of unilateral PT. Detailed patient history along with demographic data, physical examinations of the ear, head, and neck regions were performed. 
Body mass index (BMI) was recorded ( Table 1 ). Audiograms and tympanograms were performed to exclude serous otitis media or any other causes of conductive hearing loss. Tinnitus severity was categorized based on Tinnitus Handicap Inventory (THI). Light digital pressure was applied over the ipsilateral internal jugular vein and its effect on loudness of PT was recorded. Seven of these 55 patients were identified who required further evaluation from a diagnostic standpoint due to negative findings. In six of the seven patients, it was observed that on application of light digital pressure over the ipsilateral internal jugular vein or turning the head to the same side, loudness of PT either diminished or resolved. Patient 5 did not observe any change. These seven patients were further evaluated using magnetic resonance venography (MRV), magnetic resonance angiography (MRA), 4-vessel arteriogram, videonystagmography, and electrocochleography based on individual requirements. Lumbar puncture (LP) was performed for all seven patients to rule out IIH. Medical Therapy Medical therapy was determined based on the most likely cause of IPT specific to each patient. Two of these seven patients were prescribed niacin for their PT. One patient suspected to have Meniere's disease was treated with transtympanic steroids. Two patients were treated with acetazolamide for suspected intracranial hypertension. None of these patients observed any reduction in PT. The remaining two of the seven patients had IPT associated with mildly elevated lumbar puncture opening pressure (LPOP; <30 cm/water). No other symptoms such as headache, blurring of vision, vision loss, papilledema, or morbid obesity of ICH were observed. These patients were encouraged to lose weight and a low-salt diet was advised. These two patients were treated with more aggressive doses of acetazolamide for up to 2 months.
Acetazolamide treatment was discontinued after a maximum of 2 months, or if side effects necessitated the termination of therapy. There was no improvement in the severity of PT. Ventriculoperitoneal (VP) shunt was not indicated due to the absence of other morbid symptoms. Operative Procedure The primary author performed all procedures, with the patient under general anesthesia. A standard postauricular incision was utilized with the patient in the supine position. The sigmoid sinus was skeletonized from the tegmen/sinodural angle as far inferior as possible to the jugular bulb. Dura was decompressed 1/2 cm posterior and anterior to the sigmoid, as far inferior as possible. Emissary veins were obliterated when encountered. The venous system was meticulously decompressed to avoid venous trauma. If bone on the venous system was not practical to remove in total, it was left as thin as possible and in segments as small as possible. These segments were usually less than 2 to 3 mm in diameter ( Fig. 2 ). The wound was closed in layers, without mastoid dressing. The patient was observed either in the intensive care unit or a step-down unit for 24 hours postoperatively. All patients were discharged on the first or second postoperative day. Completed dural and venous decompression from sinodural angle to the jugular bulb region. Results Mean age was 52 years, and mean duration of PT was 27.2 months. Mean follow-up duration postsurgery was 33.4 months ( Table 1 ). Audiogram showed preoperative average speech recognition threshold of 25 dB and postoperative speech recognition threshold of 26 dB. Preoperative pure tone average was 26 dB and postoperative was 28 dB in the operative ear. This shows a trend lower in the postoperative ear but is within test variability. Three out of seven patients had 10 to 15 dB difference in hearing compared with the nonsurgical ear. Two out of seven patients had 40 to 50 dB of low-frequency sensorineural hearing loss compared with the nonaffected ear.
Mean THI pre- and postsurgeries were 4.8 and 1.2, respectively. Six out of seven patients had complete resolution of PT immediately postsurgery and remained symptom free regardless of positioning or activity in the final evaluation at 3 to 4 years follow-up ( Table 2 ). Patient number 7 obtained almost 50% (THI grade 3) improvement immediately postsurgery and at 2 months follow-up. Her symptoms returned to preoperative level after 3 months postoperatively. This patient's preoperative LPOP was 26 cm/water. Postoperative LP and MRV were performed after 5 months. It was noted that her opening pressure had remained stable at 27 cm/water and postoperative MRV showed normal venous patency ( Fig. 3 ). She complained of persistent PT and had developed headaches. The patient was referred to a neurologist and treated medically for IIH, eventually requiring a VP shunt. This resolved PT and headache. Discussion This is a prospective clinical study, in which six of seven patients, diagnosed with severe unrelenting IPT, were successfully treated by transtemporal venous decompression. One patient, number 7, had significant resolution of PT initially. She had significant weight gain postoperatively, and developed symptoms consistent with IIH. Medical histories, physical examination, and imaging studies are important for differentiating the origin of PT. We identified seven patients, who were either nonresponsive to conventional medical treatment or had no identifiable cause for their PT. Idiopathic Pulsatile Tinnitus Although PT has varied etiologies, often no underlying pathology can be identified. 252627 Herraiz and Aparicio identified 10 such cases out of 80 and were diagnosed with “idiopathic” PT. 28 Extensive imaging studies are suggested in cases of subjective idiopathic PT, where no risk factors can be identified in the patient's medical history or physical examination. 2930 All our seven patients were diagnosed with idiopathic PT. 
Two had mildly elevated LPOP, but were still classified as idiopathic. Imaging and LP were performed for all seven patients. As stated previously, severity of PT in all our patients was graded very severe to catastrophic (THI). One patient had suicidal ideations secondary to the IPT. Several authors have advocated that if PT disappears on ipsilateral jugular vein compression, it signifies a venous cause. If it does not, an arterial cause exists. 1431 Patient 5 in our study did not observe any change in PT on ipsilateral compression of the internal jugular vein. We believe that compression of the jugular vein can be an important criterion for identifying venous PT but is not absolute. The compression strength may vary between clinicians, and strong neck compression may also decrease the arterial flow, resulting in lesser tinnitus. 32 On the contrary, too weak a compression may not even be able to alter the jugular flow. Patient's anxiety and environmental sounds are other factors. Krishnan et al attempted to classify 16 patients with PT by compression of the internal jugular vein but were unable to categorize 50% of the investigated patients because of the vague response of the patients. 33 Complete resolution of PT immediately postsurgery in patient 5, without any complications, suggests a venous origin, even though preoperative venous compression had no effect on the PT. Thus, compression testing may indicate the origin of PT but is not absolute. A venous cause of IPT is the probable source, and this characteristic of IPT makes ligation of the ipsilateral internal jugular vein a very tempting procedure. There are various reports in the literature describing successful resolution of IPT in patients who underwent internal jugular vein ligation.
3435363738 Duvillard et al argued that internal jugular vein ligation should be avoided because the jugular vein represents the dominant channel of cerebral venous outflow and its ligation poses a theoretical risk of causing intracranial hypertension. 39 It is also observed that this procedure has a high initial success, but high delayed failure rate, presumably due to the rapid development of collateral venous flow around the level of ligation, particularly through mastoid and condylar emissary veins. 40 Jackler et al also concluded that there is rarely, if ever, an indication for this procedure solely for the purpose of alleviating PT, and we agree. 22 Idiopathic Pulsatile Tinnitus with Mild IIH Mild IIH was a factor in our cohort. Two out of seven patients were diagnosed with IPT with mild IIH. Both patients were obese females. Association of intracranial hypertension with obesity is well documented in the literature. 414243 Both patients were advised and had attempted weight loss in the past without success. Weight loss benefits patients with ICH, but its effect on PT, particularly in mild IIH patients is unknown. In addition, both patients were below the BMI requirements for bariatric surgery. 44 LPOP in the two patients with IIH was 26 and 29 cm/water. LP of 25 cm/water or more is considered diagnostic of increased intracranial pressure. 45 The classic symptoms of IIH include headache, visual disturbance, and PT; however, all the symptoms are rarely collectively present in each individual patient. 14 Our two patients with mild increase LPOP had only PT as a complaint. Imaging abnormalities associated with raised intracranial hypertension include empty sella, tight subarachnoid spaces, flattening of the posterior globe, protrusion of the optic nerve head, and distension of the optic nerve sheath. 4647 Imaging was unremarkable for both our patients. 
It is likely that they were diagnosed early in course of the disease; well before any of these imaging signs appeared. This also highlights the important role otolaryngologists can play in identifying intracranial hypertension, before appearance of more severe morbid signs and symptoms. Carbonic anhydrase inhibitors, such as acetazolamide, are the main medical treatment prescribed for intracranial hypertension and its symptoms. 14849 Acetazolamide dosage was graduated from 125 mg bid up to 1,000 mg a day in our patients. Both patients with IIH were treated with acetazolamide as the first line of treatment, without affecting the PT. Shaw and Million in their series reported two patients with PT associated with IIH. They observed a complete resolution of symptoms of IIH, including PT, with acetazolamide therapy. As per their report, one of these patients remained symptom free for 1 year and the other was followed up for a few weeks. 14 Biousse et al reported two patients treated with acetazolamide for IIH. They noted that acetazolamide did help with headaches but the PT returned after 3 to 4 months, although it was intermittent and not as loud as before. 50 Vivo et al reported similar results, where two female patients were treated with acetazolamide for IIH. One patient observed an improvement in MRI findings such as restoration of optic nerve sheath thickness and bilateral optic papilla. However, both patients reported persistent moderate to severe episodes of tinnitus during the 10- and 12-month follow-up. 18 Although acetazolamide is a mainstay treatment for intracranial hypertension, it appears inadequate in treating PT associated with mild IIH. Cerebrospinal fluid diversion techniques, using ventricular shunts, have been used to dissipate the increased intracranial pressure to prevent catastrophic vision loss and/or debilitating headaches or disabling PT. 51 For patient 7, PT returned to preoperative level after 3 months. 
LPOP repeated at 5 months postoperatively was stable at 27 cm/water. This patient had normal CT, brain MRI, and MRA preoperatively. Postoperative MRV showed negative findings ( Fig. 3 ). She experienced debilitating headaches, which were not present preoperatively. PT returned to her baseline level, as she had a significant weight gain, despite weight control measures. The patient was referred to a neurologist, and was prescribed a combination of maximal dosages of topiramate and acetazolamide. Severity of symptoms led to the decision of placing a VP shunt. VP shunt resolved both the headache and the PT in this patient. High-Riding/Dehiscent Jugular Bulb Although suggestions are that the symptom of PT from IPT occurs in the area of the jugular bulb, 3452 it is our premise that PT emanates from the area of the sigmoid sinus, most likely from the mid-portion in the mastoid cavity. The incidence of high-riding jugular bulb and dehiscent jugular bulb ranges from 3.5 to 22.6%, respectively. Patients with high jugular bulb can be completely asymptomatic or may suffer from dizziness, PT, and hearing loss. 53 Various surgical treatments have been used with the aim to create a barrier and provide a sound proofing from the dehisced jugular bulb. El-Begermy and Rabie, in their case series, treated PT due to high dehiscent jugular bulb by layered reconstruction of the bony hypotympanum. Four of seven patients reported resolution of PT, and two patients reported change in character of tinnitus to whistling without improvement. One patient developed severe increase of intracranial pressure requiring treatment, but there was no improvement in PT symptoms. 54 In another study, three patients with PT with a high and dehiscent jugular bulb were treated with transcanal jugular bulb resurfacing technique, using hydroxyapatite. Two patients associated the onset of their symptoms with trauma. All the patients noted immediate improvement in PT. 
55 It is possible that patients successfully treated by various surgical repairs of the high and dehisced jugular bulb actually augmented the flow pattern within the mastoid sigmoid segment that is responsible for PT. Surgical management of the dehisced jugular bulb is difficult and can lead to venous obstruction and problematic bleeding. Surgical Treatment of PT Due to Sigmoid Sinus Wall Defects Surgical treatments have been used to treat PT for sigmoid anomalies. Kim et al treated eight patients suffering from vascular PT resulting from diverticulum and/or thinning/dehiscence of the sigmoid sinus wall. The sigmoid sinus was compressed and transmastoid reshaping was performed. Seven patients in this study obtained resolution of PT symptoms. One patient suffered a postoperative complication of increased intracranial hypertension requiring decompression of sigmoid sinus. 56 There are other similar techniques reported in the literature performed on patients with diverticulum/dehiscence involving skeletonization of the sigmoid sinus, adjacent dura, and diverticulum. 575859 Our study patient cohort had no sigmoid sinus wall defects or diverticulum. The exact location of where venous hum arises is unclear. Cho et al conducted three-dimensional reconstruction of the venous system in patients with PT. The authors indicated that PT is caused due to turbulence, which could arise from irregularity in the diameter, vessel curvature, or elasticity of the vessel. 27 We believe that as the blood flows from transverse sinus to S-shaped curvature of sigmoid sinus, it creates an area of turbulence. Venous decompression in this area changes the turbulent flow of blood, perceived as PT, to a more laminar flow, which in turn helps eliminate PT. It also most likely allows venous distension, thus augmenting the venous flow. Sigmoid Sinus Decompression for Other Conditions Sigmoid sinus decompression surgery is not new to the realm of skull base surgeries. 
It has been performed successfully in the past for treatment of other conditions. Gianoli et al performed decompression of sigmoid sinus as a part of sac vein decompression technique to treat Meniere's disease in 35 patients. There were no reported complications or adverse effects at 2 years follow-up. 60 Ostrowski and Kartush reported long-term 55 months follow-up in 56 patients who underwent sac vein decompression. None of the patients reported any complications. 61 We have used a similar form of this technique in the successful treatment of IPT. Limitations Our study has limitations. Postoperative LPOP, MRV/MRA, and MRI were not routinely performed. Conclusion This procedure is highly effective and displays excellent long-term (3–4 years) complete resolution. Patient selection is systematic, and patients who will most benefit can be identified. IPT can be successfully, safely, and definitively treated by performing transtemporal venous decompression, when conservative management fails and symptoms are severe. Our results also suggest transtemporal venous decompression may be effective for PT related to other venous anomalies such as dehisced jugular bulb and sigmoid venous dehiscence.
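Two quantitative criteria recur in the article above: the Tinnitus Handicap Inventory (THI) severity grading used to score patients, and the lumbar puncture opening pressure (LPOP) cutoff of 25 cm/water for raised intracranial pressure. The sketch below is illustrative only: the LPOP threshold is stated in the article, but the THI score bands are the commonly published five-grade scale (the paper reports grades, not raw scores), and the helper names are ours.

```python
# Hypothetical helpers illustrating two criteria from the article above.
# The THI score bands (0-100 mapped to grades 1-5) follow the commonly
# published grading and are an assumption here; the LPOP cutoff
# (>= 25 cm H2O) is stated in the article.

def thi_grade(score: int) -> int:
    """Map a THI score (0-100) to severity grade 1 (slight) .. 5 (catastrophic)."""
    if not 0 <= score <= 100:
        raise ValueError("THI score must be between 0 and 100")
    if score <= 16:
        return 1  # slight
    if score <= 36:
        return 2  # mild
    if score <= 56:
        return 3  # moderate
    if score <= 76:
        return 4  # severe
    return 5      # catastrophic

IIH_LPOP_THRESHOLD_CM_H2O = 25.0

def lpop_elevated(opening_pressure_cm_h2o: float) -> bool:
    """LP opening pressure of 25 cm/water or more is considered diagnostic
    of increased intracranial pressure, per the article."""
    return opening_pressure_cm_h2o >= IIH_LPOP_THRESHOLD_CM_H2O

# The two mildly elevated patients in the series had LPOP of 26 and 29 cm/water:
print([lpop_elevated(p) for p in (26.0, 29.0)])  # -> [True, True]
print(thi_grade(80))  # a score of 80 falls in the catastrophic band -> 5
```

Note that the mean pre- and postoperative THI values of 4.8 and 1.2 reported in the Results are grades on this 1-5 scale, not raw 0-100 scores.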
1 Patients with pulsatile or pulse-synchronous tinnitus comprise ∼4% of tinnitus cases. 23 PT is associated with many identifiable causes such as atherosclerotic carotid artery disease, glomus tumor, aneurysms, arteriovenous malformations, jugular bulb abnormalities, sigmoid sinus wall defects, diverticulum, superior semicircular canal dehiscence, and conductive hearing loss. These entities have been successfully treated in the past with various medical and surgical techniques. 45 Intracranial venous diverticulum as a cause of PT has been treated successfully with endovascular treatment. These include venous sinus stenting and coil embolization. 6 Lenck et al described stenting of stenosis of the lateral sinus for PT in patients with a stenotic lesion. These lesions are difficult to diagnose and require transvenous monography, and should be included in the differential of patients with PT. 7 There exists a population of patients with PT where a definitive cause cannot be identified and for which symptoms are refractory to conventional treatment. This subset of patients is diagnosed with IPT. Cognitive therapy and vasoactive medications have shown unsatisfactory results in managing symptoms of patients with IPT. 8 Bae et al treated 10 patients with IPT using tinnitus-retraining therapy, 9 of the 10 patients experienced incomplete resolution of PT. 9 Internal jugular vein ligation had shown success in treating IPT initially, but is strongly discouraged due to high recurrence rate and serious complications such as florid intracranial hypertension. 10111213 PT also occurs in ∼60% of patients with idiopathic intracranial hypertension (IIH). 14 Although typical symptoms of IIH consist of headaches or visual disturbances, PT alone, or in association with hearing loss, dizziness, and aural fullness, has also been recognized as a manifestation of this condition.
yes
Otorhinolaryngology
Is there a cure for pulsatile tinnitus?
no_statement
there is no "cure" for "pulsatile" "tinnitus".. "pulsatile" "tinnitus" cannot be "cured".
https://lacanadahearing.com/services/tinnitus-solutions
Tinnitus Solutions | La Cañada Hearing Aids & Audiology
Tinnitus Solutions by Dr. Kevin Ivory Do you ever experience sounds such as ringing, buzzing, roaring, or humming in your head? These sounds are known as tinnitus and almost everyone experiences these sounds from time to time, however, when they do not go away they can become bothersome. Tinnitus, known commonly as “ringing of the ear,” is a condition in which one hears noises without the presence of an external stimulus. Tinnitus appears in 80% of hearing loss cases, with a number of potential causes. Tinnitus is believed to be caused by damage to inner ear hair cells, a cause which is shared with sensorineural hearing loss. If you are experiencing tinnitus, whether temporary or chronic, schedule a visit with us at La Cañada Hearing Aids & Audiology. Just like hearing loss and diabetes, tinnitus is typically a chronic condition. So while it is true that tinnitus cannot be cured, it can be managed through education and training. At La Cañada Hearing Aids & Audiology we use evidence-based individual and group rehabilitation to help you take control of your tinnitus so it stops controlling you. What is tinnitus? Known more commonly as “ringing of the ears,” tinnitus is a condition in which people experience sounds where there is no external stimulus. The American Tinnitus Association (ATA) estimates that millions of Americans “experience tinnitus, often to a debilitating degree, making it one of the most common health conditions in the country.” Approximately 15% of Americans – over 50 million people – experience tinnitus, whether it is temporary or chronic. Approximately 20 million Americans experience chronic tinnitus, with 2 million experiencing debilitating cases. Connection to hearing loss An estimated 80% of tinnitus cases go hand in hand with hearing loss. This is due to the relationship between hearing and inner ear hair cells. With presbycusis (age-related hearing loss) and noise-induced hearing loss, there may be degeneration to inner ear hair cells. 
Inner ear hair cells are responsible for translating sound vibrations into neural signals recognized by our brains as sound. Hearing specialists suggest that tinnitus may result from the degeneration of these hair cells, as they may send phantom signals to the brain, which is then registered as sound. Types and Causes Tinnitus is usually not an isolated condition in and of itself. The appearance of tinnitus often points to other underlying health conditions. Identifying and treating tinnitus may lead to a reduction or elimination of the symptoms. The ATA notes, “While tinnitus is most often triggered by hearing loss, there are roughly 200 different health disorders that can generate tinnitus as a symptom.” Objective Tinnitus Objective tinnitus creates a sound that can be heard by people who sit nearby. Objective tinnitus comprises less than 1% of cases, and has been linked to circulatory or somatic systems in the body. Objective tinnitus takes the form of pulsatile tinnitus, in which increased blood flow or muscle spasms affect hearing. In some cases, pulsatile tinnitus is synchronous with the beating of the heart. Other cases of pulsatile tinnitus indicate a problem with the small muscles of the middle ear or the bones of the inner ear. Conditions such as high blood pressure and others that affect blood flow may lead to objective tinnitus. In these cases, by treating related medical conditions, people may find relief from tinnitus. Subjective Tinnitus With subjective tinnitus, only the person experiencing tinnitus can hear the sound. Because our bodies and nervous systems differ from person to person, these sounds take many diverse forms. Subjective tinnitus is the most common form, comprising 99% of reported cases. 
Causes of subjective tinnitus include sensorineural hearing loss due to damage of inner ear hair cells (aging, exposure to loud noise, and even certain classes of ototoxic medication); Meniere's disease; impacted earwax; or another related medical condition. Tinnitus Treatment Options Everyone reacts to tinnitus differently, which means everyone needs an individualized tinnitus treatment plan tailored specifically to them. Some patients find relief just upon being educated on tinnitus. Others find more targeted strategies such as environmental distractors, sound therapy, group rehabilitation, and cognitive behavioral therapy beneficial. Many patients benefit from a combination of several of these strategies. Occasionally tinnitus can be treated medically, and when this is discovered during the evaluation, a medical referral is given. Tinnitus has the potential to contribute to increased levels of stress, anxiety, and depression. Tinnitus has been linked to memory problems, difficulty concentrating, and fatigue. Tinnitus has been known to affect a person's emotional well-being, interfering with social interaction and employment. There is no cure for tinnitus, but there is effective treatment. Because the majority of tinnitus cases are linked with hearing, seeking a hearing exam and consultation might be the first step to finding a solution. If your tinnitus is linked with hearing loss, Dr. Ivory will provide options for hearing aids that will address both issues. Tests to match the pitch and loudness of your tinnitus, which include a hearing test of additional high pitches as needed. How do we help manage your tinnitus? If you have hearing loss, treating your hearing loss will be recommended as part of your tinnitus management plan. Period. It is like having a broken arm. If you go to the doctor and get medication for your broken arm it may feel a little better for a period of time but you still have a broken arm.
We must treat the hearing loss so that we can begin to successfully manage the tinnitus. Tinnitus is also commonly treated with the use of sound masking, available on hearing aid tinnitus therapy features. Hearing aid manufacturers also offer standalone devices to generate sound masking for tinnitus. Many hearing aid manufacturers offer hearing aids with tinnitus therapy. Tinnitus therapy is most commonly sound masking – using tones or nature sounds to mask the frustrating sounds of tinnitus. There are also exercises available to provide relief and to train the brain away from hearing the sounds of tinnitus.
In some cases, pulsatile tinnitus is synchronous with the beating of the heart. Other cases of pulsatile tinnitus indicate a problem with the small muscles of the middle ear or the bones of the inner ear. Conditions such as high blood pressure and others that affect blood flow may lead to objective tinnitus. In these cases, by treating related medical conditions, people may find relief from tinnitus. Subjective Tinnitus With subjective tinnitus, only the person experiencing tinnitus can hear the sound. Because our bodies and nervous systems differ from person to person, these sounds take many diverse forms. Subjective tinnitus is the most common form, comprising 99% of reported cases. Causes of subjective tinnitus include sensorineural hearing loss due to damage of inner ear hair cells (aging, exposure to loud noise, and even certain classes of ototoxic medication); Meniere’s disease; impacted earwax; or another related medical condition. Tinnitus Treatment Options Everyone reacts to tinnitus differently which means everyone needs an individualized tinnitus treatment plan tailored specifically to them. Some patients, just upon being educated on tinnitus, find relief. Others find more targeted strategies such as environmental distractors, sound therapy, group rehabilitation, and cognitive behavioral therapy beneficial. Many patients benefit from a combination of several of these strategies.  Occasionally tinnitus can be treated medically and when this is discovered during the evaluation, a medical referral is given. Tinnitus has the potential to contribute to increased levels of stress, anxiety, and depression. Tinnitus has been linked to memory problems, the ability to concentrate, and fatigue. Tinnitus has been known to affect a person’s emotional well-being, interfering with social interaction and employment. There is no cure for tinnitus, but there is effective treatment.
no
Otorhinolaryngology
Is there a cure for pulsatile tinnitus?
no_statement
there is no "cure" for "pulsatile" "tinnitus".. "pulsatile" "tinnitus" cannot be "cured".
https://www.nch.org/news/can-you-hear-it-now/
Can you hear it now? - Northwest Community Healthcare
Can you hear it now? Stent silences whooshing sound for patient. It can be difficult to describe a medical condition others can’t hear or understand. That was the case for Susan McElfresh, a 55-year-old Lake in the Hills resident who in the spring of 2015 told her doctor she was hearing a loud whooshing sound in her ears. An MRI and CT scan turned up nothing unusual. By the next year it grew louder, and whenever Susan’s heart would race, it was almost deafening. When she pressed on her carotid artery, the sound would go away. But like stepping on a garden hose to stop the flow of water, she knew it was unreasonable and impossible to stop the flow of blood to her brain. Susan had developed headaches, couldn’t concentrate at work, stopped going out with friends and lost faith in doctors. The path to NCH: During a scan at another hospital for an unrelated medical issue, Susan happened to mention her condition to a nuclear medicine technician named Anna. Susan learned that Anna’s mom – NCH Staff Nurse Cathy Woltmann – also had pulsatile tinnitus, but she had been cured. Anna said that Ali Shaibani, M.D., was her mom’s physician, so Susan made an appointment with him that day. “If I hadn’t met the technician and she hadn’t told me about her mother who had the same condition, I would have never gone down that path,” Susan says. After tests and trying medication that didn’t seem to help, Susan asked Dr. Shaibani to take a second look at the arteries and veins in her head. He reviewed all of her scans and tests. “He didn’t discount me,” she says. “I wanted to get this solved and he said, ‘I think it’s time we go ahead and do the procedure.’” Dr. Shaibani said that Susan had a “subset of idiopathic intracranial hypertension (IIHTN) associated with acquired narrowing of the main veins (transverse sinuses) draining the blood out of the cranium/brain.” “She was initially treated with medication but did not have relief of her symptoms,” Dr. Shaibani says.
“She then came in for treatment which consisted of diagnostic cerebral angiography and venography and direct pressure measurements in the veins to document a significant narrowing.” He put a stent in the larger transverse sinus to relieve abnormally elevated pressures and the noise Susan was hearing. While putting in a stent involves risks, Dr. Shaibani says Susan responded very well to the treatment. “Her chances of continuing to be symptom-free without recurrence are approximately 90 percent,” Dr. Shaibani says. Waking up after the procedure, Susan says she felt “reborn.” The date was March 12, 2019. “The first thing I noticed was that it was gone,” she says. “I couldn’t believe it. I was so happy. It was amazing.” That’s when the ICU nurse, knowing the story that led Susan to NCH, called Cathy Woltmann. “You have no idea how much you impacted me,” Susan told Cathy through tears. “You gave me hope, and I didn’t give up.” The two compared stories and cried together. Only patients who have pulsatile tinnitus understand what it’s like to suffer from it. “It was quite a coincidence that I happened to be working that day,” says Cathy. “Susan explained that I had to meet her because I saved her life. I said, ‘Oh, I get it because Dr. Shaibani saved my life, too.’” Cathy’s story: Cathy endured 22 months of pulsatile tinnitus and says it drove her to nearly lose her mind. To illustrate the severity, she shared how a couple of the members of a pulsatile tinnitus Facebook support group decided to end their lives due to the endless, maddening noise. When Dr. Shaibani decided to put a stent in her brain and explained the risks, Cathy told him he could do whatever he wanted. “I was willing to try anything,” Cathy says. “When I woke up in the recovery room, I couldn’t speak. I was so emotional. I just started crying because it was gone.”
“She then came in for treatment which consisted of diagnostic cerebral angiography and venography and direct pressure measurements in the veins to document a significant narrowing.” He put a stent in the larger transverse sinus to relieve abnormally elevated pressures and the noise Susan was hearing. While putting in a stent involves risks, Dr. Shaibani says Susan responded very well to the treatment. “Her chances of continuing to be symptom-free without recurrence are approximately 90 percent,” Dr. Shaibani says. Waking up after the procedure, Susan says she felt “reborn.” The date was March 12, 2019. “The first thing I noticed was that it was gone,” she says. “I couldn’t believe it. I was so happy. It was amazing.” That’s when the ICU nurse, knowing the story that led Susan to NCH, called Cathy Woltmann. “You have no idea how much you impacted me,” Susan told Cathy through tears. “You gave me hope, and I didn’t give up.” The two compared stories and cried together. Only patients who have pulsatile tinnitus understand what it’s like to suffer from it. “It was quite a coincidence that I happened to be working that day,” says Cathy. “Susan explained that I had to meet her because I saved her life. I said, ‘Oh, I get it because Dr. Shaibani saved my life, too.’ ” Cathy’s story Cathy endured 22 months of pulsatile tinnitus and says it drove her to nearly lose her mind. To illustrate the severity, she shared how a couple of the members of a pulsatile tinnitus Facebook support group decided to end their lives due to the endless, maddening noise. When Dr. Shaibani decided to put a stent in her brain and explained the risks, Cathy told him he could do whatever he wanted. “I was willing to try anything,” Cathy says. “When I woke up in the recovery room, I couldn’t speak.
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.smithsonianmag.com/smart-news/study-suggests-150-years-may-be-human-lifespans-upper-limit-180977899/
Study Suggests 150 Years May Be the Human Lifespan's Upper Limit
A new study suggests there may be a hard limit on human longevity, reports Live Science's Rebecca Sohn. That upper limit, according to the study published this week in the journal Nature Communications, is somewhere between 120 and 150 years old. At that advanced age, the researchers say the human body simply would no longer be able to bounce back and repair itself after normal stresses such as illness, according to the Guardian. The study is based on medical data from more than 500,000 volunteers that the team behind the study collated into a single number that measures the physiological toll of aging that they called the “dynamic organism state indicator” or DOSI. This figure distinguishes biological age, which is essentially how run down your cells and organ systems are, from chronological age in a manner that recalls a scene from the Indiana Jones film Raiders of the Lost Ark (1981) in which a banged up but still youthful Harrison Ford groans, “it’s not the years honey, it’s the mileage.” “What we’re saying here is that the strategy of reducing frailty, so reducing the disease burden, has only an incremental ability to improve your lifespan,” Peter Fedichev, a longevity researcher at the Moscow Institute of Physics and Technology and senior author of the study, tells Sophie Putka of Inverse. Per Live Science, the suggestion is that increasing the human lifespan beyond this hard limit would require therapies that boosted and maintained the body’s ability to be resilient and repair itself. Researchers gleaned this upper limit on human life from anonymized blood samples from 544,398 people in the United States, United Kingdom and Russia. The team primarily looked at two numbers to determine the individual’s DOSI: the ratio of two types of white blood cells that the immune system uses to fight infection and the variability in the size of red blood cells, according to Live Science. 
Both of these numbers tend to increase as people get on in years and are referred to by researchers as biomarkers of aging. The researchers calculated the human lifespan’s potential upper limits by plugging these biomarkers of aging, along with other basic medical data on each volunteer, into a computer model. “They are asking the question of ‘What’s the longest life that could be lived by a human complex system if everything else went really well, and it’s in a stressor-free environment?’” Heather Whitson, director of the Duke University Center for the Study of Aging and Human Development, who was not involved in the study, tells Emily Willingham of Scientific American. The team’s computer model suggested that even under completely ideal biological circumstances, these biomarkers of aging would have declined so much by 150 years of age that they could no longer support a living organism. But it’s not clear that making it to 150 would necessarily be pleasant. As S. Jay Olshansky, an epidemiologist at the University of Illinois at Chicago who was not involved in the study, tells Scientific American, a long lifespan is not the same thing as a long health span. “Death is not the only thing that matters,” Whitson tells Scientific American. “Other things, like quality of life, start mattering more and more as people experience the loss of them.” The kind of death this study postulates, she tells Scientific American, “is the ultimate lingering death. And the question is: Can we extend life without also extending the proportion of time that people go through a frail state?”
A new study suggests there may be a hard limit on human longevity, reports Live Science's Rebecca Sohn. That upper limit, according to the study published this week in the journal Nature Communications, is somewhere between 120 and 150 years old. At that advanced age, the researchers say the human body simply would no longer be able to bounce back and repair itself after normal stresses such as illness, according to the Guardian. The study is based on medical data from more than 500,000 volunteers that the team behind the study collated into a single number that measures the physiological toll of aging that they called the “dynamic organism state indicator” or DOSI. This figure distinguishes biological age, which is essentially how run down your cells and organ systems are, from chronological age in a manner that recalls a scene from the Indiana Jones film Raiders of the Lost Ark (1981) in which a banged up but still youthful Harrison Ford groans, “it’s not the years honey, it’s the mileage.” “What we’re saying here is that the strategy of reducing frailty, so reducing the disease burden, has only an incremental ability to improve your lifespan,” Peter Fedichev, a longevity researcher at the Moscow Institute of Physics and Technology and senior author of the study, tells Sophie Putka of Inverse. Per Live Science, the suggestion is that increasing the human lifespan beyond this hard limit would require therapies that boosted and maintained the body’s ability to be resilient and repair itself. Researchers gleaned this upper limit on human life from anonymized blood samples from 544,398 people in the United States, United Kingdom and Russia. The team primarily looked at two numbers to determine the individual’s DOSI: the ratio of two types of white blood cells that the immune system uses to fight infection and the variability in the size of red blood cells, according to Live Science. 
Both of these numbers tend to increase as people get on in years and are referred to by researchers as biomarkers of aging.
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://en.wikipedia.org/wiki/Maximum_life_span
Maximum life span - Wikipedia
Maximum life span (or, for humans, maximum reported age at death) is a measure of the maximum amount of time one or more members of a population have been observed to survive between birth and death. The term can also denote an estimate of the maximum amount of time that a member of a given species could survive between birth and death, provided circumstances that are optimal to that member's longevity. Most living species have an upper limit on the number of times somatic cells not expressing telomerase can divide. This is called the Hayflick limit, although this number of cell divisions does not strictly control lifespan. In animal studies, maximum span is often taken to be the mean life span of the most long-lived 10% of a given cohort. By another definition, however, maximum life span corresponds to the age at which the oldest known member of a species or experimental group has died. Calculation of the maximum life span in the latter sense depends upon the initial sample size.[1] The longest living person whose dates of birth and death were verified according to the modern norms of Guinness World Records and the Gerontology Research Group was Jeanne Calment (1875–1997), a French woman who is verified to have lived to 122. The oldest male lifespan has only been verified as 116, by Japanese man Jiroemon Kimura. Reduction of infant mortality has accounted for most of the increase in average life span, but since the 1960s mortality rates among those over 80 years have decreased by about 1.5% per year.
"The progress being made in lengthening lifespans and postponing senescence is entirely due to medical and public-health efforts, rising standards of living, better education, healthier nutrition and more salubrious lifestyles."[4] Animal studies suggest that further lengthening of median human lifespan as well as maximum lifespan could be achieved through "calorie restriction mimetic" drugs or by directly reducing food consumption.[5] Although calorie restriction had not been proven to extend the maximum human life span as of 2014, results in ongoing primate studies have demonstrated that the assumptions derived from rodents are valid in primates.[6][7] It has been proposed that no fixed theoretical limit to human longevity is apparent today.[8][9] Studies in the biodemography of human longevity indicate a late-life mortality deceleration law: that death rates level off at advanced ages to a late-life mortality plateau. That is, there is no fixed upper limit to human longevity, or fixed maximal human lifespan.[10] This law was first quantified in 1939, when researchers found that the one-year probability of death at advanced age asymptotically approaches a limit of 44% for women and 54% for men.[11] However, this evidence depends on the existence of late-life plateaus and deceleration that can be explained, in humans and other species, by the existence of very rare errors.[12][13] Age-coding error rates below 1 in 10,000 are sufficient to make artificial late-life plateaus, and errors below 1 in 100,000 can generate late-life mortality deceleration. These error rates cannot be ruled out by examining documents[13] (the standard) because of successful pension fraud, identity theft, forgeries and errors that leave no documentary evidence.
This capacity for errors to explain late-life plateaus addresses the "fundamental question in aging research" of whether humans and other species possess an immutable life-span limit, and suggests that a limit to human life span exists.[14] A theoretical study suggested the maximum human lifespan to be around 125 years using a modified stretched exponential function for human survival curves.[15] In another study, researchers claimed that there exists a maximum lifespan for humans, and that the human maximal lifespan has been declining since the 1990s.[16] A theoretical study also suggested that the maximum human life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years.[17] The United Nations has undertaken an important Bayesian sensitivity analysis of global population burden based on life expectancy projections at birth in future decades. The 2017 95% prediction interval for average life expectancy at birth in 2090 rises as high as 106 years, with dramatic, ongoing, layered consequences on world population and demography should that happen. The prediction interval is extremely wide, and the United Nations cannot be certain. Organizations like the Methuselah Foundation are working toward an end to senescence and practically unlimited human lifespan. If successful, and if maximum lifespan or the birthrate remain unlimited by law, the demographic implications for human population will be greater in effective multiplier terms than any experienced in the last five centuries. Modern Malthusian predictions of overpopulation based on increased longevity have been criticized on the same basis as general population alarmism (see Malthusianism). Evidence for maximum lifespan is also provided by the dynamics of physiological indices with age. For example, scientists have observed that a person's VO2max value (a measure of the volume of oxygen flow to the cardiac muscle) decreases as a function of age.
Therefore, the maximum lifespan of a person could be determined by calculating when the person's VO2max value drops below the basal metabolic rate necessary to sustain life, which is approximately 3 ml per kg per minute.[18] On the basis of this hypothesis, athletes with a VO2max value between 50 and 60 at age 20 would be expected "to live for 100 to 125 years, provided they maintained their physical activity so that their rate of decline in VO2max remained constant".[19] Average and commonly accepted maximum lifespans correspond to the extremes of body mass and of mass normalized to height for men and women.[20] Eels: the so-called Brantevik Eel (Swedish: Branteviksålen) is thought to have lived in a water well in southern Sweden since 1859, which makes it over 150 years old.[37] It was reported that it had died in August 2014 at an age of 155.[38] Whales (bowhead whale, Balaena mysticetus, about 200 years): Although this idea was unproven for a time, recent research has indicated that bowhead whales recently killed still had harpoons in their bodies from about 1890,[39] which, along with analysis of amino acids, has indicated a maximum life span of "177 to 245 years old".[40][41][42] Greenland sharks are currently the vertebrate species with the longest known lifespan.[43] An examination of 28 specimens in one study published in 2016 determined by radiocarbon dating that the oldest of the animals that they sampled had lived for about 392 ± 120 years (a minimum of 272 years and a maximum of 512 years).
The authors further concluded that the species reaches sexual maturity at about 150 years of age.[43] Invertebrate species which continue to grow as long as they live (e.g., certain clams, some coral species) can on occasion live hundreds of years: Some jellyfish species, including Turritopsis dohrnii, Laodicea undulata,[46] and Aurelia sp.1,[47] are able to revert to the polyp stage even after reproducing (so-called reversible life cycle), rather than dying as in other jellyfish. Consequently, these species are considered biologically immortal and have no maximum lifespan.[48] There may be no natural limit to the Hydra's life span, but it is not yet clear how to estimate the age of a specimen. Lobsters are sometimes said to be biologically immortal because they do not seem to slow down, weaken, or lose fertility with age. However, due to the energy needed for moulting, they cannot live indefinitely.[50] Plants are referred to as annuals which live only one year, biennials which live two years, and perennials which live longer than that. The longest-lived perennials, woody-stemmed plants such as trees and bushes, often live for hundreds and even thousands of years (one may question whether or not they may die of old age). A giant sequoia, General Sherman is alive and well in its third millennium. A Great Basin Bristlecone Pine called Methuselah is 4,855 years old.[51] Another Bristlecone Pine called Prometheus was a little older still, showing 4,862 years of growth rings. The exact age of Prometheus, however, remains unknown as it is likely that growth rings did not form every year due to the harsh environment in which it grew but it was estimated to be ~4,900 years old when it was cut down in 1964.[52] The oldest known plant (possibly oldest living thing) is a clonal Quaking Aspen (Populus tremuloides) tree colony in the Fishlake National Forest in Utah called Pando at about 16,000 years. 
Lichen, a symbiotic algae and fungal proto-plant, such as Rhizocarpon geographicum can live upwards of 10,000 years. "Maximum life span" here means the mean life span of the most long-lived 10% of a given cohort. Caloric restriction has not yet been shown to break mammalian world records for longevity. Rats, mice, and hamsters experience maximum life-span extension from a diet that contains all of the nutrients but only 40–60% of the calories that the animals consume when they can eat as much as they want. Mean life span is increased 65% and maximum life span is increased 50%, when caloric restriction is begun just before puberty.[53] For fruit flies the life extending benefits of calorie restriction are gained immediately at any age upon beginning calorie restriction and ended immediately at any age upon resuming full feeding.[54] A few transgenic strains of mice have been created that have maximum life spans greater than that of wild-type or laboratory mice. The Ames and Snell mice, which have mutations in pituitary transcription factors and hence are deficient in Gh, LH, TSH, and secondarily IGF1, have extensions in maximal lifespan of up to 65%. To date, both in absolute and relative terms, these Ames and Snell mice have the maximum lifespan of any mouse not on caloric restriction (see below on GhR). Mutations/knockout of other genes affecting the GH/IGF1 axis, such as Lit, Ghr, and Irs1 have also shown extension in lifespan, but much more modest both in relative and absolute terms. The longest lived laboratory mouse ever was a Ghr knockout mouse, which lived to ≈1800 days in the lab of Andrzej Bartke at Southern Illinois University. The maximum for normal B6 mice under ideal conditions is 1200 days. Accumulated DNA damage appears to be a limiting factor in the determination of maximum life span. The theory that DNA damage is the primary cause of aging, and thus a principal determinant of maximum life span, has attracted increased interest in recent years. 
This is based, in part, on evidence in human and mouse that inherited deficiencies in DNA repair genes often cause accelerated aging.[58][59][60] There is also substantial evidence that DNA damage accumulates with age in mammalian tissues, such as those of the brain, muscle, liver, and kidney (reviewed by Bernstein et al.[61] and see DNA damage theory of aging and DNA damage (naturally occurring)). One expectation of the theory (that DNA damage is the primary cause of aging) is that among species with differing maximum life spans, the capacity to repair DNA damage should correlate with lifespan. The first experimental test of this idea was by Hart and Setlow[62] who measured the capacity of cells from seven different mammalian species to carry out DNA repair. They found that nucleotide excision repair capability increased systematically with species longevity. This correlation was striking and stimulated a series of 11 additional experiments in different laboratories over succeeding years on the relationship of nucleotide excision repair and life span in mammalian species (reviewed by Bernstein and Bernstein[63]). In general, the findings of these studies indicated a good correlation between nucleotide excision repair capacity and life span. The association between nucleotide excision repair capability and longevity is strengthened by the evidence that defects in nucleotide excision repair proteins in humans and rodents cause features of premature aging, as reviewed by Diderich.[59] Further support for the theory that DNA damage is the primary cause of aging comes from study of Poly ADP ribose polymerases (PARPs). PARPs are enzymes that are activated by DNA strand breaks and play a role in DNA base excision repair. Burkle et al. reviewed evidence that PARPs, and especially PARP-1, are involved in maintaining mammalian longevity.[64] The life span of 13 mammalian species correlated with poly(ADP ribosyl)ation capability measured in mononuclear cells. 
Furthermore, lymphoblastoid cell lines from peripheral blood lymphocytes of humans over age 100 had a significantly higher poly(ADP-ribosyl)ation capability than control cell lines from younger individuals. A comparison of the heart mitochondria in rats (7-year maximum life span) and pigeons (35-year maximum life span) showed that pigeon mitochondria leak fewer free radicals than rat mitochondria, despite the fact that both animals have similar metabolic rate and cardiac output.[65] For mammals there is a direct relationship between mitochondrial membrane fatty acid saturation and maximum life span.[66] Female mammals express more Mn−SOD and glutathione peroxidase antioxidant enzymes than males. This has been hypothesized as the reason they live longer.[70] However, mice entirely lacking in glutathione peroxidase 1 do not show a reduction in lifespan. The maximum life span of transgenic mice has been extended about 20% by overexpression of human catalase targeted to mitochondria.[71] A comparison of 7 non-primate mammals (mouse, hamster, rat, guinea pig, rabbit, pig and cow) showed that the rate of mitochondrial superoxide and hydrogen peroxide production in heart and kidney was inversely correlated with maximum life span.[72] A study of 8 non-primate mammals showed an inverse correlation between maximum life span and oxidative damage to mtDNA (mitochondrial DNA) in heart and brain.[73] A study of several species of mammals and a bird (pigeon) indicated a linear relationship between oxidative damage to protein and maximum life span.[74] There is a direct correlation between DNA repair and maximum life span for mammalian species.[75] Drosophila (fruit flies) bred for 15 generations by only using eggs that were laid toward the end of reproductive life achieved maximum life spans 30% greater than that of controls.[76] A mutation in the age−1 gene of the nematode worm Caenorhabditis elegans increased mean life span 65% and maximum life span 110%.[78] However, the degree of
lifespan extension in relative terms by both the age-1 and daf-2 mutations is strongly dependent on ambient temperature, with ≈10% extension at 16 °C and 65% extension at 27 °C. The capacity of mammalian species to detoxify the carcinogenic chemical benzo(a)pyrene to a water-soluble form also correlates well with maximum life span.[80] Short-term induction of oxidative stress due to calorie restriction increases life span in Caenorhabditis elegans by promoting stress defense, specifically by inducing an enzyme called catalase. As shown by Michael Ristow and co-workers, nutritive antioxidants completely abolish this extension of life span by inhibiting a process called mitohormesis.[81]
This capacity for errors to explain late-life plateaus addresses the "fundamental question in aging research" of whether humans and other species possess an immutable life-span limit, and suggests that a limit to human life span exists.[14] A theoretical study suggested the maximum human lifespan to be around 125 years using a modified stretched exponential function for human survival curves.[15] In another study, researchers claimed that there exists a maximum lifespan for humans, and that the human maximal lifespan has been declining since the 1990s.[16] A theoretical study also suggested that the maximum human life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years.[17] The United Nations has undertaken an important Bayesian sensitivity analysis of global population burden based on life expectancy projections at birth in future decades. The 2017 95% prediction interval for average life expectancy at birth in 2090 rises as high as 106 years, with dramatic, ongoing, layered consequences on world population and demography should that happen. The prediction interval is extremely wide, and the United Nations cannot be certain. Organizations like the Methuselah Foundation are working toward an end to senescence and practically unlimited human lifespan. If successful, and if maximum lifespan or the birthrate remain unlimited by law, the demographic implications for human population will be greater in effective multiplier terms than any experienced in the last five centuries. Modern Malthusian predictions of overpopulation based on increased longevity have been criticized on the same basis as general population alarmism (see Malthusianism). Evidence for maximum lifespan is also provided by the dynamics of physiological indices with age. For example, scientists have observed that a person's VO2max value (a measure of the volume of oxygen flow to the cardiac muscle) decreases as a function of age.
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.theguardian.com/science/2016/oct/05/human-lifespan-has-hit-its-natural-limit-research-suggests
Human lifespan has hit its natural limit, research suggests | Ageing ...
Humans are unlikely to ever blow out more than 125 candles on their birthday cake, according to research that suggests that our lifespan has already hit its natural limit. The oldest human who ever lived, according to official records, was 122-year-old Frenchwoman Jeanne Louise Calment, who died in 1997. Now a team of American researchers suggest Calment is unlikely to lose the top spot any time soon, as their research shows that though more people reach old age each year, the ceiling for human lifespan appears to be stuck at around 115 years. “The chances are very high that we [have] really reached our maximum allotted lifespan for the first time,” said Jan Vijg, co-author of the research from the Albert Einstein College of Medicine in New York. The new study suggests that any dramatic extension of lifespan is highly unlikely. The upshot, says Vijg, is that people should focus on enjoying life and staying healthy for as long as possible. “That’s where we have to invest our money,” he said. The notion of extending the human lifespan has captured imaginations for millennia. Among scientists, enthusiasm for the idea has grown in recent years, with a host of Silicon Valley companies springing up to join academic institutions in attempting to chip away at the issue of longevity - among them Google’s California Life Company, or Calico, as it is known – with big-buck prizes such as the Palo Alto Longevity Prize adding to the clamour. But the researchers, writing in the journal Nature, describe how analysis of records from a number of international databases suggests there is a limit to human lifespan, and that we have already hit it. Using data for 41 countries and territories from the Human Mortality Database, the team found that life expectancy at birth has increased over the last century. That, says Vijg, is down to a number of factors, including advances in childbirth and maternity care, clean water, the development of antibiotics and vaccines and other health measures.
But while the proportion of people surviving to 70 and over has risen since 1900, the rate of improvements in survival differs greatly between levels of old age. Large gains are seen for ages 70 and up, but for ages 100 or more the rate of improvement drops rapidly. “[For] the oldest old people, we are still not very good at reducing their mortality rates,” said Vijg. What’s more, in 88% of the countries, the ages showing the greatest rate of improvement have not changed since 1980. The researchers then turned to the International Database on Longevity and analysed data from France, the UK, the US and Japan - four countries with a high proportion of those aged 110 or above - so-called “supercentenarians”. The researchers found that the maximum reported age at death rapidly increased between 1970 and the early 1990s, rising by around 0.15 years every year. But in the mid-to-late 90s, a plateau was reached, with the yearly maximum reported age at death at around 115 years. Modelling of the possibility of living beyond such an age offered further insights. “Based on the data we have now, the chance that you will ever see a person of 125 [years] in a given year is about 1 in 10,000,” said Vijg. The apparent limit to human lifespan, the authors say, is not down to a set of biological processes specifically acting to call time on life. Rather, it is a byproduct of a range of genetic programmes that control processes such as growth and development. Henne Holstege from VU University, Amsterdam, works on the ageing of centenarians, and previously led research into Dutch supercentenarian Hendrikje van Andel-Schipper, who died aged 115. She says the new study suggests “there seems to be a wall of mortality that modern medicine cannot overcome”. “If you die from heart disease at 70, then the rest of your body might still be in relatively good health. So, a medical intervention to overcome heart disease can significantly prolong your lifespan,” she said.
“However, in centenarians not just the heart, but all bodily systems, have become aged and frail. If you do not die from heart disease, you die from something else.” Medical interventions, she says, cannot solve the problem of overall decline, with the only promising approach lying in slowing down the ageing process itself. But, she added, “It is however not yet clear if and how this can be accomplished.” But Tom Kirkwood, associate dean for ageing at Newcastle University, is sanguine that the lifespan ceiling will continue to rise. “There is no set programme for ageing and we know that the process, which is ultimately driven by the build-up of faults and damage in the cells and organs of the body, is to some degree malleable,” he said. “Even without any change in the biology of ageing, it is almost inevitable that the current record will be broken.” Cynthia Kenyon, vice president of ageing research at Calico, is also optimistic. “No one, particularly not evolutionary theorists, predicted that single-gene mutations could slow the aging process and double the lifespans of animals. But they can,” she said. “While we don’t have demographic data supporting the idea that the maximum human lifespan is now increasing, that certainly doesn’t mean it’s impossible.”
Humans are unlikely to ever blow out more than 125 candles on their birthday cake, according to research that suggests that our lifespan has already hit its natural limit. The oldest human who ever lived, according to official records, was 122-year-old Frenchwoman Jeanne Louise Calment, who died in 1997. Now a team of American researchers suggests Calment is unlikely to lose the top spot any time soon, as their research shows that though more people reach old age each year, the ceiling for human lifespan appears to be stuck at around 115 years. “The chances are very high that we [have] really reached our maximum allotted lifespan for the first time,” said Jan Vijg, co-author of the research from the Albert Einstein College of Medicine in New York. Indeed, the new study suggests that exceeding this limit is highly unlikely. The upshot, says Vijg, is that people should focus on enjoying life and staying healthy for as long as possible. “That’s where we have to invest our money,” he said. The notion of extending the human lifespan has captured imaginations for millennia. Among scientists, enthusiasm for the idea has grown in recent years, with a host of Silicon Valley companies springing up to join academic institutions in attempting to chip away at the issue of longevity – among them Google’s California Life Company, or Calico, as it is known – with big-buck prizes such as the Palo Alto Longevity Prize adding to the clamour. But the researchers, writing in the journal Nature, describe how analysis of records from a number of international databases suggests there is a limit to human lifespan, and that we have already hit it. Using data for 41 countries and territories from the Human Mortality Database, the team found that life expectancy at birth has increased over the last century. That, says Vijg, is down to a number of factors, including advances in childbirth and maternity care, clean water, and the development of antibiotics, vaccines and other health measures.
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.liebertpub.com/doi/10.1089/rej.2019.2272
Jeanne Calment, Actuarial Paradoxography and the Limit to Human ...
Abstract The case of Jeanne Calment and her exceptional longevity has attracted worldwide attention, detailed examination, and some skepticism. Most recently, it has been suggested that Jeanne Calment's record is spurious and the result of identity fraud by her daughter. Although there is merit to subjecting claims of extreme longevity to scrutiny, either validating or debunking a single case has a negligible impact on scientific knowledge of aging and lifespan. Perhaps the best-known finding from the field of gerontology, recognizable even to nonspecialists, is Jeanne Calment's unsurpassed longevity record. Her death in 1997 at the age of 122 attracted worldwide attention and has stood for more than two decades as the high water mark of human lifespan. With this attention comes doubt, and in his examination of demographic and documentary evidence,1 Zak questions the validity of Jeanne Calment's record longevity and proposes that her identity was assumed by her daughter, who died at the much less remarkable age of 99. Calment's longevity has attracted skepticism for some time, much of it justified by its extraordinary nature. However, the facts of her life have been extensively validated by Robine and Allard2 and reviewed in Robine et al.3. The question of whether the issues raised by Zak1 are sufficient to outweigh the evidence they have collected is left to the judgment of other commentators. That someone should live for >120 years seems unlikely, but so too seems the alternative, in which a case of identity theft passes undetected for >80 years, despite worldwide scrutiny. A more pressing concern is what the invalidation of Jeanne Calment's age at death would mean for our understanding of human longevity. In short, not much. No scientific finding should hinge on a single data point, and Jeanne Calment's record, even if of utmost veracity, should not dictate the results of any analysis.
Fortunately, those interested in the limits to human longevity do not have to worry about a revision to Jeanne Calment's age at death upending the field. Despite advances in the 19th and 20th centuries, survival at extremely old ages has stagnated4 and it is unlikely that anyone will live past 125, a finding that is robust even if one or several data points (including, but not limited to, Calment's) are removed from the analysis.5–7 With time and an increasing population of supercentenarians, it is possible that Calment's record, whether genuine or not, will be matched or even slightly exceeded in the next century,8 but even the most optimistic analyses concede that longevity for supercentenarians is stagnant; this stagnation implies that observing an individual living much longer, such as to 125 or 150, is prohibitively unlikely.9,10 Analyses from large data sets of elderly individuals, drawing on Swedish, Dutch, American, and Belgian populations that do not include Calment, have also concluded that improvement in the human lifespan has ceased,11–14 and their results are naturally unaffected by the validity of her longevity. If no analysis can or should depend on one person's longevity, however extraordinary, is there any use, then, to actuarial paradoxography, the practice of seeking the longest-lived people? In short, yes. Certainly, it would be futile to suggest that it be discontinued, as it has persisted for thousands of years.15 However alluring it may be, there is a temptation to dismiss actuarial paradoxography as a sort of stamp collecting, remarking on interesting cases without connecting them to a broader scientific theory. But it may rise above the level of gerontological philately by providing a set of rigorously verified data on which to base the study of aging and demarcating the boundaries of longevity. The current practices of the field represent a huge improvement over those of the past. 
If, for example, we were to revert to the ancient standards that allowed the Sumerian King List16 to be taken credibly, many scientists would waste their time trying to reconcile the human life expectancy of a few decades with tales of kings living tens of thousands of years. Thus, with regard to validating claims of extreme longevity, we can see that immense gains result even from minimal stringency in evaluation. Going further than that, perfection is something to be pursued, even if never attained. Perhaps the current list of validated claims contains a few that do not belong; perhaps clerical error has resulted in the exclusion of some genuine claims; and it is probable that a few people have attained supercentenarian status, but were never recorded as such due to being born in the wrong time or place (this last drawback somewhat mitigated by the fact that improvements in longevity and record-keeping tend to go hand in hand). However, the current standards of the field do well at preventing the most egregious cases from being taken seriously. There is always room for refinement and further improvement, but undue attention to the case of Jeanne Calment risks missing the forest for one very tall tree.
However, the facts of her life have been extensively validated by Robine and Allard2 and reviewed in Robine et al.3. The question of whether the issues raised by Zak1 are sufficient to outweigh the evidence they have collected is left to the judgment of other commentators. That someone should live for >120 years seems unlikely, but so too seems the alternative, in which a case of identity theft passes undetected for >80 years, despite worldwide scrutiny. A more pressing concern is what the invalidation of Jeanne Calment's age at death would mean for our understanding of human longevity. In short, not much. No scientific finding should hinge on a single data point, and Jeanne Calment's record, even if of utmost veracity, should not dictate the results of any analysis. Fortunately, those interested in the limits to human longevity do not have to worry about a revision to Jeanne Calment's age at death upending the field. Despite advances in the 19th and 20th centuries, survival at extremely old ages has stagnated4 and it is unlikely that anyone will live past 125, a finding that is robust even if one or several data points (including, but not limited to, Calment's) are removed from the analysis.5–7 With time and an increasing population of supercentenarians, it is possible that Calment's record, whether genuine or not, will be matched or even slightly exceeded in the next century,8 but even the most optimistic analyses concede that longevity for supercentenarians is stagnant; this stagnation implies that observing an individual living much longer, such as to 125 or 150, is prohibitively unlikely.9,10 Analyses from large data sets of elderly individuals, drawing on Swedish, Dutch, American, and Belgian populations that do not include Calment, have also concluded that improvement in the human lifespan has ceased,11–14 and their results are naturally unaffected by the validity of her longevity.
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.sciencealert.com/study-suggests-that-theoretically-we-should-be-able-to-live-forever
Want to Live Forever? There's No Theoretical Limit to Human ...
Want to Live Forever? There's No Theoretical Limit to Human Lifespan, New Study Says Humans can probably live to at least 130, and possibly well beyond, though the chances of reaching such super old age remain vanishingly small, according to new research. The outer limit of the human lifespan has long been hotly debated, with recent studies making the case we could live up to 150 years, or arguing that there is no maximum theoretical age for humans. The new research, published Wednesday in the Royal Society Open Science journal, wades into the debate by analyzing new data on supercentenarians – people aged 110 or more – and semi-supercentenarians, aged 105 or more. While the risk of death generally increases throughout our lifetime, the researchers' analysis shows that risk eventually plateaus and remains constant at approximately 50-50. "Beyond age 110 one can think of living another year as being almost like flipping a fair coin," said Anthony Davison, a professor of statistics at the Swiss Federal Institute of Technology in Lausanne (EPFL), who led the research. "If it comes up heads, then you live to your next birthday, and if not, then you will die at some point within the next year," he told AFP. Based on the data available so far, it seems likely that humans can live until at least 130, but extrapolating from the findings "would imply that there is no limit to the human lifespan," the research concludes. The conclusions match similar statistical analyses done on datasets of the very elderly. "But this study strengthens those conclusions and makes them more precise because more data are now available," Davison said.
Want to Live Forever? There's No Theoretical Limit to Human Lifespan, New Study Says Humans can probably live to at least 130, and possibly well beyond, though the chances of reaching such super old age remain vanishingly small, according to new research. The outer limit of the human lifespan has long been hotly debated, with recent studies making the case we could live up to 150 years, or arguing that there is no maximum theoretical age for humans. The new research, published Wednesday in the Royal Society Open Science journal, wades into the debate by analyzing new data on supercentenarians – people aged 110 or more – and semi-supercentenarians, aged 105 or more. While the risk of death generally increases throughout our lifetime, the researchers' analysis shows that risk eventually plateaus and remains constant at approximately 50-50. "Beyond age 110 one can think of living another year as being almost like flipping a fair coin," said Anthony Davison, a professor of statistics at the Swiss Federal Institute of Technology in Lausanne (EPFL), who led the research. "If it comes up heads, then you live to your next birthday, and if not, then you will die at some point within the next year," he told AFP. Based on the data available so far, it seems likely that humans can live until at least 130, but extrapolating from the findings "would imply that there is no limit to the human lifespan," the research concludes. The conclusions match similar statistical analyses done on datasets of the very elderly. "But this study strengthens those conclusions and makes them more precise because more data are now available," Davison said.
no
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.scientificamerican.com/article/humans-could-live-up-to-150-years-new-research-suggests/
Humans Could Live up to 150 Years, New Research Suggests ...
Jeanne Calment enjoys her daily cigarette and glass of red wine on the occasion of her 117th birthday. In 1997, she died at the age of 122 and still holds the record for being the person with the longest lifespan. Credit: Jean-Pierre Fizet Getty Images The chorus of the theme song for the movie Fame, performed by actress Irene Cara, includes the line “I’m gonna live forever.” Cara was, of course, singing about the posthumous longevity that fame can confer. But a literal expression of this hubris resonates in some corners of the world—especially in the technology industry. In Silicon Valley, immortality is sometimes elevated to the status of a corporeal goal. Plenty of big names in big tech have sunk funding into ventures to solve the problem of death as if it were just an upgrade to your smartphone’s operating system. Yet what if death simply cannot be hacked and longevity will always have a ceiling, no matter what we do? Researchers have now taken on the question of how long we can live if, by some combination of serendipity and genetics, we do not die from cancer, heart disease or getting hit by a bus. They report that when omitting things that usually kill us, our body’s capacity to restore equilibrium to its myriad structural and metabolic systems after disruptions still fades with time. And even if we make it through life with few stressors, this incremental decline sets the maximum life span for humans at somewhere between 120 and 150 years. In the end, if the obvious hazards do not take our lives, this fundamental loss of resilience will do so, the researchers conclude in findings published in May 2021 in Nature Communications.
“They are asking the question of ‘What’s the longest life that could be lived by a human complex system if everything else went really well, and it’s in a stressor-free environment?’” says Heather Whitson, director of the Duke University Center for the Study of Aging and Human Development, who was not involved in the paper. The team’s results point to an underlying “pace of aging” that sets the limits on life span, she says. For the study, Timothy Pyrkov, a researcher at a Singapore-based company called Gero, and his colleagues looked at this “pace of aging” in three large cohorts in the U.S., the U.K. and Russia. To evaluate deviations from stable health, they assessed changes in blood cell counts and the daily number of steps taken and analyzed them by age groups. For both blood cell and step counts, the pattern was the same: as age increased, some factor beyond disease drove a predictable and incremental decline in the body’s ability to return blood cells or gait to a stable level after a disruption. When Pyrkov and his colleagues in Moscow and Buffalo, N.Y., used this predictable pace of decline to determine when resilience would disappear entirely, leading to death, they found a range of 120 to 150 years. (In 1997 Jeanne Calment, the oldest person on record to have ever lived, died in France at the age of 122.) The researchers also found that with age, the body’s response to insults could increasingly range far from a stable normal, requiring more time for recovery. Whitson says that this result makes sense: A healthy young person can produce a rapid physiological response to adjust to fluctuations and restore a personal norm. But in an older person, she says, “everything is just a little bit dampened, a little slower to respond, and you can get overshoots,” such as when an illness brings on big swings in blood pressure. 
Measurements such as blood pressure and blood cell counts have a known healthy range, however, Whitson points out, whereas step counts are highly personal. The fact that Pyrkov and his colleagues chose a variable that is so different from blood counts and still discovered the same decline over time may suggest a real pace-of-aging factor in play across different domains. Study co-author Peter Fedichev, who trained as a physicist and co-founded Gero, says that although most biologists would view blood cell counts and step counts as “pretty different,” the fact that both sources “paint exactly the same future” suggests that this pace-of-aging component is real. The authors pointed to social factors that reflect the findings. “We observed a steep turn at about the age of 35 to 40 years that was quite surprising,” Pyrkov says. For example, he notes, this period is often a time when an athlete’s sports career ends, “an indication that something in physiology may really be changing at this age.” The desire to unlock the secrets of immortality has likely been around as long as humans’ awareness of death. But a long life span is not the same as a long health span, says S. Jay Olshansky, a professor of epidemiology and biostatistics at the University of Illinois at Chicago, who was not involved in the work. “The focus shouldn’t be on living longer but on living healthier longer,” he says. “Death is not the only thing that matters,” Whitson says. “Other things, like quality of life, start mattering more and more as people experience the loss of them.” The death modeled in this study, she says, “is the ultimate lingering death. And the question is: Can we extend life without also extending the proportion of time that people go through a frail state?” The researchers’ “final conclusion is interesting to see,” Olshansky says. He characterizes it as “Hey, guess what? Treating diseases in the long run is not going to have the effect that you might want it to have. 
These fundamental biological processes of aging are going to continue.” The idea of slowing down the aging process has drawn attention, not just from Silicon Valley types who dream about uploading their memories to computers but also from a cadre of researchers who view such interventions as a means to “compress morbidity”—to diminish illness and infirmity at the end of life to extend health span. The question of whether this will have any impact on the fundamental upper limits identified in the Nature Communications paper remains highly speculative. But some studies are being launched—testing the diabetes drug metformin, for example—with the goal of attenuating hallmark indicators of aging. In this same vein, Fedichev and his team are not discouraged by their estimates of maximum human life span. His view is that their research marks the beginning of a longer journey. “Measuring something is the first step before producing an intervention,” Fedichev says. As he puts it, the next steps, now that the team has measured this independent pace of aging, will be to find ways to “intercept the loss of resilience.”
They report that when omitting things that usually kill us, our body’s capacity to restore equilibrium to its myriad structural and metabolic systems after disruptions still fades with time. And even if we make it through life with few stressors, this incremental decline sets the maximum life span for humans at somewhere between 120 and 150 years. In the end, if the obvious hazards do not take our lives, this fundamental loss of resilience will do so, the researchers conclude in findings published in May 2021 in Nature Communications. “They are asking the question of ‘What’s the longest life that could be lived by a human complex system if everything else went really well, and it’s in a stressor-free environment?’” says Heather Whitson, director of the Duke University Center for the Study of Aging and Human Development, who was not involved in the paper. The team’s results point to an underlying “pace of aging” that sets the limits on life span, she says. For the study, Timothy Pyrkov, a researcher at a Singapore-based company called Gero, and his colleagues looked at this “pace of aging” in three large cohorts in the U.S., the U.K. and Russia. To evaluate deviations from stable health, they assessed changes in blood cell counts and the daily number of steps taken and analyzed them by age groups. For both blood cell and step counts, the pattern was the same: as age increased, some factor beyond disease drove a predictable and incremental decline in the body’s ability to return blood cells or gait to a stable level after a disruption. When Pyrkov and his colleagues in Moscow and Buffalo, N.Y., used this predictable pace of decline to determine when resilience would disappear entirely, leading to death, they found a range of 120 to 150 years. (In 1997 Jeanne Calment, the oldest person on record to have ever lived, died in France at the age of 122.) The researchers also found that with age,
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
http://frontier.yahoo.com/news/katie-couric-aging-mice-harvard-researcher-david-sinclair-035336385.html
Harvard Researcher on Aging: There's no 'limit on the human lifespan'
Harvard Researcher on Aging: There's no 'limit on the human lifespan' My peers of a certain age will remember an Oil of Olay commercial about deciding not to grow old gracefully, but rather to "fight it every step of the way." And while we spend billions trying to turn back time, the Fountain of Youth has yet to be found at the bottom of a lotion bottle. But one researcher from Harvard Medical School, David Sinclair, believes the secret to stopping the aging process is closer than we think. "I wouldn't begin to put a limit on the human lifespan," he says. Sinclair has spent the past 20 years looking for ways to help people live longer, healthier lives. In an exclusive look at his strictly guarded mouse lab, Sinclair showed us how his research team is looking to stop the clock on aging. It was Sinclair's research on resveratrol, a molecule found in grapes, that made headlines a decade ago when it showed promising results in keeping overfed mice as healthy as lean mice. Sinclair even chose to test resveratrol on himself, something he has been doing for the past 10 years, and he says he's feeling fit and healthy. Likewise, his parents, who are in their 70s, report similar results from taking resveratrol. Today, Sinclair has taken his research even further. By prematurely aging mice, he is able to test new molecules on them in an attempt to return them to their younger, healthier state. He's hopeful that the molecules will one day help prevent or delay diseases like cancer and Alzheimer's in humans. All of this, of course, is still very much in the research phase, but Sinclair is confident that his work will lead to many of us living longer and healthier lives. "Can we one day live to 150?" he asks. "I don't see why not; it's just a matter of when." Who do you think is a global game changer, and what person would you like to see featured in this series? Let me know on Twitter (@katiecouric) or on Tumblr.
Harvard Researcher on Aging: There's no 'limit on the human lifespan' My peers of a certain age will remember an Oil of Olay commercial about deciding not to grow old gracefully, but rather to "fight it every step of the way. " And while we spend billions trying to turn back time, the Fountain of Youth has yet to be found at the bottom of a lotion bottle. But one researcher from Harvard Medical School, David Sinclair, believes the secret to stopping the aging process is closer than we think. "I wouldn't begin to put a limit on the human lifespan," he says. Sinclair has spent the past 20 years looking for ways to help people live longer, healthier lives. In an exclusive look at his strictly guarded mouse lab, Sinclair showed us how his research team is looking to stop the clock on aging. It was Sinclair's research on resveratrol, a molecule found in grapes, that made headlines a decade ago when it showed promising results in keeping overfed mice as healthy as lean mice. Sinclair even chose to test resveratrol on himself, something he has been doing for the past 10 years, and he says he's feeling fit and healthy. Likewise, his parents, who are in their 70s, report similar results from taking resveratrol. Today, Sinclair has taken his research even further. By prematurely aging mice, he is able to test new molecules on them in an attempt to return them to their younger, healthier state. He's hopeful that the molecules will one day help prevent or delay diseases like cancer and Alzheimer's in humans. All of this, of course, is still very much in the research phase, but Sinclair is confident that his work will lead to many of us living longer and healthier lives. "Can we one day live to 150?" he asks. "I don't see why not; it's just a matter of when. " Who do you think is a global game changer, and what person would you like to see featured in this series? Let me know on Twitter (@
no
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.cbsnews.com/news/is-there-a-natural-limit-to-the-human-lifespan/
Have we reached the natural limit to the human lifespan? - CBS News
Have we reached the natural limit to the human lifespan? Over the past 100 years, human life expectancy has increased greatly thanks in large part to advancements in science and medicine. In addition to longer average lifespans, the maximum age at death of the longest-living people also rose steadily throughout the 20th century, leaving many to speculate that human longevity may not have an upper limit. But a new report shows that this upward trend has slowed down in recent years and suggests that we have already reached the limit to the human lifespan. “Demographers as well as biologists have contended there is no reason to think that the ongoing increase in maximum lifespan will end soon. But our data strongly suggest that it has already been attained and that this happened in the 1990s,” senior author Jan Vijg, Ph.D., professor and chair of genetics at Albert Einstein College of Medicine, said in a statement. For the report, published online this week in Nature, Vijg and his team analyzed numbers from an international mortality database that included about 40 countries. The data showed a steady increase of life expectancy since the year 1900, with the fraction of people born in the same year surviving to old age – defined as 70 and older – increasing with their calendar year of birth. However, when looking at the survival improvements in that same period for people age 100 and older, the researchers found that gains in survival peaked around age 100, then decreased rapidly. “The data shows that we’re not very successful at keeping people alive over the age 100, and that suggests that there may be a hard limit to human lifespan,” Vijg told CBS News. The researchers also looked at the yearly maximum reported age at death in the four countries with the largest number of very old people – France, Japan, Britain and the United States. They focused on individuals verified as living to age 110 or older between 1968 and 2006. 
The oldest living person ever documented, Jeanne Calment of France, holding a certificate from the Guinness Book of Records in 1995. Calment died in 1997 at age 122. GEORGES GOBET/AFP/Getty Images The data showed that the maximum age increased rapidly between the 1970s and early 1990s but then plateaued in the mid-90s. That was around the time when Jeanne Calment, the oldest documented person who ever lived, died at the age of 122 in 1997, the authors note. The researchers’ model concludes that the maximum human life span reported most years is just short of 115, on average, though there will be occasional outliers like Calment. They also predict that the likelihood of a person exceeding age 125 in any given year is less than 1 in 10,000. Vijg points out that maximum lifespans hit a plateau despite the increasing size of aging populations worldwide and continued improvements in health care, such as vaccines, antibiotics, and other medical treatments that have helped millions of people live longer. The reason for the upper limit to lifespan then, he contends, is that it is hardwired in our biology. “The idea is that every species has particular longevity assurance systems to keep them healthy at least until their reproductive age,” Vijg said, “and then slowly it declines. And at the end of the day, the body is overwhelmed by stress, damage, molecular errors and so on, and these defense systems are only good enough to protect you for only a given amount of time.” While he expects medical advancements to continue to keep more people healthier for a longer amount of time – thus increasing average life expectancy – he believes those advancements will not increase the maximum age of the longest-lived people. S. 
Jay Olshansky, Ph.D., who studies aging at the School of Public Health at the University of Illinois at Chicago, said that this is because the current medical model of treating one disease at a time can’t fully protect us from the natural biological process of aging. “When you treat specific diseases it’s almost like putting a Band-Aid on what’s really happening to our bodies,” Olshansky told CBS News. “It just has a temporary effect. The older we get, the higher the probability that something else is going to come along and influence our lives, both in terms of quality of life and risk of death. It’s almost like a game of whack-a-mole. When one thing goes down something else goes up and the older we live the quicker it is that something comes and takes the place of whatever it is that we pushed down.” In an editorial accompanying the study, Olshansky wrote, “Humanity is working hard to manufacture more survival time, with some degree of success, but we should acknowledge that a genetically determined fixed life-history strategy for our species stands in the way of radical life extension.” The only way to break through the barrier of our natural human lifespan, he argues, is for the public health paradigm to shift and for scientists to figure out how to address the underlying cause of aging. “If we could find a way to slow aging even just be a marginal amount, the impact would be dramatic on the population because it would have a positive impact on a broad range of diseases, as opposed to going after one at a time,” Olshansky said.
Have we reached the natural limit to the human lifespan? Over the past 100 years, human life expectancy has increased greatly thanks in large part to advancements in science and medicine. In addition to longer average lifespans, the maximum age at death of the longest-living people also rose steadily throughout the 20th century, leaving many to speculate that human longevity may not have an upper limit. But a new report shows that this upward trend has slowed down in recent years and suggests that we have already reached the limit to the human lifespan. “Demographers as well as biologists have contended there is no reason to think that the ongoing increase in maximum lifespan will end soon. But our data strongly suggest that it has already been attained and that this happened in the 1990s,” senior author Jan Vijg, Ph.D., professor and chair of genetics at Albert Einstein College of Medicine, said in a statement. For the report, published online this week in Nature, Vijg and his team analyzed numbers from an international mortality database that included about 40 countries. The data showed a steady increase of life expectancy since the year 1900, with the fraction of people born in the same year surviving to old age – defined as 70 and older – increasing with their calendar year of birth. However, when looking at the survival improvements in that same period for people age 100 and older, the researchers found that gains in survival peaked around age 100, then decreased rapidly. “The data shows that we’re not very successful at keeping people alive over the age 100, and that suggests that there may be a hard limit to human lifespan,” Vijg told CBS News. The researchers also looked at the yearly maximum reported age at death in the four countries with the largest number of very old people – France, Japan, Britain and the United States. They focused on individuals verified as living to age 110 or older between 1968 and 2006.
yes
Gerontology
Is there a limit to human lifespan?
yes_statement
there is a "limit" to "human" "lifespan".. "human" "lifespan" has a "limit".
https://www.gavi.org/vaccineswork/150-years-really-limit-human-lifespan
Is 150 years really the limit of human lifespan? | Gavi, the Vaccine ...
Is 150 years really the limit of human lifespan? Researchers think they've calculated the limit of human lifespan – but there's more to it. 8 June 2021 5 min read While most of us can expect to live to around 80, some people defy expectations and live to be over 100. In places such as Okinawa, Japan and Sardinia, Italy, there are many centenarians. The oldest person in history – a French woman named Jeanne Calment – lived to 122. When she was born in 1875, the average life expectancy was roughly 43. But just how long could a human actually live? It’s a question people have been asking for centuries. While average life expectancy (the number of years a person can expect to live) is relatively easy to calculate, maximum lifespan estimates (the greatest age a human could possibly reach) are much harder to make. Previous studies have placed this limit close to 140 years of age. But a more recent study proposes that the limit to human lifespan is closer to 150. Calculating lifespan The oldest and still most widely used method for calculating life expectancy, and thus lifespan, relies on the Gompertz equation. This is the observation, first made in the 19th century, that human death rates from disease increase exponentially with time. Essentially, this means your chance of death – from cancer, heart disease and many infections, for example – roughly doubles every eight to nine years. There are many ways the formula can be tweaked to account for how different factors (such as sex or disease) affect the lifespan within a population. Gompertz calculations are even used to calculate health insurance premiums – which is why these companies are so interested in whether you smoke, whether you are married and anything else that might allow them to more accurately judge the age at which you will die.
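The Gompertz observation described above, that the risk of death roughly doubles every eight to nine years, can be sketched numerically. This is an illustrative toy model only; the baseline hazard `h0` and the eight-year doubling time are assumed round numbers, not figures from the article:

```python
def gompertz_hazard(age, h0=1e-4, doubling_years=8.0):
    """Gompertz-style hazard: a baseline risk h0 that doubles every
    `doubling_years` years, i.e. mortality grows exponentially with age."""
    return h0 * 2.0 ** (age / doubling_years)

# With an 8-year doubling time, the hazard at 48 is exactly twice the
# hazard at 40, and about a thousand times baseline by age 80 (2**10 = 1024).
ratio = gompertz_hazard(48) / gompertz_hazard(40)
print(ratio)  # 2.0
```

Tweaking `h0` or `doubling_years` corresponds loosely to the adjustments for factors such as sex or disease mentioned above.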
Another approach to figuring out how long we can live is to look at how our organs decline with age, and run that rate of decline against the age at which they stop working. For example, eye function and how much oxygen we use while exercising show a general pattern of decline with ageing, with most calculations indicating organs will only function until the average person is around 120 years old. But these studies also unmask increasing variation between people as they grow older. For example, some people’s kidney function declines rapidly with age while in others it hardly changes at all. Now researchers in Singapore, Russia, and the US have taken a different approach to estimate the maximum human lifespan. Using a computer model, they estimate that the limit of human lifespan is about 150 years. Living to 150 Intuitively, there should be a relationship between your chance of death and how rapidly and completely you recover from illness. This parameter is a measure of your ability to maintain homeostasis – your normal physiological equilibrium – and is known as resilience. In fact, ageing can be defined as the loss of ability to maintain homeostasis. Typically, the younger the person, the better they are at recovering rapidly from illness. To conduct the modelling study, the researchers took blood samples from over 70,000 participants aged up to 85 and looked at short-term changes in their blood cell counts. The number of white blood cells a person has can indicate the level of inflammation (disease) in their body, while the volume of red blood cells can indicate a person’s risk of heart disease or stroke, or cognitive impairment, such as memory loss. The researchers then simplified this data into a single parameter, which they called the dynamic organisms state indicator (Dosi). Changes in Dosi values across the participants predicted who would get age-related diseases, how this varied from person to person, and modelled the loss of resilience with age.
These calculations predicted that for everyone – regardless of their health or genetics – resilience failed completely at 150, giving a theoretical limit to human lifespan. But estimates of this type assume that nothing new will be done to a population, such as no new medical treatments being found for common diseases. This is a major flaw, since significant progress occurs over a lifetime and this benefits some people more than others. For example, a baby born today can rely on about 85 years of medical progress to enhance their life expectancy, while an 85-year-old alive now is limited by current medical technologies. As such, the calculation used by these researchers will be relatively accurate for old people but will become progressively less so the younger the person you’re looking at. Even given the current pace of progress, we can confidently expect life expectancy to increase because it has been doing this since Gompertz was alive in the 1860s. In fact, if you spend half an hour reading this article average life expectancy will have increased by six minutes. Unfortunately, at that rate, the average person won’t live to 150 for another three centuries. Disclosure statement: Richard Faragher is a member of the Board of Directors of the American Federation for Aging Research (AFAR) and the Biogerontology Research Foundation. He is a member of the scientific advisory board of the Longevity Vision Fund.
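The closing back-of-envelope figure can be checked directly: six minutes of life expectancy gained per half hour of elapsed time is one year gained per five calendar years. A minimal sketch of that arithmetic, assuming a round current average of 80 years (a number not taken from the article):

```python
# "Six minutes gained per half hour" = one year of life expectancy
# gained per five calendar years of progress.
minutes_gained, minutes_elapsed = 6, 30
current_average, target = 80, 150  # assumed round figures

gap_years = target - current_average                        # 70 years still to gain
years_needed = gap_years * minutes_elapsed // minutes_gained
print(years_needed)  # 350
```

At 350 years, the result is on the order of the "three centuries" the author cites.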
But a more recent study proposes that the limit to human lifespan is closer to 150. Calculating lifespan The oldest and still most widely used method for calculating life expectancy, and thus lifespan, relies on the Gompertz equation. This is the observation, first made in the 19th century, that human death rates from disease increase exponentially with time. Essentially, this means your chance of death – from cancer, heart disease and many infections, for example – roughly doubles every eight to nine years. There are many ways the formula can be tweaked to account for how different factors (such as sex or disease) affect the lifespan within a population. Gompertz calculations are even used to calculate health insurance premiums – which is why these companies are so interested in whether you smoke, whether you are married and anything else that might allow them to more accurately judge the age at which you will die. Another approach to figuring out how long we can live is to look at how our organs decline with age, and run that rate of decline against the age at which they stop working. For example, eye function and how much oxygen we use while exercising show a general pattern of decline with ageing, with most calculations indicating organs will only function until the average person is around 120 years old. But these studies also unmask increasing variation between people as they grow older. For example, some people’s kidney function declines rapidly with age while in others it hardly changes at all. Now researchers in Singapore, Russia, and the US have taken a different approach to estimate the maximum human lifespan. Using a computer model, they estimate that the limit of human lifespan is about 150 years. Living to 150 Intuitively, there should be a relationship between your chance of death and how rapidly and completely you recover from illness.
This parameter is a measure of your ability to maintain homeostasis – your normal physiological equilibrium – and is known as resilience. In fact, ageing can be defined as the loss of ability to maintain homeostasis. Typically, the younger the person, the better they are at recovering rapidly from illness.
yes
Paranormal
Is there life after death?
yes_statement
there is "life" after "death".. "life" continues after "death".. after "death", there is still "life".. "life" persists beyond "death".
https://press.princeton.edu/ideas/spinozas-guide-to-life-and-death
Spinoza's guide to life and death | Princeton University Press
Spinoza’s guide to life and death How should we face our mortality? Whether death is—as we all hope—a far off eventuality or, through age or illness, imminent, what is the proper attitude to take? Should we fear death? The ancient Epicureans felt that this was something of a category mistake: you should fear only those things that can harm you, and if you are dead then nothing can harm you. As Epicurus so elegantly put it, “Death, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and when death is come, we are not.” But if not fear, then should the prospect of one’s own demise at least be a source of anxiety? dread? regret? Or, as some religiously minded people might insist, should the end of this life be looked upon with hope, in the expectation that something better awaits in a world-to-come? Spinoza’s views in moral philosophy—what he has to say about virtue, the good life, and happiness—were clearly influenced by the wisdom of the ancient Stoics. He was well read in Seneca, Epictetus and others. However, on the topic of death, Spinoza goes his own separate way; in fact, he heads in the completely opposite direction. The Stoic sage meditates upon death constantly. Epictetus advised, as part of his therapeutic strategy for peace of mind, that one should “keep before your eyes day by day death and exile, and everything that seems terrible, but most of all death.” Seneca, too, recommends thinking often about one’s own mortality as essential to overcoming both fear of one’s own death and grief at the death of others. “Rehearse this thought [about death, that it is the evil that puts an end to all evils] every day, that you may be able to depart from life contentedly. 
For many men clutch and cling to life, even as those who are carried down a rushing stream clutch and cling to briars and sharp rocks.” By contrast, Spinoza’s “free person”—the ideal individual all of whose thoughts and actions are under the guidance of reason, not passion—rarely, if ever, thinks about death. In one of the more striking propositions of his philosophical masterpiece, the Ethics, Spinoza notes that “the free person thinks least of all of death.” This is because the free person knows that there is nothing to think about. They understand that there is no afterlife, no post-mortem realm of reward and punishment, no world-to-come. When a person dies, there is, for that person, nothing. In this respect, Spinoza’s view is closer to that of Epicurus. For Spinoza, there is no immortal soul or self that persists beyond this life. When you are dead, you are dead. The denial of immortality seems, in fact, to have been a constant in Spinoza’s thinking, going back even to around the time of his herem or excommunication from the Amsterdam Portuguese-Jewish community at the age of 23, when he was reportedly telling people that among the reasons for his expulsion from the Sephardic community was his proclaiming that “the soul dies with the body.” But if there is no such thing as immortality, then there is nothing to be afraid of after death—nor, for that matter, anything to hope for. This ancient lesson is something that the free person understands well. What the free person does think about, constantly, is the joy of living. This does not mean that s/he is obsessed with carnal pleasures and the hedonistic delights that come through sense experience. While Spinoza’s “sage” does not go to the other extreme and lead an austere life of deprivation, s/he does know that the mundane pleasures of food, companionship and art that make life interesting and pleasant are to be pursued only in moderation. 
The true joy of living, however, comes from the increase in the human “power of striving” that accompanies the acquisition of knowledge, especially knowledge of oneself and of one’s place in nature. This self-understanding is a kind of wisdom, and it fills the free person with self-esteem—not, however, the self-esteem or pride that depends on the opinion of others, but the true estimation of one’s achievement and self-worth. It also liberates the free person from such harmful emotions as hate, envy, and jealousy, and moves him/her to improve the lives of others and treat them with benevolence. In all of this, the free person sees how such attitudes and behaviors are in their own best interest. I am often asked why, of all the great, dead philosophers, I spend so much time studying and writing about Spinoza. It is because, as I see it, Spinoza basically got it right: about human nature, religion, reason, politics and a good life. He, more than any of the other philosophers I enjoy working on, really is still relevant in the twenty-first century—especially in this era in which science is all too often denigrated and the life of the mind undervalued. For lessons on how to live well, how to lead an examined life, a life that reaches our highest potential as rational being—and, just as important, lessons on how to die—there can be no better guide. Steven Nadler is the author of many books, including Rembrandt’s Jews, which was a finalist for the Pulitzer Prize, Spinoza: A Life, which won the Koret Jewish Book Award, and A Book Forged in Hell: Spinoza’s Scandalous Treatise and the Birth of the Secular Age (Princeton). He is the William H. Hay II Professor of Philosophy and Evjue-Bascom Professor in the Humanities at the University of Wisconsin–Madison.
In one of the more striking propositions of his philosophical masterpiece, the Ethics, Spinoza notes that “the free person thinks least of all of death.” This is because the free person knows that there is nothing to think about. They understand that there is no afterlife, no post-mortem realm of reward and punishment, no world-to-come. When a person dies, there is, for that person, nothing. In this respect, Spinoza’s view is closer to that of Epicurus. For Spinoza, there is no immortal soul or self that persists beyond this life. When you are dead, you are dead. The denial of immortality seems, in fact, to have been a constant in Spinoza’s thinking, going back even to around the time of his herem or excommunication from the Amsterdam Portuguese-Jewish community at the age of 23, when he was reportedly telling people that among the reasons for his expulsion from the Sephardic community was his proclaiming that “the soul dies with the body.” But if there is no such thing as immortality, then there is nothing to be afraid of after death—nor, for that matter, anything to hope for. This ancient lesson is something that the free person understands well. What the free person does think about, constantly, is the joy of living. This does not mean that s/he is obsessed with carnal pleasures and the hedonistic delights that come through sense experience. While Spinoza’s “sage” does not go to the other extreme and lead an austere life of deprivation, s/he does know that the mundane pleasures of food, companionship and art that make life interesting and pleasant are to be pursued only in moderation. The true joy of living, however, comes from the increase in the human “power of striving” that accompanies the acquisition of knowledge, especially knowledge of oneself and of one’s place in nature. 
This self-understanding is a kind of wisdom, and it fills the free person with self-esteem—not, however, the self-esteem or pride that depends on the opinion of others, but the true estimation of one’s achievement and self-worth.
no
Paranormal
Is there life after death?
yes_statement
there is "life" after "death".. "life" continues after "death".. after "death", there is still "life".. "life" persists beyond "death".
https://www.express.co.uk/news/science/848991/life-after-death-what-happens-when-you-die-quantum-physics
There is NO life after death: Scientist insists afterlife is IMPOSSIBLE ...
THERE is NO life after death, according to one well-respected physicist who claims humanity has to abandon all fanciful beliefs and focus on what the laws of the universe dictate. Sean Carroll, a cosmologist and physics professor at the California Institute of Technology, believes he has put the debate surrounding the afterlife to bed after extensively studying the laws of physics. Dr Carroll states “the laws of physics underlying everyday life are completely understood” and everything happens within the realms of possibility. He says for there to be an afterlife, consciousness would need to be something that is entirely separated from our physical body – which it is not. Rather, consciousness at the very basic level is a series of atoms and electrons which essentially give us our mind. The laws of the universe do not allow these particles to operate after our physical demise, according to Dr Carroll. He said: “Claims that some form of consciousness persists after our bodies die and decay into their constituent atoms face one huge, insuperable obstacle: the laws of physics underlying everyday life are completely understood, and there's no way within those laws to allow for the information stored in our brains to persist after we die.” For his evidence, Dr Carroll points to quantum field theory (QFT). In simple terms, QFT is the belief that there is one field for each type of particle.
For example, all the photons in the universe belong to one field, all the electrons to another, and so on for every other type of particle. Dr Carroll explains that if life continued in some capacity after death, tests on the quantum field would have revealed "spirit particles" and "spirit forces". Dr Carroll writes in Scientific American: “If it's really nothing but atoms and the known forces, there is clearly no way for the soul to survive death. “Believing in life after death, to put it mildly, requires physics beyond the Standard Model. “Most importantly, we need some way for that ‘new physics’ to interact with the atoms that we do have. “Within QFT, there can't be a new collection of ‘spirit particles’ and ‘spirit forces’ that interact with our regular atoms, because we would have detected them in existing experiments.” Once this is accepted by all scientists, Dr Carroll says, they can truly begin to understand how the human mind operates. He said: “There's no reason to be agnostic about ideas that are dramatically incompatible with everything we know about modern science. “Once we get over any reluctance to face reality on this issue, we can get down to the much more interesting questions of how human beings and consciousness really work.”
THERE is NO life after death, according to one well-respected physicist who claims humanity has to abandon all fanciful beliefs and focus on what the laws of the universe dictate. Sean Carroll, a cosmologist and physics professor at the California Institute of Technology, believes he has put the debate surrounding the afterlife to bed after extensively studying the laws of physics. Dr Carroll states “the laws of physics underlying everyday life are completely understood” and everything happens within the realms of possibility. He says for there to be an afterlife, consciousness would need to be something that is entirely separated from our physical body – which it is not. Rather, consciousness at the very basic level is a series of atoms and electrons which essentially give us our mind. The laws of the universe do not allow these particles to operate after our physical demise, according to Dr Carroll. He said: “Claims that some form of consciousness persists after our bodies die and decay into their constituent atoms face one huge, insuperable obstacle: the laws of physics underlying everyday life are completely understood, and there's no way within those laws to allow for the information stored in our brains to persist after we die.” For his evidence, Dr Carroll points to quantum field theory (QFT). In simple terms, QFT is the belief that there is one field for each type of particle.
For example, all the photons in the universe belong to one field, all the electrons to another, and so on for every other type of particle. Dr Carroll explains that if life continued in some capacity after death, tests on the quantum field would have revealed "spirit particles" and "spirit forces".
no
Paranormal
Is there life after death?
yes_statement
there is "life" after "death".. "life" continues after "death".. after "death", there is still "life".. "life" persists beyond "death".
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4889815/
Determination of Death: A Scientific Perspective on Biological ...
Abstract Human life is operationally defined by the onset and cessation of organismal function. At postnatal stages of life, organismal integration critically and uniquely requires a functioning brain. In this article, a distinction is drawn between integrated and coordinated biologic activities. While communication between cells can provide a coordinated biologic response to specific signals, it does not support the integrated function that is characteristic of a living human being. Determining the loss of integrated function can be complicated by medical interventions (i.e., “life support”) that uncouple elements of the natural biologic hierarchy underlying our intuitive understanding of death. Such medical interventions can allow living human beings who are no longer able to function in an integrated manner to be maintained in a living state. In contrast, medical intervention can also allow the cells and tissues of an individual who has died to be maintained in a living state. To distinguish between a living human being and living human cells, two criteria are proposed: either the persistence of any form of brain function or the persistence of autonomous integration of vital functions. Either of these criteria is sufficient to determine a human being is alive. I. INTRODUCTION Determining when a human has died is scientifically challenging. Unlike the beginning of human life, an event that can be accurately localized to a period of less than a second (Condic, 2008, 2014b), precisely when death occurs is far less clear. In part, this may reflect the great variety of ways in which death occurs.
Yet even when death occurs as the consequence of a relatively common event (e.g., heart failure), the transition from a living human being to a collection of human cells and tissues (i.e., a corpse) 1 cannot be directly observed. Consequently, the physical criteria used for determination of death are not intended to pinpoint the moment of death, but rather to identify a point at which we can state with confidence that death has already occurred. Yet what are the valid criteria for making this determination? The simplest criterion for death is total cellular death; that is, the transition from a living organism to a collection of non-living organic matter with no viable cells present. Yet cellular life persists in the body for hours or even days after an individual has been declared dead by current medical standards (National Conference of Commissioners on Uniform State Laws, 1980, Uniform Determination of Death Act); live cells have been recovered from human skin, dura (Bliss et al., 2012), and retina (Carter, 2007) up to 48h after death, with cells remaining viable in the human cornea for up to a week (Slettedal et al., 2008). This period can be extended considerably when artificial intervention (i.e., “life support”) is used to provide oxygen or blood circulation to the body. Even without artificial intervention, all of the sophisticated structures associated with tissues, organs, and organ systems remain intact following death, degenerating only slowly in the process of decomposition. Moreover, given that human cells and tissues are not the same as human beings, requiring total cellular death is too stringent a criterion for human death. If the total cessation of body-wide metabolic processes is the only admissible criterion for death, then nearly every human being who ever lived was either killed by the mortician who terminated cellular life in the embalming processes, or was buried or burnt alive. 
A more reasonable criterion must allow for the persistence of living cellular structures and functional cell metabolism following death of the human being. These preliminary considerations are important for contextualizing the debate regarding whether or not total brain death is a valid criterion for human death, and especially for evaluating the significance of the evidence indicating that (with the help of artificial interventions) many complex functions can persist in the body after death of the brain (table 1). 2 This evidence has led many to question the validity of the claim that brain death marks the death of a human being, or at least to question the standard rationale––i.e., that the brain is necessary to maintain organismal integration (Bernat, Culver, and Gert, 1981)––for considering brain death to be death. 3 Yet all of the functions listed in table 1 are either seen in isolated cells and tissues maintained in culture or are known to be due to chemical signals that could easily be reproduced in the laboratory. The fact that these activities are also seen in human cells/tissues ex vivo indicates that they are not sufficient for determining whether or not a human is still alive––i.e., whether or not what remains after brain death is still a human organism as a whole, rather than an aggregation of unintegrated cells and tissues that used to be part of the unified human organism. [Table 1. Coordinated functions persisting after determination of death by current medical standards. Table notes: (a) A fetus is manifestly a living organism, and is responsible for many gestational functions. (b) Components of sexual maturation have been observed in two individuals after brain death (see Shewmon, 1998). “Baby A” abnormally developed pubic hair (Tanner stage II) at 1 year of age following brain death; BES, a 13-year-old male who survived 65 days following a diagnosis of brain death, developed minimal pubic hair in this period.]
Given that cells and tissues are known to persist and perform an impressive and complex array of activities ex vivo, it is entirely reasonable to claim that the same situation could occur in a body that is no longer a living human being. Yet this leaves open the disturbing question of precisely what level of biologic function can persist in a purportedly dead human body, without giving us reason to question our judgment that the body is in fact dead. What signs can we rely on to indicate with sufficient certainty that a human being has died? 4 The challenge of determining when death occurs is therefore the challenge of discerning when the human being has ceased to be, leaving behind a collection of human cells that continue to exhibit some of the natural properties they had during life. The aim of this article is precisely to offer an analysis, from a biological perspective, 5 of what differentiates a human being from a mere aggregation of human cells. Both are alive and genetically “human.” Both exhibit complex behavior. Yet human beings are organisms, 6 and their function categorically requires a level of organization above that seen in cells, tissues, and organs (fig. 1). As argued below, the difference between tissue or cell-level organization and human organismal organization is not just a difference in degree, but rather a difference in kind—the difference between coordination and integration. While cells, tissues, organs, and organ systems engage in extremely complex coordinated activities, in nature they are not in themselves organisms because they are integrated into, at the service of, and globally regulated by the organism of which they are part and by which they were formed. In isolation from the whole, these parts lack the autonomous capacity to sustain their own functions, and can remain alive only with the aid of artificial interventions, such as culture medium or, in the case of organs, the perfusion of oxygenated blood. 
In contrast, all of the activities of an organism are globally and autonomously integrated to promote the continued life, health, and maturation of the organism as a whole. Thus, what differentiates genuine organismal integration from the coordination which occurs at the cell and tissue levels is that organismal integration is both global and autonomous. It is global in the sense that the activities of all the vital parts are regulated and organized to promote the health and survival of the whole (rather than just the survival of the parts themselves). It is autonomous in the sense that this regulation and organization is carried out by the organism itself. [Figure 1. Levels of organization in living entities. In nature, only organisms (single cell and multicell) autonomously exist. With artificial support, cells and tissues that are naturally parts of a multicellular organism can exist independently of that organism. Each level of organization depends on the levels below. The transition between tissue organization and organismal organization reflects the difference between coordination and integration. Adapted from Condic (2011).] The remainder of this article elaborates on and provides further evidence for the above claims, drawing on them to propose reasonable criteria for the determination of death that are stringent enough to avoid classifying the living as dead (even when artificial interventions are necessary to sustain life), but not so stringent that they require us to wait until every cell has died before declaring death. II. THE ORGANISMAL CRITERION FOR BOTH THE BEGINNING AND THE END OF NATURAL LIFE Living human beings are fundamentally different from human cells, based on the level at which integration occurs (fig. 1). Cells integrate the activity of molecules, molecular complexes, and subcellular organelles to promote the life and health of the cell as a whole.
Different kinds of cells have different properties, but in all cases, cells are the fundamental unit of life, whether existing independently or as a part of a larger living thing. In the absence of some additional organizing principle, cells display no intrinsic drive to a higher level of organization. When left to their own devices, cells only produce more cells. In the artificial environment of a laboratory dish, a cell will survive and function according to its intrinsic characteristics, without reference to or requirement for anything beyond itself. 7 In light of this, an individual human cell in the laboratory can be considered an organism in its own right, albeit an artificial one. Artificial, because its isolation and sustenance depend on human actions; that is, it does not exist in nature. In the natural environment of the body, cells function as a part of an organism, not as independent organisms in their own right. Similarly, human tissues and organs can also be maintained in a living state in the laboratory. Yet, despite the higher level of complexity observed in tissues and the extensive interactions that occur between the cells that comprise them, such collections of cells cannot be considered organisms, because (unlike free-standing cellular creatures and complete, multicellular human beings) organs do not autonomously produce and regulate all of the structures and relationships required for the life of the organ as a whole. The individual cells in the organ naturally produce the structures necessary for cellular life. Yet organs and tissues are not entities organized for life independent of the body of which they are normally a part (organs are not “free-standing”). The structures which the cells of the organ can produce are not sufficient to sustain the life and health of the organ as a whole. Tissues and organs in laboratory culture are aggregates of cellular organisms, but not organisms in their own right. 
In the natural environment of the body, they are parts that contribute to the function and survival of the (multicellular) organism as a whole. In contrast to human organs, a human being functions as an organism at all stages of life. From the moment of sperm–egg fusion onward, a human embryo enters into a developmental sequence that will produce the cells, tissues, organs, and relationships required for progressively more mature stages (Condic, 2008, 2014b). Thus, unlike an individual cell or group of cells, which organize at the cellular and tissue levels only (fig. 1), the embryo exhibits a clear, self-directed drive towards a higher, multicellular level of organization. 8 At all stages of life, the parts of a human organism work together to promote the life and health of the entity as a whole. Thus, a mature human body is composed of many trillions of cells, but these cells are integrated into a single functional unit that autonomously sustains its own life and health. Unlike an isolated tissue or organ, the body as a whole is a true organism. The clear difference in the levels of organization exhibited by cells, tissues, and organisms provides an organismal definition for both the natural beginning and end of human life: Human life commences at the onset of globally self-integrated organismal function and concludes when globally self-integrated organismal function irreversibly ceases. This definition applies only to the natural life span of a human (i.e., to cases where there has been no intervention to sustain biologic function) since, as we will see, when artificial interventions supplement or replace biologic functions, many challenging and counterintuitive situations arise. Yet in the absence of such intervention, organismal function provides a clear and unambiguous criterion for both the beginning and end of human life.

III. ORGANS REQUIRED FOR LIFE CHANGE OVER LIFE SPAN

Application of this definition to all stages of human development is complicated by the fact that the vital organs required for organismal integration change over the life span. In the early embryo, a complex molecular “program” produces and organizes cells with specific properties that will build up the more mature tissues and systems of the body. 9 The embryo is more than a mass of tissue since the cells do more than simply make more cells or produce isolated organs; the cells of the embryo produce the organization of the entire body. In later embryos and fetuses, the heart and the placenta are the most critically required organs for continued life, growth, and coordination of body systems. In postnatal stages, the brain, the lungs, and the heart are all required organs, and the brain provides crucial integration of the three by regulating the other vital organs so that they function in the service of the whole. Importantly, this does not mean that a human being is nothing more than a molecular program, the placenta, the heart, the lungs, or the brain. It means that at different stages of the life span, specific organs are required for a human being to autonomously perform the globally integrated functions necessary to remain alive. It also means that the function of specific organs cannot universally distinguish between the living and the dead: irreversible cessation of placental function is likely to be a sufficient criterion for death at prenatal stages of life, but the fact that I do not currently have a functioning placenta does not mean that I am dead. Similarly, the lack of a functioning heart at early embryonic stages does not indicate an embryo is not alive or not a human being. It indicates that, similar to the brain and the lungs, the heart is not a required organ for early stages of human prenatal life.
What is critical at all stages of human life is the continued, global, and autonomous integration of function that is characteristic of an organism and that distinguishes a living human being from an aggregation of human cells.

IV. DISTINGUISHING A HUMAN PART FROM A HUMAN WHOLE

Discriminating between the living and the dead is further complicated by the fact that many biologic functions that are naturally required for human life can currently be replaced (perfectly or imperfectly) by artificial interventions. Thus, in the past, irreversible cessation of cardiopulmonary function was adequate to conclude that the capacity of a human to function as an organism had been irreversibly lost, and therefore the human being had died. Yet today there are many medical interventions that can bypass such “irreversible” cardiac arrest and restore to full function individuals who would have otherwise been declared “dead.” 10 In light of an organismal criterion for both the beginning and end of life and in light of our ability to artificially replace many vital functions of the body, what features can reliably distinguish between a living human being and a dead one? Clearly cells, tissues, and organs ex vivo show complex functions that can also persist in a human body following some diagnoses of death (table 1). Consequently, these functions (e.g., wound healing) reflect only the operations of parts and do not necessarily imply the presence of a whole human. Conversely, the fact that many parts of the body can be lost or damaged without resulting in total loss of organismal integration indicates that limited or partial function does not necessarily imply the individual is dead. Therefore, the challenge in defining death is to determine when the activity observed in a biological system is self-regulated in the service of the “whole” and when it merely reflects the intrinsic properties of cellular parts.
Stated in a somewhat different way, determination of death requires us to discern when a body has completely lost its capacity for global and autonomous self-regulation and integration, versus when a living human being is merely “blocked” from exercising its self-integrating capabilities, as when a head injury causes swelling which temporarily blocks the body’s ability to regulate its own breathing. If strictly functional criteria do not reliably distinguish parts from whole human beings (table 1), how can we tell that a human “whole” exists? Humans have been defined in many ways, but one of the simplest and most robust definitions is that humans are rational animals. This definition is independent of any specific religious tradition (it was initially put forward by the pagan philosopher Aristotle) and it acknowledges the two essential aspects of our nature: that we are living biological beings of the Kingdom Animalia who are capable of reason. Importantly, on the Aristotelian account, both of these essential aspects of our nature are ultimately rooted in the soul, understood as the unifying, vivifying, organizing principle of a living being. 11 This understanding of what constitutes a human being suggests two clear criteria that are each sufficient evidence for the persistence of human life in an entity that originally met the Aristotelian definition, because each is sufficient evidence for the presence of the human organizing principle or soul:

1. Persistence of mental function, no matter how impaired, demonstrates the persistence of a living human being. Although not all humans with mental function are immediately capable of rationality, 12 the presence of any mental function in a human being gives reason to believe that the basic natural capacity for rationality, rooted in the soul, has not been completely lost.
Given that the minimal neurologic structures required for mental function are not known, 13 prudence would dictate that persistence of any brain function should be considered evidence that the basic natural capacity for mental function may remain. 14

2. Persistence of global, autonomous integration of vital functions (“animality” or organismal function), even in the absence of evident mental function, indicates that an organism of the type Homo sapiens (i.e., a human being) exists. This criterion is a restatement of the “organismal” criterion given above.

Applying these criteria enables a clear distinction between the living and the dead even in difficult cases. For example, individuals with severe brain damage who are in a persistent vegetative state have limited or absent mental function. However, such individuals (unless they also suffer from illness or injury affecting other vital organs 15) also show sustained, global, autonomous integration of bodily functions. They require normal care (i.e., food and water, and alleviation of the symptoms associated with prolonged bed rest), but not ventilation or other mechanical support systems. Such individuals are in a seriously impaired state, but they are clearly functioning as a human organism to maintain their body as a whole and regulate their own vital functions. They exhibit both persistent brain function (criterion #1) and persistent integration (criterion #2), and are therefore still alive. In contrast, individuals with high-level cervical spinal cord injury (hereafter, SCI) show limited or absent autonomous integration of bodily functions. They are dependent on artificial interventions (i.e., “life support”) to maintain their vital activities, yet their capacity for mental function remains. Such individuals are also severely impaired and they no longer function as biological organisms, 16 but by virtue of the fact that they remain capable of mental function (criterion #1), they are also still alive.
In situations where there is both limited or absent autonomous control of the body (patients who are dependent on artificial medical interventions) and the individual is not conscious, great care must be taken to determine if any aspect of brain function persists. If so, no matter how impaired brain function may be, it remains possible that the capacity for some form of mental activity persists, and that the basic natural capacity for rationality (rooted in the soul) still remains. 17 Therefore, such individuals must be given the benefit of the doubt and seen as still alive. This does not imply a moral obligation to sustain such an individual by extraordinary means. But it does require an acknowledgement that removing life support will result in the death of a living (albeit severely impaired) human being. In contrast, following the irreversible cessation of all brain function, including the brain stem (i.e., “brain death”), the human body exhibits neither of the defining characteristics of a living human being: global autonomous integration cannot be maintained (i.e., the body is no longer able to function as an organism because it has lost the capacity to regulate its own vital activities, criterion #2), and mental function is also precluded (criterion #1). Therefore, brain death is “real death” because at postnatal stages, the brain is required for both self-directed integration of bodily function above the level of cells and tissues and for mental function. 18 Unlike a SCI patient where criterion #1 provides unambiguous evidence for the presence of a living human being, for a brain dead body we have no such evidence. 
All of the evidence that we currently have regarding the observed activities of bodies after brain death (with the help of artificial intervention) is consistent with the claim that the brain dead body is not an integrated whole, but rather an aggregate of human cells, persisting in the ordered relationships established during life and functioning under the auspices of individual cellular organizing principles. We know this because the same activities have also been observed in cells and tissues ex vivo (table 1), which obviously are not in themselves human organisms. While the brain dead body can exhibit coordinated activity within some of its systems or across cells in certain tissues (see below: “Objections to Brain Death as Evidence for Human Death”), it is unable to exercise either rationality or global, self-integrated organismal function, and therefore we lack sufficient evidence to show that a brain dead body is a living human organism. Of course, this in itself does not prove that a brain dead body is not a living human organism. More argumentation would be needed in order to show that (1) the capacity for global, self-integrated organismal function is necessary for the persistence of an organism, 19 and (2) in postnatal stages of life, the brain is required for such function. Premise 1 is a philosophical one, which is beyond the scope of this paper, but is defended elsewhere, 20 and is assumed explicitly or implicitly by many participants in the brain death debate. 21 Premise 2 is a biological premise, for which evidence is offered below. But even if the reader rejects either (or both) of these premises, what has already been said shows that the evidence indicating some functions can persist in a brain dead body on a ventilator is far from providing a conclusive reason to reject the belief that brain death marks the loss of organismal integration. This is, in itself, an important conclusion.

V. HUMAN BEINGS WHO CANNOT FUNCTION AS ORGANISMS

The criteria given above to distinguish the living from the dead raise the challenging possibility that in cases of patients with SCI who are sustained by artificial intervention, a human being can persist, even when he or she is no longer able to function as a human organism. In nature, there is a clear hierarchy of biological organization, with each level dependent on the one below it to remain alive, and lower levels not existing independent of their natural, higher level of organization (fig. 2, “Natural State”). Thus, living cells require organic molecules and subcellular organelles to function as cells. Without mitochondria, a human cell cannot persist, and conversely, mitochondria are never found independent of living cells. Similarly, organs cannot exist without cells and organs never naturally exist independent of an intact organism. At the highest level of biological organization, organisms are dependent on both living cells and living organs (i.e., the removal of vital organs or the death of the cells in the body will make it impossible for the organism to persist).

Fig. 2. Natural hierarchies and how they are disrupted by technology. In the natural state, cells require molecules (M), organs require cells, and organ systems require organs. Together with a functioning brain, these lower levels (blue) comprise the highest biologic level of organization: an organism. Similarly, a subset of organisms is capable of consciousness, and a subset of these is capable of sentience, with humans being capable of rational thought. Technology circumvents this hierarchy in many ways. Cells and organs can be removed from the body and maintained artificially (cell culture; organ culture). Patients in a persistent vegetative state (PVS patients) continue to function as organisms, but do not exhibit higher mental functions. Patients on life support with vital organ failure no longer function as organisms and lack the functions of the affected organ system, but retain living cells and higher mental functions. Patients with spinal injury have also ceased to function as organisms, but retain all other levels of human organization.

Similar to the natural biological hierarchy, there is also a natural metaphysical hierarchy that orders properties of organisms in a specific sequence. Thus, not all organisms are capable of consciousness, yet the properties associated with consciousness never occur in nonorganisms, and therefore conscious entities are also organisms. In the same way, sentience requires consciousness, yet it does not occur in all conscious organisms. Finally, rational thought requires sentience, yet not all sentient organisms are rational. Thus, each successively higher function depends on lower functions for its operation. Most arguments for what constitutes a living human being take these natural biological and metaphysical hierarchies as given. For example, in the earlier-mentioned case of total cellular death, it is assumed that if all the cells of a human body are dead, the human must also be dead. Moreover, philosophers within the Aristotelian-Thomistic tradition require that an organizing principle, or soul, is necessary for organismal function, and that this same soul is also the principle of mental functions. Thus, it is assumed that regardless of the state of the body, if a human is capable of rational thought (i.e., manifestly continues to possess an organizing principle or soul), he must also possess the capacity (rooted in the soul) for organismal integration. 22 If man is truly a “rational animal,” one and the same principle is responsible for both the integration of the body and rational function.
Metaphysically speaking, the continued presence of either power in an entity previously established as human is, in the abstract, sufficient to conclude that the human organizing principle (soul) remains. And modern, technological advances have put this abstract principle to the test. It is now possible to circumvent the natural hierarchies of biological systems and thereby create a wide range of counterintuitive and intellectually challenging situations. As noted above, a human cell in laboratory culture is a living entity that does not exist in nature (fig. 2, “Cell culture”). 23 How are we to think of such a cell? Clearly, it is not a human being, and yet by all reasonable scientific criteria it is both of human origin and also an organism. In this case, technological intervention has broken the natural biological hierarchy to produce an unnatural single-cell “human” organism that exists independent of the normal requirement for an intact human body as a necessary condition for the existence of human cells. Similarly, human patients with SCI are no longer functioning as organisms by any reasonable biological definition. While limited integration persists, such patients have ceased to autonomously integrate the biologic function of parts at the level required to sustain the life of the body as a whole. If left untreated, an individual with SCI would not survive more than a few minutes. However, with appropriate mechanical interventions, such an individual can be kept alive for many years. This creates an intellectually challenging situation of a living human being who (similar to the human cell in culture) is no longer dependent on the natural hierarchy of biological organization; that is, a living human being who does not function as an organism (fig. 2, “Patients with high cervical spinal injury”). 
24 How can this technologically produced unnatural state of affairs be reconciled with the Aristotelian view of the human soul as the organizing principle or “substantial form” of the body? I propose that the situation following SCI is similar to a human embryo with a severe (or even fatal) developmental defect. An embryo with such a defect is clearly a human organism and not a mere collection of human cells or a disorganized tumor, and therefore it must possess a human organizing principle. Yet in the case of embryos with developmental defects, the proper function of this organizing principle is blocked by a material deficiency, 25 and the embryo is prevented from exercising its full human capacities. For example, failure to develop a nervous system capable of supporting rational thought (one of the defining characteristics of a human being) in an embryo that is otherwise undergoing an organized pattern of maturation does not preclude the embryo from being a human being. It merely indicates that this particular embryo is a severely impaired human that cannot exercise its natural capacity to produce those neurological structures required for rational thought. This indicates that while the capacity to develop as a rational being is a metaphysical power inherent to all humans, the exercise of this capacity is not necessary for the existence of a human substantial form (i.e., for a human soul or organizing principle). Yet, so long as an embryo with a developmental defect functions as an organism (criterion #2 above), it is a living human being. Similarly, in the case of SCI, the organizing principle of the body must persist (otherwise the individual would be dead), but the full function of this principle is blocked by an injury-induced material deficiency: in this case, the severing of the connections that would enable the brain to communicate with the rest of the body below the site of injury.
This results in a human being who exercises only a subset of his or her natural abilities and who is no longer able to exercise the capacity to function as an organism. In nature, SCI would be completely incompatible with continued existence of the organism: it would rapidly and inevitably result in the total loss of the capacity (rooted in the soul) for organismal integration, and therefore in the death of the organism. Yet, due to artificial interventions substituting for the vital functions that have been blocked, the individual remains alive, despite the inability to function as a completely self-integrated biological system (i.e., without functioning as an organism). Just as the immediately exercisable capacity for reason can be lost without loss of the substantial form “human” (despite the fact that humans are rational animals), the immediately exercisable capacity for organismal function can also be lost while the human individual nonetheless persists. So long as SCI patients exhibit brain function (criterion #1 above), there is reason to believe that the human soul (which is also the principle of the capacity for organismal integration) persists, and that such patients are therefore alive. 26

Objections to Brain Death as Evidence for Human Death

The conclusion that death of the brain is a valid criterion for determining the death of a human being has been criticized by those who assert that sufficient bodily integration remains following death of the brain (table 1) to view such individuals as living human organisms.
This view denies that any higher metaphysical or functional level is relevant to the consideration of death (i.e., it asserts that all the mental, sensory, motor, and involuntary functions of the brain can be lost without the loss of a human being), and it turns critically on the question of whether the bodily functions observed following death of the brain rise to the level required for a human organism to persist. This requires us to revisit the levels of organization seen in living systems (fig. 1) and consider in detail how the highest level of organization (that of a living human being or organism) is accomplished. In the human body, biologic functions are coordinated or “organized” in three basic ways. The most extensive and most sophisticated means of control is through the activity of the nervous system, most especially the brain. In a healthy individual, the brain receives diverse types of information from all parts of the body. In addition to the five primary senses (sight, sound, taste, olfaction, and touch), the brain receives information from the entire body, continuously reading out factors as diverse as body temperature, pH, fluid balance, hormone levels, gravity, pain, vibration, mechanical load, muscle contraction, electrical fields, inflammation, blood sugar, and many other aspects of the overall metabolic state. The brain is then responsible for integrating this diverse information to generate a comprehensive representation of the status of the body as a whole (including the environmental and social context in which the body is operating) and to craft an adaptive, global, biologic response that appropriately reflects this status. For example, when an individual exercises, the brain receives information regarding the state of the body from multiple sources, including an exercise-induced drop in blood pH and direct neural signals from both the muscles and from the vessels leading out of the heart.
The brain integrates this information to generate a complex, multifaceted, global, and adaptive response that involves and serves the entire body. It drives an increase in both heart and respiratory rates to raise oxygen and reduce carbon dioxide in the blood, thereby bringing blood pH back into a healthy range. The brain also signals blood vessels in the muscles to widen and those in the gut to constrict, thereby shifting blood to where it is most needed. Finally, the brain initiates the release of adrenaline to increase blood glucose levels and modulate the function of cells throughout the body in a coordinated manner that is appropriate to strenuous activity. These adaptive responses would not occur (or would occur only partially) in an SCI patient who was mechanically or electrically stimulated to exercise; bodily function would rapidly become unbalanced, potentially resulting in a state of medical shock or even in death. In addition to controlling these involuntary responses, the brain is also the source of voluntary adaptation to exercise, including conscious regulation of breathing, relaxation of the muscles not being utilized, and (e.g., if playing tennis) visual tracking of the ball and deliberate physical behaviors to keep the ball in play. While these actions do not directly sustain the health of the body, they are clearly an important component of what makes playing tennis a highly coordinated and uniquely human physical activity. An important aspect of the integrating activity of the brain is that it is context dependent, and that this context dependence is global, reflecting the net balance of information from the body as a whole as well as from the environment. If blood pH drops due to a medical condition unrelated to exercise (e.g., renal acidosis), the brain responds adaptively by increasing breathing rate to restore normal blood chemistry, but it does not initiate the full complement of responses seen during exercise.
The brain does not react in a unitary way to blood pH, but rather it determines the overall state of the body and responds appropriately to the particular context. A far less sophisticated form of bodily coordination occurs via soluble signaling molecules that are released by specific cells into the bloodstream. Because blood circulates throughout the body, chemical signals are systemic (whole-body), and such signals can therefore mediate a coordinated bodily response to specific stimuli. Chemical signals can elicit a single type of response throughout the body, or they can have different effects on different cell types. Yet, because chemical signals are regulated by specific triggers to serve specific functions, they are inherently restricted to these functions. Adrenaline (a chemical-signaling molecule) affects many bodily systems, yet it has exactly the same effects whether it is released in response to exercise, stress, or any other stimulus. Therefore, chemical signals mediate a coordinated response to one or more triggering stimuli, but they do not integrate multiple factors to craft a global response that reflects the diverse conditions present in the body as a whole. Finally, on a local level, coordination of function can also occur through cell contacts or soluble-signaling molecules that diffuse over short distances. This kind of communication can control the activity of cells within a particular tissue, but does not regulate the body in a systemic manner. Similar to long-range chemical signaling, local signals are induced by a narrow range of local conditions to generate a coordinated cellular response, but they do not integrate multiple sources of information from the body as a whole or regulate the activity of systems throughout the body in response to that information. The difference between the integrating activity of the brain and the more limited coordinating activity of other signaling systems is critical to the interpretation of brain death. 
Merriam-Webster 27 defines “integrate” as “to combine two or more things to form or create something; to form, coordinate or blend into a functioning, unified whole,” with a synonym being “unite.” In contrast, “coordinate” is defined as “to bring into a common action, movement, or condition; to act or work together properly and well,” with a synonym being “harmonize.” Thus, integration combines two or more elements to result in a single, unified whole, whereas coordination simply involves communication of parts in order to achieve an effective outcome. On a biological level, these terms can be defined as follows:

Integration: The compilation of information from diverse structures and systems to generate a response that (1) is multifaceted, (2) is context dependent, (3) takes into account the condition of the whole, and (4) regulates the activity of systems throughout the body for the sake of the continued health and function of the whole. Integration is (by definition) a global response and during postnatal stages of human life is uniquely accomplished by the nervous system, most especially the brain.

Coordination: The ability of a stimulus, acting through a specific signaling molecule, to bring responding cells into a common action or condition. Coordination can reflect either (1) a single type of response that occurs simultaneously in multiple cells or (2) a set of synchronous, but cell-type specific responses. Coordination can be local or global and is accomplished both by the brain and by other signaling systems.

All chemically mediated biologic functions, including those that persist after death of the brain (table 1), involve a specific signal and a unitary response. There is coordination of the response across all of the cells capable of receiving the signal, but there is no modulation of that response to reflect differences in circumstance regarding the condition of the whole; i.e., there is no integration.
Processes due to coordinated cellular responses can be very complex, often resembling the behavior of a living organism. Yet, despite the apparent “unity” of such coordinated events, they do not necessarily reflect the action of an integrated whole. Like the behavior of swarming bees or a school of fish, coordinated processes persisting after death reflect only the behavior of individual, autonomous cellular units that are responding to a limited number of stimuli to generate the semblance of a unified whole. In contrast to systems capable only of coordination, the brain can modify, enhance, or suppress components of a multifaceted response that involves many parts of the body at once, and that depends on the balance of information it receives from throughout the body as well as from the environment. It integrates body-wide information to craft an appropriate (and, when needed, body-wide) response that serves the organism as a whole, depending on the details of the situation. 28 After death of the brain, the lower levels of cell communication remain, but the body is no longer capable of compiling multiple sources of information to produce an integrated, global response. Importantly, while only organisms exhibit integration and integration is necessary for a biological system to function as an organism, partial or limited integration is not sufficient for organismal function (fig. 3). For example, SCI patients maintain limited integration (primarily involving functions of the head and those bodily functions mediated by endocrine signaling or by undamaged cranial nerves), yet this level of integration is not sufficient to sustain the vital activities of the body as a whole. Therefore, SCI patients do not function as organisms, despite the persistence of higher mental functions and the limited integration that persists (fig. 2; “Patients with high cervical spinal injury”). Limited integration is not sufficient for human organismal function.
Figure: At the lowest level (blue), cells are alive and show coordination (cell communication). At the next level (orange), there is a system capable of integration, which, at postnatal stages, requires a brain. If integration is sufficient to sustain life, the system functions as an organism. At the highest level (gray), the brain is capable of supporting consciousness, sentience, and rationality.

Interpreting the Persistence of Order after Brain Death

Following death by any means, the body does not instantaneously turn into a pile of dust or a disaggregated collection of single cells. Consequently, cells within a corpse retain their inherent, ordered properties that were established during life, including their contacts with neighboring cells. The heart continues to beat, due to the intrinsic electrical properties of cardiac cells and the connections between cells within cardiac tissue. Blood continues to travel to all parts of the body via the circulatory system––a sophisticated network of cell-contacts that was established during embryonic life. Respiration ceases because it requires signals from the brain, yet if oxygen is artificially supplied, cells in the body will remain alive and continue to function normally for some time, just as they would in laboratory culture. Under these conditions, functions that are mediated by chemical signaling and by local cell contacts will persist (table 1). Yet none of these activities involves the integration that is characteristic of a living organism. Rather, the bodily functions that persist after the death of the brain reflect the properties of individual cells, functioning as autonomous cellular organisms within a pre-existing system that provides efficient distribution of long-range signaling molecules to other, independent cellular organisms. Coordination persists, but integration is lost.
The persistence of what may appear to be integrated function after brain death can be better understood by considering the following, simple analogy. When a marching band with red, white, and blue hats assembles on a field in the shape of an American flag, this is clearly an integrated activity requiring global communication of all members of the band for the sake of the performance as a whole. And if, after the performance has concluded, the marchers simultaneously throw their hats into the air, the image of the flag will persist for a short time as the hats rise above the field. But once the hats have left the direct control of the marchers, adaptive integration is no longer possible. The hats are ordered only by their own intrinsic properties and by the forces of physics. The fading and imperfect image of the flag is merely a remnant of the prior order; a projectile of the past, with no ongoing integration to sustain it. Just so for the persistence of order after death: living cells (similar to hats) are able to intrinsically sustain their own properties and (unlike hats) are also able to maintain their ordered connections with neighbors. Consequently, cells will persist in their natural functions for some time in the absence of global integration. Yet, without an overriding organizing principle that can respond adaptively and globally to changing circumstances, the residual order seen in a corpse rapidly decays. Despite aggressive life support, the great majority of brain dead bodies suffer irreversible cessation of cardiopulmonary function within 7 days (Jennett, Gleave, and Wilson, 1981; Hung and Chen, 1995; Al-Shammri et al., 2003). In contrast, SCI patients, who typically retain at least some degree of brain-mediated integration following injury, 29 show far better survival, with more than 90% remaining alive for at least 30 days (Shao et al., 2011).
Global integration is required for sustained organization above the level intrinsic to cells, and at postnatal stages of human life, integration is uniquely accomplished by the brain. As a final point, it is important to consider the assumptions underlying the argument that coordinated cell communication (table 1) is sufficient for a living human being to persist. Clearly, coordinated functions exist in continuously varying degrees at all levels of life from cells up to organisms (fig. 1). If the integrated function that is uniquely provided by the brain at postnatal stages is not required for human life, distinguishing the living from the dead is simply a matter of degree. And if any arbitrary level of coordination is sufficient to conclude that a human organism remains alive, then an organism is nothing more than the sum of its constituent parts; i.e., if parts remain and their functions persist, then a human organism also persists, at least partially. The view that a body remains alive after the death of the brain is fundamentally a reductionist argument that denies the existence of an integrated human whole beyond the properties of the cells and organs that comprise the body. 30 If this view were correct, then human death would not occur until every single cell in the body had died. VI. CONCLUSIONS The beginning and end of human life are naturally defined by the onset and cessation of organismal function. Organisms autonomously and globally integrate all bodily activities for the sake of the whole, and at postnatal stages of life this integration critically and uniquely requires a functioning brain. Living cells persist in the human body for some time following death and maintain their natural properties and relationships. Although communication between cells can provide a coordinated biologic response to specific signals, it does not provide evidence for integrated function that is characteristic of a human organism. 
Modern technology has produced a wide range of challenging situations in which some elements of biological coordination can persist, uncoupled from the natural biological hierarchy. In particular, individuals such as severe SCI patients, who are no longer able to function as organisms, can (under some conditions) be maintained in a living state. In these cases, the persistence of brain function, and therefore the potential for mental function, is sufficient evidence for persistence of a living human being. Conversely, in cases of severe brain damage, individuals may be unable to exercise mental function, yet so long as they continue to autonomously and globally integrate their own biologic activities (i.e., so long as they continue to function as an organism), they remain alive. By contrast, after total brain death mental function clearly ceases and none of the evidence produced thus far has conclusively demonstrated that genuine organismal integration (as opposed to mere coordination) can persist. While these facts do not prove the claim that brain death is a valid criterion for human death, they strongly support this claim. At the very least, these facts show that the documented biological observations about brain dead bodies and their (artificially supported) capacities do not disprove the claim that brain death marks the death of a human organism as a whole.

NOTES

1. Catholic and other religious traditions hold that this transition occurs in an instant. For example, St. John Paul II described death as “a single event, consisting in the total disintegration of that unitary and integrated whole that is the personal self. It results from the separation of the life principle (or soul) from the corporal reality of the person” (John Paul II, 2000).

2. The list of functions persisting after death of the brain is taken from Shewmon (2001).

3. The work of Alan Shewmon has been particularly influential in this regard (Shewmon 1998, 2001).
Based largely on Shewmon’s evidence, the President’s Council on Bioethics issued a report in 2008 (The President’s Council on Bioethics, 2008) rejecting the loss of somatic integration rationale for considering brain death to be a sign of human death. Nonetheless, the Council did reaffirm the validity of brain death as a criterion for death on other grounds. Others, however, believe that Shewmon’s evidence proves that brain death does not necessarily mark the death of the human being. See, for example, Miller and Truog (2008); Truog et al. (2013).

4. Given the somewhat mysterious nature of death (see, e.g., Spaemann, 2011), it is not reasonable to expect absolute scientific certainty on this matter. The only absolute certainty that a human being has died would be when all of the cells of the body have died. What is needed, rather, is moral certainty, or the certainty sufficient to guide action.

5. I recognize that determining the criteria for human death also requires answering fundamental philosophical questions, such as what distinguishes an aggregation of individual substances from a single complex substance composed of many parts. The biological analysis presented here does not address these questions directly, but rather is complementary to the philosophical analysis presented in Moschella’s paper in this issue.

6. The medical dictionary maintained by the U.S. National Library of Medicine and the National Institutes of Health defines an organism as “an individual constituted to carry on the activities of life by means of organs separate in function but mutually dependent: a living being” (http://www.merriam-webster.com/medlineplus/organism [accessed 29 January, 2016]).

7. Some have suggested that there is no meaningful distinction between the processes within a cell and the information and materials it receives from the environment (e.g., Oyama, 2000), but this argument requires that all features of the environment needed for life (oxygen, gravity, etc.)
are intrinsic features of cells, not extrinsic factors to which the organism has evolved to respond, thus abolishing the ability to speak of any entity as a distinct thing.

8. For a more extensive discussion of what distinguishes an embryo from a cell, see Condic, 2014a.

11. For further philosophical elaboration and defense of this point, see Moschella, 2016.

12. Rationality itself is not directly observable. Therefore, to err on the side of caution, I take the presence or absence of mental function as sufficient evidence to indicate that the basic natural capacity for rationality (even if not immediately exercisable) may still be present.

14. The precise level of brain function that must be lost unambiguously to conclude to the loss of the basic natural capacity for mental function is unknown, and (indeed) may be impossible to determine. However, irreversible loss of function of the entire brain is clearly an unambiguous indication that the capacity for mental function has been lost. The precise level of brain function required to result in total loss of either organismal function or mental function has yet to be determined.

15. Cases where individuals in a persistent vegetative state require support for vital functions (dialysis, pacemaker, etc.) are no different from cases where conscious individuals require such support: (1) if the intervention is temporary or partial, the patient still integrates his own function, albeit in an impaired manner or (2) if the intervention is permanent and complete, the patient no longer integrates his own function.

16. The argument that an individual with SCI is no longer able to exercise the capacity to function as an organism is considered in more detail in the next section.

17. For example, humans who lack cortical structures responsible for “higher” brain functions such as language are clearly conscious, and therefore capable of some degree of mental function. See citations given in footnote 20.
18. Although some authors such as Shewmon and Austriaco (2016) believe that bodily integration can persist even after brain death, my analysis (see below) indicates that the evidence on which their belief is based shows only that coordination between cells and tissues can persist after brain death, not that genuine organismal integration can persist.

19. Some advocates of a systems biology approach would appear to deny this premise (see, e.g., the Austriaco (2016) paper in this issue and Shewmon, 2001), or at least its insistence on the need for global self-integration. Yet, as I have already noted and argue further below, the logic of this approach would imply that death does not really occur until all of an organism’s cells have died.

20. For a defense of this premise, see Moschella’s paper in this issue.

23. Nor is this a technologically trivial matter. Individual cells were not sustained in culture until the 1950s, with the successful cultivation of so-called “HeLa” cells, which were taken from the cervical cancer biopsy of Henrietta Lacks. Human cells do not naturally or easily live when removed from the whole of which they are a part.

24. Since the capacity for organismal function is rooted in the soul, so long as a human remains alive, he remains the kind of entity that is an organism. However, just as a human being who no longer exhibits “rationality” can nonetheless remain a rational animal (i.e., an animal with a rational nature, rooted metaphysically in the soul as the principle of rational capacities), an individual who no longer functions as an organism (i.e., an animal) can nonetheless remain a rational animal. Biologically, an organism is a self-sustaining integrated whole; and clearly following SCI, neither the head nor the body below the injury functions as an organism (while metaphysically, they remain an organism).
25. Developmental biologists would see this deficiency in terms of a perturbation of a normal developmental pathway, due to an internal genetic or other biological defect.

26. For a more in-depth analysis of the status of a SCI patient from a philosophical perspective, and an explicit response to Shewmon’s analogy between SCI patients and brain dead patients (Shewmon, 2010), see Moschella, 2016.

28. While other body organs can have complex and graded responses that affect many tissues, they cannot directly control the body as a whole. While the activities of the liver (for example) have global effects on many body functions, the liver qua liver cannot directly control the activity of the eyes or the hands.

29. In most SCI patients, brain-mediated endocrine functions and those functions controlled by the 9th and 10th cranial nerves persist, along with any residual functions mediated by “spared” spinal fibers that continue to communicate with the body through the damaged spinal column.

30. I am indebted to Fr. Ignacio de Ribera Martin for this important insight.

REFERENCES

Moschella, M. 2016. Deconstructing the brain disconnection-brain death analogy and clarifying the rationale for the neurological criterion of death. The Journal of Medicine and Philosophy 41:279–99.

Shewmon, D. A. 2001. The brain and somatic integration: Insights into the standard biological rationale for equating “brain death” with death. The Journal of Medicine and Philosophy 26:457–78.

Shewmon, D. A. 2010. Constructing the death elephant: A synthetic paradigm shift for the definition, criteria, and tests for death. The Journal of Medicine and Philosophy 35:256–98.
https://blogs.scientificamerican.com/guest-blog/physics-and-the-immortality-of-the-soul/
Physics and the Immortality of the Soul - Scientific American Blog ...
Physics and the Immortality of the Soul

The topic of "life after death" raises disreputable connotations of past-life regression and haunted houses, but there are a large number of people in the world who believe in some form of persistence of the individual soul after life ends. Clearly this is an important question, one of the most important ones we can possibly think of in terms of relevance to human life. If science has something to say about it, we should all be interested in hearing. Adam Frank thinks that science has nothing to say about it. He advocates being "firmly agnostic" on the question. (His coblogger Alva Noë resolutely disagrees.) I have an enormous respect for Adam; he's a smart guy and a careful thinker. When we disagree it's with the kind of respectful dialogue that should be a model for disagreeing with non-crazy people. But here he couldn't be more wrong. Adam claims that there "simply is no controlled, experimental[ly] verifiable information" regarding life after death. By these standards, there is no controlled, experimentally verifiable information regarding whether the Moon is made of green cheese. Sure, we can take spectra of light reflecting from the Moon, and even send astronauts up there and bring samples back for analysis. But that's only scratching the surface, as it were. What if the Moon is almost all green cheese, but is covered with a layer of dust a few meters thick? Can you really say that you know this isn't true? Until you have actually examined every single cubic centimeter of the Moon's interior, you don't really have experimentally verifiable information, do you? So maybe agnosticism on the green-cheese issue is warranted. (Come up with all the information we actually do have about the Moon; I promise you I can fit it into the green-cheese hypothesis.) Obviously this is completely crazy.
Our conviction that green cheese makes up a negligible fraction of the Moon's interior comes not from direct observation, but from the gross incompatibility of that idea with other things we think we know. Given what we do understand about rocks and planets and dairy products and the Solar System, it's absurd to imagine that the Moon is made of green cheese. We know better. We also know better for life after death, although people are much more reluctant to admit it. Admittedly, "direct" evidence one way or the other is hard to come by -- all we have are a few legends and sketchy claims from unreliable witnesses with near-death experiences, plus a bucketload of wishful thinking. But surely it's okay to take account of indirect evidence -- namely, compatibility of the idea that some form of our individual soul survives death with other things we know about how the world works. Claims that some form of consciousness persists after our bodies die and decay into their constituent atoms face one huge, insuperable obstacle: the laws of physics underlying everyday life are completely understood, and there's no way within those laws to allow for the information stored in our brains to persist after we die. If you claim that some form of soul persists beyond death, what particles is that soul made of? What forces are holding it together? How does it interact with ordinary matter? Everything we know about quantum field theory (QFT) says that there aren't any sensible answers to these questions. Of course, everything we know about quantum field theory could be wrong. Also, the Moon could be made of green cheese. Among advocates for life after death, nobody even tries to sit down and do the hard work of explaining how the basic physics of atoms and electrons would have to be altered in order for this to be true. If we tried, the fundamental absurdity of the task would quickly become evident. 
Even if you don't believe that human beings are "simply" collections of atoms evolving and interacting according to rules laid down in the Standard Model of particle physics, most people would grudgingly admit that atoms are part of who we are. If it's really nothing but atoms and the known forces, there is clearly no way for the soul to survive death. Believing in life after death, to put it mildly, requires physics beyond the Standard Model. Most importantly, we need some way for that "new physics" to interact with the atoms that we do have. Very roughly speaking, when most people think about an immaterial soul that persists after death, they have in mind some sort of blob of spirit energy that takes up residence near our brain, and drives around our body like a soccer mom driving an SUV. The questions are these: what form does that spirit energy take, and how does it interact with our ordinary atoms? Not only is new physics required, but dramatically new physics. Within QFT, there can't be a new collection of "spirit particles" and "spirit forces" that interact with our regular atoms, because we would have detected them in existing experiments. Ockham's razor is not on your side here, since you have to posit a completely new realm of reality obeying very different rules than the ones we know. But let's say you do that. How is the spirit energy supposed to interact with us? Here is the equation that tells us how electrons behave in the everyday world: Don't worry about the details; it's the fact that the equation exists that matters, not its particular form. It's the Dirac equation -- the two terms on the left are roughly the velocity of the electron and its inertia -- coupled to electromagnetism and gravity, the two terms on the right. As far as every experiment ever done is concerned, this equation is the correct description of how electrons behave at everyday energies. 
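The equation displayed in the original post did not survive extraction. From the surrounding description (the two left-hand terms are roughly the electron's velocity and inertia; the two right-hand terms are its couplings to electromagnetism and gravity), it is the Dirac equation for the electron; the following is a reconstruction under that assumption, and the exact form of the gravitational coupling term is likewise an assumption:

```latex
i\gamma^\mu \partial_\mu \psi_e \;-\; m\,\psi_e
  \;=\; e\,\gamma^\mu A_\mu\,\psi_e \;+\; \gamma^\mu \Gamma_\mu\,\psi_e
```

Here $\psi_e$ is the electron field, $e\,\gamma^\mu A_\mu\,\psi_e$ is the coupling to the electromagnetic potential $A_\mu$, and $\gamma^\mu \Gamma_\mu\,\psi_e$ sketches the coupling to gravity through the spin connection $\Gamma_\mu$.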
It's not a complete description; we haven't included the weak nuclear force, or couplings to hypothetical particles like the Higgs boson. But that's okay, since those are only important at high energies and/or short distances, very far from the regime of relevance to the human brain. If you believe in an immaterial soul that interacts with our bodies, you need to believe that this equation is not right, even at everyday energies. There needs to be a new term (at minimum) on the right, representing how the soul interacts with electrons. (If that term doesn't exist, electrons will just go on their way as if there weren't any soul at all, and then what's the point?) So any respectable scientist who took this idea seriously would be asking -- what form does that interaction take? Is it local in spacetime? Does the soul respect gauge invariance and Lorentz invariance? Does the soul have a Hamiltonian? Do the interactions preserve unitarity and conservation of information? Nobody ever asks these questions out loud, possibly because of how silly they sound. Once you start asking them, the choice you are faced with becomes clear: either overthrow everything we think we have learned about modern physics, or distrust the stew of religious accounts/unreliable testimony/wishful thinking that makes people believe in the possibility of life after death. It's not a difficult decision, as scientific theory-choice goes. We don't choose theories in a vacuum. We are allowed -- indeed, required -- to ask how claims about how the world works fit in with other things we know about how the world works. I've been talking here like a particle physicist, but there's an analogous line of reasoning that would come from evolutionary biology. Presumably amino acids and proteins don't have souls that persist after death. What about viruses or bacteria? 
Where along the chain of evolution from our monocellular ancestors to today did organisms stop being described purely as atoms interacting through gravity and electromagnetism, and develop an immaterial immortal soul? There's no reason to be agnostic about ideas that are dramatically incompatible with everything we know about modern science. Once we get over any reluctance to face reality on this issue, we can get down to the much more interesting questions of how human beings and consciousness really work.
https://www.interaliamag.org/articles/philip-nova-can-consciousness-continue-after-death-a-neuroscientific-perspective/
Can consciousness continue after death? A Neuroscientific ...
Can consciousness continue after death? A Neuroscientific Perspective

What happens when we die? Unless we accept a religious explanation, the only remaining possibility seems to be the annihilation of consciousness. But another possibility is consistent with evidence from neuroscience. Our brains easily form predictive models of the behavior of close friends and loved ones, which emulate these other selves much as advanced computers can emulate other electronic devices. Whether these emulations are conscious is an open question, but there is preliminary evidence from several sources that the brain can host multiple independent spheres of consciousness. This opens up the possibility of survival through the medium of other brains.

What happens when we die? Some are lucky enough to have total conviction that a pleasant afterlife awaits — but most of us harbor doubts, which can curdle into profound terror. There are at least two reasons we might fear death. One is that it deprives us of being able to carry out our life's work, which can encompass everything from creating works of art to traveling the world to raising our children. This, at least, we can do something about: we can put plans in place to carry this work forward even after we're gone. But the second thing death takes from us is our capacity to experience, or to be conscious. There seems to be no hope that after we die, our consciousness will continue. What to do? We can try to ignore death, but it has a way of forcing itself upon us. For me, my first real brush with death was watching my grandpa be swallowed up by Alzheimer's disease. This led me to pursue my PhD in neuroscience in hope of finding a cure. At the same time, I had an early-life crisis. I was struggling to find my place in the world and wanted to understand what it all meant. Since I wanted my beliefs to be based on evidence, I didn't find answers offered by traditional faiths satisfying.
But as I learned more about the brain, I came to a surprising conclusion. We are used to thinking about biological death — when the heart stops beating and the brain stops firing — as the end of us. But there are cases where these do not coincide. Alzheimer’s, for example, wipes our memories and destroys our essential selves, even if our bodies linger on for a while. But I also began to explore an even stranger possibility. The opposite might also be true: what if biological death is not the end of the self or of our conscious experiences? I reached this conclusion strictly through my understanding of how the mind works, without any metaphysical concepts: no souls required. In this article, I will make the case that a form of afterlife, which bears a strong resemblance to reincarnation, is compatible with a scientific worldview. To readers who believe in the existence of the soul, this entire essay would seem like an exercise in futility. If you’re religious, or open to the possibility, then the answer is simple: souls are attached to our bodies during life, survive after death, and carry us into the Great Beyond. But I don’t think this is true: even if souls do exist, they are not us in the ways that matter most. Believers in traditional afterlives think that they will continue to have conscious experiences after death via their souls; this means that the soul is conscious, or supports consciousness. It isn’t enough that the soul exists after death and has experiences; for it to really be your afterlife, the soul has to be you and have your experiences. What this means is difficult to pin down, but I’d suggest that it has to have the same kind of mental life that you do. It has to think the way you do.1 Why does your soul have to share your mental life in order to really count as a continuation of you? Intuitively, we tend to think that the essence of a person is more closely tied to their mind and personality than to their body. 
Think of a body-swap movie like Freaky Friday; in the story, the mother's mind swaps into her daughter's body, and vice-versa. We accept, as viewers, that after the swap the person who looks like the daughter is really the mom, because that's where her mind is. To be you after death, the soul must have your mind. Ideally it would share your memories. Memories play a special role in making us who we are2, because they are the raw ingredients we use to construct our life stories. If a soul entered into the afterlife with no memory of the life that had come before, whether that life would really be a continuation of yours would be in doubt; it would have the same connection to you that you have to the protagonist of an unremembered dream. If a disembodied soul is to truly be a continuation of your life, it has to have experiences in common with yours, which should include your memories and ways of thinking and perceiving the world.

The trouble is, all of those are mental phenomena, and the simplest explanation3 of mental phenomena is that they are produced in your brain. How do we know this? First, the activity of the brain is extremely well-correlated with mental events. Techniques like EEG and fMRI allow us to measure the activity of the brain, and we can see it buzzing with activity when study participants perform mental tasks, such as doing mental math, picturing objects in the imagination4, recalling an emotional memory5, or learning a new language6. Crucially, these patterns of activity are stable — thinking the same sort of thought will reliably be accompanied by predictable patterns of electrical activity in brain cells. The brain activity corresponds to the mental activity. Correlation is not causation, but we also know that making physical changes to the brain leads, predictably, to changes in thought and experience.
This is why humans enjoy taking drugs, from caffeine to alcohol to more exotic substances: introducing certain molecules to the brain changes how we think, feel, and see the world. Even mystical experiences, such as the sense that one is in the presence of a mysterious greater power, can be caused by consuming psychedelic drugs.7 All evidence points to memories being stored in the brain. Scientists have even transferred a memory from one mouse to another by teaching the first mouse a path through a maze, recording a pattern of brain cell activity as the mouse thinks about the task, and then replaying that pattern in another mouse's brain with electrical stimulation; the second mouse then "remembers" how to solve the maze, even though she has never seen it before.8 We also know that diminishing the activity of the brain, whether through injury or anesthesia, causes a diminution and eventual loss of consciousness; and people who sustain injuries to certain parts of the brain can even lose whole categories of experience, such as the ability to perceive faces9 or to notice objects to one's left.10 Taken together, we can conclude: conscious experiences happen in living, functioning brains; and functioning brains are required to produce conscious experiences. When I, as a scientist, say that the simplest (and likeliest) explanation of consciousness is that it's produced by brain activity, I'm sometimes accused of being closed-minded. Since we don't know everything about how consciousness works, why am I so confident in dismissing the role that souls might play? If there are gaps in our understanding of how the brain works (as there certainly are), how can I rule out that consciousness might persist after death, via the soul? It's true that we understand very little about how consciousness works.
We can point to patterns of electrical activity in the brain that go on during thinking, but it isn’t obvious why it feels the way it does to have those kinds of mental experiences. Why does it feel like anything at all when neurons send electrical signals to one another, while it (presumably) doesn’t feel like something to the transistors in my iPhone? Why does a certain pattern of electrical activity cause all the experiences — pleasure, pain, sweet, bitter, blue, red, happy, sad — that create the feeling of life? I’ll be the first to admit that I don’t know why a functioning brain produces the feeling of being alive. But we have good reason to believe that the brain is an essential ingredient, and no reason to think that consciousness can arise in other places. This could change as we develop more sophisticated artificial intelligence, which may become conscious if set up in the right way. And I also expect that there are fundamental things about the universe that we don’t understand and have yet to discover. It may turn out that souls or other metaphysical things do exist and are involved in consciousness in some way: maybe, for example, the universe is pervaded by an invisible, unobservable mind field that has to interact with the brain to make it conscious. These things are not (currently) the simplest possible explanations, but as we learn more about the world around us, they may turn out to be right. But that would still leave a soul-based afterlife in doubt. Remember, we have every reason to believe that consciousness requires a functioning brain or similar information-processing system: mental events have accompanying patterns of brain activity and changing the physical brain (with drugs or injury) causes changes in mental life. All that stops when you die, and there’s no reason to think that even if a soul exists, it could sustain you-ness on its own. 
Memories can’t even survive a night of heavy drinking — how plausible is it, really, that they could survive the complete destruction of your physical brain? A brain injury can change your personality — but death leaves it intact? Some might object that near-death experiences (NDEs) prove that conscious experiences after death are possible, but there is no evidence that these NDEs are anything more than hallucinations in almost-but-not-quite-dead brains. NDEs can share some features across people11 (with reports of bright lights, seeing loved ones, and so on), but this consistency is somewhat overstated and certainly not universal,12 and anyway this does not disprove that NDEs are hallucinations — drugs like LSD cause similar hallucinations across many people, too.13 Some researchers have tried to prove that the “out of body” experiences reported during NDEs are real perceptions by a soul that is literally separated from the body, by hiding objects in high places of operating rooms that would be out of sight for the body on the table, but visible to a soul that really was floating up above.14 Unsurprisingly, these studies have failed; if one succeeds, I’ll be happy to change my mind, but until then, I see no reason to believe that souls carry us into the Great Beyond. Everything that makes us who we are is in the brain; even if the soul exists, once it separates from your brain, it would no longer be you. If a soul won’t save us, are we doomed at death? Not necessarily. I think it’s possible — even likely — that death will not be the end of experience. As I’ve argued above, your mental life is the product of a particular pattern of electrical activity in your brain. It’s an activity — something your brain does. The brain is like an enormous musical instrument, with trillions of tiny “strings” (the synapses, or connections between brain cells) buzzing at particular frequencies. The “music” produced by this staggeringly complex instrument is what makes you you. 
Your consciousness is a symphony of brain activity. Music is a series of notes, along with temporal relationships between notes: the order the notes are played, the pauses between them, the length each is held, and so on. That’s the information, or structured data, that defines music. And because music is information, it can be stored and transmitted in other forms. A live performance can be recorded on magnetic tape, digitized to 1s and 0s, and transcribed to sheet music; in all cases, as long as the recording is faithful, the song remains the same. This is characteristic of information: it can be stored and transmitted in many different forms, while still containing the same essential data. Information is resilient. It needs some physical medium to be stored — whether paper or computer chips or human brains — but it can coexist in many different storage media at once, and even if some of those are lost or destroyed, the information exists as long as any of its copies do. If conscious experience is informational, then it can be copied; and if it can be copied, then it can survive the death of its original vessel. It needs to be physically instantiated somewhere; it can’t just exist as a ghostly soul. But its physical form doesn’t need to be the original: the digital copy of “Macbeth” on a Kindle is the same as the one Shakespeare wrote in terms of its informational content, despite never having touched his quill. Why is it plausible to believe that the mind is information, like music or language? First, the contents of the mind can be described in words and understood by others. This means that our mental states can be transformed into another form of information and transferred, just as the information in music can be expressed as vibrations in the air, notes on a page, or bits and bytes on a computer. Second, the things that go on in our mind represent events in the world; they encode data about what we’re experiencing. 
Biting into an unfamiliar food and discovering that it tastes sweet, and a bit peppery, teaches you something about that food. And finally, these experiences are the result of information processing in the brain. Cells in the brain are arranged into circuits, and the computations these circuits perform are what produce the events we experience in our minds. What does it mean to experience something in your mind — to hear a new song, for example, and to be aware of hearing that song? To have a conscious experience is just to recognize that your brain's circuitry is in a particular state. Hearing music is being aware that a certain pattern of vibrations in the air is causing a certain response in your eardrum, which leads to a certain pattern of electrical activity in the brain.15 Our intuitions also tell us that the self is a kind of information, which can be transferred, stored, and shared. We readily accept fictional stories in which one character's mental contents are transferred into another person's body or into a robot, and we expect that the recipient of this transfer is the "real" person. The reason it is probably not obvious to most people that the self is a kind of information, just like music or language, is that we don't have the technology yet to copy and store selves. Information theory, the branch of science that studies and quantifies information and its properties, was invented only after the advent of electronic communication. Before messages could be sent over telegraph wires, it wasn't obvious to people that language and sound were both forms of information, with shared properties. "Information" didn't exist as a human concept until about 1920. I suspect that, if it becomes possible to store backup copies of the mind in a computer, the idea that you and I are just another kind of information will suddenly become common sense. If the mind is informational, then it can be copied, and these copies can outlive the original — not as imposters, but as the real deal.
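The claim that information stays the same across storage media can be made concrete with a small sketch (my own toy illustration, not anything from the essay): the same line of text held as a string, as raw bytes, and as a base64-encoded transmission decodes back to identical content with an identical fingerprint.

```python
import base64
import hashlib

# The same piece of information held in three different "media".
original = "To be, or not to be, that is the question"

as_bytes = original.encode("utf-8")     # e.g. stored on disk
as_base64 = base64.b64encode(as_bytes)  # e.g. sent over a network

# Recover the text from each medium.
from_bytes = as_bytes.decode("utf-8")
from_base64 = base64.b64decode(as_base64).decode("utf-8")

# Every faithful copy is the same information: identical content
# and an identical fingerprint, regardless of how it was stored.
assert original == from_bytes == from_base64
fingerprint = hashlib.sha256(from_base64.encode("utf-8")).hexdigest()
print(fingerprint == hashlib.sha256(original.encode("utf-8")).hexdigest())
```

The point is not the particular encodings, which are arbitrary, but that "the same song" survives any number of faithful transformations.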
Every copy is as much the real thing as any other, as long as the relevant information is the same (for Hamlet, words; for us, mental events). But for this to give us any hope of an afterlife, we need to have some reason to believe that copies of our mind actually exist. Someday, technology may provide a solution, and I expect that it will become possible to copy the mind into a computer. Many of us reading this, including myself, were probably born too early for such technology to save us. But we are not doomed. Each of us has the capacity to copy one another’s minds, and we do it regularly. You just probably don’t think about it as “copying”; you call it “getting to know each other” — or maybe “empathy” or “love”. Think of someone you know very intimately — a best friend, romantic partner, or close family member. You carry lots of memories of this person, and you can also easily anticipate their reaction to a new situation: “She wouldn’t want her birthday to be a surprise — she hates surprises.” You can probably hold an imaginary conversation with this person in your mind, because you know the contours of their personality: their humor, their insecurities, their sources of pride, their hopes for the future. You may have even had dreams where this person played a co-starring role and observed them in some truly fantastical scenarios that never happened in waking life. What you’re doing, in all of these cases, is predicting how this person would behave, based on the rich set of memories you have together. You’re doing what scientists call extrapolation: drawing on past data to predict the future. This means that you have an internal model of their personality — or in other words, a copy of their mind. You just didn’t know you were doing it. All the time you were hanging out, sharing laughs, and making memories, your brain was surreptitiously storing a copy of them away for later. 
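The kind of extrapolation described above can be sketched in a few lines of code (a toy model with invented situations and reactions, purely illustrative): store remembered (situation, reaction) pairs, then predict the reaction to a new situation from the most similar remembered one.

```python
# A toy "mental model" of a friend: remembered situations and reactions.
memories = {
    "surprise party": "hates it",
    "quiet dinner": "loves it",
    "last-minute plan change": "gets anxious",
}

def predict_reaction(situation: str) -> str:
    """Extrapolate: find the remembered situation sharing the most
    words with the new one, and predict the same reaction."""
    def overlap(remembered: str) -> int:
        return len(set(remembered.split()) & set(situation.split()))
    best = max(memories, key=overlap)
    return memories[best]

# "She wouldn't want her birthday to be a surprise; she hates surprises."
print(predict_reaction("surprise birthday party"))  # hates it
```

A real brain's model of a friend is vastly richer than a lookup table, of course; the sketch only shows the shape of the computation: past data in, predicted behavior out.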
Human brains even contain a special class of neurons, called mirror neurons, that are specialized for responding to the behavior of others and play a key role in empathy.[1] It's an old cliché that when someone dies, they live on in the memories of their loved ones. Maybe this is not just a figure of speech. When our loved ones die, the mental model of them that we've carefully built up over years of close study doesn't go away. They continue to exist via neurological emulation. Just as Hamlet still exists, even if Shakespeare's original copy is lost, your loved ones exist so long as their copies do. It may seem hard to believe that a mental copy of you could represent genuine survival from your perspective after your own death. So, let's consider a few possible objections. First, to be genuine survival, the copy would have to be conscious; is this believable? We tend to think that each brain contains only one conscious self, which would make it impossible for these models of our loved ones to be conscious in their own right. But this way of looking at the brain is probably wrong. Evidence from "split-brain" patients — people who have the two halves of their brain surgically separated to treat severe epilepsy — suggests that each half of the brain develops a separate sphere of consciousness, at least under some circumstances.16 One half of the brain tends to control speech and the right side of the body, while the other half of the brain controls the left side; and in split-brain patients, these two sides can behave independently from (and sometimes in conflict with) each other.
There is some evidence that the characters in dreams are independent spheres of consciousness: the side characters in your dreams can withhold information from you (or vice-versa), and they can behave in ways you don’t expect, suggesting that they have a semi-independent mind despite sharing your brain.17 Finally, practitioners report being able to deliberately create other conscious personalities through long-term meditation-like techniques.18 These sources of evidence are still preliminary, and we don’t yet know enough about how consciousness works to be able to definitively say whether multiple truly independent minds can exist in a single brain, or whether this can happen under normal circumstances. We know, for example, that some parts of the brain contribute to conscious experience (the frontal cortex, for example) and that others do not (the cerebellum), but not why.19 So it may turn out that the mental models we have of other people are not conscious, which would make my proposal wrong. But I think it’s reasonable to believe that they are — or at least as reasonable to believe this as not — for the simple reason that we know that our mental models of other people can participate in our own conscious experiences. When we hold an imaginary dialogue with a loved one, we are reconstructing the way they think, and being aware of it. They become part of us. It at least feels as if they are conscious, and with consciousness the feeling is the whole point. Another objection to whether this is genuine survival is that, even if our models of other people are conscious, they might not be the original person’s consciousness. They may be “just” a copy. In other words, if I die tomorrow, and my loved ones carry mental models of me, can I really expect to continue having experiences inside their brains? This can seem implausible; after all, those models of me exist right now, and I don’t feel their experiences. 
So why would I start feeling them after my original body dies? The philosopher Derek Parfit illustrates this dilemma vividly with his story about a teleporter20 (a similar idea is explored in the movie The Prestige). Parfit’s story is set in the near future, and teleporters are a common way to travel quickly. To use a teleporter you enter a special booth, and the booth scans and records your body (down to the position of every single atom). It then vaporizes you, destroying your original body; and simultaneously, a booth on the other side of the world receives the scan and reconstructs an exact copy of you. Parfit’s story concerns a man who is initially hesitant to travel this way, fearing that teleporting is not really travel, but rather dying and being replaced by a clone. But he eventually decides to do it anyway, and to his amazement, when he steps out of the booth on the other side of the world, he still feels like himself. He begins to travel this way often. But then, one day, the machine malfunctions; he enters the booth, waits, and… nothing. The technician opens the door, and tells him some bad news: the teleporter malfunctioned. It did scan his body, and it did create a copy of him on the other side of the world. But the machine failed to destroy his original body, so now there are two of him. And even worse, the scan process involves a heavy dose of radiation, so the man’s original self will die of cancer in one week. The man is horrified. He calls his clone on the other side of the world. The clone assures him that, even when the original dies, it won’t really be death, because the clone is an exact copy that can carry on his life. But while the man accepts that his clone can play the role of him, doing his work and raising his kids, the clone won’t really be him; in a week, he despairs, his life will be over. Since mental copies of us exist right now in other people’s brains, dying is exactly like what the man in Parfit’s story faces. 
If the man is right that the clone isn’t really him, then he (and we) are right to fear death; if the man is wrong, then death is really nothing to fear, since we should expect experiences to continue somewhere else. Both Parfit and I believe that the man is wrong. Parfit gets there via an elaborate argument about how personal identity is an illusion (so whether some future person is “you” is a meaningless question), but I think we can take a simpler route. We are used to thinking of ourselves as physical things, where each individual is unique. Two cars may look alike — may even be the same model — but are not the same car. If I get into a car that merely looks like mine, I’m a thief. But information does not work this way. Two copies of Hamlet are both the exact same story. Even if one copy was printed first, the second copy is not an “imposter”, as long as it has the same words.21 If we believe that what really matters is our minds, and that our minds are information, then it doesn’t matter that another version of your mind is “just” a copy. Any copy of a bit of information is the real thing. This introduces one final problem: any mental copy we can make of another person is incomplete. No matter how well we think we know someone else, we can’t know everything about them. So, while a full copy of your mind would be the real thing, a partial copy is just that — only partially you. So maybe it’s fairer to say that we do live on in other people, but only partly. That sounds just fine to me. In fact, it’s exactly what would happen to you if you never died — given enough time. Compare yourself to how you were 10 years ago. Are you exactly the same? Of course not. Your personality has changed in ways big and small, deliberate and circumstantial. You’ve learned new things and forgotten others. You are only a partial copy of your older self; that person survives as you, but only partly. 
You probably don’t find that especially troubling; quite the opposite, since we actively seek personal change. You will survive only partially in other people once you die, but you already survive only partially from one year to the next. In other words, what happens when you die is remarkably similar to what happens when you continue to live. In this essay, I’ve argued that the self is a form of information that can be copied and shared; and that this actually does occur as a natural extension of our empathy for one another. Each of us carries around a whole host of characters in our brains, to varying degrees of fidelity. When we die, these copies of us continue to exist, and represent genuine (if partial) survival. All of this amounts to a kind of afterlife, albeit an unusual one. It does not involve any metaphysical concepts; it remains agnostic as to whether a soul exists. Even if you do have a soul that goes off to Heaven, it can’t get rid of the copies that exist down here; it would just be one copy among many. This viewpoint also stays silent about how we got here, or what the purpose of all of this is (if any). Traditional religions typically acknowledge the existence of an afterlife, but usually stay silent on what, exactly, we should expect once we get there. The Bible has little to say about Hell, and even less about Heaven. Can we say more about this strange afterlife that awaits us? I don’t know exactly what it will feel like to be distributed over many other brains, but I suspect it will feel a lot like living does now. Each of us carries around many other people in our heads; as we come to know others intimately, we don’t just learn about them — we become more like them. We are a blend of the characteristics we picked up from other people. Even beloved fictional characters can leave their mark. Each of us is already something of an amalgam. We should expect more of that. 
One disadvantage that this afterlife has over religious alternatives is that the vessel we transition to is fragile: other brains host us for a while, but they too will die. But just as they carried us, so will others carry them; and the ways that we informed our successors' lives will inform still others. Over time we become more diffuse, but do not vanish. Human existence flows on, one into the next, like a river.

In some ways, this vision of the afterlife has a lot in common with Buddhist or Hindu reincarnation. Unlike in Hinduism there is no soul that bridges the gap; and unlike in Buddhism there is no escape, no final nirvana, no enlightenment to be gained. There is just more and more and more life: messy, imperfect, and beautiful. The mechanism that keeps the great river of existence flowing is love. We don't need to do anything to preserve the people we care about, other than to do what we're already inclined to do: to care for one another. This afterlife imposes no judgments or cosmic laws on us. But it does suggest two bits of advice. We should live as authentically as we can — the more accurately others know us, the better they can preserve us. And we should do our best to really know the people we care about, so they can become part of us more fully.

……………………………

Endnotes

1. Identifying you with your mind has some odd implications. It means, for example, that you were never a fetus (because a fetus has no mental life); your body grew out of the fetus's body, but it wasn't really you. Likewise, if you suffer a tragic accident that gives you amnesia, that won't really be you anymore; you will be dead, and while your body will still be around, it won't be any concern of yours. Strangest of all (and most relevant to where we're going in this essay), if your mind could be overwritten with a perfect replica of someone else's, then you would just be that person. Plenty of delusional patients have claimed to be Napoleon, but if one day you woke up with all of Napoleon's memories, you wouldn't be delusional if you claimed to be him — you really would be. So defining you as your mind isn't suitable for all cases. It wouldn't be a good basis for making legal claims, because (at least until brain scientists develop a mind-reading machine) it's completely unprovable. Minds are slippery and constantly in flux, which is as fascinating to neuroscientists as it would be frustrating for lawyers; it is much more convenient, from a legal perspective, to define humans as biological beings that start as fetuses, end as corpses, and can't Freaky Friday into one another in between. If I someday suffer from severe dementia, it makes perfect sense for the government to continue to act like I still exist. But to understand subjective experience, the mind is our best bet. If my memories are swapped into another body, that person will feel like me; and we shouldn't contradict people about who they are without having a really good reason to do so.

2. This was first explored by John Locke ("Of Identity and Diversity") and refined over the subsequent centuries by others, including H.P. Grice.

3. Everyone understands intuitively that a simple explanation, involving everyday occurrences, is better than a complicated one involving mysterious occurrences. Let's say that you've adopted a new puppy and come home to find a puddle on the kitchen floor. There are many possible explanations for what happened: a never-before-seen weather event could have caused a cloud to form in your apartment, causing a micro-rainstorm; or the puddle could have been left behind by alien visitors for an unknown purpose. But the simplest explanation is that the puppy, who is not yet house-trained, had an accident. Now, you should be willing to revise your account if this simple explanation turns out to be insufficient: if you have a camera in the living room that shows the puppy never went into the kitchen that day, then a more mysterious explanation may be warranted. But when a simpler explanation does the job, use it.

15. This isn't obvious to us because mental states are opaque: a red object creates a certain pattern of activity in the brain, but the conscious experience we have is of redness, not of having a certain pattern of electrical activity. But this is not unusual. Everything a computer does can be described in binary 1s and 0s, but at all but the lowest level of computer programming, the concepts of binary digits are rarely (if ever) invoked. Your browser "thinks" in terms of HTTP requests and your operating system "knows" about filesystems, not binary. Information can be abstracted.

19. It's hypothesized that anatomical differences play a role, and that (again, for reasons that are not fully understood) "recurrent connections", a.k.a. neuronal circuits that loop back onto themselves, are required for consciousness. Giulio Tononi's Integrated Information Theory proposes an explanation for why this might be, but is still highly speculative. See Tononi, Giulio. "Consciousness, information integration, and the brain." Progress in Brain Research 150 (2005): 109-126.

20. Parfit, D. (1984). Reasons and Persons. OUP Oxford.

21. Information requires a physical instantiation, each of which is a unique object. But the information itself is fungible.

About Philip Nova

Philip Nova is a software engineer at Google and has worked on applications in biotechnology and education technology. He holds a PhD in biomedical science from the University of California, San Francisco, focused on neural mechanisms of memory in Alzheimer's disease. He lives in San Francisco, California.
But the second thing death takes from us is our capacity to experience, or to be conscious. There seems to be no hope that after we die, our consciousness will continue. What to do? We can try to ignore death, but it has a way of forcing itself upon us. For me, my first real brush with death was watching my grandpa be swallowed up by Alzheimer’s disease. This led me to pursue my PhD in neuroscience in hope of finding a cure. At the same time, I had an early-life crisis. I was struggling to find my place in the world and wanted to understand what it all meant. Since I wanted my beliefs to be based on evidence, I didn’t find answers offered by traditional faiths satisfying. But as I learned more about the brain, I came to a surprising conclusion. We are used to thinking about biological death — when the heart stops beating and the brain stops firing — as the end of us. But there are cases where these do not coincide. Alzheimer’s, for example, wipes our memories and destroys our essential selves, even if our bodies linger on for a while. But I also began to explore an even stranger possibility. The opposite might also be true: what if biological death is not the end of the self or of our conscious experiences? I reached this conclusion strictly through my understanding of how the mind works, without any metaphysical concepts: no souls required. In this article, I will make the case that a form of afterlife, which bears a strong resemblance to reincarnation, is compatible with a scientific worldview. To readers who believe in the existence of the soul, this entire essay would seem like an exercise in futility. If you’re religious, or open to the possibility, then the answer is simple: souls are attached to our bodies during life, survive after death, and carry us into the Great Beyond. But I don’t think this is true: even if souls do exist, they are not us in the ways that matter most. 
Believers in traditional afterlives think that they will continue to have conscious experiences after death via their souls; this means that the soul is conscious, or supports consciousness.
yes
Paranormal
Is there life after death?
yes_statement
there is "life" after "death".. "life" continues after "death".. after "death", there is still "life".. "life" persists beyond "death".
https://signsofthetimes.org.au/2022/09/is-there-life-after-death/
Is there life after death? - Signs of the Times
Is there life after death? As a media and communications graduate, I love stories in all their forms, but I’ve always held a special place in my heart for science fiction. Exotic planets, alien races, unique extrapolations of scientific theory and bizarre visions of the future of our world—no other genre captures my imagination in quite the same way. But my love for the genre goes beyond its aesthetic trappings—it’s also deeply rooted in the ideas that science fiction likes to tackle. Perhaps unsurprisingly, considering the future setting that sci-fi often indulges in, it is a genre which aims at interrogating some of the more existential or philosophical questions we face. Take some of my favourites, for example: the Mass Effect and The Expanse series both explore the ways our current political divide may be reproduced in the future—as well as how we may react to an existential threat. Denis Villeneuve’s film Arrival serves not only as an imagining of extra-terrestrial linguistics, but also explores the nature of free will, time and love. Then there’s Ann Leckie’s Imperial Radch trilogy, which examines the ways we define our identity, which in turn is defined by forces outside our will. Two of the more interesting science fiction novels I have read in recent years attempted to tackle the issue of death. The first was the final instalment in the Remembrance of Earth’s Past trilogy: Death’s End, while the second was Rian Hughes’ XX: A Novel, Graphic. In both stories humanity faces interstellar destruction and must find a way to survive. In Death’s End, Cixin Liu describes a cold, uncaring universe where the only way to stave off the inevitable is using science to elevate humanity. In XX, death is an inevitability we cannot prevent—instead, Hughes posits we live on through the collective ideas, dreams and aspirations of mankind that persist long after we are gone. 
Both are fascinating, well-written takes on the issue of death—but I can’t say I view either as providing a fully satisfactory answer to the question. Liu’s version of the universe is harsh and unforgiving—a story that aligns with the work of atheist proponents such as Dawkins or Harris. It’s difficult to find purpose or meaning in such a world. Similarly, Hughes’ idea has a degree of comfort to it—we can continue to exist after we die—but it requires the complete annihilation of self in a way which raises questions about whether “we” really do persist. It’s not surprising though that these authors cannot provide perfect answers to the question—philosophers have long been in the same boat.
The philosophy of death
Philosophers have argued about death for millennia, forming hundreds of different theories and hypotheses. The differing perspectives on death are, unfortunately, too numerous to count—but often fall into a few categories of thought. The first is simple: death is the end of existence and nothing of your self persists beyond it. This is the view widely held by many materialistic atheists, as well as some non-theistic religions. Other schools of thought rely on the belief in a soul: some ephemeral, immaterial essence which comprises your thoughts and feelings and makes you, “you”. The origin—and ultimate fate—of your soul varies from religion to religion. In Buddhism, the soul is reincarnated constantly until it attains nirvana and becomes nothing. There are a variety of potential liberations in Hinduism. Other religions view death as the separation of the body and the soul—wherein your soul “floats away” from your body upon death into the afterlife. This can be seen in the mythologies of ancient Greece or Rome, as well as modern Islam—and even some forms of Christianity. 
The Christian view of death
For many, this may be the common perception of Christianity’s views on life after death—your soul is separated from your body where it either goes straight to heaven if you’ve been good, or straight to hell if you’ve been bad. Think of how often children are comforted when confronting death with the platitude: “Don’t worry, they’re in heaven now.” What’s interesting is that many of these ideas are not supported, or outright contradicted, by the Bible which they claim as their basis. For starters, there’s the soul. When it comes to proving its existence, many use two Bible verses as evidence. The first reads, “The Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life, and man became a living soul”, while the second states, “The dust returns to the ground it came from, and the spirit returns to God who gave it” (Genesis 2:7, Ecclesiastes 12:7, KJV, italics added). Some argue that the Genesis verse provides evidence for the soul being a distinct phenomenon due to the way it is stated as being breathed into the body, separate from the creation of the body. Similarly, Ecclesiastes seems to reinforce this, with the dust/body having one fate, and the soul/spirit having another. Seems straightforward, right? Unfortunately, no. When we look more closely at the original translations of the word soul here, it becomes slightly more complicated. The Hebrew word is ruach, which means “breath, wind or spirit”. Many argue that the use of ruach in these contexts is not referring to a separate aspect of ourselves which makes up the soul, but the “breath” or “gift of life” that God has provided—and which is similarly taken when we die. This is further supported by numerous other Bible verses, including another from Ecclesiastes, which states “the living know that they will die, but the dead know nothing. They have no further reward, and even the memory of them is forgotten. 
Their love, their hate and their jealousy have long since vanished; never again will they have a part in anything that happens under the sun.” (Ecclesiastes 9:5, 6) In this, we can see that the Bible does not support the view that the soul continues into an afterlife when the body expires. Instead, the self is intrinsically tied to the body and when it expires, so do we.
Life after death
This does not mean that death is the end. The theme of death and resurrection is discussed widely throughout the biblical story. Perhaps one of the most revealing examples comes from the apostle Paul who writes in 1 Thessalonians 4:13, 14: “Brothers and sisters, we do not want you to be uninformed about those who sleep in death, so that you do not grieve like the rest of mankind, who have no hope. For we believe that Jesus died and rose again, and so we believe that God will bring with Jesus those who have fallen asleep in him.” In this way, death is described as a sleep—where we are completely unaware of the events of the world and disconnected from them, just as Ecclesiastes 9 describes. That is not to say there is no heaven or hell, however. 1 Thessalonians 4:16, 17 describes what happens at the second coming of Jesus. “For the Lord himself will come down from Heaven, with a loud command, with the voice of the archangel and with the trumpet call of God, and the dead in Christ will rise first. After that, we who are still alive and are left will be caught up together with them in the clouds to meet the Lord in the air. And so we will be with the Lord forever.” When Christ returns, His followers will be resurrected, taken to be with Him and have eternal life. But what about those who don’t follow Christ? While they are also described as being resurrected, their fate initially seems more dire. Here’s how Revelation 20:12–15 describes it: “And I saw the dead, great and small, standing before the throne, and books were opened. Another book was opened, which is the book of life. 
The dead were judged according to what they had done as recorded in the books . . . Then death and Hades were thrown into the lake of fire. The lake of fire is the second death. Anyone whose name was not found written in the book of life was thrown into the lake of fire.” Does this mean the wicked will be tormented forever in a hell ruled by the devil? No. For starters, Revelation 20:10 states that the devil is included in those who are thrown in the lake. In the end, he is not the ruler of the wicked, but shares in their fate. More important, though, are eight words in verse 14 that recontextualise our entire understanding of what “hell” is—and undermine the myth that has persisted for centuries: “The lake of fire is the second death.” Hell is not a place where those who disobey God suffer eternal agony. Not only does this not line up with the idea of a loving God, but here it is explicitly rejected. Eternal torment instead refers to the way in which the decision to reject God carries eternal consequences. When people reject God, they become eternally disconnected from Him. As God is the One who gives life, eternal disconnection from Him results in eternal death. The punishment many imagine with fire and pitchforks is instead eternal nothingness. In this way, we can see that death and resurrection as presented in the Bible are radically different to the ways they have been presented in popular culture. While many struggle to reconcile their impression of hell with a loving and kind God, the reality is much easier to understand. God gives everybody the freedom to choose their path in life. Those who accept Jesus as their Saviour will receive the gift of eternal life. In the same way, those who reject Him will not be forced to suffer. Like those who believe, their suffering will also come to an end; though sadly, this will also be their eternal end. Knowing this, there is only one question that remains: Which end will you choose, an eternal life or an eternal end? 
Ryan Stanton is a PhD graduate of media and communications at the University of Sydney. He’s a passionate follower of Jesus, avid board gamer and admirer of science fiction. Signs of the Times magazine - Aus/NZ
This is further supported by numerous other Bible verses, including another from Ecclesiastes, which states “the living know that they will die, but the dead know nothing. They have no further reward, and even the memory of them is forgotten. Their love, their hate and their jealousy have long since vanished; never again will they have a part in anything that happens under the sun.” (Ecclesiastes 9:5, 6) In this, we can see that the Bible does not support the view that the soul continues into an afterlife when the body expires. Instead, the self is intrinsically tied to the body and when it expires, so do we. Life after death This does not mean that death is the end. The theme of death and resurrection is discussed widely throughout the biblical story. Perhaps one of the most revealing examples comes from the apostle Paul who writes in 1 Thessalonians 4:13,14: “Brothers and sisters, we do not want you to be uninformed about those who sleep in death, so that you do not grieve like the rest of mankind, who have no hope. For we believe that Jesus died and rose again, and so we believe that God will bring with Jesus those who have fallen asleep in him.” In this way, death is described as a sleep—where we are completely unaware of the events of the world and disconnected from them, just as Ecclesiastes 9 describes. That is not to say there is no heaven or hell, however. 1 Thessalonians 4:16, 17 describes what happens at the second coming of Jesus. “For the Lord himself will come down from Heaven, with a loud command, with the voice of the archangel and with the trumpet call of God, and the dead in Christ will rise first. After that, we who are still alive and are left will be caught up together with them in the clouds to meet the Lord in the air. And so we will be with the Lord forever.”
yes
Paranormal
Is there life after death?
no_statement
there is no "life" after "death".. "life" does not exist after "death".. after "death", there is no "life".. "life" ceases to exist after "death".
https://www.theatlantic.com/daily-dish/archive/2010/05/what-do-atheists-think-of-death/187003/
What Do Atheists Think Of Death? - The Atlantic
What Do Atheists Think Of Death? This post certainly struck a nerve, particularly among atheist readers. One writes: Sharing Kevin's sense of never having felt the need to believe in God, perhaps my answer will be of interest. I have always felt that when I die, I am dead and gone, my conscious life will end, my interactions with others will end, and I will be simply GONE. I don't know what causes consciousness (call it spirit, call it soul, I don't mean to pick sides with my words), but I expect that it will end. My afterlife will be in the memories of those I knew, those who loved me, those who carry me on in their hearts. I, myself, cease to exist. This gives me a beautiful, shockingly beautiful sense of the Now. Being in the present, the here and now, is the ultimate reward of life. I am constantly gobsmacked by the minutiae of life; I stand in awe of the things around me right fucking now. There's no reward, no judgment, no heaven, no hell. I live right fucking now. Another writes: I think that when I die I'll cease to exist, and in some ways I'm happy about that. Life is hard work. Life is good, worthy work that I'm proud of and that makes me feel good, for the most part, but even though I'll probably be sad to die (and I'd hate to think I was about to die any time soon), I'm still glad, in principle, that some day life will cease, and my burdens will dissolve with my joys. I don't want to live forever. Another: Speaking as someone who shares Kevin's view on this topic, what we think happens when we die is that we die, only our contributions to the world we are departing will live on, and that's all there is to it. We're not going to be around to experience it afterwards. Would it be nice not to die? Maybe, certainly sounds interesting (although I could see myself wishing fervently for death to put me out of my boredom when I turned a million, and considering it an inhuman and sadistic torment to deny that to me...). 
But if we wish to live in a reality based world we need to acknowledge that there is no rational reason to believe this to be true and it is a monumental case of group wishful thinking to put it politely. People are afraid of dying, they don't want to deal with it, and believing they'll never have to *really* deal with it because they're not going to *really* die is just easier. How do I feel about it? Meh. I accepted my mortality (and that of everyone else I know) a long time ago, I dealt with it, and now I rarely give it much thought unless circumstances call it to my attention. I have better things to do with my life than obsessing over a time when it's going to be over. And no, that is not me declaring how incredibly brave and stoic in the face of death that makes atheists, I don't imagine I'd be any less scared facing the imminent ending of my life when the time comes than your average person... it is simply not a concern of mine now. Wringing my hands over it would be about as pointless as wailing over the gravitational constant of the universe not having a different value more to my liking. Reality is what it is. And reality is that people aren't immortal. Yet another: "I wonder what Kevin thinks happens to him when he dies?" I think the fact that you have to ask this question at all says a lot about how the fear of death is inextricably tied to a belief in higher powers in the minds of theists. To one such as I, who shares Kevin's views, the answer is rather obvious and intuitive. Nothing is going to happen to him when he dies, because there won't be a 'him' for anything to happen to. As for your follow up question- "And how does he feel about that - not just emotionally but existentially?"- I can only speak for myself, but again, the fact that you feel the need to ask this question says a lot about the source of your faith. 
Forgive me if this sounds overly judgmental, but to me terms like "faith" and "spirituality" are just shorthand for an individual's inability to cope with the concept of oblivion. Why must one feel anything particular about it in the first place? I am. One day, I will not be. This doesn't bother me and I don't understand the need to waste the precious gift of sentience agonizing about such things. I recognize that some people can't shrug off the idea of not existing in some form. Take my husband for instance. He has an overdeveloped fear of oblivion but can't bring himself to believe in fairy tales. He takes comfort in philosophy. In the words of (probably) Marcus Aurelius: ‘Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.’ Another: As an atheist who has just recently had two friends die, I can say that not all atheists are as lucky as Kevin. For me, the fear of death is far and away the most immediate and challenging aspect of my atheism. Death affects me in a profound way, because I know it's not a matter of belief at this point, for me that this life is all we get. As much as I would like to believe platitudes like "He's in a better place now" and "I know he's smiling down on us," I see them for what they are, and what they represent: attempts to avoid facing the reality of death. So if you truly believe that "Facing [death] is our life's task," may I suggest you try atheism? Religion is how people AVOID facing it. It's the common thread in all religions, from the most ancient to the most modern: "When we die, it's not really the end. So don't worry so much." But for most religions it doesn't stop there. 
Most of them teach that life after death will not only exist, but it'll be way more awesome than stupid ol' life with all its trials and tribulations. A choir of angels! Forty virgins! Nirvana! All your old friends, your family, even Mittens and Fido will be there to give you a big hug and welcome you to eternity! Eternity. Living forever. Whatever philosophical contortions you want to twist yourself through, if you believe in eternity, you are not facing death. Atheists face death. We have to come to grips with the finality of our end without the aid of any comforting fairy tales. It's not easy, but neither is life. Atheists and theists can agree on that, at least. We just don't think death is going to be any different. A reader asks: When atheists claim that religion is just a fanciful way to deal with the unpleasant inevitability of death, the faithful often decry such a reduction as unfair. And yet your main response to Kevin Drum's unapologetic description of his lack of religious impulse is to ask "Then what do you think happens when we die?" As another of those "untragic" atheists, I can only scratch my head as to why my answer to that question would seem to be revelatory to you... unless, that is, the avoidance of death offered by religion is its key selling point. So which is it? And how does soothing my fears (be they rational or irrational) make something like religion more likely to be True? A final thought: I believe we have a "soul," but not in the sense of a spiritual being apart from our bodies, but in the sense of a consciousness that transcends our physical limitations. It is, first, the essence of our beings, the thing that connects the person we are today to the person we have been at all the stages of our lives. The boy I once was is in some sense the man I am today and the old man I will be, and I think this persistence of being - this connective line, this inner self - is part of what I mean by "soul." 
In addition, by "soul" I also mean our ability to contemplate time and space and perhaps a sense of harmony far outside our own physicality. And finally I mean a higher morality - the part of our beings that makes us not only human, and thus animal, but also humane, and in that sense spiritual beings with a higher morality than self-interest and even survival. This feeling no doubt has a physical cause as well, but at some level our higher-processing brains and our experiences and learning give us feelings that seem unconnected to physical sensation. And it is here where the best of humankind resides and expresses itself. When I worry about my own death, it is not death that I really worry about, but the manner of death, and the lead up to it of decline, decrepitude, helplessness, pain. (If I knew I would be fairly healthy until the end, and then die peacefully in my sleep, much of my anxiety would be gone.) Death itself does not scare me. I remember undergoing general surgery for some minor problem, and was given some anesthetic drip before being wheeled into the operating room. At one moment I was talking to the surgeon, and the very next moment - a nanosecond later - I was in the recovery room. I had no awareness of a dimming of consciousness. One instant I was there, an instant later I was gone. This, it seems to me, is what death is like, only there's no reawakening. Consciousness ends, and along with it any awareness and sensation. There is not even a feeling of absence. Another way I look at it is that life after I am dead will be just like life before I was born. I don't regret not being here sooner than I was, and I had no sensation of existence before my birth. So it will be after my death. The only death that really scares me is the death of those I love, far more than my own. This is not to say that I don't want to live as long as possible, so long as I can function in some way and not be an excessive burden. 
And this desire, it seems to me, is itself strong proof that there is no afterlife. Freud's thanatos notwithstanding, even our souls hunger for a concrete existence. We may long for transcendence, but it is a transcendence in our lives, not in some desire to be totally spiritual beings, removed forever from connection to the real. At least not for long - that way lies madness. Sooner or later, we want to reconnect to the world. And we constantly hunger for the visible world, the streams of sensations that feed our consciousness and being. It is the very opposite of an afterlife idealized by major religions. And that leads me to my final point (probably a startling one, from your point of view): I think life after death would be stupid. By this I absolutely do not mean that it is stupid to believe in an afterlife or to desire it (though such a desire may be a result of naivete, irrationality, or great pain). I mean that such an existence would itself be stupid. It would be devoid of anything that gives our intelligence any significance, and our current lives any meaning. It would not in any sense be human. I remember telling my brother that if I died and there was a God and he told me that he indeed created the world in six days, I would be extremely disappointed, for I find the world as it is far more miraculous and awe-inspiring than its biblical description. Similarly, a life after death devoid of physicality would mean very little to me, and I don't desire it. Perhaps it would matter to whatever essence or spirit survived me, but to the living human being I am, this world - you and me and everyone else - is all that really matters. Again, Andrew, what do you think happens when you die? Your body and individuality recreated in some recognizable way, with friends long gone again available to you? Andrew Sullivan as a disembodied spirit, glowing because you - or it - are in the presence of Jesus? You must have some view. Share it. 
And tell us if you really prefer that afterlife, to all the pain and glory of this real one. One of the starkest things I remember from my one afternoon spent conversing with the subject of my doctoral dissertation, Michael Oakeshott, was his response to a question I posed him about the notion of salvation and the after-life (he died the following year). It was "who would want to be saved"? Oakeshott's last unwritten essay (I suspect he had been writing it in his head much of his life) was going to be about a conception of salvation which had nothing whatsoever to do with the future. I have two intuitions about what happens when I die. The first is that I cannot know in any way for sure; and I surely know that whatever heaven is, it is so beyond our human understanding that it is perhaps better not to try an answer. The second is that I will continue to exist in my essence but more firmly and completely enveloped in the love and expanse of God, as revealed primarily in the life of Jesus. I guess you can believe there is nothing there (atheism/agnosticism); or that there is something there into which everything dissolves - human and divine - which is a kind of non-material unity of love and compassion (Buddhism). Faith gives me the hope of the Christian alternative to both, that we will remain who we are, the unique objects of God's love, and yet part of such a miraculous sea of divine love, we will be both ourselves and yet far less limited by ourselves, freed from the sin that keeps us from knowing one another, forgiving one another and loving one another and loving God as parent, child and spirit. My most indelible connection with death was being by one of my closest friends of my own age as he faced his own mortality. I was there at the hour of his death; and I was there when he was fully and healthily alive; and I was there when he faced his death, day by day, for two years, until he died at the age of 31 in his mother's arms. 
One memory, related in Love Undetectable, came when Patrick, toward the end of his life, was enduring terrible sweats. In one of the lulls in which his body seemed to rest, I lay down next to him on the bed and asked the hardest question: I asked him what he thought death actually was. He was shivering and we spooned, that candlewick bedspread holding our bodies inches apart. I remember feeling his bones beneath it for the first time, the skeleton beginning to shape the once firm, rosy flesh of his body. "I don't know," he said. "I don't really know. Sometimes it seems like some blackness coming toward me. And sometimes it doesn't feel like anything." He paused and I felt unqualified to add anything. So we lay there for a while in silence, staring at the ceiling, me wondering if I'd asked him because I was actually curious as to what a dying man might actually think, as if he might know a little better and help me navigate what I thought was ahead of me; or whether I asked him because somebody needed to, and no one else would dare; or since I was his only close friend facing the same prospect, no one else could ask him. He shivered again, and the phone rang. But death became one more of those banalities we had in common. Where is Patrick now? He is with me whenever my thoughts turn to him; he is alive and vivid, if transfigured sometimes, in my dreams. He is with me at the end of the Cape each summer, as a seagull flies close to me in the evening sky. He is in my prayers. He is. I can prove none of this. I can only witness that watching my dearest friend die, after being in the AIDS bunker with him for two years, helped me understand that my friend lives. You will mock me for this wish-fulfillment. But they mocked the disciples too who knew that the Lord was alive, and that death was not the master of Him. I live in this awareness. 
But I also live in the awareness that eternity is here already, that the majesty and miracle of God's creation resonates through every second of our lives and every particle of matter within and without us. That is how I interpret Oakeshott's deeply Christian (and somewhat Buddhist) understanding of salvation as having nothing whatsoever to do with the future. The unity and individuality and wonder we are told we will only know then is actually here now, shielded from our own eyes by our own mortal fear, by our own avoidance of death, by our own inability to grasp that this struggle we fear is actually already over, that God loves us now unconditionally, overwhelmingly, this knowledge prevented solely from penetrating us by our own sense of inadequacy, or our looking away, or our losing ourselves in the human and worldly things that I understand by sin. So I do not believe our consciousness is utterly different after death than now. I believe, with Saint Paul, that this is the same divine experience, but through a glass darkly. I believe it is Love, because Jesus showed me so. And I await with great fear because I am human and I await with great hope because of the incarnation and resurrection of God in human history. To philosophize is to learn how to die. To believe is to hope for light in the face of "some blackness coming toward us."
Consciousness ends, and along with it any awareness and sensation. There is not even a feeling of absence. Another way I look at it is that life after I am dead will be just like life before I was born. I don't regret not being here sooner than I was, and I had no sensation of existence before my birth. So it will be after my death. The only death that really scares me is the death of those I love, far more than my own. This is not to say that I don't want to live as long as possible, so long as I can function in some way and not be an excessive burden. And this desire, it seems to me, is itself strong proof that there is no afterlife. Freud's thanatos notwithstanding, even our souls hunger for a concrete existence. We may long for transcendence, but it is a transcendence in our lives, not in some desire to be totally spiritual beings, removed forever from connection to the real. At least not for long - that way lies madness. Sooner or later, we want to reconnect to the world. And we constantly hunger for the visible world, the streams of sensations that feed our consciousness and being. It is the very opposite of an afterlife idealized by major religions. And that leads me to my final point (probably a startling one, from your point of view): I think life after death would be stupid. By this I absolutely do not mean that it is stupid to believe in an afterlife or to desire it (though such a desire may be a result of naivete, irrationality, or great pain). I mean that such an existence would itself be stupid. It would be devoid of anything that gives our intelligence any significance, and our current lives any meaning. It would not in any sense be human. I remember telling my brother that if I died and there was a God and he told me that he indeed created the world in six days, I would be extremely disappointed, for I find the world as it is far more miraculous and awe-inspiring than its biblical description.
no
Paranormal
Is there life after death?
no_statement
there is no "life" after "death".. "life" does not exist after "death".. after "death", there is no "life".. "life" ceases to exist after "death".
https://en.wikipedia.org/wiki/Death
Death - Wikipedia
Death is the irreversible cessation of all biological functions that sustain an organism.[1] For organisms with a brain, death can also be defined as the irreversible cessation of functioning of the whole brain, including the brainstem,[2][3] and brain death is sometimes used as a legal definition of death.[4] The remains of a former organism normally begin to decompose shortly after death.[5] Death is an inevitable process that eventually occurs in almost all organisms. Some organisms, such as Turritopsis dohrnii, are biologically immortal; however, they can still die from causes other than aging.[6] Determining when someone is dead has long been a problem. Death was initially defined as the moment when breathing and the heartbeat ceased,[7] but the spread of CPR meant that this cessation was no longer irreversible.[8] Brain death was the next candidate, though it fractured into competing definitions: some hold that all brain functions must cease, while others hold that even if the brainstem is still alive, a person whose personality and identity are gone should be considered dead.[9] Death is generally applied to whole organisms; the similar process seen in individual components of an organism, such as cells or tissues, is necrosis.[10] Something that is not considered an organism, such as a virus, can be physically destroyed but is not said to die, as a virus is not considered alive in the first place.[11] As of the early 21st century, 56 million people die per year. The most common cause is cardiovascular disease, which affects the heart and blood vessels.[12] Many cultures and religions have the idea of an afterlife, and may also hold the idea of judgment of good and bad deeds in one's life.
There may also be different customs for honoring the body, such as a funeral, cremation, or sky burial.[13] A group of scientists known as biogerontologists is actively seeking a cure for death, by studying how biologically immortal organisms avoid aging and attempting to apply similar mechanisms to humans.[14] Since humans do not yet have such means available to them, they have to use other ways to reach the maximum human lifespan, such as calorie reduction, dieting, and exercise.[15] As of 2022, a total of 109 billion humans have died, or roughly 93.8% of all humans to ever live. Problems of definition The concept of death is a key to human understanding of the phenomenon.[16] There are many scientific approaches and various interpretations of the concept. Additionally, the advent of life-sustaining therapy and the numerous criteria for defining death from both a medical and legal standpoint have made it difficult to create a single unifying definition.[17] Defining life to define death One of the challenges in defining death is in distinguishing it from life. As a point in time, death seems to refer to the moment when life ends. Determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems.[18] Such determination, therefore, requires drawing precise conceptual boundaries between life and death. This is difficult because there is little consensus on how to define life. It is possible to define life in terms of consciousness. When consciousness ceases, an organism can be said to have died.
One of the flaws in this approach is that there are many organisms that are alive but probably not conscious.[19] Another problem is in defining consciousness, which has many different definitions given by modern scientists, psychologists and philosophers.[20] Additionally, many religious traditions, including Abrahamic and Dharmic traditions, hold that death does not (or may not) entail the end of consciousness. In certain cultures, death is more of a process than a single event. It implies a slow shift from one spiritual state to another.[21] Other definitions for death focus on the character of cessation of organismic functioning and human death, which refers to irreversible loss of personhood. More specifically, death occurs when a living entity experiences irreversible cessation of all functioning.[22] As it pertains to human life, death is an irreversible process where someone loses their existence as a person.[22] Definition of death by heartbeat and breath Historically, attempts to define the exact moment of a human's death have been subjective or imprecise. Death was defined as the cessation of heartbeat (cardiac arrest) and breathing,[7] but the development of CPR and prompt defibrillation have rendered that definition inadequate because breathing and heartbeat can sometimes be restarted.[8] This type of death where circulatory and respiratory arrest happens is known as the circulatory definition of death (CDD). Proponents of the CDD believe this definition is reasonable because a person with permanent loss of circulatory and respiratory function should be considered dead.[23] Critics of this definition state that while cessation of these functions may be permanent, it does not mean the situation is irreversible because if CPR was applied fast enough, the person could be revived.[23] Thus, the arguments for and against the CDD boil down to defining the actual words "permanent" and "irreversible," which further complicates the challenge of defining death. 
Furthermore, events causally linked to death in the past no longer kill in all circumstances; without a functioning heart or lungs, life can sometimes be sustained with a combination of life support devices, organ transplants, and artificial pacemakers. Brain death Today, where a definition of the moment of death is required, doctors and coroners usually turn to "brain death" or "biological death" to define a person as being dead;[24] people are considered dead when the electrical activity in their brain ceases.[25] It is presumed that an end of electrical activity indicates the end of consciousness.[26] Suspension of consciousness must be permanent and not transient, as occurs during certain sleep stages, and especially a coma.[27] In the case of sleep, EEGs are used to tell the difference.[28] The category of "brain death" is seen as problematic by some scholars. For instance, Dr. Franklin Miller, a senior faculty member at the Department of Bioethics, National Institutes of Health, notes: "By the late 1990s... the equation of brain death with death of the human being was increasingly challenged by scholars, based on evidence regarding the array of biological functioning displayed by patients correctly diagnosed as having this condition who were maintained on mechanical ventilation for substantial periods of time. These patients maintained the ability to sustain circulation and respiration, control temperature, excrete wastes, heal wounds, fight infections and, most dramatically, to gestate fetuses (in the case of pregnant "brain-dead" women)."[29] [Image caption: French 16th-/17th-century ivory pendant, Monk and Death, recalling mortality and the certainty of death (Walters Art Museum).] While "brain death" is viewed as problematic by some scholars, there are certainly proponents of it who believe this definition of death is the most reasonable for distinguishing life from death.
The reasoning behind the support for this definition is that brain death has a set of criteria that is reliable and reproducible. Also, the brain is crucial in determining our identity or who we are as human beings. The distinction should be made that "brain death" cannot be equated with a vegetative state or coma, in that the former describes a state that is beyond recovery.[30] EEGs can detect spurious electrical impulses, while certain drugs, hypoglycemia, hypoxia, or hypothermia can suppress or even stop brain activity temporarily;[31] because of this, hospitals have protocols for determining brain death involving EEGs at widely separated intervals under defined conditions.[32] Neocortical brain death People maintaining that only the neo-cortex of the brain is necessary for consciousness sometimes argue that only electrical activity in the neo-cortex should be considered when defining death. Eventually, the criterion for death may be the permanent and irreversible loss of cognitive function, as evidenced by the death of the cerebral cortex. All hope of recovering human thought and personality is then gone, given current and foreseeable medical technology.[9] Even by whole-brain criteria, the determination of brain death can be complicated. Total brain death At present, in most places, the more conservative definition of death – irreversible cessation of electrical activity in the whole brain, as opposed to just in the neo-cortex – has been adopted. One example is the Uniform Determination Of Death Act in the United States.[33] The adoption of this whole-brain definition followed the conclusions of the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research in 1980.[34] They concluded that this approach to defining death sufficed in reaching a uniform definition nationwide.
A multitude of reasons was presented to support this definition, including uniformity of standards in law for establishing death, consumption of a family's fiscal resources for artificial life support, and legal establishment for equating brain death with death to proceed with organ donation.[35] Problems in medical practice [Image caption: Timeline of postmortem changes (stages of death).] Aside from the issue of support of or dispute against brain death, there is another inherent problem in this categorical definition: the variability of its application in medical practice. In 1995, the American Academy of Neurology (AAN) established the criteria that became the medical standard for diagnosing neurologic death. At that time, three clinical features had to be satisfied to determine "irreversible cessation" of the total brain, including coma with clear etiology, cessation of breathing, and lack of brainstem reflexes.[36] These criteria were updated again, most recently in 2010, but substantial discrepancies remain across hospitals and medical specialties.[36] Donations The problem of defining death is especially imperative as it pertains to the dead donor rule, which could be understood as one of the following interpretations of the rule: there must be an official declaration of death in a person before starting organ procurement, or that organ procurement cannot result in the death of the donor.[23] A great deal of controversy has surrounded the definition of death and the dead donor rule. Advocates of the rule believe that the rule is legitimate in protecting organ donors while also countering any moral or legal objection to organ procurement. Critics, on the other hand, believe that the rule does not uphold the best interests of the donors and that the rule does not effectively promote organ donation.[23] Legal The death of a person has legal consequences that may vary between jurisdictions.
Most countries follow the whole-brain death criteria, where all functions of the brain must have completely ceased; however, some jurisdictions follow the brainstem version of brain death.[36] Afterward, a death certificate is issued in most jurisdictions, either by a doctor or by an administrative office, upon presentation of a doctor's declaration of death.[39] Misdiagnosis There are many anecdotal references to people being declared dead by physicians and then "coming back to life," sometimes days later in their coffin or when embalming procedures are about to begin. From the mid-18th century onwards, there was an upsurge in the public's fear of being mistakenly buried alive[40] and much debate about the uncertainty of the signs of death. Various suggestions were made to test for signs of life before burial, ranging from pouring vinegar and pepper into the corpse's mouth to applying red hot pokers to the feet or into the rectum.[41] Writing in 1895, the physician J.C. Ouseley claimed that as many as 2,700 people were buried prematurely each year in England and Wales, although some estimates put the figure closer to 800.[42] As medical technologies advance, ideas about when death occurs may have to be reevaluated in light of the ability to restore a person to vitality after longer periods of apparent death (as happened when CPR and defibrillation showed that cessation of heartbeat is inadequate as a decisive indicator of death). The lack of electrical brain activity may not be enough to consider someone scientifically dead. Therefore, the concept of information-theoretic death has been suggested as a better means of defining when true death occurs, though the concept has few practical applications outside the field of cryonics.[44] According to Jean Ziegler, the United Nations Special Rapporteur on the Right to Food (2000 – Mar 2008), mortality due to malnutrition accounted for 58% of the total mortality rate in 2006.
Ziegler says that worldwide, approximately 62 million people died from all causes in that year, and of those deaths, more than 36 million died of hunger or of diseases due to deficiencies in micronutrients.[50] [Image caption: American children smoking in 1910. Tobacco smoking caused an estimated 100 million deaths in the 20th century.[51]] Tobacco smoking killed 100 million people worldwide in the 20th century and could kill 1 billion people worldwide in the 21st century, a World Health Organization report warned.[51] Many leading causes of death in the developed world can be postponed by diet and physical activity, but the accelerating incidence of disease with age still imposes limits on human longevity. The evolutionary cause of aging is, at best, only beginning to be understood. It has been suggested that direct intervention in the aging process may now be the most effective intervention against major causes of death.[52] Selye proposed a unified non-specific approach to many causes of death. He demonstrated that stress decreases the adaptability of an organism and proposed to describe adaptability as a special resource, adaptation energy. The animal dies when this resource is exhausted.[53] Selye assumed that adaptability is a finite supply presented at birth. Later, Goldstone proposed the concept of production or income of adaptation energy, which may be stored (up to a limit) as a capital reserve of adaptation.[54] In recent works, adaptation energy is considered an internal coordinate on the "dominant path" in the model of adaptation.
It is demonstrated that oscillations of well-being appear when the reserve of adaptability is almost exhausted.[55] [Image caption: Le Suicidé by Édouard Manet depicts a man who has recently committed suicide via a firearm.] In 2012, suicide overtook car crashes as the leading cause of human injury deaths in the U.S., followed by poisoning, falls, and murder.[56] Natural disasters kill around 45,000 people annually, although this number can vary from thousands to millions on a per-decade basis. Some of the deadliest natural disasters are the 1931 China Floods, which killed an estimated 4 million people, although estimates vary widely,[62] the 1887 Yellow River Flood, which killed an estimated 2 million people in China,[63] and the 1970 Bhola Cyclone, which killed 500,000 people in Pakistan.[64] Autopsies are performed for either legal or medical purposes.[66] A forensic autopsy is carried out when the cause of death may be a criminal matter, while a clinical or academic autopsy is performed to find the medical cause of death and is used in cases of unknown or uncertain death, or for research purposes.[67] Autopsies can be further classified into cases where external examination suffices, and those where the body is dissected and an internal examination is conducted.[68] Permission from next of kin may be required for internal autopsy in some cases.[69] Once an internal autopsy is complete, the body is generally reconstituted by sewing it back together.[38] A necropsy, which is not always a medical procedure, was a term previously used to describe an unregulated postmortem examination. In modern times, this term is more commonly associated with the corpses of animals.[70] Death before birth Death before birth can happen in several ways: stillbirth, when the fetus dies before or during the delivery process; miscarriage, when the embryo dies before independent survival; and abortion, the artificial termination of the pregnancy.
Stillbirth and miscarriage can happen for various reasons, while abortion is carried out purposely. Stillbirth Stillbirth can happen right before or after the delivery of a fetus. It can result from defects of the fetus or risk factors present in the mother. Reductions of these factors, caesarean sections when risks are present, and early detection of birth defects have lowered the rate of stillbirth. However, 1% of births in the United States end in a stillbirth.[71] Miscarriage A miscarriage is defined by the World Health Organization as "the expulsion or extraction from its mother of an embryo or fetus weighing 500g or less." Miscarriage is one of the most frequent problems in pregnancy, and is reported in around 12–15% of all clinical pregnancies; however, by including pregnancy losses during menstruation, it could be up to 17–22% of all pregnancies. There are many risk factors involved in miscarriage: consumption of caffeine, tobacco, alcohol, or drugs, a previous miscarriage, and a previous abortion can all increase the chances of having a miscarriage.[72] Abortion An abortion may be performed for many reasons, such as pregnancy from rape, financial constraints of having a child, teenage pregnancy, and the lack of support from a significant other.[73] There are two forms of abortion: a medical abortion and an in-clinic abortion, sometimes referred to as a surgical abortion. A medical abortion involves taking a pill that will terminate the pregnancy no more than 11 weeks past the last period, while an in-clinic abortion involves a medical procedure using suction to empty the uterus; this is possible after 12 weeks, but it may be more difficult to find a doctor who will go through with the procedure.[74] Senescence Senescence refers to a scenario in which a living being can survive all calamities but eventually dies from causes relating to old age.
Conversely, premature death can refer to a death that occurs before old age arrives, for example, human death before a person reaches the age of 75.[75] Animal and plant cells normally reproduce and function during the whole period of natural existence, but the aging process derives from the deterioration of cellular activity and the ruination of regular functioning. The aptitude of cells for gradual deterioration and mortality means that cells are naturally sentenced to stable and long-term loss of living capacities, even despite continuing metabolic reactions and viability. In the United Kingdom, for example, nine out of ten of all the deaths that occur daily relate to senescence, while around the world, it accounts for two-thirds of the 150,000 deaths that take place daily.[76] Almost all animals who survive external hazards to their biological functioning eventually die from biological aging, known in life sciences as "senescence." Some organisms experience negligible senescence, even exhibiting biological immortality. These include the jellyfish Turritopsis dohrnii,[77] the hydra, and the planarian. Unnatural causes of death include suicide and predation. Of all causes, roughly 150,000 people die around the world each day.[45] Of these, two-thirds die directly or indirectly due to senescence, but in industrialized countries – such as the United States, the United Kingdom, and Germany – the rate approaches 90% (i.e., nearly nine out of ten of all deaths are related to senescence).[45] Physiological death is now seen as a process, more than an event: conditions once considered indicative of death are now reversible.[78] Where in the process a dividing line is drawn between life and death depends on factors beyond the presence or absence of vital signs. In general, clinical death is neither necessary nor sufficient for a determination of legal death.
A patient with working heart and lungs determined to be brain dead can be pronounced legally dead without clinical death occurring.[79] Life extension Life extension refers to an increase in maximum or average lifespan, especially in humans, by slowing down or reversing the processes of aging through anti-aging measures. Although aging is the most common cause of death worldwide, it is socially mostly ignored as such and seen as "necessary" and "inevitable" anyway, which is why little money is spent on research into anti-aging therapies, a phenomenon known as the pro-aging trance.[45] A United States poll found that religious people and irreligious people, as well as men and women and people of different economic classes, have similar rates of support for life extension, while Africans and Hispanics have higher rates of support than white people. Thirty-eight percent of those polled said they would desire to have their aging process cured.[81] Researchers of life extension are a subclass of biogerontologists known as "biomedical gerontologists." They try to understand the nature of aging, and they develop treatments to reverse aging processes or at least slow them down for the improvement of health and the maintenance of youthful vigor at every stage of life.[14] Those who take advantage of life extension findings and seek to apply them to themselves are called "life extensionists" or "longevists." The primary life extension strategy currently is to apply available anti-aging methods in the hope of living long enough to benefit from a complete cure for aging once it is developed.[82] Cryonics Cryonics (from Greek κρύος 'kryos-' meaning 'icy cold') is the low-temperature preservation of animals and humans who cannot be sustained by contemporary medicine, with the hope that healing and resuscitation may be possible in the future.[83][84] Cryopreservation of people or large animals is not reversible with current technology.
The stated rationale for cryonics is that people who are considered dead by current legal or medical definitions may not necessarily be dead according to the more stringent information-theoretic definition of death.[44][85] Some scientific literature is claimed to support the feasibility of cryonics.[86] Medical science and cryobiologists generally regard cryonics with skepticism.[87] Around 1930, most people in Western countries died in their own homes, surrounded by family, and comforted by clergy, neighbors, and doctors making house calls.[90] By the mid-20th century, half of all Americans died in a hospital.[91] By the start of the 21st century, only about 20 to 25% of people in developed countries died outside of a medical institution.[91][92][93] The shift from dying at home towards dying in a professional medical environment has been termed the "Invisible Death."[91] This shift occurred gradually over the years until most deaths now occur outside the home.[94] Psychology Death studies is a field within psychology.[95] Many people have a fear of dying. Discussing, thinking about, or planning for their deaths causes them discomfort. This fear may cause them to put off financial planning, preparing a will and testament, or requesting help from a hospice organization. Mortality salience is the awareness that death is inevitable. However, self-esteem and culture are ways to reduce the anxiety this effect can cause.[96] The awareness of one's own death can cause a deepened bond in the in-group as a defense mechanism. It can also cause the person to become very judgmental. In one study, two groups were formed: one group was asked to reflect upon their mortality, the other was not; afterwards, both groups were asked to set a bond for a prostitute. The group that did not reflect on death set an average bond of $50, while the group reminded of their death set an average of $455.[97] Different people have different responses to the idea of their deaths.
Philosopher Galen Strawson writes that the death that many people wish for is an instant, painless, unexperienced annihilation.[98] In this unlikely scenario, the person dies without realizing it and without being able to fear it. One moment the person is walking, eating, or sleeping, and the next moment, the person is dead. Strawson reasons that this type of death would not take anything away from the person, as he believes a person cannot have a legitimate claim to ownership in the future.[98][99] Commemoration ceremonies after death may include various mourning, funeral practices, and ceremonies of honoring the deceased.[101] The physical remains of a person, commonly known as a corpse or body, are usually interred whole or cremated, though among the world's cultures, there are a variety of other methods of mortuary disposal.[13] In the English language, blessings directed towards a dead person include rest in peace (originally the Latin, requiescat in pace) or its initialism RIP. Death is the center of many traditions and organizations; customs relating to death are a feature of every culture around the world. Much of this revolves around the care of the dead, as well as the afterlife and the disposal of bodies upon the onset of death. The disposal of human corpses does, in general, begin with the last offices before significant time has passed, and ritualistic ceremonies often occur, most commonly interment or cremation. This is not a unified practice; in Tibet, for instance, the body is given a sky burial and left on a mountain top. 
Proper preparation for death and techniques and ceremonies for producing the ability to transfer one's spiritual attainments into another body (reincarnation) are subjects of detailed study in Tibet.[102] Mummification or embalming is also prevalent in some cultures to retard the rate of decay.[103] Some parts of death in culture are legally based, having laws for when death occurs, such as the receiving of a death certificate, the settlement of the deceased's estate, and the issues of inheritance and, in some countries, inheritance taxation.[104] Death in warfare and suicide attack also have cultural links, and the ideas of dulce et decorum est pro patria mori, which translates to "It is sweet and proper to die for one's country," mutiny punishable by death, as in the United States,[106] grieving relatives of dead soldiers, and death notification are embedded in many cultures.[107] Recently in the Western world, with the increase in terrorism following the September 11 attacks – but also further back in time, with suicide bombings, kamikaze missions in World War II, and suicide missions in a host of other conflicts in history – death for a cause by way of suicide attack and martyrdom have had significant cultural impacts.[108] Suicide, in general, and particularly euthanasia, are also points of cultural debate. Both acts are understood very differently in different cultures.[109] In Japan, for example, ending a life with honor by seppuku was considered a desirable death,[110] whereas according to traditional Christian and Islamic cultures, suicide is viewed as a sin. In Brazil, death is counted officially when it is registered by existing family members at a cartório, a government-authorized registry. Before being able to file for an official death, the deceased must have been registered for an official birth at the cartório.
Though a Public Registry Law guarantees all Brazilian citizens the right to register deaths, regardless of the financial means of their family members (often children), the Brazilian government has not taken away the burden, the hidden costs, and the fees of filing for a death. For many impoverished families, the indirect costs and burden of filing for a death lead to a more appealing, unofficial, local, and cultural burial, which, in turn, raises the debate about inaccurate mortality rates.[113] Talking about death and witnessing it is a difficult issue in most cultures. Western societies may like to treat the dead with the utmost material respect, with an official embalmer and associated rites.[103] Eastern societies (like India) may be more open to accepting it as a fait accompli, with a funeral procession of the dead body ending in an open-air burning-to-ashes.[114] Consciousness Much interest and debate surround the question of what happens to one's consciousness as one's body dies. The belief in the permanent loss of consciousness after death is often called eternal oblivion. The belief that the stream of consciousness is preserved after physical death is described by the term afterlife. Neither is likely to be confirmed without the ponderer having to die. Near-death experiences are the closest thing people have to a glimpse of a possible afterlife. Some people who have had near-death experiences (NDEs) report that they have seen the afterlife while they were dead. Seeing a being of light and talking with it, life flashing before the eyes, and the confirmation of cultural beliefs about the afterlife are all themes reported from the moments in which they were dead.[120] Microorganisms also play a vital role, raising the temperature of the decomposing matter as they break it down into yet simpler molecules.[124] Not all materials need to be fully decomposed.
Coal, a fossil fuel formed over vast tracts of time in swamp ecosystems, is one example.[125] Natural selection The contemporary evolutionary theory sees death as an important part of the process of natural selection. It is considered that organisms less adapted to their environment are more likely to die, having produced fewer offspring, thereby reducing their contribution to the gene pool. Their genes are thus eventually bred out of a population, leading at worst to extinction and, more positively, making possible the process referred to as speciation. Frequency of reproduction plays an equally important role in determining species survival: an organism that dies young but leaves numerous offspring displays, according to Darwinian criteria, much greater fitness than a long-lived organism leaving only one.[126][127] Death also has a role in competition: if one species out-competes another, the out-competed population risks death, especially where the two are directly fighting over resources.[128] Extinction [Image caption: A dodo, the bird that became a byword in the English language for the extinction of a species.[129]] Death plays a role in extinction, the cessation of existence of a species or group of taxa, reducing biodiversity, since extinction is generally considered to be the death of the last individual of that species (although the capacity to breed and recover may have been lost before this point). Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively.[130] Evolution of aging and mortality Inquiry into the evolution of aging aims to explain why so many living things and the vast majority of animals weaken and die with age.
However, there are exceptions, such as Hydra and the jellyfish Turritopsis dohrnii, which research shows to be biologically immortal.[131] The Volvox algae are among the simplest organisms to exhibit a division of labor between two completely different cell types, and as a consequence they include the death of the somatic line as a regular, genetically regulated part of their life history.[133][134] Grief in animals Animals have sometimes shown grief for their partners or "friends." When two chimpanzees form a bond, sexual or not, and one of them dies, the surviving chimpanzee will show signs of grief, ripping out its hair in anger and starting to cry; if the body is removed, it will resist, and it will eventually go quiet once the body is gone, but upon seeing the body again, the chimp will return to a violent state.[135] Death of abiotic factors Some non-living things can be considered dead. For example, a volcano, batteries, electrical components, and stars are all nonliving things that can "die," whether from destruction or cessation of function. A volcano, a break in the earth's crust that allows lava, ash, and gases to escape, may be in one of three states: active, dormant, or extinct. An active volcano is currently erupting or has erupted recently; a dormant volcano has not erupted for a significant amount of time but may erupt again; an extinct volcano may be cut off from its supply of lava and is never expected to erupt again, so it can be considered dead.[136] A battery can be considered dead after its charge is fully used up. Electrical components are similar: if a component can never be used again, such as after water is spilled on it,[137] it can be considered dead. Stars also have a life-span and therefore can die. When a star starts to run out of fuel, it begins to expand; this can be seen as the star aging.
After it exhausts all its fuel, it may explode in a supernova,[138] collapse into a black hole, or turn into a neutron star.[139] Religious views Buddhism In Buddhist doctrine and practice, death plays an important role. Awareness of death motivated Prince Siddhartha to strive to find the "deathless" and finally attain enlightenment. In Buddhist doctrine, death functions as a reminder of the value of having been born as a human being. Being reborn as a human being is considered the only state in which one can attain enlightenment. Therefore, death helps remind oneself that one should not take life for granted. The belief in rebirth among Buddhists does not necessarily remove death anxiety, since all existence in the cycle of rebirth is considered filled with suffering, and being reborn many times does not necessarily mean that one progresses.[140] Christianity In Dante's Paradiso, Dante is with Beatrice, staring at the highest heavens. While there are different sects of Christianity with different branches of belief, the overarching ideology on death grows from the knowledge of the afterlife: after death, the individual undergoes a separation from mortality to immortality; the soul leaves the body and enters a realm of spirits. Following this separation of body and spirit (death), resurrection will occur.[141] Mirroring the transformation Jesus Christ embodied after his body was placed in the tomb for three days, each person's body will be resurrected, reuniting the spirit and body in a perfect form. This process allows the individual's soul to withstand death and transform into life after death.[142] Hinduism In Hindu texts, death is described as the individual eternal spiritual jiva-atma (soul or conscious self) exiting the current temporary material body.
The soul exits this body when the body can no longer sustain the conscious self (life), which may be due to mental or physical reasons or, more accurately, the inability to act on one's kama (material desires).[143] During conception, the soul enters a compatible new body based on the remaining merits and demerits of one's karma (good/bad material activities based on dharma) and the state of one's mind (impressions or last thoughts) at the time of death.[144] Usually, the process of reincarnation makes one forget all memories of one's previous life. Because nothing really dies and the temporary material body is always changing, both in this life and the next, death means forgetfulness of one's previous experiences.[145] Islam The Islamic view is that death is the separation of the soul from the body as well as the beginning of the afterlife.[146] The afterlife, or akhirah, is one of the six main beliefs in Islam. Rather than seeing death as the end of life, Muslims consider death a continuation of life in another form.[147] In Islam, life on earth is a short, temporary life and a testing period for every soul. True life begins with the Day of Judgement, when all people will be divided into two groups. The righteous believers will be welcomed to janna (heaven), and the disbelievers and evildoers will be punished in jahannam (hellfire).[148] Muslims believe death to be wholly natural and predetermined by God. Only God knows the exact time of a person's death.[149] The Quran emphasizes that death is inevitable: no matter how much people try to escape it, death will reach everyone. (Q50:16) Life on earth is the one and only chance for people to prepare themselves for the life to come and to choose whether or not to believe in God, and death is the end of that learning opportunity.[150] Language The word "death" comes from Old English dēaþ, which in turn comes from Proto-Germanic *dauþuz (reconstructed by etymological analysis).
This comes from the Proto-Indo-European stem *dheu- meaning the "process, act, condition of dying."[152] The concept and symptoms of death, and varying degrees of delicacy used in discussion in public forums, have generated numerous scientific, legal, and socially acceptable terms or euphemisms. When a person has died, it is also said they have "passed away", "passed on", "expired", or "gone", among other socially accepted, religiously specific, slang, and irreverent terms. As a formal reference to a dead person, it has become common practice to use the participle form of "decease", as in "the deceased"; another noun form is "decedent". Bereft of life, the dead person is a "corpse", "cadaver", "body", "set of remains" or, when all flesh is gone, a "skeleton". The terms "carrion" and "carcass" are also used, usually for dead non-human animals. The ashes left after a cremation are sometimes called "cremains". References ^DeGrazia, David (2021), "The Definition of Death", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2021 ed.), Metaphysics Research Lab, Stanford University, archived from the original on 23 July 2022, retrieved 23 July 2022 ^ abDeGrazia, David (2017), "The Definition of Death", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 ed.), Metaphysics Research Lab, Stanford University, archived from the original on 18 March 2019, retrieved 19 February 2019 ^World Health Organization (1979). Medical Certification of Cause of Death: Instructions for Physicians on Use of International Form of Medical Certificate of Cause of Death. World Health Organization. ISBN 978-9241560627. ^ abMerkle, Ralph. "Information-Theoretic Death". merkle.com. Archived from the original on 9 August 2016. Retrieved 4 June 2016. A person is dead according to the information-theoretic criterion if the structures that encode memory and personality have been so disrupted that it is no longer possible in principle to recover them.
If inference of the state of memory and personality is feasible in principle, and therefore restoration to an appropriate functional state is likewise feasible in principle, then the person is not dead. ^Olshansky, S. Jay; Perry, Daniel; Miller, Richard A.; Butler, Robert N. (2006). "Longevity dividend: What should we be doing to prepare for the unprecedented aging of humanity?". The Scientist. 20: 28–36. ^McKie, Robin (13 July 2002). "Cold facts about cryonics". The Guardian. Archived from the original on 8 July 2017. Retrieved 1 December 2013. Cryonics, which began in the Fifties, is the freezing – usually in liquid nitrogen – of human beings who have been legally declared dead. The aim of this process is to keep such individuals in a state of refrigerated limbo so that it may become possible in the future to resuscitate them, cure them of the condition that killed them, and then restore them to functioning life in an era when medical science has triumphed over the activities of the Grim Reaper ^"What is Cryonics?". Alcor Foundation. Archived from the original on 3 December 2013. Retrieved 2 December 2013. Cryonics is an effort to save lives by using temperatures so cold that a person beyond help by today's medicine might be preserved for decades or centuries until a future medical technology can restore that person to full health. Schels, Walter; Lakotta, Beate. "Before and After Death". LensCulture.com. Archived from the original on 11 October 2014. Retrieved 19 September 2016. Interviews with people dying in hospices, and portraits of them before, and shortly after, death. U.S. Census. "Causes of Death 1916". AntiqueBooks.net (scans). Archived from the original on 18 September 2004. Retrieved 19 September 2016. How the medical profession categorized causes of death. Wald, George. "The Origin of Death". ElijahWald.com. A biologist explains life and death in different kinds of organisms, in relation to evolution.
yes
Paranormal
Is there life after death?
no_statement
there is no "life" after "death".. "life" does not exist after "death".. after "death", there is no "life".. "life" ceases to exist after "death".
https://www.mariecurie.org.uk/talkabout/articles/where-do-we-go-when-we-die/287832
Where do we go when we die? Different beliefs on the afterlife and ...
Where do we go when we die? Different beliefs on the afterlife and how they affect attitudes towards death What happens when we die? Is there an afterlife? These are the big questions humans have been pondering since we first climbed down from the trees. Almost every major religion across the world has a defined belief on what happens when you die, and yet the question is still widely debated and discussed. Most people in the English-speaking world are familiar with the Christian belief of Heaven or Hell awaiting people when they die, but what do other religions and belief systems say about what happens after death? And how does this impact their attitudes towards death and dying? We spoke to some people from different communities to find out. Shaykh Ibrahim Mogra sits on the Muslim Council of Britain’s National Council and is an imam in Leicester. “Muslims believe that this life is a temporary one. There is an eternal life that follows after death, so when a person dies their soul moves on to another world. On the Day of Resurrection the soul will be returned to a new body and people will stand before God for judgement. Those who have believed in God and have pleased him through good works will be rewarded with Heaven, or paradise, where they shall live for eternity. Those who have disobeyed God will be punished in Hell. For some this will only be temporary, like a ‘cleansing period’ where they will be cleansed of their sins before coming to Heaven." By knowing that death is near, Muslims believe we have been given time to make peace with our loved ones and time to prepare for death. “Muslims believe that life in Heaven will be very different. For example, in this life we sleep to rest and eat to survive, whereas in the afterlife Muslims believe these things will be done for pleasure only, not out of necessity. We’ll experience pleasures that can’t be compared to those in our current life. 
“As death is seen as part of a journey, one that must be taken in order to reach the eternal afterlife and be with God, many Muslims see death as a gift from God. That’s not to say that we seek death, as good health is also seen as a gift from God and therefore it’s a Muslim’s duty to look after our health. “In Islam we have a prayer which says, ‘God please save me from a sudden death’. This isn’t asking to become ill by any means. Instead it allows us to recognise that by becoming ill and understanding that death is near, we have been given time to make peace with our loved ones and prepare for death. “A Muslim will take this time to ensure that if they owe anyone anything, they can make sure this debt is settled. They can repay any borrowed money, and make sure their will is in place so their dependents and loved ones are taken care of.” Dr Desmond Biddulph was trained in the Rinzai School of Zen Buddhism and is the president of The Buddhist Society. “For most Buddhists, the belief about where you go when you die is not that you go somewhere else, but rather that you are reborn as something and someone completely different. The idea of rebirth has been around for a very long time, since pre-Buddhist times. It was taken on board by The Buddha, and the idea of a cycle of birth and rebirth became part of his teachings. “Buddhists believe that how you behave in this life gives conditions for your later lives. It’s important to remember, though, that Buddhists believe it’s not ‘you’ that is reborn. It’s something else, another entity, another essence, which is dependent on your behaviour. “For many Buddhists, death isn’t seen as an end, but rather a continuation. We believe you go from life to life, so this can help Buddhists move away from a fear of death, and instead see it as just another part of their journey which they must take.
“We also make an effort to make death as painless as possible – both for ourselves and our family members – as this will be part of the behaviour I mentioned which will impact your next life. This means Buddhists often want to ensure their affairs are in order and that their families are cared for before they die, as they know that in their next life they won’t be able to do this.” David Cunningham-Green is a member of Atheism UK. "As an atheist I do not have 'beliefs'. A belief is by definition not a fact; it is an unproven wish. Living in certain ways because the religious want to gain favour with their god or want to avoid her/his displeasure is a practice an atheist eschews. Atheists naturally do not believe in any form of existence after death. Most people would not suggest that their pets continue to have existence after death, and we see no evidence that humans are different. When I die it means that I cease to be. "That doesn't mean atheists don't think about death. Like most people, an atheist will probably not want to die slowly, in agony, or causing distress to those around them. Many religions claim there is a downside to the afterlife, notably Hell in the Christian religion. Atheists do not accept that there is an afterlife, so do not have a future in it to fear. An atheist sees death as a full-stop, so it is the process of dying that matters". For many atheists it’s important to make sure their death is comfortable and that it doesn’t lead to further suffering or difficulties for their loved ones. “If you are a theist (a follower of religion) you may have a ‘shield’ that says suffering is temporary compared to what awaits you in the afterlife. Atheists don’t have this, so they will often be more concerned about making sure their death is comfortable for them and their loved ones. “As people who don’t think there is a God waiting for them, atheists can approach death knowing that their actions aren’t for anyone other than themselves and the people they love.
That’s why for many atheists it’s important to make sure their death is comfortable and that it doesn’t lead to further suffering or difficulties for their loved ones.” Whatever your beliefs, talking and planning now can help make things better at the end. Here are some ideas to get you thinking about your own plans for the end of your life.
yes
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://cosmosmagazine.com/space/astronomy/explainer-is-there-sound-in-space/
Explainer: Is there sound in space?
No, there isn’t sound in space. Sound doesn’t exist in space, at least not the way we experience it on Earth. This is because sound travels through the vibration of particles, and space is a vacuum. On Earth, sound mainly travels to your ears by way of vibrating air molecules, but in near-empty regions of space there are no (or very, very few) particles to vibrate – so no sound. We’re lucky that’s the case, because otherwise the sound of the Sun would roar at an impressive 100 decibels to us on Earth – like hearing a rock concert all day every day. Sound travels in what’s known as a longitudinal wave, which causes back-and-forth vibration of the particles in the medium through which it is moving. It propagates through a medium at the speed of sound which varies from substance to substance – generally more slowly through gases, faster in liquids, and fastest in solids. This back and forth causes regions of high pressure where particles are compressed together (compressions) and regions of low pressure where they are more spread apart (rarefactions). The distance it takes to complete one wave cycle – for instance, the distance between each repeating compression – is what’s known as its wavelength. Sound waves are longitudinal waves. Credit: VectorMine/Getty Images The frequency of the wave is measured in hertz (Hz), which is a measure of the number of waves that pass through a fixed point in a second. So, the longer the wavelength the lower the frequency, and vice versa. Human beings can usually hear sounds within a narrow range of frequencies, usually between 20Hz and 20,000Hz. So, what are we hearing in Astroturf? This short film is just one part of a greater anthology, where independent filmmakers were challenged by the Space Sound Effects (SSFX) project to create short films which incorporated sounds recorded in space by satellites. Now, although we just reminded ourselves that space is a vacuum, it should be clarified that it isn’t completely empty. 
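The relation described above, speed equals frequency times wavelength, can be sketched numerically. This is a minimal illustration, not from the article itself: the 343 m/s value assumed below is the approximate speed of sound in air at room temperature, and the function names are invented for the example.

```python
# Sketch of the wave relations described above: speed = frequency * wavelength.
# The 343 m/s figure (speed of sound in air at ~20 C) is an assumed value.
SPEED_OF_SOUND_AIR = 343.0  # metres per second

def wavelength_m(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Wavelength in metres for a wave of the given frequency."""
    return speed / frequency_hz

def audible(frequency_hz: float) -> bool:
    """True if the frequency lies in the nominal 20 Hz to 20,000 Hz hearing range."""
    return 20.0 <= frequency_hz <= 20_000.0

# The longer the wavelength, the lower the frequency, and vice versa:
print(f"{wavelength_m(20):.2f} m at 20 Hz")               # ~17.15 m
print(f"{wavelength_m(20_000) * 1000:.2f} mm at 20 kHz")  # ~17.15 mm
print(audible(20), audible(20_000), audible(50_000))
```

In a vacuum there is no medium, and hence no propagation speed for sound, so the relation has nothing to act on in empty space.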
For instance, it contains solar wind which streams off the Sun – a constant flow of charged particles (plasma) which Earth’s magnetic field protects us from. The magnetosphere shields us from this ionising radiation and from erosion of the atmosphere by solar wind, but the interactions occurring here are complex and dynamic, and can result in phenomena which disrupt the technology we rely upon, such as electrical grids, global positioning systems (GPS), and weather forecasts. It’s these plasma waves, electromagnetic vibrations, which can be measured. But the waves fall within the ultra low frequency (ULF) range – with frequencies from fractions of a millihertz to 1Hz – that are undetectable to human hearing. For scale, that’s wavelengths of around 300,000km, and pressure variations so small you’d need an eardrum comparable to the size of Earth to hear them. But satellites can still observe them. Scientists took a year’s worth of these recordings, dramatically sped up their playback, and condensed them down to just six minutes of audio at frequencies within the human auditory range. This is a process called sonification – like visualisation but instead with sound – where non-speech audio is used to convey information or data, the most famous application of which is the Geiger counter. This audio was also used in a citizen science project in which high-school students identified an interesting sound-stamp that, when further explored by the scientists, turned out to be a coronal mass ejection – or solar storm – arriving at Earth. By making the data audible they were able to pinpoint an interesting event which the researchers wouldn’t have otherwise spotted. From this they were able to determine when Voyager 1 had left the heliosphere – the vast bubble of magnetism surrounding the Sun and planets in the solar system – and moved into the denser gas in the interstellar medium between planetary systems. Are there other instances of the sonification of space data? 
There are many of these “sounds of space” collected by instruments on various spacecraft, from the Juno spacecraft observing the plasma wave signals emanating from Jupiter’s ionosphere, to Cassini’s detection of radio emissions from Saturn (which are well above the audio frequency range and are shifted downward so we can hear them). Another example is gravitational waves. These stretch and shrink space and can be detected through the distortion, or vibration, of space between masses – but they need to be amplified a billion times to be audible. So, while we can’t hear sound in space as we can on Earth, it’s still possible for us to convert the emissions of space into something the human ear can perceive – and isn’t that much nicer to listen to than a scream, anyway?
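As a rough consistency check on the playback compression described earlier, squeezing a year of satellite recordings into six minutes of audio multiplies every frequency by the ratio of the two durations. The sketch below works that factor out; the exact figures (a 365-day year, a 1 mHz example wave) are assumptions for illustration.

```python
# Condensing one year of ULF recordings into six minutes of audio raises
# every frequency by the ratio of the two durations (assumed figures).
YEAR_S = 365 * 24 * 3600   # seconds in a non-leap year
PLAYBACK_S = 6 * 60        # six minutes of audio

speedup = YEAR_S / PLAYBACK_S
print(f"speed-up factor: {speedup:,.0f}x")  # 87,600x

# A 1 mHz plasma wave, far below human hearing, lands in the audible band:
original_hz = 1e-3
shifted_hz = original_hz * speedup
print(f"1 mHz -> {shifted_hz:.1f} Hz, audible: {20 <= shifted_hz <= 20_000}")
```

The same scaling idea underlies the other sonifications mentioned here: shifting frequencies by a constant factor preserves the relative structure of the data while moving it into the human auditory range.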
no
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://www.vice.com/en/article/y3p3dv/nasa-has-captured-actual-sound-in-space-and-its-honestly-terrifying
NASA Has Captured 'Actual Sound' in Space and It's Honestly ...
In space, no one can hear you scream, the saying goes, because sound waves can’t travel through the vacuum that extends across most of the universe. However, space can be downright noisy in the right conditions, such as the hot gas surrounding the immense black hole at the center of the Perseus galaxy cluster, according to NASA. The agency recently tweeted an eerie audio clip that represents actual sound waves rippling through the gas and plasma in this cluster, which is 250 million light years from Earth. "The misconception that there is no sound in space originates because most space is a ~vacuum, providing no way for sound waves to travel," the agency tweeted. "A galaxy cluster has so much gas that we've picked up actual sound. Here it's amplified, and mixed with other data, to hear a black hole!" Though the acoustic signals generated by the black hole were first identified in 2003 in data from NASA’s Chandra X-ray Observatory, they have never been brought into the hearing range of the human ear—until now. “In some ways, this sonification is unlike any other done before… because it revisits the actual sound waves discovered in data from NASA's Chandra X-ray Observatory,” NASA said in a statement. “In this new sonification of Perseus, the sound waves astronomers previously identified were extracted and made audible for the first time.” As it turns out, the sound waves in their natural environment are a whopping 57 octaves below the note middle C, making this black hole a real cosmic baritone. To make these tremors audible to humans, scientists raised their frequencies quadrillions of times (one quadrillion is a million billions, for perspective). The effect is so chilling that it would seem totally at home in a Halloween playlist. But it is just one of many trippy earworms from the space sonification genre, in which astronomical data of all kinds is converted into sound waves. 
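The arithmetic behind that pitch shift is easy to verify: each octave is a doubling of frequency, so 57 octaves corresponds to a factor of 2^57, roughly 144 quadrillion. A minimal sketch follows; the 261.63 Hz equal-temperament value for middle C is an assumption, since the article only says "middle C".

```python
# Each octave doubles the frequency, so a note 57 octaves below middle C
# means dividing by 2**57. The 261.63 Hz middle C is an assumed tuning value.
MIDDLE_C_HZ = 261.63
OCTAVES_BELOW = 57

factor = 2 ** OCTAVES_BELOW
print(f"2**57 = {factor:,}")  # 144,115,188,075,855,872 (~144 quadrillion)

black_hole_hz = MIDDLE_C_HZ / factor
print(f"black hole note: {black_hole_hz:.3e} Hz")

# Scaling by a power of two is exact in binary floating point, so raising
# the note back up by the same factor recovers middle C precisely:
print(black_hole_hz * factor == MIDDLE_C_HZ)  # True
```

This confirms the article's "quadrillions" figure: 2^57 is about 1.44 x 10^17, i.e. on the order of a hundred quadrillion.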
To that end, if you’re looking for some more off-Earth bops, check out these real recordings from Mars, the songs of gravitational waves, and the resonances of planetary systems.
In space, no one can hear you scream, the saying goes, because sound waves can’t travel through the vacuum that extends across most of the universe. However, space can be downright noisy in the right conditions, such as the hot gas surrounding the immense black hole at the center of the Perseus galaxy cluster, according to NASA. The agency recently tweeted an eerie audio clip that represents actual sound waves rippling through the gas and plasma in this cluster, which is 250 million light years from Earth. "The misconception that there is no sound in space originates because most space is a ~vacuum, providing no way for sound waves to travel," the agency tweeted. "A galaxy cluster has so much gas that we've picked up actual sound. Here it's amplified, and mixed with other data, to hear a black hole!" Though the acoustic signals generated by the black hole were first identified in 2003 in data from NASA’s Chandra X-ray Observatory, they have never been brought into the hearing range of the human ear—until now. “In some ways, this sonification is unlike any other done before… because it revisits the actual sound waves discovered in data from NASA's Chandra X-ray Observatory,” NASA said in a statement. “In this new sonification of Perseus, the sound waves astronomers previously identified were extracted and made audible for the first time.” As it turns out, the sound waves in their natural environment are a whopping 57 octaves below the note middle C, making this black hole a real cosmic baritone. To make these tremors audible to humans, scientists raised their frequencies quadrillions of times (one quadrillion is a million billions, for perspective). The effect is so chilling that it would seem totally at home in a Halloween playlist. But it is just one of many trippy earworms from the space sonification genre, in which astronomical data of all kinds is converted into sound waves. 
To that end, if you’re looking for some more off-Earth bops, check out these real recordings from Mars, the songs of gravitational waves, and the resonances of planetary systems.
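The record above quotes two figures worth cross-checking: a tone 57 octaves below middle C, and frequencies raised "quadrillions of times" to make it audible. A minimal sketch (assuming middle C at 261.63 Hz, a conventional value not stated in the article) shows the two claims are consistent, since each octave halves the frequency:

```python
import math

# Middle C, the reference pitch in the NASA description (Hz) -- an
# assumed conventional value, not taken from the article itself.
MIDDLE_C_HZ = 261.63

# Each octave down halves the frequency, so 57 octaves below middle C:
octaves_below = 57
black_hole_hz = MIDDLE_C_HZ / 2 ** octaves_below

# Factor needed to raise that tone back into the audible band:
scale_factor = 2 ** octaves_below

print(f"Perseus tone: {black_hole_hz:.3e} Hz")
print(f"Upscaling factor: {scale_factor:.3e}")

# The article says frequencies were raised "quadrillions of times"
# (one quadrillion = 1e15); 2**57 is about 1.44e17, i.e. ~144 quadrillion.
assert scale_factor > 1e15
```

The resulting tone is on the order of 1e-15 Hz, far below anything an instrument could play, which is why the sonification has to pitch-shift by a factor in the hundreds of quadrillions.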
yes
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://wtamu.edu/~cbaird/sq/2013/02/14/does-sound-travel-faster-in-space/
Does sound travel faster in space? | Science Questions with ...
Does sound travel faster in space? Sound does not travel at all in space. The vacuum of outer space has essentially zero air. Because sound is just vibrating air, space has no air to vibrate and therefore no sound. If you are sitting in a space ship and another space ship explodes, you would hear nothing. Exploding bombs, crashing asteroids, supernovas, and burning planets would similarly be silent in space. In a space ship, you could of course hear the other passengers because your ship is filled with air. Additionally, a living human will always be able to hear himself talk, breathe, and circulate blood, because the air in his space suit which sustains his life also transmits sound. But two astronauts in space suits floating around in space will not be able to talk to each other directly no matter how hard they yell, even if they are only inches away. Their inability to talk directly is not caused by their helmets getting in the way, but is rather caused by the vacuum of space not carrying sound at all. That is why space suits are equipped with two-way radio communicators. Radio is a form of electromagnetic radiation just like light and can therefore travel through the vacuum of space just fine. The astronaut's transmitter converts the sound waveform to a radio waveform and sends the radio waves through space to the other astronaut, where it is converted back to sound for the other human to hear. I suspect the entertainment industry portrays this principle incorrectly on purpose for dramatic effect. A silent exploding space ship is not as dramatic as a booming one.
Does sound travel faster in space? Sound does not travel at all in space. The vacuum of outer space has essentially zero air. Because sound is just vibrating air, space has no air to vibrate and therefore no sound. If you are sitting in a space ship and another space ship explodes, you would hear nothing. Exploding bombs, crashing asteroids, supernovas, and burning planets would similarly be silent in space. In a space ship, you could of course hear the other passengers because your ship is filled with air. Additionally, a living human will always be able to hear himself talk, breathe, and circulate blood, because the air in his space suit which sustains his life also transmits sound. But two astronauts in space suits floating around in space will not be able to talk to each other directly no matter how hard they yell, even if they are only inches away. Their inability to talk directly is not caused by their helmets getting in the way, but is rather caused by the vacuum of space not carrying sound at all. That is why space suits are equipped with two-way radio communicators. Radio is a form of electromagnetic radiation just like light and can therefore travel through the vacuum of space just fine. The astronaut's transmitter converts the sound waveform to a radio waveform and sends the radio waves through space to the other astronaut, where it is converted back to sound for the other human to hear. I suspect the entertainment industry portrays this principle incorrectly on purpose for dramatic effect. A silent exploding space ship is not as dramatic as a booming one.
no
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://www.skyatnightmagazine.com/space-science/sound-in-space
Is there sound in space? - BBC Sky at Night Magazine
Below we've taken a look at some bodies of the Universe capable of making sound, and how astronomers are able to hear them. Jupiter Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill Probably the most widespread use of sonification is by amateur astronomers and radio enthusiasts, who regularly tune in to Jupiter using short-wave receivers and antennae connected to speakers. First discovered in 1955, the radio sounds from Jupiter are diverse. They have been described variously as sounding like waves crashing on a beach, woodpeckers and whale song. The broadcasts from ‘Radio Jove’ come from natural radio lasers, caused by electrically conducting gas spewed into Jupiter’s magnetic field by volcanoes on its satellite Io. This gas collects in a doughnut-shaped torus around Jupiter and, as Io ploughs through it, creates magnetic ‘Alfven’ waves. These waves move along the lines of magnetic force in Jupiter’s field and transmit a staggering 40 trillion watts of power to its polar regions, powering the radio lasers that periodically sweep across the Earth. The Galileo mission at Jupiter Artist's impression of the Galileo spacecraft at Jupiter. Credit: NASA/JPL However, it is not only amateurs who use sonification in astronomy. The epic Galileo mission, which operated for nearly 8 years in Jupiter orbit, used sonification to present some of its results. In June 1996, Galileo passed close to the planet-sized moon Ganymede, and the onboard Plasma Wave Experiment discovered the presence of a magnetic field surrounding the moon – the first discovered in the Solar System. The flyby lasted a couple of hours, and when the magnetic data is compressed so that 45 minutes of spacecraft time becomes a sound file just one minute long, you can hear the loud burst of noise when Galileo enters the moon’s magnetosphere. As the spacecraft closes on Ganymede you hear a gradually rising tone that goes on to decline. 
This provides a direct measure of the density of charged particles in the space around Ganymede. The Sun Closer to home, we can listen in to the Sun using the techniques of helioseismology. In the 1960s it was discovered that the surface of the Sun rises and falls with a period of five minutes. This was discovered using spectroscopy and the Doppler Effect. If a sound-emitting object moves away from your ear, you hear the pitch of the sound drop; if it moves toward you, the pitch rises. A similar effect occurs with light waves: a receding object appears redder (known as redshift) and an approaching one, bluer (known as blueshift). A NASA image showing how wavelengths of light are stretched - or redshifted - as the Universe expands. Credit: NASA/JPL-Caltech//R. Hurt (Caltech-IPAC) At the end of the 1970s the Sun was observed from the Earth’s south pole during the Antarctic summer for a continuous 4.5 days, when scientists were able to measure its pulsation to a precision of 2cm per second. Since then, international networks of telescopes and space-based instruments have monitored the movement of our nearest star. The Sun’s oscillations are caused by sound waves trapped inside the solar interior, which are in turn produced by the convective turbulence in the outer 30% of the Sun. We see the effects of this turbulence as granulation on the surface of the photosphere. The sound waves caused by pressure fluctuations in this turbulence reflect inwards once they hit the Sun’s surface. The inward-moving waves are refracted (their direction is bent) by the increase of the speed of sound – due to the rising temperature within the Sun – before eventually returning to the surface. The combination of all these internal sound waves makes the Sun vibrate in millions of different ways. 
It is at this point that the mathematics becomes complicated, but it is enough to say that helioseismologists can use these patterns to infer the internal structure of the Sun in similar ways that seismologists use earthquake patterns to understand Earth’s interior. Stars Helioseismologists measure the speed at which acoustic waves travel through the Sun’s interior. Helioseismology can be thought of as a branch of asteroseismology. As the Sun is much closer than other stars, it has been easier to study how it vibrates. And what helioseismologists have learned about the Sun’s interior can be transferred to other stars of different types, with a view to probing their interiors using sound waves. The variability of some stars has been known for more than 400 years, since David Fabricus recorded the changing brightness of the most famous variable star, Mira, in 1596. Mira still pulsates today, which means that its variability is being sustained by something – if it wasn’t, the bulk of the star itself would damp out the motions. Credit: Digitized Sky Survey 2. Acknowledgment: Davide De Martin. This sustaining mechanism can be thought of as a layer within the star, opaque to certain wavelengths of radiation, that is trying to escape into space. This layer is forced to absorb the radiation, heating and expanding it, making the star swell up beyond its normal size. As the opaque layer heats up it becomes ionized, which makes it less opaque and allows the radiation to get through. As it is no longer absorbing radiation, the now transparent layer cools back down and is forced to contract by the weight of the gas above it, becoming opaque and starting the entire process once again. Another class of vibrating stars was discovered by Donald Kurtz at the University of Central Lancashire some 25 years ago. 
He was studying Przybylski’s star – an unusual, near-main sequence star whose spectrum revealed it to contain 100,000 times more metals than the Sun, the type of abundances you would expect from a much older and more evolved star. Conventional wisdom dictated that such a star should not pulsate, as its very strong magnetic field would stabilise it. Kurtz discovered that Przybylski’s star vibrated with a period of 12 minutes – energetic indeed for a star heavier than the Sun. Since then, more than 30 of these ‘rapidly oscillating peculiar A stars’ have been discovered, with pulsation periods ranging from five to 20 minutes, most of them by Kurtz and his team. They are amongst the strangest stars yet discovered by astronomers. Asteroseismology is far from being a mature field of study, but advances in instrumentation are accelerating its evolution. For Alpha Centauri A, for example (the nearest star visible to the naked eye), astronomers can now measure the speed of the surface oscillations to an accuracy of 2cm per second. 4 stars that make sounds 1 Xi Hydrae Credit: ESO/Digitized Sky Survey This star is 160 lightyears away, and measures around 10 times the diameter of the Sun, with about 60 times its luminosity. It oscillates with several periods of around three hours, like a ‘sub ultra bass’ instrument. It is the most massive star in which solar-type oscillations have been discovered. 2 Alpha Centauri A The nearest star to the Sun pulsates every seven minutes – very similar to the Sun. With a velocity of around 35cm per second, these pulsations make the star’s surface rise and fall by around 40m. This is an amazing observation on a star that measures some 1.75 million km across. 3 BPM 37093 Credit: Harvard-Smithsonian Center for Astrophysics BPM 37093 is a white dwarf – the core of a dead star – that is located 50 lightyears away.
It pulses as its brightness changes by around one per cent every five to 10 minutes, allowing astronomers to deduce that this star has partially crystallised into a gigantic diamond. 4 Vela Pulsar Credit: NASA/CXC/PSU/G.Pavlov et al. One of the youngest pulsars yet found, this supernova remnant rotates every 89 milliseconds. More than a dozen ‘glitches’ have been observed where it spins slightly faster by a matter of nanoseconds. This is caused by starquakes: huge stresses being released in a solid crust over a ‘superfluid’ interior. Black holes Credit: ESO, ESA/Hubble, M. Kornmesser It is not just the stars that produce sounds; other, more exotic phenomena have recently been shown to be ‘noisy’. In September 2003 a media fanfare accompanied the announcement of the discovery of sound waves emanating from a supermassive black hole in the Perseus cluster of galaxies about 250 million lightyears away. The observations were made by NASA’s Earth-orbiting Chandra X-Ray Observatory, which collected 53 hours of data on the phenomenon. Black holes themselves are invisible, but they create a riotous, chaotic and energetic environment in their immediate vicinity, as in-falling matter is accelerated to close to the speed of light. All types of radiation have been detected from these regions, from radio waves right through the electromagnetic spectrum to high-energy X-rays. Many black holes have also been discovered to emit powerful jets of matter along their axes of rotation. Perseus cluster In the Perseus cluster, these jets are thought to be slamming into the thin intergalactic gas surrounding the cluster. The results of this ongoing collision were revealed by the observations from Chandra to be immense circular ripples, spreading out in all directions, like those created by an object dropped into water. 
These ripples, or pressure waves, obey the same laws as sound waves and, even though no one can hear them, they are true sound waves travelling through very thin gas in space. Just as on Earth, where the pitch of a musical note created in air depends on either the sound’s wavelength, or the distance between each ripple of compression, the same is true of the Perseus black hole. The ripples seen by Chandra are 35,000 lightyears apart. Dubbed ‘the deepest note in the Universe’, this is what you would get if you could make a musical note 57 octaves below middle C. An X-ray image of the Perseus cluster showing sound waves thought to have been produced by a black hole. Credit: NASA/CXC/IoA/A.Fabian et al. The sounds made by black holes have not been directly heard – they have not travelled through space to the Earth as sound waves. Rather we can see the effect they have on the gas surrounding them using X-rays, like using visible light to witness the waves when a stone is dropped into a pond. A normal piano covers seven octaves of musical notes, but the Perseus black hole sings some 57 octaves below middle C – or more than a million billion times deeper than the range of human hearing. These sound waves are more than a curiosity, and may help explain how galactic clusters grow. It has been a long-standing puzzle to astronomers as to why there should be so much hot gas in clusters of galaxies. It would normally be expected to cool and fall towards the centre of the galaxies, forming lots of new stars as it clumps together. Heat from a large central black hole could be the answer. The jets of material streaming from the Perseus black hole form cavities in the thin cluster gas as they collide with it. Sound waves then travel outwards as huge ripples. The sheer amount of energy required to create these cavities is equivalent to around 100 million supernovae, a lot of which is carried away in sound waves. 
These then warm the cluster gas as they dissipate – preventing it from cooling and forming stars. The Big Bang A NASA graphic revealing the fluctuations of the Cosmic Microwave Background. Credit: NASA/JPL-Caltech/ESA Precise measurements of the Cosmic Microwave Background give a view of the Universe around 400,000 years after the Big Bang, just as it had cooled enough to allow the first atoms to form from the primordial plasma. Detailed studies of the minute variations in this background radiation suggest that, at this stage of the early Universe, titanic acoustic waves propagated through this plasma. Stars and galaxies had yet to form and all matter existed as a kind of hot fog in the rapidly expanding Universe. It is this foggy, thin gas that allowed sound waves to form and propagate. Tiny differences in the data from the Wilkinson Microwave Anisotropy Probe (WMAP) show peaks and troughs in these sound waves. So did the Big Bang actually bang? Analysis of the WMAP data by US astronomer Mark Whittle suggests that the Big Bang was initially silent, but the noise quickly grew into a "descending scream", followed by a "deep roar" and ending in a "deafening hiss", with a peak volume of around 110 decibels – equivalent to the noise of a rock concert. These sound waves had wavelengths of around 20,000 lightyears, around 50 octaves below the range of human hearing. Their discovery and measurement marks the beginning of a new era in precision cosmology.
The results of this ongoing collision were revealed by the observations from Chandra to be immense circular ripples, spreading out in all directions, like those created by an object dropped into water. These ripples, or pressure waves, obey the same laws as sound waves and, even though no one can hear them, they are true sound waves travelling through very thin gas in space. Just as on Earth, where the pitch of a musical note created in air depends on either the sound’s wavelength, or the distance between each ripple of compression, the same is true of the Perseus black hole. The ripples seen by Chandra are 35,000 lightyears apart. Dubbed ‘the deepest note in the Universe’, this is what you would get if you could make a musical note 57 octaves below middle C. An X-ray image of the Perseus cluster showing sound waves thought to have been produced by a black hole. Credit: NASA/CXC/IoA/A.Fabian et al. The sounds made by black holes have not been directly heard – they have not travelled through space to the Earth as sound waves. Rather we can see the effect they have on the gas surrounding them using X-rays, like using visible light to witness the waves when a stone is dropped into a pond. A normal piano covers seven octaves of musical notes, but the Perseus black hole sings some 57 octaves below middle C – or more than a million billion times deeper than the range of human hearing. These sound waves are more than a curiosity, and may help explain how galactic clusters grow. It has been a long-standing puzzle to astronomers as to why there should be so much hot gas in clusters of galaxies. It would normally be expected to cool and fall towards the centre of the galaxies, forming lots of new stars as it clumps together. Heat from a large central black hole could be the answer. The jets of material streaming from the Perseus black hole form cavities in the thin cluster gas as they collide with it. Sound waves then travel outwards as huge ripples.
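The Perseus record above gives a ripple spacing (wavelength) of 35,000 lightyears and a pitch 57 octaves below middle C. Those two numbers imply a sound speed via v = f·λ; a rough check (middle C at 261.63 Hz and the metre-per-lightyear conversion are assumed constants, not from the article) lands in the hundreds of km/s, plausible for hot intracluster gas:

```python
# Cross-check of the quoted Perseus numbers: wavelength of 35,000 ly and
# a frequency 57 octaves below middle C imply the sound speed in the gas.
LIGHTYEAR_M = 9.461e15          # metres per lightyear (assumed constant)
MIDDLE_C_HZ = 261.63            # assumed reference pitch

wavelength_m = 35_000 * LIGHTYEAR_M
frequency_hz = MIDDLE_C_HZ / 2 ** 57   # 57 octaves below middle C

sound_speed = frequency_hz * wavelength_m  # v = f * lambda
print(f"Implied sound speed: {sound_speed / 1e3:.0f} km/s")
```

The result, roughly 600 km/s, is only an order-of-magnitude sanity check, but it is in the range expected for gas at tens of millions of degrees, which supports the article's claim that these are genuine pressure waves in the cluster medium.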
yes
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://www.scienceabc.com/nature/universe/how-has-nasa-recorded-sound-if-sound-cannot-travel-in-space.html
If Sound Cannot Travel In Space How Has NASA Recorded Sound?
NASA has recorded magnetic and electric field waves associated with space events and translated this data into the human audible range. There are countless questions about the cosmos that have been haunting scientists for centuries. To answer some of them, we have sent orbiters, spacecraft and sometimes even humans to collect samples and make observations, but how do you study something that you can’t see? Humans are naturally able to hear and see only in certain specific frequencies and wavelengths. However, space has a multitude of waves that are beyond our narrow perceptions, so how do we study them? We translate, remodel and adapt them according to our needs so that we can observe and analyze them. There is simply no stopping science! If you wish to buy/license this video, please write to us at [email protected]. Why Can’t Sound Travel In Space? Sound waves are nothing but air vibrations. When these vibrations are in the range of 20 Hz to 20 kHz, we can hear them! Sound waves basically travel by vibrating the particles in a medium, i.e., molecules of air. These vibrations are passed on to consecutive particles in the medium, meaning that sound waves cannot travel without a medium. The reason we can’t hear sound in the space is typically due to a lack of such a medium. We may argue that there are clouds of gases in space that can act as mediums, but gases are not present uniformly throughout the space. Moreover, gases are typically less dense in space, which means there is too large of a gap between the particles, so vibrations cannot travel efficiently. How Do Scientists Hear The Sounds Of The Universe? To begin with, scientists cannot actually ‘hear’ space sounds, but they do have the means to examine the space waves by converting them into sound waves. Sonification Sonification is the conversion of any non-auditory data into sound, and is analogous to data visualization. 
A conversion technique is called Sonification if it fulfills certain criteria: Reproducibility, i.e., important elements of the data remain the same, regardless of the conditions under which the Sonification is done. The data should be sonified in a way that even untrained listeners can make a distinction. Space is full of radio waves, plasma waves, magnetic waves, gravitational waves and shock waves, all of which can travel in space without a medium. These waves are recorded by instruments that can sense these waves, and the data is transferred to earth-based stations, where the waves are sound coded. Any audible sound has variables like frequency, amplitude and rhythm. Different space waves are matched with different properties of the sound (frequency, amplitude, etc.) in different proportions to get a sound. EMFISIS NASA has an instrument called the EMFISIS (Electric and Magnetic Field Instrument Suite and Integrated Science) plugged into its two Van Allen Probe spacecraft that measures magnetic and electric interference as they circle the earth. There are three electric sensors that measure the electric disturbances and three magnetometers that measure the fluctuations in magnetic fields. Some of the electromagnetic waves lie in the audible frequency range, which works as a base for scientists to translate the remaining recorded frequencies into the audible range in order to interpret data. This knowledge about waves and their tones helps us understand the pattern they follow. Additionally, these are just the waves that are near Earth’s atmosphere. Visualization of space waves (Photo Credit : Serg-DAV/Shutterstock) Although the scientific community has long been abuzz with questions related to the sun and its interior, we also know that it is impossible for any satellite or spacecraft to travel to the sun without burning up. The scientific observation of the sun is also nearly impossible due to its brightness.
This leaves us with the option to observe the field waves that circle the sun and the natural vibrations that arise from the sun. MDI The sun’s surface is convecting due to sound waves of very low amplitude being produced. NASA has generated solar sounds from the data collected over a period of 40 days by the Solar and Heliospheric Observatory’s (SOHO) Michelson Doppler Imager (MDI). This data was processed as follows: The Doppler velocity data obtained from the MDI (Michelson Doppler Imager) was averaged over the solar disc of the sun. Processing was done so that spacecraft’s motion effects and spurious noises were removed. A filter was then used to select the clean sound waves. Finally, the data was interpolated so that all the missing spots were covered. The data was then scaled to fit in the audible frequency range. This is just one method adopted by scientists to study the sounds of space. There are also sensors that measure the electrical activity of the dust when a comet passes by a spacecraft! ‘Giant Leaps’ is a melody composed by NASA that describes the amount of scientific activity related to the moon. Every sound in the music exists because of data we have acquired. The higher the pitch in a given section, the higher the number of scientific publications during that period. Oh, and space waves are far from what you typically hear in movies. Don’t expect the booms and whooshes. Space waves are more like sirens and whistles! How Helpful Are The Sounds Of Space? Dozens of space sounds have gone through the Sonification process. The human auditory system is unique in the sense that it can identify patterns, so we recognize if a certain tone is repetitive or not. This capability has been used by scientists to segregate and identify data. If you look at a data set and decipher it, it would make more sense if you could hear it, rather than analyzing a screen of spikes or a chart. 
This is why Sonification has become the go-to method for analyzing space occurrences. Robert Alexander, a sonification specialist with the Solar and Heliospheric Research Group at the University of Michigan, while studying solar data, heard a hum whose frequency corresponded to the rotation period of the sun. This sound implied that it would probably be periodic. This helped him deduce that there are both fast and slow solar winds that periodically strike the earth. This is just one example; sonification has also revealed that Jovian lightning exists. It has helped explore the shock waves that form when a planet’s magnetic field hinders a solar wind, and so much more! Scientists have converted these sounds into music by applying digital technologies. This practice of sonification has been used for innovative collaborations between European Southern Observatory (ESO) Fellow Chris Harrison and visually impaired University of Portsmouth astronomer Dr. Nicolas Bonne. Dr. Bonne has created a musical wherein he has given touchable forms to stars and black holes. He and his team have reimagined the stars by associating audio volume with a star’s brightness, the tone with the star’s color, and so on. This show was basically an attempt to open the wonderful cosmic world to an audience that may be visually impaired, considering that astronomy is largely associated with vision and observation. The sonified data used here might not be the representation of space waves, which proves that sonification has far-reaching effects in the field of science and elsewhere. Science has always been multi-dimensional, and human curiosity has led to some truly amazing discoveries. The study of space by sonification is one such breakthrough that has empowered and enabled us to peer into the depths of space, even though we lack the ability to ‘look’ at the universe.
These vibrations are passed on to consecutive particles in the medium, meaning that sound waves cannot travel without a medium. The reason we can’t hear sound in the space is typically due to a lack of such a medium. We may argue that there are clouds of gases in space that can act as mediums, but gases are not present uniformly throughout the space. Moreover, gases are typically less dense in space, which means there is too large of a gap between the particles, so vibrations cannot travel efficiently. How Do Scientists Hear The Sounds Of The Universe? To begin with, scientists cannot actually ‘hear’ space sounds, but they do have the means to examine the space waves by converting them into sound waves. Sonification Sonification is the conversion of any non-auditory data into sound, and is analogous to data visualization. A conversion technique is called Sonification if it fulfills certain criteria: Reproducibility, i.e., important elements of the data remain the same, regardless of the conditions under which the Sonification is done. The data should be sonified in a way that even untrained listeners can make a distinction. Space is full of radio waves, plasma waves, magnetic waves, gravitational waves and shock waves, all of which can travel in space without a medium. These waves are recorded by instruments that can sense these waves, and the data is transferred to earth-based stations, where the waves are sound coded. Any audible sound has variables like frequency, amplitude and rhythm. Different space waves are matched with different properties of the sound (frequency, amplitude, etc.) in different proportions to get a sound. EMFISIS NASA has an instrument called the EMFISIS (Electric and Magnetic Field Instrument Suite and Integrated Science) plugged in to its two Van Allen Probe spacecraft that measures magnetic and electric interference as they circle the earth. 
There are three electric sensors that measure the electric disturbances and three magnetometers that measure the fluctuations in magnetic fields. Some of the electromagnetic waves lie in the audible frequency range, which works as a base for scientists to translate the remaining recorded frequencies into the audible range in order to interpret data.
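The sonification pipeline described above (record slow field oscillations, then shift them into the audible band) rests on one trick: replaying data sampled at one rate back at an audio rate multiplies every frequency in the signal by the ratio of the two rates. This is a minimal illustrative sketch of that idea, not NASA's actual EMFISIS or MDI processing; the function name and the 0.003 Hz example signal are hypothetical:

```python
import math

def sonification_speedup(sample_rate_in, playback_rate=44_100):
    """Replaying a signal recorded at sample_rate_in samples/second at an
    audio playback_rate raises every frequency in it by this factor."""
    return playback_rate / sample_rate_in

# Hypothetical example: a 0.003 Hz oscillation sampled once per second
# (far below the ~20 Hz floor of human hearing).
data_hz = 0.003
signal = [math.sin(2 * math.pi * data_hz * t) for t in range(100_000)]

factor = sonification_speedup(sample_rate_in=1.0)
audible_hz = data_hz * factor
print(f"Speedup {factor:,.0f}x turns {data_hz} Hz into {audible_hz:.0f} Hz")
```

Here a tone far below human hearing becomes a comfortably audible one around 132 Hz, which is the same scaling idea (at vastly larger factors) used for the black-hole and solar sonifications in these articles.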
no
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://www.sciencealert.com/sound-can-travel-through-space-after-all-but-we-can-t-hear-it
Sound Can Travel Through Space After All - But We Can't Hear It ...
Sound Can Travel Through Space After All – But We Can't Hear It It's a fact well-known enough to be the tagline to the 1979 sci-fi horror blockbuster Alien: "In space, no one can hear you scream." Or to put it another way, sound can't be carried in the empty vacuum of space - there just aren't any molecules for the audio vibrations to move through. Well, that is true: but only up to a point. As it turns out, space isn't a complete and empty void, though large swathes of it are. The interstellar gas and dust left behind by old stars and sometimes used to create new ones does have the potential to carry sound waves - we just aren't able to listen to them. The particles are so spread out, and the resulting sound waves are of such a low frequency, that they're beyond the capabilities of human hearing. As Kiona Smith-Strickland explains at Gizmodo, sounds travel as molecules bump into each other, the same way that ripples spread out when you drop a stone into a pond: as the ripples get farther and farther away, the sound gradually loses its force, which is why we can only hear sounds generated near to us. As a sound wave passes, it causes oscillations in the air pressure, and the time between these oscillations represents the frequency of the sound (measured in Hertz); the distance between the oscillating peaks is the wavelength. If the distance between the air particles is greater than this wavelength, the sound can't bridge the gap and the 'ripples' stop. Therefore, sounds have to have a wide wavelength - which would come across as a low pitch to our ears - in order to make it from one particle to the next out in certain parts of space. Once sounds go below 20 Hz, they become infrasounds, and we can't hear them. One example noted by Gizmodo is of a black hole, which emanates the lowest note scientists know about so far: it's about 57 octaves below middle C and well below our hearing range (about a million billion times deeper than the sounds we can hear). 
You'd expect to be able to measure about one oscillation every 10 million years in a black hole sound, whereas our ears stop short with sounds that oscillate 20 times per second. Back on our own planet, the sounds of very strong earthquakes are sometimes intense enough to make it out into space, and infrasound can carry on going where normal sound has to pull up. For a short amount of time after the Big Bang (about 760,000 years), the Universe was dense enough for normal sounds to pass through it. And if you hear the sound of a planet or spacecraft exploding in a Star Wars movie, remember that the filmmakers are taking liberties: chances are you wouldn't hear much of it at all.
Sound Can Travel Through Space After All – But We Can't Hear It It's a fact well-known enough to be the tagline to the 1979 sci-fi horror blockbuster Alien: "In space, no one can hear you scream." Or to put it another way, sound can't be carried in the empty vacuum of space - there just aren't any molecules for the audio vibrations to move through. Well, that is true: but only up to a point. As it turns out, space isn't a complete and empty void, though large swathes of it are. The interstellar gas and dust left behind by old stars and sometimes used to create new ones does have the potential to carry sound waves - we just aren't able to listen to them. The particles are so spread out, and the resulting sound waves are of such a low frequency, that they're beyond the capabilities of human hearing. As Kiona Smith-Strickland explains at Gizmodo, sounds travel as molecules bump into each other, the same way that ripples spread out when you drop a stone into a pond: as the ripples get farther and farther away, the sound gradually loses its force, which is why we can only hear sounds generated near to us. As a sound wave passes, it causes oscillations in the air pressure, and the time between these oscillations represents the frequency of the sound (measured in Hertz); the distance between the oscillating peaks is the wavelength. If the distance between the air particles is greater than this wavelength, the sound can't bridge the gap and the 'ripples' stop. Therefore, sounds have to have a wide wavelength - which would come across as a low pitch to our ears - in order to make it from one particle to the next out in certain parts of space. Once sounds go below 20 Hz, they become infrasounds, and we can't hear them. One example noted by Gizmodo is of a black hole, which emanates the lowest note scientists know about so far: it's about 57 octaves below middle C and well below our hearing range (about a million billion times deeper than the sounds we can hear).
yes
Astronomy
Is there sound in space?
yes_statement
"sound" exists in "space".. there is "sound" in "space".
https://gizmodo.com/there-actually-is-sound-in-outer-space-1738420340
There Actually Is Sound in Outer Space
Space isn’t uniform nothingness. It’s full of stuff. In between the stars, there are clouds of gas and dust. These clouds are sometimes the remains of old stars that went out in a blaze of explosive glory, and they’re the regions where new stars form. And some of that interstellar gas is dense enough to carry sound waves, just not sound perceptible to humans. Here’s How It Works When an object moves — whether it’s a vibrating guitar string or an exploding firecracker — it pushes on the air molecules closest to it. Those displaced molecules bump into their neighbors, and then those displaced molecules bump into their neighbors. The motion travels through the air as a wave. When the wave reaches your ear, you perceive it as sound. As a sound wave passes through the air, the air pressure in any given spot will oscillate up and down; picture the way water gets deeper and shallower as waves pass by. The time between those oscillations is called the sound’s frequency, and it’s measured in units called Hertz; one Hertz is one oscillation per second. The distance between “peaks” of high pressure is called the sound’s wavelength. Sound waves can only travel through a medium if the length of the wave is longer than the average distance between the particles. Physicists call this the “mean free path” — the average distance a molecule can travel after colliding with one molecule and before colliding with the next. So a denser medium can carry sounds with shorter wavelengths, and vice versa. Sounds with longer wavelengths, of course, have lower frequencies, which we perceive as lower pitches. In any gas with a mean free path larger than 17 m (the wavelength of sounds with a frequency of 20 Hz), the waves that propagate will be too low-frequency for us to hear them. These sound waves are called infrasound.
If you were an alien with ears that could pick up these very low notes, you’d hear really interesting things in some parts of space. The Song of a Black Hole Ripples in interstellar gas, produced by sound waves from a supermassive black hole. Image credit: NASA. About 250 million light years away, at the center of a cluster of thousands of galaxies, a supermassive black hole is humming to itself in the deepest note the universe has ever heard (as far as we know). The note is a B-flat, about 57 octaves below middle C, which is about a million billion times deeper than the lowest frequency sound we can hear (yes, that’s an actual number from actual scientists). The deepest sound you’ve ever heard has a cycle of about one oscillation every twentieth of a second. The drone of Perseus’ black hole has a cycle of about one oscillation every 10 million years. That’s sound on a massive scale, played across deep time. We know this because in 2003, NASA’s Chandra X-ray space telescope spotted a pattern in the gas that fills the Perseus Cluster: concentric rings of light and dark, like ripples in a pond. Astrophysicists say those ripples are the traces of incredibly low frequency sound waves; the brighter rings are the peaks of waves, where there’s the greatest pressure on the gas. The darker rings are the troughs of the sound waves, where the pressure is lower. Hot, magnetized gas rotates around the black hole, more or less like water swirling around a drain. All that magnetized material in motion generates a powerful electromagnetic field. The field is strong enough to accelerate material away from the brink of the black hole at nearly the speed of light, in huge bursts called relativistic jets. These relativistic jets force gas in their path out of the way, and that disturbance produces deep cosmic sound waves.
That deep intergalactic sound carried through the Perseus Cluster for hundreds of thousands of light years from its source, but sound can only travel as far as there’s enough gas to carry it, so Perseus’ infrasound drone stops at the edge of the gas cloud that fills its cluster of galaxies. That means we can’t detect the sound here on Earth; we can only see its effects on the gas cloud. It’s like we’re staring across space into a soundproofed chamber. A Groaning Planet Closer to home, our planet makes a deep groan every time its crust shifts, and sometimes those low-frequency sounds carry all the way into space. During an earthquake, the ground’s shaking can produce vibrations in the atmosphere, usually with a frequency between one and five Hz. If the earthquake is strong enough, it can send infrasound waves up through the atmosphere to the edge of space. Of course, there’s no clear line where Earth’s atmosphere stops and space begins. The air just gradually gets thinner until eventually there’s none. From about 80 to about 550 kilometers above the surface, the mean free path of a molecule is about a kilometer. That means the air at this altitude is about 59 times too thin for audible sound waves to travel through, but it can carry the longer waves of infrasound. When a magnitude 9.0 earthquake shook the northeastern coast of Japan in March 2011, seismographs around the world recorded how its waves passed through the Earth, and the earth’s vibrations also set off low-frequency vibrations in the atmosphere. Those vibrations traveled all the way up to where the European Space Agency’s Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite maps Earth’s gravity from low orbit, 270 kilometers above the surface. And the satellite recorded those sound waves - sort of. GOCE has very sensitive accelerometers on board, which control the ion engine that helps keep the satellite in a stable orbit.
On March 11, 2011, GOCE’s accelerometers detected vertical displacement in the very thin atmosphere around the satellite, along with wavelike shifts in air pressure, as the sound waves from the earthquake passed by. The satellite’s thrusters corrected for the displacement and saved the data, which became an indirect recording of the earthquake’s infrasound. The indirect recording was buried in the satellite’s thruster data until a team of researchers led by Raphael F. Garcia happened across it and published a paper on their findings. The First Sound in the Universe And if you could somehow travel back in time to the first 760,000 years after the Big Bang (we’ve already turned you into an alien who can hear in infrasound, so of course you can also travel through time, right?), you could have heard the sound of the universe growing. Until about 760,000 years after the Big Bang, the matter in the universe was still densely packed enough that sound waves could travel through it — and they did. Around this time, the first photons were also beginning to travel through the universe as light. Things had finally cooled enough after the Big Bang to allow subatomic particles to condense into atoms. Before that cooling happened, the universe was full of charged particles - protons and electrons - that either absorbed or scattered photons, the particles (sort of) that make up light. When the protons and electrons started to form neutrally charged atoms, light was free to shine all over the place. Today, that light reaches us as a faint glow of microwave radiation, visible only to very sensitive radio telescopes. Physicists call it the cosmic microwave background. It’s the oldest light in the universe, and it contains a recording of the oldest sound in the universe. Remember that sound waves travel through the air (or interstellar gas) as oscillations in pressure.
When you compress a gas, it gets hotter; on a large scale, that’s actually how stars form. And when a gas expands, it cools. The sound waves traveling through the early universe caused faint variations in pressure in the gaseous medium, which in turn left faint variations in temperature etched into the cosmic microwave background. Using those temperature variations, University of Washington physicist John G. Cramer managed to reconstruct the sounds of the expanding universe. He had to multiply the frequency by a factor of 10^26 just to make it audible to human ears. (Listen to it here or in the video above.) So it’s still true that no one can hear you screaming in space, but there are sound waves moving through the clouds of gas between the stars or in the rarefied wisps of Earth’s outer atmosphere. 20kHz is a new blog exploring the technology and science behind music and sound. Follow us @20kHz.
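The "57 octaves below middle C" claim above can be checked with back-of-the-envelope arithmetic. Here is a sketch assuming the equal-tempered B-flat just below middle C (~233.08 Hz), a value not given in the article; each octave down halves the frequency.

```python
# Back-of-envelope check of the Perseus black hole's note:
# B-flat transposed down 57 octaves, i.e. divided by 2**57.
# Assumes B-flat below middle C ~233.08 Hz (equal temperament).

SECONDS_PER_YEAR = 3.156e7

b_flat_hz = 233.08
deep_note_hz = b_flat_hz / 2**57            # ~1.62e-15 Hz
period_years = 1.0 / deep_note_hz / SECONDS_PER_YEAR

print(f"{deep_note_hz:.2e} Hz")             # far below the 20 Hz hearing limit
print(f"{period_years:.1e} years per cycle")
```

The result is a period of roughly 2e7 years per oscillation, the same order of magnitude as the article's "one oscillation every 10 million years"; the exact figure depends on which reference pitch is assumed.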
Space isn’t uniform nothingness. It’s full of stuff. In between the stars, there are clouds of gas and dust. These clouds are sometimes the remains of old stars that went out in a blaze of explosive glory, and they’re the regions where new stars form. And some of that interstellar gas is dense enough to carry sound waves, just not sound perceptible to humans. Here’s How It Works When an object moves — whether it’s a vibrating guitar string or an exploding firecracker — it pushes on the air molecules closest to it. Those displaced molecules bump into their neighbors, and then those displaced molecules bump into their neighbors. The motion travels through the air as a wave. When the wave reaches your ear, you perceive it as sound. As a sound wave passes through the air, the air pressure in any given spot will oscillate up and down; picture the way water gets deeper and shallower as waves pass by. The time between those oscillations is called the sound’s frequency, and it’s measured in units called Hertz; one Hertz is one oscillation per second. The distance between “peaks” of high pressure is called the sound’s wavelength. Sound waves can only travel through a medium if the length of the wave is longer than the average distance between the particles. Physicists call this the “mean free path” — the average distance a molecule can travel after colliding with one molecule and before colliding with the next. So a denser medium can carry sounds with shorter wavelengths, and vice versa. Sounds with longer wavelengths, of course, have lower frequencies, which we perceive as lower pitches. In any gas with a mean free path larger than 17 m (the wavelength of sounds with a frequency of 20 Hz), the waves that propagate will be too low-frequency for us to hear them. These sound waves are called infrasound.
If you were an alien with ears that could pick up these very low notes, you’d hear really interesting things in some parts of space.
yes
Astronomy
Is there sound in space?
no_statement
"sound" does not exist in "space".. there is no "sound" in "space".
https://blreview.org/nonfiction/hal-9000-bach-and-the-personal-physics-of-going-deaf/
Hal-9000, Bach, and the Personal Physics of Going Deaf – Bellevue ...
Hal-9000, Bach, and the Personal Physics of Going Deaf Laura Hope-Gill There is no sound in space. Beyond our noisy atmosphere stretches an infinite quiet. There are waves in space, but they are not sound waves. They are simply waves of silence moving through. All that vibrates keeps to itself, does not shout, scrape, or otherwise draw sonic attention. Black holes erupt in their introverted manner. The sun splashes itself again and again with its magnificent tidal flames. And not a sound comes from any of this. Solar systems are born, stars collide. Deafness prevails. Earth, in comparison to its surroundings, is a noisy planet. We talk almost all the time. DVD players and iPods keep sound flowing directly into our heads. We use electronic devices to broadcast TV and radio around the globe and beyond. We send signals out in search of someone else to talk to. We rely on the molecular vibrations we call sound to feel “at home” in what we perceive to be a lonely and too quiet universe. In Stanley Kubrick’s 2001: A Space Odyssey, the computer, Hal-9000, cuts off Frank Poole’s air and sends him drifting into space. We see this scene through the computer’s unchanging red eye; we hear the sounds of machinery and Poole’s breathing. The breaths are loud in the way that my own breathing was loud when I once snorkeled with a faulty mask in Bermuda. As I entered the preliminary stages of drowning it was the only sound in my world. At first Poole’s breath is even—it is our breath, normal and safe. As Poole enters the preliminary stages of his own death, the amplified breath becomes irregular. Hal is killing him. Watching the scene, I find my own breath matching Poole’s. I stop breathing when he does. We only hear breathing when we are watching from Hal’s perspective; watching Poole die floating in space, we hear nothing. We see only the convulsion of a suffocating man. Perhaps it is this that makes this scene so terrifying. 
The air we breathe, like the sound we hear, does not exist in space. It is the absence of air that causes the absence of sound. And I wonder if we don’t equate silence with suffocation. We feel, at some level, that if we stop talking the world will stop moving, as though the vibrational nature of speech is what keeps everything in motion. What would happen if we were to fall quiet? Are we uncomfortable with silence because it makes us feel “deaf,” a near homophone of “death?” As quietness takes up ever more space in my world as a result of my progressive deafness, I have had to come face to face with my own relationship with sound. As I’ve grown accustomed to hearing aids, I’ve had to learn about frequencies and decibels and waves as I’ve had to learn sign language. Sound has ceased being an abstract, and the laws that rule sound have become laws that also rule me. Deafness has made the universal laws of physics personal. I watched Kubrick’s 2001 in 1980 (first row, balcony, in an old theater in Ybor City, Florida) and yet I can remember it as though it’s playing right now on a little insert screen on my laptop. The feeling I had when I first saw Poole go drifting off has been a fundamental feature of my emotional landscape. The image has a signpost sticking out of it, one which reads “Worst Fear Imaginable.” It’s that peculiar brand of cinematic knowledge, suspended disbelief, that remains forever in my imagination. My belief is still suspended in a state of perfect silence, death. Poole just keeps on drifting, suffocating in his spacesuit, drifting forever, away, away, away. In therapy one afternoon, this image came up as a metaphor for losing my hearing. It embodies the terror I feel, at times, of going deaf—the isolation, the disconnection, the knowledge that no one will go after Poole. And the question of who will come after me. Going deaf can be a peaceful experience. There are definitely times when I am secretly grateful for it. 
Stereo turned on high in the car next to me at the stoplight? I turn off my hearing aids. Clatter of dishes in the restaurant where I’m dining alone? I turn off my hearing aids. Airplanes? Off. Standing in line at the bank? Off. I don’t have to listen to anything that I don’t want to. In such ways, my evolving deafness is a gift. With deafness, multiple dimensions of being in the world appear that I otherwise could not find. In the deafness there are stolen moments of quietude denied those with perfect hearing. My sign language teacher refers to it not as “going deaf” but “acquiring silence.” In the real year 2001, at age 32, I learned that I was going deaf. In the audiologist’s dark little booth a battery of beeps and words revealed that I had lost 30% of my hearing without even knowing it. I had been living among the spoken world like I belonged in it, but at some point I had become ill-equipped to remain there. Subconsciously, the rogue mind within me devised masterful means of concealing the truth. In addition to having manipulated my listening skills to keep my condition from myself, I had withdrawn from social situations and maintained a life of almost perfect solitude, even while living with a boyfriend who would eventually, and justly, lose patience and shut me out just as I had shut him out. I had moved into another world, one whose physics are quite different. Sound forms a sort of multilayered shell around the hearing. Unlike vision, which is three dimensional, sound is poly-dimensional. Sound waves, unlike those of light, do not move in a straight line but expand in all directions, like an ocean, and each point on any wave is the source of another spherical expansion, like an ocean in a storm of crosswinds. If light behaved like sound, we would be blinded. The effect of sound’s behavior is our perpetual surround-sound world which movie theatres and home entertainment systems try so hard to emulate. 
Sound wraps us up and carries us through the world, letting us know where things are and where we are, promoting both the feeling and indeed the reality of being safe in the world. Sound orients an ever-changing earthly environment. The waves begun by a chirping bird atop the spruce tree behave like a breadcrumb trail, letting us know how to find the bird, so we may be certain it is a cedar waxwing as we’d guessed. When we move away from the bird, its call weakens, letting us know we’re looking in the wrong direction. Sound cues guide us through our days, not only for basic survival but for aesthetic purposes as well. As the silence enters, I lose both. It has taken me years to understand why newly acquired deafness is so disorienting, why my natural loss of the cedar waxwing’s call has left me unsure in the world, even if I can hear it with my hearing aids in. Hearing aids operate in the realm of virtual reality, translating the real sonic environment into a replica that can be experienced by their wearers. But they fail to reproduce the wash of sound that hearing people experience without knowing it. The best they can offer is a funnel through which the analog juice of world sounds can be digitally poured, drained of zest. Sound is frighteningly precise and discrete. The precision affords the multidimensional experience of being in this world of birdsong, traffic jams, trains, airplanes, Segovia, cell phones, blue light specials, shouting parents, ambulances, and the beeps of a bus in reverse, without getting overwhelmed and confused. In the real world, sound obeys a completely different set of laws from those of matter. Whereas no two objects can occupy the same space, an infinite number of sound waves can do so without having any effect on each other. 
This is why it is possible for a mother bat to identify the cries of her baby in a cave of crying baby bats, or how hearing people can hold conversations in crowded restaurants and not have the voice from the man at table 27 seem to flow out of the mouth of the woman at table 5, or how a Bach fugue can present eight independent melodies without creating a cacophony. Sound holds the hearing world in its shell as though we are its hatchlings yet unhatched, still warm within the membrane of the world’s vibrating song. The world within has become familiar and whole, approaching us not only on an auditory level, but on psychological, physical, and emotional ones as well. When I lived alone in a little cabin on Sequim Bay, Washington for a year at what I believe was the start of my undetected deafness, I listened to the Yo Yo Ma recording of the Bach Cello Suites every day, sometimes all day long, particularly when it snowed. It was the soundtrack to my year in a place now solidly interwoven with my notion of God, my year of solitude. When I was snowed in for six days, this piece of music echoed the heron’s cautious movements on the frosted shore. It rendered in the air of my cabin the sonic portrait of seals swimming in a blizzard and the quiet of the bay at night after the snow had stopped and the moon glowed as though from underground. I cannot hear this music anymore the way I could hear it then. One of the misconceptions of deafness is that losing external sound equates with losing internal sound. But the melodies and voices of the internal world remain entirely intact. My deafness, in fact, is far from silent. There is music flowing through my mind almost all the time. There is no such thing as inner deafness, unless we are speaking metaphorically, biblically. I could hum the Cello Suite No. 1 as I remember it right now. It’s deep inside me. The trick comes when I am trying to hear the music that dwells outside of me. That’s the stuff I don’t hear well.
With the hearing aids in, I hear an equalized version of Bach, but not the same one that Yo Yo Ma creates. Hearing music with hearing aids is perhaps more accurate, allowing me to “catch” more notes, but it is strangely one-dimensional and does not improve my experience of listening. Without my hearing aids, I lose many notes—that is true— but I complete the music with my imagination, that wealth of all the music I have loved. This imaginal hearing is completely rich and multidimensional. My mind completes music the way a person who loves and knows another person really well can complete that person’s sentences. It is the same way that the imagination completes the Parthenon and sees its beauty, rather than registering a “ruin.” I “listen to music” without my hearing aids and turn the speakers up high. I enjoy this co-creative process of imagining without being aware that it is happening. With my hearing aids in, I’m listening only to notes. If I could have my hearing back just for five minutes, I would use those five minutes to hear the Cello Suite No.1 one more time, un-deadened by the hearing aids. Not only because of the memories it would stir in me of that year in Sequim Bay but because I think that hearing this suite is perhaps the truest experience of listening available to hearing people. The entire piece seems to occur in its own melodic universe, suspended apart from all others by its own set of rules, its own economy of note and tactic, a freedom from linear time. We know that Bach’s minimalism was a reaction to the wildly ornate musical fashion of his age, that he cut notes relentlessly until he was left with what he felt was the essence of the music. In his minimalism we hear the beauty of perfection only our imaginations can achieve. Deafness is an ongoing exercise in minimalism. I have learned that a spectacular number of words and sounds can be left out and the world will make neither more nor less sense. 
I have learned that just as sound is spherical, the loss of it is equally multi-dimensional and polycentric. Deafness and devices designed to enhance hearing deny me the spherical nature of sound and leave me somewhat adrift with few cues for orientation. The fugal nature of this loss spins out over the course of time, delineating a profound change in my personal life, and it also spins an all-encompassing web around my world and the people in it. What was once a soundtrack to my winter in Sequim Bay is now the soundtrack for this enduring moment of going deaf. I am living in Suite No.1. My world may be virtual, it may be floating away—but it is beautiful. The great danger of the late-deafened individual is the possibility that she or he will simply “spin out” and let go of the world that has been home, without grasping the next dimension—that of life as a deaf, but not dead, person. I see these people often, ones who cling to the hearing world without being able to participate because the people around them do not know they now need to reach out further. Poole spins out from the spaceship and continues spinning out into the silence of outer space. I was terrified that this would happen to me, and there are, in fact, times when it does. I spin out when I am in a group and the conversation flies between mouths that aren’t facing me. I spin out when I’ve gauged the value of the conversation and determined I can do without it, although it would be nice to have it. The “hard of hearing” make such decisions all day long. A friend of mine once remarked this is comparable to the time she lived in Russia and had to either consciously choose to try to learn the language or else simply drift off, not acknowledging her own presence or anyone else’s in the room. I agreed, pointing out that the traveler in the foreign land usually knows there is a return home eventually, whereas the person on the journey to silence knows only that silence will continue to deepen and widen.
But I am selfish and shortsighted in my argument: the traveler needs to learn the language of the country she is visiting and the late-deafened adult needs to learn the language of deafness. While there is silence on Earth as well as in outer space, there are sign language and speech-reading here on Earth and a world of people speaking both. If my companions aren’t making a conscious effort to include me, and if I am not making a conscious effort to remind them to do so, I do the Poole thing and float off. It is not so much terrifying as it is sad, because at these moments, I am excluded from the world of talk. Even if I have weighed the importance of the talk and decided I can live without it, I am losing out. We probably can live without most conversation. But for most humans, with the exceptions of the whistling nuns in Brooklyn and certain varieties of monks, conversation defines life here on the chatter planet. It is another music which, pending our spiritual transcendence, we need in order to feel fully alive. My deaf tendency is to sacrifice myself to the convenience of others. I let something go that I ought to seize hold of. I am certain there were moments when Bach realized he had taken away one too many notes and killed the beauty of a Cello Suite, so he put that note back in. Being deaf requires we let go of some things. But not everything.
Hal-9000, Bach, and the Personal Physics of Going Deaf Laura Hope-Gill There is no sound in space. Beyond our noisy atmosphere stretches an infinite quiet. There are waves in space, but they are not sound waves. They are simply waves of silence moving through. All that vibrates keeps to itself, does not shout, scrape, or otherwise draw sonic attention. Black holes erupt in their introverted manner. The sun splashes itself again and again with its magnificent tidal flames. And not a sound comes from any of this. Solar systems are born, stars collide. Deafness prevails. Earth, in comparison to its surroundings, is a noisy planet. We talk almost all the time. DVD players and iPods keep sound flowing directly into our heads. We use electronic devices to broadcast TV and radio around the globe and beyond. We send signals out in search of someone else to talk to. We rely on the molecular vibrations we call sound to feel “at home” in what we perceive to be a lonely and too quiet universe. In Stanley Kubrick’s 2001: A Space Odyssey, the computer, Hal-9000, cuts off Frank Poole’s air and sends him drifting into space. We see this scene through the computer’s unchanging red eye; we hear the sounds of machinery and Poole’s breathing. The breaths are loud in the way that my own breathing was loud when I once snorkeled with a faulty mask in Bermuda. As I entered the preliminary stages of drowning it was the only sound in my world. At first Poole’s breath is even—it is our breath, normal and safe. As Poole enters the preliminary stages of his own death, the amplified breath becomes irregular. Hal is killing him. Watching the scene, I find my own breath matching Poole’s. I stop breathing when he does. We only hear breathing when we are watching from Hal’s perspective; watching Poole die floating in space, we hear nothing. We see only the convulsion of a suffocating man. Perhaps it is this that makes this scene so terrifying.
no
Astronomy
Is there sound in space?
no_statement
"sound" does not exist in "space".. there is no "sound" in "space".
https://www.nbcnews.com/sciencemain/how-voyager-1-recorded-noises-when-theres-no-sound-interstellar-2d11701506
How Voyager 1 recorded noises when there's no sound in ...
How Voyager 1 recorded noises when there's no sound in interstellar space Beyond the border of interstellar space, the distant Voyager 1 spacecraft called back to Earth earlier this year with noises from its new environment. It's true that the void of space does not carry sound — there's no gas or other substance to transmit the waves — but the signal Voyager detected can be played back at frequencies the human ear can understand. NASA announced in September that Voyager 1 had left the heliosphere in August 2012. The heliosphere is a sheath of magnetic influence that emanates from the sun and expands through a stream of charged particles called the solar wind. At the press conference, Don Gurnett, the principal investigator for Voyager 1's plasma wave science instrument, demonstrated a series of sounds the instrument had picked up. "Strictly speaking, the plasma wave instrument does not detect sound. Instead, it senses waves of electrons in the ionized gas or 'plasma' that Voyager travels through," NASA said in a statement. These waves, however, do take place at frequencies that humans can detect. This image is a visual representation of the sound of interstellar space recorded by NASA's Voyager 1 probe, which entered interstellar space in 2012. (Image: NASA) "We can play the data through a loudspeaker and listen," Gurnett, a physics professor at the University of Iowa, said in the statement. "The pitch and frequency tell us about the density of gas surrounding the spacecraft." [Hear What Voyager 1 Detected] Within the heliosphere, the sounds had a frequency of about 300 Hz. Once Voyager left the scene, the frequency jumped higher, to between 2 and 3 kHz, "corresponding to denser gas in the interstellar medium," according to the NASA release. There have been at least two verified instances of these tones: October to November 2012, and April to May 2013. Both occurred after huge coronal mass ejections (material from the sun) bumped up plasma activity around Voyager 1.
There was a lag before scientists discovered the recordings because the data is played back only every three to six months, NASA said, and more time is required to interpret the results. Gurnett further speculated that "shock fronts" from beyond the solar system could be tearing through interstellar space and disturbing the plasma surrounding Voyager 1. He will be listening for any evidence of this activity in future recordings from humanity's farthest spacecraft, he said.
How Voyager 1 recorded noises when there's no sound in interstellar space Beyond the border of interstellar space, the distant Voyager 1 spacecraft called back to Earth earlier this year with noises from its new environment. It's true that the void of space does not carry sound — there's no gas or other substance to transmit the waves — but the signal Voyager detected can be played back at frequencies the human ear can understand. NASA announced in September that Voyager 1 had left the heliosphere in August 2012. The heliosphere is a sheath of magnetic influence that emanates from the sun and expands through a stream of charged particles called the solar wind. At the press conference, Don Gurnett, the principal investigator for Voyager 1's plasma wave science instrument, demonstrated a series of sounds the instrument had picked up. "Strictly speaking, the plasma wave instrument does not detect sound. Instead, it senses waves of electrons in the ionized gas or 'plasma' that Voyager travels through," NASA said in a statement. These waves, however, do take place at frequencies that humans can detect. (Image caption: A visual representation of the sound of interstellar space recorded by NASA's Voyager 1 probe, which entered interstellar space in 2012. Credit: NASA) "We can play the data through a loudspeaker and listen," Gurnett, a physics professor at the University of Iowa, said in the statement. "The pitch and frequency tell us about the density of gas surrounding the spacecraft." [Hear What Voyager 1 Detected] Within the heliosphere, the sounds had a frequency of about 300 Hz. Once Voyager left the scene, the frequency jumped higher, to between 2 and 3 kHz, "corresponding to denser gas in the interstellar medium," according to the NASA release. There have been at least two verified instances of these tones: October to November 2012, and April to May 2013. Both occurred after huge coronal mass ejections (material from the sun) bumped up plasma activity around Voyager 1.
no
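Gurnett's remark in the article above — "the pitch and frequency tell us about the density of gas surrounding the spacecraft" — can be made concrete with the standard electron plasma-frequency relation, f_pe ≈ 8980·√n_e Hz with n_e in cm⁻³. The sketch below is not from the article; only the 300 Hz and 2–3 kHz figures are, and 2600 Hz is an illustrative value picked from inside that quoted band:

```python
def electron_density(f_pe_hz: float) -> float:
    """Electron density (cm^-3) implied by a plasma oscillation at f_pe_hz.

    Uses the standard electron plasma-frequency relation
    f_pe ~= 8980 * sqrt(n_e) Hz, with n_e in cm^-3.
    """
    return (f_pe_hz / 8980.0) ** 2

# Frequencies quoted in the article:
print(electron_density(300))    # inside the heliosphere, ~0.0011 cm^-3
print(electron_density(2600))   # within the 2-3 kHz interstellar band, ~0.08 cm^-3
```

Inverting the relation this way shows why the jump from 300 Hz to a few kHz implies roughly a hundred-fold denser plasma outside the heliopause.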
Astronomy
Is there sound in space?
no_statement
"sound" does not exist in "space".. there is no "sound" in "space".
https://physics.stackexchange.com/questions/259059/why-cannot-longitudinal-waves-travel-through-space-vacuum
Why cannot longitudinal waves travel through space (vacuum ...
'The reason sound can't travel through a vacuum is that sound needs a medium (solid, liquid or gas with real vibrating molecules) and not because it is a longitudinal wave' How does this make sense, as there are particles in space which can vibrate? Light waves travel through space, hence they reach Earth, and they also use the vibration of particles to transport energy. It seems like these two ideas are contradicting each other. Maybe the thing I don't understand is why longitudinal waves such as sound have to travel through a medium. Comment: If you mean a perfect vacuum, then there is nothing to displace. However, "space" is not a vacuum. It is, as you say, filled with particles, admittedly at a low density, but space is not empty. Longitudinal waves can exist in space, but not in a vacuum. Note also that sound cannot be said to exist in space for two reasons: 1) sound is a psychophysical phenomenon that exists only in the human brain; 2) the longitudinal waves in space have a very low frequency that no human could hear. Comment: @garyp - Actually there are electrostatic ion-acoustic waves, which are a longitudinal (i.e., $\mathbf{k} \times \delta \mathbf{E} = 0$) sound wave, that oscillate near the ion plasma frequency. In the solar wind near Earth, this corresponds to ~100-1000 Hz in the plasma rest frame. In a spacecraft frame (which is basically at rest compared to the plasma), the waves can be Doppler shifted up to ~1-10 kHz. We can convert these signals directly to an audio signal. The University of Iowa has done a bunch of this for various plasma waves. 2 Answers: Electromagnetic waves are produced by oscillating charged particles but they do not need other particles to propagate. Indeed, electromagnetic waves are solutions of the Maxwell equations with no sources, i.e. in the vacuum. On the other hand, mechanical waves need an elastic medium to propagate, regardless of being transverse, longitudinal or mixed waves.
Regarding the particles present in deep space which could propagate sound, I suggest you read this post. Comment: Waves with a longitudinal component, i.e., a component parallel with the direction of propagation, do exist in vacuum; see any TE or TM waves in waveguides. TE waves have the H, TM waves have the E component parallel with the guide axis. The boundary conditions, and not the medium of propagation, decide if there are longitudinal components for EM waves. Comment: @hyportnex modes of waveguides are not plane waves but superpositions thereof, and the basis plane wave constituents are not directed along the waveguide. This answer still holds good for plane waves, which I think is the key point for discussion - or at least an important one. Waveguide modes can always be resolved into plane wave superpositions whereas, in contrast, mechanical waves cannot be resolved into plane wave superpositions - there's a part of the field that is fundamentally longitudinal. So the boundary conditions determine the superpositions which group together to form the .... Comment: @WetSavannaAnimal_aka_Rod_Vance as you say "modes of waveguides are not plane waves", and I said that all the TE and TM waveguide modes have longitudinal components. (Admittedly I did not get into the added possibility of a true TEM mode for a multi-connected cross section.) And while it is true that any TE or TM mode can be construed as some superposition of multiply reflected plane waves, it is also true that the sum has a longitudinal component, unlike the case of acoustic waves, and that is the issue I thought the question was about.
'The reason sound can't travel through a vacuum is that sound needs a medium (solid, liquid or gas with real vibrating molecules) and not because it is a longitudinal wave' How does this make sense, as there are particles in space which can vibrate? Light waves travel through space, hence they reach Earth, and they also use the vibration of particles to transport energy. It seems like these two ideas are contradicting each other. Maybe the thing I don't understand is why longitudinal waves such as sound have to travel through a medium. Comment: If you mean a perfect vacuum, then there is nothing to displace. However, "space" is not a vacuum. It is, as you say, filled with particles, admittedly at a low density, but space is not empty. Longitudinal waves can exist in space, but not in a vacuum. Note also that sound cannot be said to exist in space for two reasons: 1) sound is a psychophysical phenomenon that exists only in the human brain; 2) the longitudinal waves in space have a very low frequency that no human could hear. Comment: @garyp - Actually there are electrostatic ion-acoustic waves, which are a longitudinal (i.e., $\mathbf{k} \times \delta \mathbf{E} = 0$) sound wave, that oscillate near the ion plasma frequency. In the solar wind near Earth, this corresponds to ~100-1000 Hz in the plasma rest frame. In a spacecraft frame (which is basically at rest compared to the plasma), the waves can be Doppler shifted up to ~1-10 kHz. We can convert these signals directly to an audio signal. The University of Iowa has done a bunch of this for various plasma waves. 2 Answers: Electromagnetic waves are produced by oscillating charged particles but they do not need other particles to propagate. Indeed, electromagnetic waves are solutions of the Maxwell equations with no sources, i.e. in the vacuum. On the other hand, mechanical waves need an elastic medium to propagate, regardless of being transverse, longitudinal or mixed waves.
no
Conservation
Is trophy hunting beneficial for conservation?
yes_statement
"trophy" "hunting" is "beneficial" for "conservation".. "trophy" "hunting" contributes to "conservation" efforts.
https://theconversation.com/trophy-hunting-in-africa-the-case-for-viable-sustainable-alternatives-115649
Trophy hunting in Africa: the case for viable, sustainable alternatives
For decades, the public has been fed the myth that trophy hunting is absolutely necessary for sustainable conservation in Africa. Some sections of the academy, as well as the hunting lobby, continue to argue that banning trophy hunting will have a negative effect on wildlife biodiversity. Their rationale is that trophy hunting contributes a significant amount of revenue, which African countries rely on for funding wildlife conservation. In essence the argument is: a few animals are sacrificed through regulated quotas for the greater good of the species. This opens the door for Western tourists to shoot charismatic mega-fauna and make a virtue of it. In reality, trophy hunting revenues make up a very small percentage of total tourism revenues in Africa. For most African countries with an active trophy hunting industry, among them South Africa, Zimbabwe, Zambia, and Namibia, the industry generates only between 0.3% and 5% of total tourism revenues. Clearly, trophy hunting’s economic importance is often overstated. It’s also claimed by proponents that local communities benefit significantly from trophy hunting. The evidence suggests otherwise. A 2013 analysis of literature on the economics of trophy hunting done by Economists at Large, a network of economists who contribute their expertise to economic questions that are of public interest, showed that communities in the areas where hunting occurs derive little benefit from this revenue. On average communities receive only about 3% of the gross revenue from trophy hunting. Another line of argument is that non-consumptive forms of wildlife tourism are not lucrative enough to sustain conservation efforts. The hunting lobby has therefore built a narrative where hunting is the only viable means of financing sustainable conservation in Africa. 
I recently completed a book chapter in which I explore these and other claims made by the hunters, focusing in particular on how they choose their words to rationalise and sanitise their pastime. Trophy hunting’s paradoxes Trophy hunters often claim that they kill animals because they love animals. They rationalise their choice, for instance, by arguing that trophy hunting allows broader animal populations to be conserved. As I argued in my chapter, the paradox of killing an animal you allegedly “love” cannot be resolved in the sphere of ethics. In the chapter I explore the words that are used by hunters as euphemisms to describe trophy hunting, while avoiding the word “killing”. Examples include words like “harvesting” and “taking” that serve to sanitise killing. This “euphemisation” is exemplified by Walter Palmer, who shot the beloved Zimbabwean lion, Cecil, in the infamous “Cecilgate” incident. Palmer issued a statement in response to the outcry, stating: To my knowledge, everything about this trip was legal and properly handled and conducted. I had no idea that the lion I took was a known, local favourite… This choice of words isn’t accidental. The effect is that we lose sight of what’s actually being done to lions, rhinos, elephants, and other precious species. Alternatives and the way forward The proponents of trophy hunting claim that there are no viable alternatives for Africa. They suggest that non-consumptive forms of wildlife tourism such as photo-safaris, where tourists view and photograph animals, do not generate sufficient benefits to justify keeping the wildlife habitat. If we stop trophy hunting, they say, wildlife will lose its economic value for local communities. Wildlife habitat will be lost to other land uses. The truth is that well managed, non-consumptive wildlife tourism is sufficient for funding and managing conservation. Botswana, for example, which in 2014 banned all commercial hunting in favour of photo-tourism, continues to thrive. 
In a 2017 study, residents of Mababe village in Botswana noted that, compared to hunting, which is seasonal, photographic camps were more beneficial to the community because people are employed all year round. Trophy hunting is not the solution to Africa’s wildlife conservation challenges. Proper governance, characterised by accountability, rigorous, evidence-based policies and actions, and driven by a genuine appreciation of the intrinsic – not just economic – value of Africa’s majestic fauna, is.
For decades, the public has been fed the myth that trophy hunting is absolutely necessary for sustainable conservation in Africa. Some sections of the academy, as well as the hunting lobby, continue to argue that banning trophy hunting will have a negative effect on wildlife biodiversity. Their rationale is that trophy hunting contributes a significant amount of revenue, which African countries rely on for funding wildlife conservation. In essence the argument is: a few animals are sacrificed through regulated quotas for the greater good of the species. This opens the door for Western tourists to shoot charismatic mega-fauna and make a virtue of it. In reality, trophy hunting revenues make up a very small percentage of total tourism revenues in Africa. For most African countries with an active trophy hunting industry, among them South Africa, Zimbabwe, Zambia, and Namibia, the industry generates only between 0.3% and 5% of total tourism revenues. Clearly, trophy hunting’s economic importance is often overstated. It’s also claimed by proponents that local communities benefit significantly from trophy hunting. The evidence suggests otherwise. A 2013 analysis of literature on the economics of trophy hunting done by Economists at Large, a network of economists who contribute their expertise to economic questions that are of public interest, showed that communities in the areas where hunting occurs derive little benefit from this revenue. On average communities receive only about 3% of the gross revenue from trophy hunting. Another line of argument is that non-consumptive forms of wildlife tourism are not lucrative enough to sustain conservation efforts. The hunting lobby has therefore built a narrative where hunting is the only viable means of financing sustainable conservation in Africa. 
I recently completed a book chapter in which I explore these and other claims made by the hunters, focusing in particular on how they choose their words to rationalise and sanitise their pastime. Trophy hunting’s paradoxes Trophy hunters often claim that they kill animals because they love animals. They rationalise their choice, for instance, by arguing that trophy hunting allows broader animal populations to be conserved.
no