Dataset columns: score (float64, range 4 to 5.34); text (string, lengths 256 to 572k); url (string, lengths 15 to 373).
4.125
The Amazon rainforest contains a wider variety of plant and animal life than any other biome in the world. The region in its entirety is home to roughly 2.5 million insect species, tens of thousands of plants, and some 2,000 birds and mammals. To date, at least 40,000 plant species, 2,000 fishes, 1,000 birds, 427 mammals, 428 amphibians, and 378 reptiles have been scientifically classified. The Brazilian Amazon harbors roughly 40% of the world's tropical forest and a significant proportion of global biodiversity.

The numbers speak for themselves, but over the past few years we have been watching a biodiversity crisis unfold in slow motion, and those numbers are at risk. The largest drivers are climate change and habitat loss through deforestation. Habitat loss results in species extinction, but not immediately: when habitats shrink, it may take several generations after the initial impact before the last individual of a species is gone. Extinction against the clock, and in slow motion. Visualising how this occurs and estimating the impact has always been a problem for researchers. The question now is: how many species are headed for extinction as a result of past and future deforestation?

New research published today in Science describes cutting-edge statistical tools used to devise a novel strategy for estimating the expected number of local species extinctions as a function of the extent of habitat loss. The researchers made predictions about the extent of the extinction damage under four possible scenarios: two in which all parties comply with and respect current environmental law and protected-area networks, and two which rely on strong reductions in, or the elimination of, current deforestation rates, reflecting recent pledges by the Brazilian government and potential reductions in deforestation proposed in 2009.

What the researchers describe is something much more serious than previous estimates. Deforestation over the last three decades in some localities of the Amazon has already committed up to 8 species of amphibians, 10 species of mammals, and 20 species of birds to future extinction. Local regions will lose an average of nine vertebrate species and have a further 16 committed to extinction by 2050. The worst, it seems, is yet to come: more than 80% of the local extinctions expected from historical deforestation have not yet happened. An "extinction debt", future biodiversity loss that has yet to be realized as a result of current or past habitat destruction, offers a time delay and a chance to save these species, but it is a race against the clock. It is a window of opportunity for conservation, during which, the researchers write, "it is possible to restore habitat or implement alternative measures to safeguard the persistence of species that are otherwise committed to extinction."

The fight for the heart of the Brazilian Amazon has already begun. Brazil has a long-standing tradition of conserving its Amazon, or at least of trying to be seen to conserve it. Until now, a mixture of firm legislation and strong-arm tactics ensured this; the Brazilian government has been known to use police raids to crack down on illegal deforesters. Currently, around 54% of the Brazilian Amazon is under some form of environmental protection, thanks in no small part to the country's half-century-old Forest Code. In the past few years, since the economic crisis, political will has eroded and the Forest Code has been challenged.

In today's economic climate, reconciling income generation with sustainability is a difficult balance to maintain. The economist's mantra of forgoing all else for economic growth has seen the Brazilian government push a rapid expansion of infrastructure in the Amazon, from the construction of vast hydroelectric power plants in the Amazon basin to agricultural expansion. Agriculture represents a significant proportion of Brazil's GDP, and there is pressure to open up more forest land to production. Earlier this year, a bill seeking to overhaul the Forest Code was up for debate: as Rio+20 welcomed the world to debate the merits of development versus conservation, Brazil herself was going through the same struggles.

Protected forest areas across the Brazilian Amazon represent a cost of US$147 billion for Brazil (a figure that includes the profits forgone by not opening them up to a free-for-all, as well as the investments needed for their conservation). They cover a total area of 1.9 million km² and encompass just under half of the Amazon biome. It is hoped that this culture of conservation will continue and that the total protected area will expand. In April of this year, the new Forest Code was approved, with many reservations voiced by conservationists. The new code allows for a considerable reduction in reserve areas in the Amazon. How this will affect the extinction debt remains to be seen. The researchers acknowledge that their best-case scenario (an end to deforestation) "appeared feasible when first published in 2009 but appears much less so now in the face of recently voted changes to the Brazilian Forest Code that may weaken controls on deforestation rates."

The future of biodiversity in the Brazilian Amazon stands at a critical juncture. It seems the Amazon will continue to accumulate extinction debt for decades to come as we witness the impact of present-day government policy decisions on future species extinctions.

[Images courtesy of Alexander Lees and William Laurance]

Originally appeared at Australian Science.

Soares-Filho, B., Moutinho, P., Nepstad, D., Anderson, A., Rodrigues, H., Garcia, R., Dietzsch, L., Merry, F., Bowman, M., Hissa, L., Silvestrini, R., & Maretti, C. (2010). Role of Brazilian Amazon protected areas in climate change mitigation. Proceedings of the National Academy of Sciences of the United States of America, 107(24), 10821-10826. PMID: 20505122

Ricketts, T.H., Soares-Filho, B., da Fonseca, G.A., Nepstad, D., Pfaff, A., Petsonk, A., Anderson, A., Boucher, D., Cattaneo, A., Conte, M., Creighton, K., Linden, L., Maretti, C., Moutinho, P., Ullman, R., & Victurine, R. (2010). Indigenous lands, protected areas, and slowing climate change. PLoS Biology, 8(3). PMID: 20305712
http://scienceleftuntitled.wordpress.com/tag/nature/
4.03125
Learning About Word Families with Click, Clack, Moo

Grades: K–2
Lesson Plan Type: Standard Lesson
Estimated Time: Six 20-minute sessions

This lesson uses the book Click, Clack, Moo: Cows That Type by Doreen Cronin to teach students word identification strategies. Through shared readings, teachers and students read and reread text from the book with fluency and expression. With repeated teacher modeling and guided practice, students learn to identify rimes or word families and apply their knowledge to the decoding of new words.

Allen, L. (1998). An integrated strategies approach: Making word identification instruction work for beginning readers. The Reading Teacher, 52(3), 254–268.

- Children need automatic decoding skills. They also need to acquire the motivation that comes from engagement in purposeful, meaningful literacy tasks. Using quality literature that children can enjoy provides practice with and a purpose for learning word identification strategies, as well as motivation to read.
- Using literature and connected spelling and writing activities in conjunction with word study enables children to see a purpose for the strategies they are learning and a connection between those strategies and how they apply to reading and writing.
- Children need to be directly taught how to use spelling patterns (rimes or word families) to spell and read new words.
- Repeated reading of texts has been shown to be effective in developing fluency (Clay, 1994; Dowhower, 1989).

Clay, M. (1994). Reading Recovery: A guidebook for teachers in training. Portsmouth, NH: Heinemann.

Dowhower, S. (1989). Repeated reading: Research into practice. The Reading Teacher, 42(7), 502–507.
http://www.readwritethink.org/classroom-resources/lesson-plans/learning-about-word-families-847.html?tab=1
4.09375
Is Current Warming Natural?

In Earth's history before the Industrial Revolution, Earth's climate changed due to natural causes not related to human activity. Most often, global climate has changed because of variations in sunlight. Tiny wobbles in Earth's orbit altered when and where sunlight falls on Earth's surface. Variations in the Sun itself have alternately increased and decreased the amount of solar energy reaching Earth. Volcanic eruptions have generated particles that reflect sunlight, brightening the planet and cooling the climate. Volcanic activity has also, in the deep past, increased greenhouse gases over millions of years, contributing to episodes of global warming. A biographical sketch of Milutin Milankovitch describes how changes in Earth's orbit affect its climate.

These natural causes are still in play today, but their influence is too small, or they occur too slowly, to explain the rapid warming seen in recent decades. We know this because scientists closely monitor the natural and human activities that influence climate with a fleet of satellites and surface instruments. NASA satellites record a host of vital signs, including atmospheric aerosols (particles from both natural sources and human activities, such as factories, fires, deserts, and erupting volcanoes), atmospheric gases (including greenhouse gases), energy radiated from Earth's surface and the Sun, ocean surface temperature changes, global sea level, the extent of ice sheets, glaciers and sea ice, plant growth, rainfall, cloud structure, and more. On the ground, many agencies and nations support networks of weather and climate-monitoring stations that maintain temperature, rainfall, and snow depth records, and buoys that measure surface water and deep ocean temperatures. Taken together, these measurements provide an ever-improving record of both natural events and human activity for the past 150 years.

Scientists integrate these measurements into climate models to recreate temperatures recorded over the past 150 years. Climate model simulations that consider only natural solar variability and volcanic aerosols since 1750 (omitting observed increases in greenhouse gases) are able to fit the observations of global temperatures only up until about 1950. After that point, the decadal trend in global surface warming cannot be explained without including the contribution of the greenhouse gases added by humans.

Though people have had the largest impact on our climate since 1950, natural changes to Earth's climate have also occurred in recent times. For example, two major volcanic eruptions, El Chichón in 1982 and Pinatubo in 1991, pumped sulfur dioxide gas high into the atmosphere. The gas was converted into tiny particles that lingered for more than a year, reflecting sunlight and shading Earth's surface. Temperatures across the globe dipped for two to three years. Although volcanoes are active around the world, and continue to emit carbon dioxide as they did in the past, the amount of carbon dioxide they release is extremely small compared to human emissions. On average, volcanoes emit between 130 and 230 million tonnes of carbon dioxide per year. By burning fossil fuels, people release more than 100 times as much, about 26 billion tonnes of carbon dioxide, into the atmosphere every year (as of 2005). As a result, human activity overshadows any contribution volcanoes may have made to recent global warming.
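To make that scale difference concrete, here is a minimal Python sketch based only on the figures quoted above; the variable names and the script itself are illustrative and are not part of the original article:

```python
# Rough comparison of annual CO2 emissions, using the article's figures (circa 2005).
volcanic_tonnes_low = 130e6    # tonnes of CO2 per year, low end of volcanic estimate
volcanic_tonnes_high = 230e6   # tonnes of CO2 per year, high end of volcanic estimate
human_tonnes = 26e9            # tonnes of CO2 per year from burning fossil fuels

for label, volcanic in [("low", volcanic_tonnes_low), ("high", volcanic_tonnes_high)]:
    ratio = human_tonnes / volcanic
    print(f"Human emissions are about {ratio:.0f} times the {label} volcanic estimate")
```

Running this gives ratios of roughly 200 and 113, which is what underlies the statement that people release more than 100 times the carbon dioxide that volcanoes do.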
Changes in the brightness of the Sun can influence the climate from decade to decade, but an increase in solar output falls short as an explanation for recent warming. NASA satellites have been measuring the Sun’s output since 1978. The total energy the Sun radiates varies over an 11-year cycle. During solar maxima, solar energy is approximately 0.1 percent higher on average than it is during solar minima. Each cycle exhibits subtle differences in intensity and duration. As of early 2010, the solar brightness since 2005 has been slightly lower, not higher, than it was during the previous 11-year minimum in solar activity, which occurred in the late 1990s. This implies that the Sun’s impact between 2005 and 2010 might have been to slightly decrease the warming that greenhouse emissions alone would have caused. Scientists theorize that there may be a multi-decadal trend in solar output, though if one exists, it has not been observed as yet. Even if the Sun were getting brighter, however, the pattern of warming observed on Earth since 1950 does not match the type of warming the Sun alone would cause. When the Sun’s energy is at its peak (solar maxima), temperatures in both the lower atmosphere (troposphere) and the upper atmosphere (stratosphere) become warmer. Instead, observations show the pattern expected from greenhouse gas effects: Earth’s surface and troposphere have warmed, but the stratosphere has cooled. The stratosphere gets warmer during solar maxima because the ozone layer absorbs ultraviolet light; more ultraviolet light during solar maxima means warmer temperatures. Ozone depletion explains the biggest part of the cooling of the stratosphere over recent decades, but it can’t account for all of it. Increased concentrations of carbon dioxide in the troposphere and stratosphere together contribute to cooling in the stratosphere.
http://www.visibleearth.nasa.gov/Features/GlobalWarming/page4.php
4
A musical saw, also called a singing saw, is the application of a hand saw as a musical instrument. Capable of continuous glissando, it produces an ethereal tone very similar to that of the theremin. The musical saw is classified as a friction idiophone with direct friction (131.22) under the Hornbostel-Sachs system of musical instrument classification.

The saw is generally played seated with the handle squeezed between the legs and the far end held with one hand. Some sawists play standing, either with the handle between the knees and the blade sticking out in front of them, or with the handle under the chin (like a violin). The saw is usually played with the serrated edge, or teeth, facing the body, though some players face them away. Some saw players file down the teeth for added comfort. To sound a note, a sawist first bends the blade into an S-curve. The parts of the blade that are curved are damped from vibration and do not sound. At the center of the S-curve a section of the blade remains relatively flat. This section, the "sweet spot", can vibrate across the width of the blade, producing a distinct pitch: the wider the section of blade, the lower the sound. Sound is usually created by drawing a bow across the back edge of the saw at the sweet spot, or sometimes by striking the sweet spot with a mallet. The sawist controls the pitch by adjusting the S-curve, making the sweet spot travel up the blade (toward a thinner width) for a higher pitch, or toward the handle for a lower pitch. Harmonics can be created by playing at varying distances on either side of the sweet spot. Sawists can add vibrato by shaking one of their legs or by wobbling the hand that holds the tip of the blade. Once a sound is produced, it will sustain for quite a while and can be carried through several notes of a phrase.

Types of saw

Sawists often use standard wood-cutting saws, although special musical saws are also made. Compared with wood-cutting saws, the blades of musical saws are generally wider, for range, and longer, for finer control. They do not have set or sharpened teeth, and may have grain running parallel to the back edge of the saw rather than parallel to the teeth. Some musical saws are made with thinner metal, to increase flexibility, while others are made thicker, for a richer tone, longer sustain, and stronger harmonics. A typical musical saw is 5" wide at the handle end and 1" wide at the tip. A saw will generally produce about 2 octaves regardless of length. A bass saw may be 6" at the handle and produce about 2½ octaves. Two-person saws, also called "misery whips", can also be played, though with less virtuosity, and they produce an octave or less of range. Most sawists use cello or violin bows with violin rosin, but some use improvised home-made bows, such as a wooden dowel.

Producers of musical saws

Musical saws have been produced for over a century, primarily in the United States, though there are some producers in other countries.

United States

In the early 1900s, there were at least ten companies in the United States manufacturing musical saws. These saws ranged from the familiar steel variety to gold-plated masterpieces worth hundreds of dollars. However, with the start of World War II the demand for metals made the manufacture of saws too expensive and many of these companies went out of business.
By the year 2000, only three companies in the United States - Mussehl & Westphal, Charlie Blacklock, and Wentworth - were making saws.

Outside the United States

Outside the United States, makers of musical saws include Sandvik in Sweden, makers of the limited-edition Stradivarius; Alexis in France, which produces a toothless saw, "La Lame Sonore", with a range of three and a half octaves (Patent: N° E31975); and Thomas Flinn & Company in the United Kingdom, based in Sheffield, who produce three different-sized musical saws as well as accessories.

Events and world records

The International Musical Saw Association (IMSA) produces an annual International Musical Saw Festival, including a "Saw-Off" competition, every August in Santa Cruz and Felton, California. An International Musical Saw Festival is held every summer in New York City, produced by Natalia Paruz. Paruz also produced a Musical Saw Festival in Israel. There are also annual saw festivals in Japan and China.

Saw players

This is a list of people notable for playing the musical saw.

- Natalia Paruz, also known as the 'Saw Lady', plays the musical saw in movie soundtracks, in TV commercials, and with orchestras internationally, and is the organizer of international saw festivals in New York City and Israel.
- David Coulter, multi-instrumentalist and producer/music supervisor, and ex-member of Test Dept and The Pogues, has played musical saw on numerous albums and live with a who's who of contemporary popular music: Damon Albarn, Gorillaz, Tom Waits, Hal Willner, Richard Hawley, Jarvis Cocker, Marianne Faithfull, Tim Robbins, The Tiger Lillies. He has played on many film scores, including Is Anybody There? (2008), directed by John Crowley and starring Michael Caine, with a score composed by Joby Talbot, and It's a Boy Girl Thing (2006), directed by Nick Hurran, with a score composed by Christian Henson, and has featured on TV soundtracks and theme tunes, most recently for Psychoville, composed by Joby Talbot, and episodes of Wallander, composed by Ruth Barrett. He continues to play internationally and collaborates with musicians across many musical styles. (More info: http://www.guardian.co.uk/culture/2004/may/21/2)
- Kev Hopper, formerly the bass guitarist in the 1980s band Stump, made an album entitled Saurus in 2003 featuring six original saw tunes.
- Charles Hindmarsh, known as The Yorkshire Musical Saw Player, has played the musical saw throughout the UK.
- Elly Deliou was regarded as one of the best soloists of the musical saw. Born in Alexandria in 1935, she learned to play the saw at the age of seven with the Polish-Austrian Anton Stein. She moved to Greece in 1956 and worked as a professional saw musician. She died in April 2012.
- Janeen Rae Heller played the saw in four television guest appearances: The Tracey Ullman Show (1989), Quantum Leap (1990), and Home Improvement (1992 and 1999). She has also performed on albums such as Michael Hedges' The Road to Return in 1994 and Rickie Lee Jones's Ghostyhead in 1997.
- Julian Koster of the band Neutral Milk Hotel played the singing saw, along with other instruments, in the band and currently plays the saw in his solo project, The Music Tapes. In 2008, he released The Singing Saw at Christmastime.
- Armand Quoidbach is a Belgian saw player who has played the saw since 1997. In 1999 he played on national Belgian TV (RTBF2).
In August 2000 he won the first prize at the contest for bands at the 25th "Plinn festival" in Bourbriac (Brittany) with the band "Le Bûcheron Mélomane et les Nains de la Forêt" (The Music-loving Lumberjack and the Dwarfs of the Forest). In 2002 he played on the CD Music Drama by the band "My Little Cheap Dictaphone" (La Médiathèque de Belgique). He has performed with numerous musicians in Belgium and in France.
- Thomas Jefferson Scribner was a familiar figure on the streets of Santa Cruz, California during the 1970s, playing the musical saw. He performed on a variety of recordings and appeared in folk music festivals in the USA and Canada during the 1970s. His work as a labour organizer and member of the Industrial Workers of the World is documented in the 1979 film The Wobblies. Canadian composer and saw player Robert Minden pays tribute to him on his website. Musician and songwriter Utah Phillips recorded a song referencing Scribner, "The Saw Playing Musician", on the album Fellow Workers with Ani DiFranco. Artist Marghe McMahon was inspired in 1978 to create a bronze statue of Tom playing the musical saw, which sits in downtown Santa Cruz.
- Marlene Dietrich played the saw on the Berlin stage and later used it to entertain troops during World War II.
- Ali Luminescent plays the musical saw at festivals around the USA, in concerts with Kai Altair, and in Cynthia von Buhler's play "Speakeasy Dollhouse", which has been running for the last year and a half in New York City.

Some artists have composed music specifically for the musical saw. The composer Krzysztof Penderecki wrote regularly for the musical saw, including several obbligato parts in his comic opera Ubu Rex, and Canadian composer Robert Minden has written extensively for the instrument. The Romanian composer George Enescu uses the musical saw at the end of the second act of his opera Œdipe to express the death of the sphinx killed by Oedipus. Michael A. Levine composed Divination By Mirrors for musical saw soloist and two string ensembles tuned a quarter tone apart, taking advantage of the saw's ability to play in both tunings.
http://en.wikipedia.org/wiki/Musical_saw
4.0625
Jon Nelson is a research scientist at Ritsumeikan University in Kyoto, Japan, who studies snowflakes. He said that the processes that give snowflakes their uniqueness are poorly understood. For example, scientists are uncertain why crystals take different shapes at different temperatures and do not know precisely how temperature and humidity affect growth. Nor are researchers sure how snow crystals impact global climate. In the daytime, for example, thick clouds full of snow crystals are believed to reflect sunlight, keeping Earth cool. At night, however, the same clouds act as a blanket, absorbing the heat given off by Earth. "It has competing effects," he said. "It's not a very simple thing."

Researchers do know enough to confirm that the "no two snowflakes are alike" adage is likely true for fully developed snowflakes, Nelson added. But it may not hold for some flakes that fall out in the early stages of crystal formation, he said. In the earliest stages, Nelson pointed out, snow crystals are simply six-sided prisms: plain plates and columns of various sizes. Nelson's research shows that snowflakes will stay in this stage for a relatively long time at temperatures between 8.6°F and 12.2°F (-13°C and -11°C). "In that form they sometimes do reach the ground. And in that case, there's not much detail to distinguish any two," he said. However, once branches start growing, the crystal "very easily picks up its own unique shape," he added. And just because two underdeveloped snowflakes may look alike, Nelson said, don't expect to find them. If you had a million snow crystals photographed for comparison and could compare two of them every second, "you'd be there for nearly a hundred thousand years or so," he said. "It's a safe bet they won't be discovered."

According to Gosnell, the writer, some people have looked at snowflakes through a microscope and claimed they found two that look alike. "But there's a lot of things a microscope—a good optical microscope—can't see, and the chances that at the molecular level they will be the same are pretty much nil," she said. In her book, Gosnell cites snow scientist Charles Knight at the National Center for Atmospheric Research in Boulder, Colorado. Knight estimates there are 10,000,000,000,000,000,000 water molecules in a typical snow crystal. "The way they can arrange themselves is almost infinite," Gosnell said. And, she adds, David Phillips, the senior climatologist with Environment Canada, has estimated that the number of snowflakes that have fallen on Earth over the course of time is 10 followed by 34 zeros. "So, you know, nobody can say for absolute certain," Gosnell said. "But I think experts are in agreement the likelihood of two being identical is next to impossible."
http://news.nationalgeographic.com/news/2007/02/070213-snowflake_2.html
4.09375
About the Exhibition

Four museums and two research centers have collaborated, with support from the National Science Foundation, to develop, create, and evaluate Math Moves! Here are just a few of the activities you'll find when you visit:

Make stories or scenes as you experiment with the placement of scaled objects and a bright-white LED light to cast shadows of objects on a grid. By moving these objects, you increase or reduce the size of their shadows.

Use one or more small wheels driven by a large wheel to create rhythmic percussive sounds. Experiment with several wheels to compare frequencies of the clicks and hear the rhythm of proportions and the frequency of clicking.

In this area of Math Moves!, you'll find three chairs that are identical in every aspect except proportional scale. One chair is full scale (X); the other chairs are 1/2X and 2X. Use your body and other measuring tools to investigate how the chairs differ in size.

See how your rate of motion affects a graph on a screen. In this full-body activity, you'll walk back and forth, slowly and quickly, creating graphs of your motions. You can also work with a partner to create a number of different graph shapes. The graphs display your movement over time, giving you another way to think about and feel how your rates compare.

Drawing with Gears

At this mechanical drawing table, you'll draw harmonic patterns using proportional wheels. The drawn patterns repeat after a fixed number of rotations, determined by the ratio of the gears used at a particular time. You'll create complex, circular drawings on paper to take home or share with other visitors.

Learn more about the Math Moves! project and partners at www.mathmoves.org.
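The repeat behaviour of those gear drawings comes down to ratios: a pattern closes when both gears return to their starting positions at the same time, which happens after a number of rotations governed by the least common multiple of the two gear sizes. Here is a small Python sketch of that idea; the tooth counts are made-up examples, not the exhibit's actual gears:

```python
from math import gcd

def rotations_to_repeat(teeth_a: int, teeth_b: int) -> tuple[int, int]:
    """Return how many full turns each gear makes before the pattern repeats."""
    lcm = teeth_a * teeth_b // gcd(teeth_a, teeth_b)  # least common multiple of tooth counts
    return lcm // teeth_a, lcm // teeth_b

# Example: a 30-tooth wheel driving a 24-tooth wheel (illustrative numbers).
turns_a, turns_b = rotations_to_repeat(30, 24)
print(f"The pattern repeats after {turns_a} turns of gear A and {turns_b} turns of gear B")
# -> 4 turns of gear A and 5 turns of gear B, i.e. a 5:4 ratio of rotation rates.
```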
http://www.smm.org/mathmoves/about
4.40625
- This work is licensed under a Creative Commons Attribution 3.0 United States License.

2nd Grade Lesson Plan on Paleontologists and Paleontology. Great for social studies/science, studying careers, and dinosaurs.

- Digging Up Dinosaurs, by Aliki. Harper & Row, Publishers: New York, 1988. (Find out how teams of experts work together to dig dinosaur fossils out of the ground and put together skeletons that look like the dinosaurs that lived millions of years ago.)
- Pictures of paleontologists at work
- Blank KWL chart
- Marker (to fill in the KWL chart)
- Fossil samples (if available)
- Balls of clay (one per student)
- Pieces of 6"x2" tag board (one per student)
- Objects to imprint in the clay (shells, rocks, twigs, leaves, etc.)
- Paper and pencils
- Plaster of Paris
- A container and wooden spoon to mix the plaster
- Containers of hardened plaster of Paris with 'fossilized' chicken bones (one container per group)
- Dull instruments for the students to use to dig for the fossilized bones

As a result of this activity, the students will be able to:
- give three examples of tools that a paleontologist might use.
- demonstrate how a paleontologist might use tools to uncover a fossil.
- explain, through writing, how they uncovered their own fossils.
- explain what a fossil is and give an example of how we use fossils to learn about the past.
- use scientific knowledge to ask questions and make observations.

- The word "paleontologist" will be written on the board and a sign "paleontologists at work" will be placed on the activity table. Pictures will be shown of a paleontologist at work. Students will be asked who they think a paleontologist is and what he does. The responses will be compiled in a web at the front of the classroom.
- Next, read Digging Up Dinosaurs, by Aliki. Tell the students to listen closely to the story and listen for the different tools a paleontologist might use. The students should remember three examples of tools that a paleontologist might use, and will write the examples on a piece of notebook paper and turn it in.
- Samples of fossils will be passed out for students to examine. Students will be asked to discuss, within their work groups, what fossils are and how they think the fossils were made, as well as why paleontologists use them as keys to our past. The information will be shared with the class and posted on a KWL chart.
- Discuss the role of a paleontologist. Ask the class to give examples of how a paleontologist might work, where they might work, and why. Explain that paleontologists are scientists who study ancient living things, like dinosaurs. Paleontologists can only guess what dinosaurs looked like, what they ate, where they lived, and how they died. Paleontology is packed with mysteries about living things such as plants and animals that lived thousands, millions, and billions of years before humans existed. To solve these mysteries, paleontologists use fossils. So what are fossils? Fossils are the remains or traces of ancient life that are usually buried in rocks. They're almost like the leftovers of life from long ago. (Pass around examples.) Examples include bones, teeth, shells, impressions of leaves, nests, and even footprints. If a paleontologist discovered a fossil, how do you think they would use that fossil to learn about history? (Allow time for discussion and input.) A fossil to a paleontologist is almost like crime evidence would be for a police officer.
This evidence, or the fossil, helps the paleontologist reveal what our planet was like long ago. Fossils show how animals changed over time, and how they are related to one another. How do you think a paleontologist figures out what colors dinosaurs were or what they sounded like? (Allow time for responses.) Fossils help paleontologists study what ancient living things, like dinosaurs, looked like, but they keep us guessing about their colors, sounds, and most of their behavior. Many fossils of living things will never be found, because they may be buried too deep in the earth, or they may be in parts of the world where no one is digging. Fossils are like pieces of a big jigsaw puzzle, and paleontologists work hard to find the missing pieces in order to put that jigsaw puzzle together.
- Each student will be given a ball of clay and a strip of tag board 6"x2" (stapled into a circle). The students will roll the clay out to a thickness of not less than 1 inch. Next, the student will insert the paper ring so that it forms a seal. The student will select an object he wishes to make into a fossil and press it into the clay. When the student carefully removes the object, an imprint is left. At this time the students can review how this might have happened in nature.
- After reviewing their chart on fossils, the children may become paleontologists and "discover" their fossils by removing the circle of paper and clay. They may have the next 5 minutes to share them with their classmates. They will be placed on the activity table until dismissal so the students can examine them during their spare time.
- Next, the students can write stories about how they 'discovered' their fossils. Students should work on sequencing and proper sentence structure. Students should have at least four sentences written about their discovery to hand in. Whatever isn't finished in class should be sent home and finished for class the next day.
- Review the KWL chart about fossils and paleontologists. Explain to the students that they will once again be paleontologists for the day and will be put on a mission to discover long-lost fossils (buried chicken bones). Demonstrate how to properly and carefully use the utensils as a paleontologist would. (Each group table should have a small container of hardened plaster of Paris with 'undiscovered' chicken bones throughout. See handout for instructions on putting together the fossil sites. Students will be given dull instruments, like wooden toy hammers, tweezers, paintbrushes, etc., to dig for the fossilized bones.)
- After students have successfully discovered their chicken bones, discuss with the students why paleontologists would want or need to dig for real fossils.
- Ask the students questions such as:
- What was surprising about excavating the bones?
- What strategies did you find worked well for removing the plaster?
- How would you have worked differently if you had no idea what was buried inside?
- Discuss other tasks a paleontologist might have besides digging bones (taking notes, taking pictures, giving speeches, etc.).
- Students will be able to verbally provide the definition of a fossil and a paleontologist.
- Students will write three examples of tools that a paleontologist might use on a piece of notebook paper.
- Through class discussion, the teacher will take note of whether the students participated in filling out the web and/or KWL chart, using a three-point system: one point for little participation, two points for some, and three for several contributions.
- Students will write a four-sentence story about their fossil discovery, with 85% accuracy for proper sentence structure.
http://teachershare.scholastic.com/resources/10815
4.5625
The U.S. Constitution, the blueprint of American democracy, is the oldest national constitution in continuous use. It was signed in September 1787 after four difficult months spent drafting and debating it. But signing wasn’t enough; the new Constitution had to be ratified by nine of the 13 states before it became binding. That happened when New Hampshire ratified it on June 21, 1788 — 224 years ago. Ratification had been far from a sure thing. The new Constitution replaced the Articles of Confederation, which had been adopted during the Revolutionary War. Supporters of the Constitution, the Federalists, favored a strong federal government, while opponents, the Anti-Federalists, thought it gave the central government too much power. There were bitter struggles in many of the states, but the Federalists were better organized and they won the day. In order to obtain ratification in several important states, the Federalists promised to add amendments to the Constitution guaranteeing the basic rights of citizens. The amendments they wrote came into effect in December 1791 and are known as the Bill of Rights. Above is a painting of the signing of the U.S. Constitution by Howard Chandler Christy.
http://iipdigital.usembassy.gov/st/english/inbrief/2012/06/201206217872.html?CP.rss=true
4.625
Exoplanet CoRoT-7b is five times heavier than the Earth.

Even in ancient times, people observed the planets that orbit our Sun. (See also the astronomy question from week 1: Why are there seven days in a week?) Nowadays we know that there are many trillions of other stars in the Universe, in addition to the Sun. It seems likely that planets orbit many of these stars too. The evidence that extrasolar planets (exoplanets for short) exist was obtained for the first time in the 1990s. However, exoplanets are small, non-luminous bodies that are light years away and as a rule indiscernible to us – how are we able to prove that they exist?

Since 1995, over 370 exoplanets have been found – and there appears to be no end to the discoveries. Although astronomers have now succeeded in making a direct optical verification, two indirect astronomical measuring techniques have been shown to be particularly reliable in the search for exoplanets: the 'radial velocity' method and the 'transit' method.

Methods to verify the existence of extrasolar planets

CoRoT mission: exoplanets can be discovered using the transit method.

The radial velocity method is based on the premise that a star and the planet orbiting it have a reciprocal influence on each other due to their gravity. For this reason, the star moves periodically (in synchrony with the movement of the planet around it) a little towards the observer and a little away from the observer along the line of sight. Due to the Doppler effect, a radial movement such as this leads to a small periodic shift in the spectral lines of the star's electromagnetic spectrum – first towards the blue wavelength range, then back towards the red. (See also the astronomy question from week 38: How quickly is the Universe expanding?) If we analyse this movement of the spectral lines quantitatively, what is known as the radial-velocity curve can be derived from it. This yields parameters for the planetary orbit and the maximum mass of the planetary candidates. If the latter is less than the mass that a heavenly body requires to initiate thermonuclear fusion, the body is regarded as a planet.

The transit method works if the orbit of the planet is such that, when viewed from the Earth, it passes in front of the star. During the planet's passage across the stellar disk, known as a transit, the planet's presence reduces the amount of radiation from the stellar disc that reaches the observer, and a decrease in the apparent brightness of the star can be measured. The radius of the planet and its density can be calculated from these measurements together with other data (such as the distance of the star from Earth) – astronomers then know whether the planet in question is a rocky planet or a gas planet. Such findings are incorporated into models of how planets are formed and help us to better understand how planetary systems develop.
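As an illustration of the transit method's arithmetic, the fractional dip in brightness is roughly the ratio of the planet's disc area to the star's, so the planet's radius follows from the transit depth. The following Python sketch shows this relationship with made-up example numbers; it is a simplification that ignores limb darkening and measurement noise, and it is not taken from the article:

```python
import math

SUN_RADIUS_KM = 696_000  # approximate solar radius, used as an example star

def planet_radius_from_transit(depth: float, star_radius_km: float = SUN_RADIUS_KM) -> float:
    """Estimate planet radius from transit depth, using depth ~ (R_planet / R_star)**2."""
    return star_radius_km * math.sqrt(depth)

# Example: a 0.01% dip in brightness (depth = 1e-4) in front of a Sun-like star.
r_planet = planet_radius_from_transit(1e-4)
print(f"Implied planet radius: ~{r_planet:.0f} km")  # ~6960 km, roughly Earth-sized
```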
http://www.dlr.de/en/desktopdefault.aspx/tabid-5170/8702_read-20367/usetemplate-print/
4.28125
May 25, 2011 Scientists at Imperial College London have made the most accurate measurement yet of the shape of the humble electron, finding that it is almost a perfect sphere, in a study published in the journal Nature on May 25. The experiment, which spanned more than a decade, suggests that the electron differs from being perfectly round by less than 0.000000000000000000000000001 cm. This means that if the electron were magnified to the size of the solar system, it would still appear spherical to within the width of a human hair. The physicists from Imperial's Centre for Cold Matter studied the electrons inside molecules called ytterbium fluoride. Using a very precise laser, they made careful measurements of the motion of these electrons. If the electrons were not perfectly round then, like an unbalanced spinning-top, their motion would exhibit a distinctive wobble, distorting the overall shape of the molecule. The researchers saw no sign of such a wobble. The researchers are now planning to measure the electron's shape even more closely. The results of this work are important in the study of antimatter, an elusive substance that behaves in the same way as ordinary matter, except that it has an opposite electrical charge. For example, the antimatter version of the negatively charged electron is the positively charged anti-electron (also known as a positron). Understanding the shape of the electron could help researchers understand how positrons behave and how antimatter and matter might differ. Research co-author, Dr Jony Hudson, from the Department of Physics at Imperial College London, said, "We're really pleased that we've been able to improve our knowledge of one of the basic building blocks of matter. It's been a very difficult measurement to make, but this knowledge will let us improve our theories of fundamental physics. People are often surprised to hear that our theories of physics aren't 'finished', but in truth they get constantly refined and improved by making ever more accurate measurements like this one." The currently accepted laws of physics say that the Big Bang created as much antimatter as ordinary matter. However, since antimatter was first envisaged by Nobel Prize-winning scientist Paul Dirac in 1928, it has only been found in minute amounts from sources such as cosmic rays and some radioactive substances. Imperial's Centre for Cold Matter aims to explain this lack of antimatter by searching for tiny differences between the behaviour of matter and antimatter that no-one has yet observed. Had the researchers found that electrons are not round it would have provided proof that the behaviour of antimatter and matter differ more than physicists previously thought. This, they say, could explain how all the antimatter disappeared from the universe, leaving only ordinary matter. Professor Edward Hinds, research co-author and head of the Centre for Cold Matter at Imperial College London, said: "The whole world is made almost entirely of normal matter, with only tiny traces of antimatter. Astronomers have looked right to the edge of the visible universe and even then they see just matter, no great stashes of antimatter. Physicists just do not know what happened to all the antimatter, but this research can help us to confirm or rule out some of the possible explanations." 
Antimatter is also studied in tiny quantities in the Large Hadron Collider at CERN in Switzerland, where physicists hope to understand what happened in the moments following the Big Bang and to confirm some currently unproven fundamental theories of physics, such as supersymmetry. Knowing whether electrons are round or egg-shaped tests these same fundamental theories, as well as other theories of particle physics that even the Large Hadron Collider cannot test. To help improve their measurements of the electron's shape, the researchers at the Centre for Cold Matter are now developing new methods to cool their molecules to extremely low temperatures, and to control the exact motion of the molecules. This will allow them to study the behaviour of the embedded electrons in far greater detail than ever before. They say the same technology could also be used to control chemical reactions and to understand the behaviour of systems that are too complex to simulate with a computer.
- J. J. Hudson, D. M. Kara, I. J. Smallman, B. E. Sauer, M. R. Tarbutt, & E. A. Hinds. Improved measurement of the shape of the electron. Nature, 2011; 473(7348): 493. DOI: 10.1038/nature10104
http://www.sciencedaily.com/releases/2011/05/110525131707.htm
4
Why the tundra is transforming

A new study reveals mechanisms behind summer warming of the Arctic tundra

By Deane Morrison
From M, winter 2006

For years, summer has been strengthening its grip on the vast expanse of Arctic tundra. Scientists have documented how years of warming have led to thinning and retreat of ice in the Arctic Ocean--an ominous omen for polar bears, walruses and other marine animals--but no one knew why temperatures were rising even faster over the grassy tundra. Now, a study of the tundra shows how warming of the land can "snowball" into profound ecological changes. The culprits are a longer snow-free season and a shift in vegetation, including shrubs as well as coniferous trees.

It was already known that the loss of sea ice allows the ocean to absorb more solar heat. The ocean then warms both the atmosphere and the ice, causing even more melting in a cycle of positive feedbacks. But the melting sea ice can't explain the extensive warming that has happened over land during the summer. Instead, the explanation lies in a different feedback loop that also started with a little warming, says Joe McFadden, an assistant professor of ecology, evolution and behavior in the College of Biological Sciences.

The key is that a little warming causes small changes that amplify the warming of Arctic lands, McFadden explains. In a paper published in last week's Science magazine, McFadden and co-authors from several other institutions showed--using real data, not computer simulations--that the snow-free period is lengthening in the Arctic and that the spread of shrubs and trees is contributing to the change.

"We can consider the Arctic a bellwether," says McFadden, who includes Minnesota among the regions where climate and ecosystems are sensitive to the timing of snowmelt. "We show that the global warming models' predictions were right that the Arctic would respond earlier and more strongly. This is a wake-up call. Something's really happening."

In the same issue, the magazine also ran a commentary piece praising the work and calling for more attention to Arctic warming in general and the role of terrestrial vegetation in particular.

The timing of snowmelt varies greatly around the tundra, but on average the date of snowmelt is being pushed back 2.5 days per decade, the researchers concluded. The loss of snow cover contributes to the growth of trees, which are starting to invade more northern areas. But what surprised the scientists was how the spread of shrubs influences local climate. Shrubs are relatively sparse on the tundra and are just beginning to spread, but they have already caused dramatic change, at least in part by absorbing more solar radiation than they reflect. "The effect of shrubs is almost as great as that of tree expansion," McFadden says. This factor is missing from nearly all models of Arctic climate change, and including it will only intensify the predicted impact of warming on the region.
The changes in the date of snowmelt and the spread of vegetation have a huge proportional effect because the growing season is so short and the amount of shrub and tree cover is so small. The effects will be felt by all kinds of tundra dwellers. For example, the caribou herds time their northward migrations to the summer calving grounds to coincide with the emergence of tender shoots of vegetation. With earlier snowmelt comes an increased risk of mistiming their arrival, which could have a significant impact on the health of the herds because older shoots are less palatable. Another risk from earlier melting is that the snow will refreeze and coat the vegetation with thick ice that the caribou can't break through.

Higher temperatures may also affect the Inuit. Negative impacts on subsistence resources like marine mammals and caribou will diminish their food supplies or at least make it harder to hunt them. Also, snow machines and skis are harder to use when snow is warmer and mushier, and people who have to cross frozen lakes will be at risk from thinner ice.

But despite the worrisome climate trends, McFadden finds his niche in the new science of global ecology exciting. He finds the study of previously unknown feedback systems fascinating, and he enjoys having colleagues who share his enthusiasm. "The University has made a big commitment to growing in the area of global ecology," he says. "There are faculty in multiple colleges and departments, including soil, water and climate [Agricultural, Food and Environmental Sciences], geography [Liberal Arts], geology and geophysics [Institute of Technology] and my home department."

Doing research in the Arctic was a source of excitement in itself. McFadden set up instruments at 24 research sites between the treeline on the north slope of Alaska and the Arctic Ocean. The instruments measured the exchanges of heat, water vapor and carbon dioxide between the vegetation and the atmosphere. McFadden was quite familiar with the instruments; as a graduate student at the University of California, Berkeley, he had devised a way to package them compactly. Good thing, because he had to be flown to the Alaskan sites by helicopter.

In his current work, McFadden is using computer models to examine the interactions between shrubs and snow, in particular whether they contribute to the release of carbon dioxide under the snowpack in winter. The published work was funded by the National Science Foundation's Arctic System Science program.
http://www1.umn.edu/news/features/2006/UR_72153_REGION1.html
4.21875
In the beginning, there was light. Under the intense conditions of the early universe, ionized matter gave off radiation that was trapped within it like light in a dense fog. But as the universe expanded and cooled, electrons and protons came together to form neutral atoms, and matter lost its ability to ensnare light. Today, some 14 billion years later, the photons from that great release of radiation form the cosmic microwave background (CMB). Tune a television set between channels, and about 1 percent of the static you see on the screen is from the CMB. When astronomers scan the sky for these microwaves, they find that the signal looks almost identical in every direction. The ubiquity and constancy of the CMB is a sign that it comes from a simpler past, long before structures such as planets, stars and galaxies formed. Because of this simplicity, we can predict the properties of the CMB to exquisite accuracy. And in the past few years, cosmologists have been able to compare these predictions with increasingly precise observations from microwave telescopes carried by balloons and spacecraft. This research has brought us closer to answering some age-old questions: What is the universe made of? How old is it? And where did objects in the universe, including our planetary home, come from? This article was originally published with the title The Cosmic Symphony.
http://www.scientificamerican.com/article.cfm?id=the-cosmic-symphony
4.125
Ice cores drilled in the Greenland ice sheet are giving scientists their clearest insight into a world that was warmer than today. In a new study, scientists have used a 2,540-metre-long Greenland ice core to reach back to the Eemian period 115,000-130,000 years ago and reconstruct Greenland temperature and ice sheet extent through the last interglacial. This period is likely to be comparable in several ways to climatic conditions in the future, especially in mean global surface temperature, but without anthropogenic (human) influence on the atmospheric composition.

The Eemian period is referred to as the last interglacial, when warm temperatures continued for several thousand years, due mainly to the Earth's orbit allowing more energy to be received from the Sun. The world today is considered to be in an interglacial period, the Holocene, which has lasted 11,000 years.

"The ice is an archive of past climate, and analysis of the core is giving us pointers to the future, when the world is likely to be warmer," said CSIRO's Dr Mauro Rubino, the Australian scientist working with the North Greenland Eemian ice core research project. Dr Rubino stated that the Greenland ice sheet is presently losing mass more quickly than the Antarctic ice sheet. Of particular interest is the extent of the Greenland continental ice sheet at the time of the last interglacial and its contribution to global sea level.

Deciphering the ice core archive proved especially difficult for ice layers formed during the last interglacial because, being close to bedrock, the pressure and friction due to ice movement impacted and re-arranged the ice layering. These deep layers were "re-assembled" into their original formation using careful analysis, particularly of concentrations of trace gases that tie the dating to the more reliable Antarctic ice core records. Using dating techniques and analysing the water stable isotopes, the scientists estimated that the warmest Greenland surface temperatures during the interglacial period, about 130,000 years ago, were 8 ± 4 °C warmer than the average of the last 1,000 years. At the same time, the thickness of the Greenland ice sheet decreased by 400 ± 250 metres.

"The findings show a modest response of the Greenland ice sheet to the significant warming in the early Eemian and lead to the deduction that Antarctica must have contributed significantly to the 6 metre higher Eemian sea levels," the researchers noted. Additionally, ice core data at the drilling site reveal frequent melting of the ice sheet surface during the Eemian period. "During the exceptional heat over Greenland in July 2012, melt layers formed at the site. With additional warming, surface melt might become more common in the future," the researchers added.

Dr Rubino said the research results provide new benchmarks for climate and ice sheet scenarios used by scientists in projecting future climate influences. The research was published in the journal Nature. (ANI)
http://www.sify.com/news/Greenland-ice-cores-provide-vision-of-warmer-future-world-news-International-nbyq5cfagcg.html
4.09375
Humans are among more than 200 species of primates living on Earth today--one of the latest products of a long history of primate evolution. But over the past 65 million years, many now-extinct primate species flourished around the world. As groups adapted to different environments, they began to acquire features and abilities that persist in many of their varied descendants, including ourselves. The evolution of the primates is written in the fossil record. Each of the five species displayed here is representative of the primates living at a particular moment over the past 56 million years. Together, these examples reveal the development of features that are characteristic of living primates--for instance, grasping hands and feet, relatively large brains and keen eyesight.

Mural of Primate Evolution

This 1993 mural by artist Jay Matternes depicts five different kinds of extinct primates, each living at a different point during the past 56 million years:
- Plesiadapis (56 million years old)
- Notharctus (48 million years old)
- Aegyptopithecus (30 million years old)
- Proconsul (18 million years old)
- Sivapithecus (8 million years old)

When most dinosaurs went extinct about 65 million years ago, mammals moved into newly vacated territories and rapidly evolved into many new species--including the ancestors of today's primates. Soon, groups of small primates were flourishing in forests around the world. Known as plesiadapiforms, these proto-primates lacked many features that characterize living primates. But significantly, their teeth were much like those of lemurs, monkeys, apes and humans--an indication that they were closely related to the direct ancestors of all modern primates.

It's All Relative

Plesiadapis cookei and other plesiadapiforms didn't look like most living primates: their eyes were set in the sides of their heads, instead of in front, and most species had non-grasping hands and feet. But plesiadapiforms were, in fact, well adapted for their environments. Enlarged nasal cavities allowed them to smell what they couldn't see, and claws served them well when scaling trees.

Chew On This

Despite its curiously enlarged front teeth, Plesiadapis had teeth very much like those of living primates. Indeed, its chewing teeth are much flatter than those of most other early mammals, which suggests that Plesiadapis ate a good deal of soft fruit and vegetation and had moved away from a primary diet of insects.
http://www.amnh.org/exhibitions/past-exhibitions/human-origins/understanding-our-past/extinct-primates
4.0625
Basic Accounting Terms
Here are some basic accounting terms to become familiar with. (Don't get overwhelmed!) You don't have to memorize these, but you need to be able to interpret them. Here are the most frequent accounting terms used.
Accounting - process of identifying, measuring, and reporting financial information of an entity.
Accounting Equation - Assets = Liabilities + Equity
Accounts Payable - money owed to creditors, vendors, etc.
Accounts Receivable - money owed to a business, i.e. credit sales.
Accrual Accounting - a method in which income is recorded when it is earned and expenses are recorded when they are incurred.
Asset - property with a cash value that is owned by a business or individual.
Balance Sheet - summary of a company's financial status, including assets, liabilities, and equity.
Bookkeeping - recording financial information.
Break-even - the amount of product that needs to be sold to create a profit of zero.
Cash-Basis Accounting - a method in which income and expenses are recorded when they are paid.
Chart of Accounts - a listing of a company's accounts and their corresponding numbers.
Cost Accounting - a type of accounting that focuses on recording, defining, and reporting costs associated with specific operating functions.
Credit - an account entry with a negative value for assets, and positive value for liabilities and equity.
Debit - an account entry with a positive value for assets, and negative value for liabilities and equity.
Depreciation - recognizing the decrease in the value of an asset due to age and use.
Double-Entry Bookkeeping - system of accounting in which every transaction has a corresponding positive and negative entry (debits and credits).
Equity - money owed to the owner or owners of a company, also known as "owner's equity."
Financial Accounting - accounting focused on reporting an entity's activities to an external party, e.g., shareholders.
Financial Statement - a record containing the balance sheet and the income statement.
Fixed Asset - long-term tangible property; building, land, computers, etc.
General Ledger - a record of all financial transactions within an entity.
Income Statement - a summary of income and expenses.
Job Costing - system of tracking costs associated with a job or project (labor, equipment, etc.) and comparing with forecasted costs.
Journal - a record where transactions are recorded, also known as an "account."
Liability - money owed to creditors, vendors, etc.
Liquid Asset - cash or other property that can be easily converted to cash.
Loan - money borrowed from a lender and usually repaid with interest.
Net Income - money remaining after all expenses and taxes have been paid.
Non-operating Income - income generated from non-recurring transactions, e.g., sale of an old building or piece of equipment.
Note - a written agreement to repay borrowed money; sometimes used in place of "loan."
Operating Income - income generated from regular business operations.
Payroll - a list of employees and their wages.
Profit - see "net income."
Profit/Loss Statement - see "income statement."
Revenue - total income before expenses.
Single-Entry Bookkeeping - system of accounting in which transactions are entered into one account.
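Two of the definitions above lend themselves to a quick numeric check: the accounting equation and break-even. The short Python sketch below uses the standard break-even formula (fixed costs divided by price minus variable cost per unit), which is not spelled out in the glossary itself, and all dollar figures are made-up examples.

```python
# Minimal sketch illustrating two terms from the glossary: the accounting
# equation (Assets = Liabilities + Equity) and a break-even calculation.
# All figures are invented examples, not taken from the source.

def accounting_equation_balances(assets, liabilities, equity, tolerance=0.01):
    """Return True if Assets = Liabilities + Equity within a small tolerance."""
    return abs(assets - (liabilities + equity)) <= tolerance

def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units that must be sold for profit to equal zero (standard formula)."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("Price must exceed variable cost to break even.")
    return fixed_costs / contribution_margin

if __name__ == "__main__":
    # A balance sheet that balances: 50,000 = 20,000 + 30,000
    print(accounting_equation_balances(50_000, 20_000, 30_000))  # True

    # Break-even: $9,000 fixed costs, $25 price, $10 variable cost -> 600 units
    print(break_even_units(9_000, 25, 10))  # 600.0
```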
http://chic-ceo.com/basic-accounting-terms
4.53125
The Legislative Process
Congress's primary duty is to pass laws. The legislative process is often slow, just as the framers of the Constitution intended. The framers believed that a slow-moving legislature would be less able to infringe on citizens' rights and liberties.
Bills and Laws
Most bills that Congress considers are public bills, meaning that they affect the public as a whole. A private bill grants some relief or benefit to a single person, named in the bill. Many private bills help foreign nationals obtain visas, but they can cover a variety of other matters. The process through which a bill becomes law occurs in several stages in both houses:
- Introduction: Only a member of Congress may introduce a bill. After a bill is introduced, it is assigned a designation number. Only members of the House of Representatives may introduce bills concerning taxes.
- Referral to committee: The leader of the house in which the bill was introduced then refers the bill to an appropriate committee or committees.
- Committee action: The committees can refer the bill to subcommittees for action, hearings, markup sessions, and votes. The committee can also kill the bill by doing nothing at all, a process known as pigeonholing.
- Referral to the full body: If a committee approves a bill, the bill is sent on to the full House or Senate.
- Floor debate and vote: The full body debates the bill and then votes. The two houses differ significantly in how they handle debate:
- In the House, the Rules Committee has the power to limit debate and the number of amendments offered during debate. A vote in which every member's vote is recorded is called a roll-call vote.
- In the Senate, members are allowed to speak as much as they wish and to propose as many amendments as they wish. There is no Senate Rules Committee.
- Conference committee: Often, the two houses produce different versions of a single bill. When this happens, both houses appoint members to a conference committee, which works to combine the versions. After the conference committee's report, both houses must vote on the new bill.
- The President: The president's only official legislative duty is to sign or veto bills passed by Congress. If the president signs the bill, it becomes law. If the bill is vetoed, it goes back to Congress, which can override the veto with a two-thirds vote in both houses. Veto overrides are rare—it is extremely difficult to get two-thirds of each house of Congress to agree to override. Instead, presidential vetoes usually kill bills. Sometimes the president chooses to do nothing with bills that Congress sends. If the president still has not signed or vetoed the bill after ten days, the bill becomes law if Congress is in session. If Congress has since adjourned, the bill does not become law. This is called a pocket veto.
Congress must also pass the federal budget. According to the Constitution, Congress must approve all government spending. In other words, Congress has the power of the purse. Many congressional activities are related to spending and generating revenue. The U.S. government runs on a fiscal year, a twelve-month period used for accounting purposes. Currently, the fiscal year starts on the first day of October, but Congress has the power to change the start date. Congress must pass a budget for every fiscal year. Because the budget is so complex, the president and Congress begin work on it as much as eighteen months before the start of a fiscal year.
The president submits a budget proposal to Congress every January for the upcoming fiscal year. Congress then acts on the proposal, usually granting much of what the president wants. To prevent a government shutdown, Congress must pass the budget by the end of the fiscal year.
Authorization and Appropriation
Spending money is a two-step process:
- Congress must authorize the money being spent. Authorization is a declaration by a committee that a specific amount of money will be made available to an agency or department.
- After authorizing expenditures, Congress must appropriate the money by declaring how much of the authorized money an agency or department will spend. Sometimes appropriation bills come with strict guidelines for spending the money.
Congress usually ends up creating an appropriation bill for each government department, although sometimes departments are combined into a single bill. Each bill must be passed for that department to receive funding. Some appropriation bills are easily passed, but others are very controversial. Congress must pass a budget every year by the start of the new fiscal year, which means that appropriation bills must be passed for every part of the government. If an appropriation bill does not pass, then the department whose budget is being discussed will shut down, and all nonessential employees will be temporarily out of work. Sometimes Congress passes a continuing resolution, which provides funding for a limited period (usually a week or two). Congress then uses the extra time to reach an agreement on the budget.
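The authorization/appropriation sequence above can be expressed as a small rule check. The sketch below is a deliberate simplification under the assumption that an agency can spend only money that has been both authorized and appropriated (and that the appropriation does not exceed the authorization); the department names and dollar amounts are hypothetical.

```python
# Sketch of the two-step rule described above: money must first be authorized,
# then appropriated, and a department can spend only what has been appropriated.
# Department names and dollar amounts (in billions) are hypothetical.

authorized   = {"Education": 80, "Transportation": 60}
appropriated = {"Education": 70, "Transportation": 0}   # no appropriation passed

def can_spend(department, amount):
    """Spending requires an appropriation, capped here by the authorization."""
    appropriation = min(appropriated.get(department, 0),
                        authorized.get(department, 0))
    return amount <= appropriation

print(can_spend("Education", 65))       # True: within the appropriated amount
print(can_spend("Transportation", 10))  # False: authorized but never appropriated
```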
http://www.sparknotes.com/us-government-and-politics/american-government/congress/section4.rhtml
4.15625
Population Teacher Resources
Find teacher-approved Population educational resource ideas and activities.
In this World Population Day worksheet, students complete activities such as reading a passage, phrase matching, fill in the blanks, correct words, multiple choice, spelling, sequencing, scrambled sentences, asking questions, taking a survey, and writing. Students complete 12 activities for World Population Day. Young scholars explore biological impact by completing a worksheet in class. For this animal population lesson, students utilize river rocks and jars to conduct a coqui frog population role play activity. Young scholars define a list of amphibian-related vocabulary terms before completing a frog population worksheet in class.
http://www.lessonplanet.com/lesson-plans/population
4.1875
In this lesson students discover that measurements from space can tell us the temperature of the ocean, both on an annual average and as measured on any given date. For the annual average, the highest ocean temperatures are near the equator and drop as one moves either northward or southward from the equator. Students will graph each temperature value as a function of latitude and write a linear equation that best fits the points on their graph. They can choose as data points any point at that approximate latitude because the temperature is not uniform for a certain latitude - some areas are hotter and some are cooler. They can also look at today's ocean temperatures via the link provided to see how the seasons affect whether the northern or southern oceans are warmer. Students will take ocean temperature data from a map and plot temperature versus angle from the equator.
Intended for grade levels:
Type of resource: Adobe Acrobat reader
Cost / Copyright: Activities are copyright Rice University; however, they may be freely copied for educational use so long as headers are not changed. Teachers are encouraged to register using the registration facility so that they receive notices of new activities, etc.
DLESE Catalog ID: DLESE-000-000-007-779
Resource contact / Creator / Publisher:
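The fitting step described above, plotting temperature against latitude and writing a best-fit linear equation, can be prototyped in a few lines of Python. The latitude/temperature pairs below are invented placeholders standing in for values a student would read off the map, not real data.

```python
import numpy as np

# Sketch of the activity described above: plot average sea-surface temperature
# against latitude and fit a best-fit line. The (latitude, temperature) pairs
# are invented placeholders, not real map readings.
latitude_deg = np.array([0, 10, 20, 30, 40, 50, 60])   # degrees from the equator
temperature_c = np.array([28, 27, 25, 21, 17, 12, 7])  # degrees C (made up)

# Least-squares fit: temperature = slope * latitude + intercept
slope, intercept = np.polyfit(latitude_deg, temperature_c, deg=1)
print(f"T is roughly {slope:.2f} * latitude + {intercept:.2f}")

# Predicted temperature at 35 degrees north or south of the equator
print(f"Predicted at 35 degrees: {slope * 35 + intercept:.1f} C")
```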
http://www.dlese.org/library/catalog_DLESE-000-000-007-779.htm
4.09375
Scientists at Stanford University School of Medicine have discovered a new potential means of preventing HIV infection. Human Immunodeficiency Virus (HIV), the virus responsible for AIDS, is notoriously difficult to treat due to its tendency to mutate. The American team, however, has found a way to get around this problem by preventing infection in the first place. The method consists of altering the genetic codes of the specific white blood cells which are targeted by HIV so as to make them naturally resistant to the virus. The researchers of the study, published in Molecular Therapy, have claimed that this tailored gene therapy might ultimately replace current HIV drug treatments, which necessitate patients taking multiple drugs on a daily basis. While not a cure for the virus, this method could potentially prevent the spread of infection and, therefore, the likelihood of AIDS developing. The treatment could eventually also be applied to other blood-based diseases, such as sickle cell anaemia.
http://www.wellsphere.com/health-education-article/genetic-therapy-may-prevent-hiv/1856459
4.15625
by Laura Cheshire
Published in 1995
The Ross Sea is home to Antarctica's largest polynya (an area of open water surrounded by ice). It was first spotted by explorers in the early 19th century, and then by satellite imagery in the 1970s. Since 1975, scientists have monitored the polynya closely to determine its role in global primary production and carbon cycling in the Ross Sea. The Ross Sea is an ideal location for research on climate changes and carbon cycling in high-latitude environments because it represents a typical Antarctic coastal region, has well-defined boundaries and current circulation, and exhibits large biogenic production in the form of plankton. In combination, these factors also result in a fairly simple model representative of a polar ocean. Polynyas are significant because they allow light and nutrient conditions in which seasonal, very dense blooms flourish, creating a significant carbon sink. In late November each year, the polynya in the Ross Sea opens, reaching its fullest extent in January (Antarctica's brief summer). During this time, researchers observe enormous phytoplankton blooms, particularly around the receding ice edge. Phytoplankton absorb atmospheric carbon, and with such a large annual bloom in the polynya, researchers realized that the Ross Sea might play a greater role in global carbon cycling than was previously thought. "What ultimately led to this study is that models for climate change predict larger changes at the poles, so the Ross Sea is a place where small changes may show up first," says Rob Dunbar of Rice University, who has done field work in Antarctica since 1982. By using a variety of methods and tools, such as weather stations, ocean moorings, and satellite imagery, Dunbar and other researchers can begin to piece together a more accurate view of the role of polar regions in climate change. During field programs in 1991 and 1992, researchers collected water samples, temperature-depth readings, sediment cores, and biogenic particles. Throughout the study, three ocean moorings were used to obtain a time-series of information on organic particulates in the ocean below and around the sea ice. Each had two particle traps attached, one about 250 meters below the surface, and another about 50 meters above the sea floor. Placed in locations that exhibit differing sea ice conditions, the moorings were used to measure variations in primary production as well as distribution of biogenic forms and organic matter. The sediment traps featured 14 cups that rotated into place every few weeks, gathering plankton, silica, and diatoms over a two-year period. Throughout the study, passive microwave brightness temperature data from the National Snow and Ice Data Center DAAC provided information on sea ice concentration and ice edge growth and recession. Researchers used averaged ice concentrations to estimate sea ice cover during years in which they performed field work, and used satellite imagery from 1978 to 1994 to analyze longer-term variations in sea ice extent and concentration. Though the investigators encountered technical problems with cup rotation and mooring retrieval, they still obtained sufficient information to determine biogenic populations in different parts of the Ross Sea. They also developed a time series of plankton bloom and dissolution, and determined spatial and temporal variations in primary production.
As witnessed in the annual polynya blooms, productivity reaches a peak in mid to late December along the receding ice edges, and then declines steadily into January and February as the ice freezes and closes the polynya. "If you look at sea ice data archives, December is a key time for phytoplankton." Dunbar says, "Algae need to have open water." In years that the polynya did not open to such a large extent, phytoplankton blooms were minimal, and primary productivity was consequently lower. The annual event of the Ross Sea polynya provides insight into how polar regions interact with and potentially influence larger climate systems. Whether or not the polynya opens each year depends on several factors. The distribution and condition of sea ice reflects changes in wind speed and direction, changes in atmospheric and ocean temperature, and changes in weather patterns. Carbon cycling in the Ross Sea is more than a modeling scheme component. Primary productivity is vital to marine life. Algae require open water to bloom, so when the polynya does not open, the marine food chain is interrupted. "If you're part of the food chain," Dunbar explains, "this is it." During years when the Ross Sea polynya did not open, "there was a lot of seal mortality because the base of the food chain was perturbed -- everything there is marine-based, delicately in balance with physical conditions." Dunbar plans to continue studying Antarctic carbon cycling using similar techniques, and will continue to rely on sea ice data from the National Snow and Ice Data Center. Carbon uptake in Antarctic phytoplankton blooms may seem a small factor in global carbon cycling, but Dunbar insists that it is necessary to understand as many components in the process as possible in order to come up with a complete picture of climate change. Dunbar, R. B., A. R. Leventer, and D. A. Mucciarone. 1996. Water column sediment fluxes in the Ross Sea, Antarctica (I): Atmospheric and sea ice forcing. Submitted to Journal of Geophysical Research.
http://earthdata.nasa.gov/featured-stories/featured-research/phytoplankton-and-polynyas
4.3125
On February 11, 2000, NASA’s Space Shuttle Endeavour embarked on an 11-day mission to create the first near-global, high-resolution topographic map of Earth. A decade later, the data collected during the Shuttle Radar Topography Mission (SRTM) is still widely used by scientists, engineers, and mapmakers. The Davenport Ranges of central Australia, known for their striking oval and zigzag patterns, are just one of many dramatic landscapes mapped during the mission. The dome of sinuous mesas and narrow valleys rises from the surrounding plain like an island surrounded by the sea. The higher-elevation features (pale yellow) are formed from harder rocks, such as quartz, while valleys are formed from softer rocks, such as shale, which erode more easily. Until fairly recently, geologists thought that so little erosion had occurred in Australia’s interior that it was possible that the landforms of the Davenport Ranges had existed at the surface for more than 500 million years. However, new evidence indicates that while the tectonic event that bent and folded the underlying rocks took place hundreds of millions of years ago, the exposed ridges and valleys were exhumed by erosion as recently as 100 million years ago. - Belton, D., Brown, R., Kohn, B., Fink, D., and Farley, K. (2004). Quantitative resolution of the debate over antiquity of the central Australian landscape: implications for the tectonic and geomorphic stability of cratonic interiors. Earth and Planetary Science Letters, 219 (1-2), 21-34.
http://visibleearth.nasa.gov/view.php?id=42733
4.03125
4 June 1998 During this peak, often referred to as solar maximum, great amounts of energy are released that can produce magnetic storms in the Earth's ionosphere. Until now, scientists have believed that auroras occur more frequently during this time of increased solar activity. The research findings benefit both the scientific community and the general public, according to Newell. "Our discovery may help us narrow down the actual cause of auroras," says Newell. "It will lead to a better understanding of their effect on upper atmospheric weather. And, it will help tourists traveling to Alaska and other high-latitude locations plan their trips to see the grand 'northern lights' during optimum viewing times." The APL scientists confirmed an existing theory regarding how auroras are created, called the ionospheric conductivity feedback mechanism. This theory implies that during solar maximum, when the sun's ultraviolet rays increase ionospheric conductivity, there should be fewer intense auroras, at least under sunlit conditions. The theory predicts no change in auroral frequency during conditions of darkness. To test this theory, Dr. Newell and his colleagues conducted the first-ever statistical study of an entire solar cycle using 12 consecutive years of charged particle data from Air Force Defense Meteorological satellites. "The data from the Air Force satellites confirmed our assumptions based on the ionospheric theory, and support the idea that intense auroras are a discharge phenomenon, analogous to lightning," says Newell. He adds, however, that rare, huge magnetic storms, which allow auroras to be seen at lower-than-usual latitude, are still more common following solar maximum, but they only represent a small fraction of the total number of intense auroras. To read more about the auroral research by Dr. Newell and his colleagues, visit the following Web site: http://sd-www.jhuapl.edu/Aurora. The Applied Physics Laboratory is a not-for-profit laboratory and independent division of The Johns Hopkins University. APL conducts research and development primarily for national security and for nondefense projects of national and global significance. Located midway between Baltimore and Washington, D.C., in Laurel, Md., APL employs 2,700 full-time staff.
http://carlkop.home.xs4all.nl/aurora2.html
4
Kinematics and Dynamics - Key terms
Acceleration: A change in velocity.
Dynamics: The study of why objects move as they do; compare with kinematics.
Force: The product of mass multiplied by acceleration.
Hypothesis: A statement capable of being scientifically tested for accuracy.
Inertia: The tendency of an object in motion to remain in motion, and of an object at rest to remain at rest.
Kinematics: The study of how objects move; compare with dynamics.
Mass: A measure of inertia, indicating the resistance of an object to a change in its motion—including a change in velocity.
Matter: The material of physical reality. There are four basic states of matter: solid, liquid, gas, and plasma.
Mechanics: The study of bodies in motion.
Resultant: The sum of two or more vectors, which measures the net change in distance and direction.
Scalar: A quantity that possesses only magnitude, with no specific direction. Mass, time, and speed are all scalars. The opposite of a scalar is a vector.
Speed: The rate at which the position of an object changes over a given period of time.
Vacuum: Space entirely devoid of matter, including air.
Vector: A quantity that possesses both magnitude and direction. Velocity, acceleration, and weight (which involves the downward acceleration due to gravity) are examples of vectors. Its opposite is a scalar.
Velocity: The speed of an object in a particular direction.
Weight: A measure of the gravitational force on an object; the product of mass multiplied by the acceleration due to gravity. (The latter is equal to 32 ft or 9.8 m per second per second, or 32 ft/9.8 m per second squared.)
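Several of these definitions (force as mass times acceleration, weight as mass times the acceleration due to gravity, and the resultant of vectors) can be tied together in a short numeric sketch. The masses and vector components below are arbitrary examples chosen for illustration.

```python
import math

G = 9.8  # acceleration due to gravity in m/s^2 (about 32 ft/s^2), as noted above

def force(mass_kg, acceleration_ms2):
    """Force = mass x acceleration (newtons)."""
    return mass_kg * acceleration_ms2

def weight(mass_kg):
    """Weight = mass x acceleration due to gravity."""
    return force(mass_kg, G)

def resultant_2d(vectors):
    """Sum of 2-D vectors (each an (x, y) pair): returns magnitude and direction."""
    x = sum(v[0] for v in vectors)
    y = sum(v[1] for v in vectors)
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

print(weight(10))                       # 98.0 N for a 10 kg mass
print(resultant_2d([(3, 0), (0, 4)]))   # (5.0, 53.13...) magnitude and angle
```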
http://www.scienceclarified.com/everyday/Real-Life-Chemistry-Vol-3-Physics-Vol-1/Kinematics-and-Dynamics-Key-terms.html
4.0625
Child & Adult Care Food Program
Nutrition education activities with fruits and vegetables
Identify common fruits and vegetables. All fruits have seeds inside of them. Vegetables are the roots, bulbs or leaves of plants. Recognize that fruits and vegetables can be prepared and eaten in a variety of ways - juices, sauces, cooked, raw and with other foods. Recognize that children need two to three servings of fruit and three servings of vegetables each day. Notice that food changes when it is cooked. Recognize that snacks have foods from at least two food groups. Recognize the difference between "anytime snacks" and "sometimes snacks."
1. Provide a variety of fruits and vegetables. Ask children which ones are fruits and which ones are vegetables. Talk about how many vegetables and fruits we should eat each day. Sort the fruits and vegetables by the following categories: eaten cooked, raw, or both.
2. Prepare the fruits and vegetables for snack. Ask these questions using the five senses: How does it look? How does it feel? How does it sound when I eat it? How does it smell? How does it taste? Use the children's comments to help them sort the fruits and vegetables in new ways. For example, some things taste sweet, some need to be chewed for a long time. Some are squishy, some are hard. Some have skins, some are peeled, others aren't peeled.
3. Explore with the children all the ways fruits and vegetables may be served. Include: raw, cooked; as part of soups, casseroles or other dishes; as juices; and as sauces (apple, cranberry, tomato).
4. Prepare applesauce or mashed potatoes with the children. Before you cook, ask the children: How does it look, taste, smell, feel, and sound when you eat it? Ask the children to predict what will happen when the fruit or vegetable is cooked. After it is cooked, ask the children the same five questions about senses as in Number 2 above.
5. Talk with children about their favorite snacks. How might you group these snacks? In addition, some new categories are: baked in the oven, grows on a farm, comes out of the refrigerator, comes out of a box. Make a sometime/anytime snack list to send home with the children.
The children can help prepare the following items: applesauce, fruit gelatin, fruit slurpy, fruit/vegetable muffins, mashed potatoes, zucchini muffins. As you eat fruits at home, count the seeds.
Source: Darlene Martin, Extension Nutrition Specialist and Program Coordinator, University of Nebraska-Lincoln
http://www.education.ne.gov/NS/CACFP/Caring/nutri_ed1.html
4.15625
Governor Massachusetts Bay Colony: 1630-1634, 1637-1640, 1642-1644, 1646-1649 John Winthrop was the young Massachusetts Colony's most prominent leader, serving as Governor for fifteen of its first twenty years. In his famous "City on a Hill" speech, Winthrop articulated the Puritan hope that their community would be an example to the world. For Puritans did not merely seek to escape repression of their faith, they aspired to create a society based on that faith as a model to redeem their homeland. In March of 1630, the Winthrop fleet of eleven vessels with more than 1,000 passengers onboard set off for Massachusetts. Unlike the Pilgrims who suffered through their passage and ended up 200 miles too far north during December, Winthrop and the Puritan settlers had a speedy passage, arriving in the warm weather of June and July at Salem where Governor John Endecott welcomed them. Winthrop led the Puritans to Charlestown and eventually to the Shawmut Peninsula, because of its fresh water springs. An old Cambridge classmate of Winthrop's, the Reverend William Blackstone, who had been part of an earlier failed expedition, inhabited Shawmut. He invited the Massachusetts Bay Colony to join him on the Shawmut Peninsula. Settler Thomas Dudley, who would succeed Winthrop as the Colony's Governor, suggested the settlers name the new settlement "Boston." Dudley, as well as many of the settlers hailed from Boston in Lincolnshire, England. The name of their hometown recalled their desire to make a version of English society based on the principles of their faith. In contemporary accounts Winthrop is often only recalled as the prosecutor of Anne Hutchinson. Winthrop's intolerant and even misogynistic nature was common among the zealous Puritan founders of Massachusetts. It often escapes contemporary readers that Winthrop was an able Governor in his time. He used the legal training he obtained as a young man studying law at the Inns of Court in London to effectively defend the Colony's charter in England. He was respected both by colonists in Massachusetts, as well as by the leaders of Plymouth, Connecticut, and New Haven who joined with Massachusetts in confederation and elected Winthrop their first chief executive.
http://www.mass.gov/portal/government-taxes/laws/interactive-state-house/historical/governors-of-massachusetts/massachusetts-bay-colony-period-1629-1686/john-winthrop-1587-1649.html
4.15625
ADA Information and Checklist
Increasing audiences by increasing accessibility and meeting ADA rules
General information on disabilities
- What is a disability? A person described as having a disability is a person with a physical, emotional or mental impairment that substantially limits one or more major life activities OR a person with a record of such impairment OR a person who is perceived as having such an impairment. "Major life activities" include: thinking, processing information, listening, seeing, hearing, breathing, walking, taking care of personal needs, working, interaction with others, concentrating, sitting, standing and reading.
- What is the ADA? The Americans with Disabilities Act was signed into law on July 26, 1990. The ADA is intended to provide a "clear and comprehensive national mandate for the elimination of discrimination against persons with disabilities."
- What does the ADA do? The ADA prohibits discrimination on the basis of disability and provides the first comprehensive civil rights to persons with disabilities in the areas of employment; public accommodations; state and local government programs, services and activities; and communications. This includes not just architectural accessibility but also programmatic accessibility.
- People First Language—WORDS CAN HURT. Dignity and respect begin with the language we use to represent ourselves. Always remember, when speaking or writing, put the person first, then the disability. EXAMPLE: Person with a disability, as opposed to a disabled person, or even worse, the disabled.
- ADA and the ARTS—None of us plans an event with the conscious intent of discrimination. We want to involve people in our passion...the arts. Without a conscious awareness of the planning process and an understanding of what "accessibility" means, we could easily exclude a portion of the community. This exclusion is discrimination.
What does accessible mean?
- Accessibility enables everyone to attend, participate and benefit. Your arts event is accessible if people can get to it and, once there, if the people attending are able to participate actively in the program. The word accessibility is most often associated with wheelchair use, but accessibility actually involves the needs of people who have visual, cognitive or hearing disabilities, as well as those with activity, manual or mobility impairments.
Adopting an access philosophy
- The first step is to recognize that access is a civil rights issue.
- Access is also a social issue. Access promotes diversity and inclusion by ensuring that the arts are open to all people, regardless of ability.
- Access benefits the greater population. EVERYONE will experience a permanent or temporary disability, either personally or with a loved one. Remember that the aging process lessens mobility and presents hearing and visual difficulties.
- Access is related to audience development in the broadest sense and provides opportunities for people to be involved in all aspects of the arts to the fullest extent possible.
- Access has economic benefits. People with disabilities and older adults comprise a significant part of our population and are a large market for the arts.
Requirements for accessibility
- The best source for requirements is the Title III Technical Assistance Manual, available from the Department of Justice (1-800-949-4ADA) or at www.usdoj.gov/crt/ada/publicat.htm.
- To evaluate your facilities, you can use the ADA Checklist for Existing Facilities version 2.1, available from the Disability and Business Technical Assistance Center at 1-800-949-4ADA. The checklist provides the requirements for numerous elements of accessibility accompanied by suggested solutions for barriers you may identify while completing your evaluations. Or see the Facility Checklist below.
- Go through this evaluation process with a committee of interested people who also have expertise in several of the necessary areas (e.g., architects, contractors, representatives from the various "disability communities," etc.). With this evaluation, the arts management doesn't have the sole responsibility of convincing their board and others of needed adaptations. For more information, please call (601) 359-6030.
- Are 96" wide parking spaces designated with a 60" access aisle?
- Are there accessible parking spaces located near the main building entrance?
- Is there a "drop off" zone at the building entrance?
- Is the gradient from parking to building entrance 1:12 or less?
- Is the entrance doorway at least 32" wide?
- Is the door handle easy to grasp?
- Is the door easy to open (less than 8 lbs pressure)?
- Are doors other than revolving doors available?
- Is the path of travel free of obstruction and wide enough for a wheelchair?
- Is the floor surface hard and not slippery?
- Do obstacles (phones, fountains) protrude no more than 4"?
- Are elevator controls low enough (48") to be reached from a wheelchair?
- Are elevator markings in Braille for the blind?
- Does the elevator provide audible signals for the blind?
- Does the elevator interior provide a turning area of 51" for a wheelchair?
- Are restrooms near building entrances and/or personnel offices?
- Do doors have lever handles?
- Are doors at least 32" wide?
- Is the restroom large enough for wheelchair turnaround (51" minimum)?
- Are stalls at least 32" wide?
- Are grab bars provided in toilet stalls?
- Are sinks at least 30" high with room for a wheelchair to roll under?
- Are sink handles easily reached and used?
- Are soap dispensers, towels, etc., no more than 48" from the floor?
- Are exposed hot water pipes located under sinks wrapped in insulation to avoid injury to those individuals using a wheelchair?
Facilities that serve the general public (offices, exhibit halls, box offices, meeting spaces)
- Are doors at least 32" wide?
- Is the door easy to open?
- Is the threshold no more than ½" high?
- Are paths between desks, tables, etc., wide enough for wheelchairs?
- Do you have a counter that is low enough to serve individuals in wheelchairs?
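Several of the items above are simple numeric thresholds (96-inch parking spaces, 32-inch doorways, a door-opening force under 8 lbs, a 1:12 maximum gradient), so a facility survey can be pre-checked in code. This is only an illustrative sketch, not an authoritative ADA compliance tool; the thresholds mirror the checklist wording, and the sample measurements are hypothetical.

```python
# Sketch of a numeric pre-check for a few of the measurable items above.
# Thresholds mirror the checklist; the sample measurements are hypothetical.

CHECKS = {
    "parking_space_width_in":  (96, "min"),    # 96" wide parking spaces
    "doorway_width_in":        (32, "min"),    # entrance doorway at least 32"
    "door_opening_force_lbs":  (8, "max"),     # easy to open (less than 8 lbs)
    "ramp_rise_over_run":      (1 / 12, "max") # gradient 1:12 or less
}

def evaluate(measurements):
    """Return a pass/fail result for each measured item."""
    results = {}
    for item, (threshold, kind) in CHECKS.items():
        value = measurements[item]
        results[item] = value >= threshold if kind == "min" else value <= threshold
    return results

sample = {
    "parking_space_width_in": 98,
    "doorway_width_in": 30,        # too narrow -> fails
    "door_opening_force_lbs": 6,
    "ramp_rise_over_run": 1 / 14,  # shallower than 1:12 -> passes
}
for item, ok in evaluate(sample).items():
    print(f"{item}: {'PASS' if ok else 'FAIL'}")
```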
http://www.arts.state.ms.us/resources/ada-checklist.php
4.21875
The terms "bullying" and "cyberbullying" are often used to describe mean behavior. However, they're both specific types of mean, aggressive behavior, which are part of a wider range of aggression and harassment behaviors. Bullying is mean behavior that: - Is intentional - the bullying is being done for a reason - Involves the aggressive person using some sort of power over his/her victim. He or she may feel powerful because they are popular or the aggressive person might simply feel it's okay to put down others. - Happens repeatedly - more than just a couple of times Bullying can be physical or verbal and can be face to face, or behind someone's back. It can also involve actions that border on discrimination, whether it's about sexuality, gender, race, class, or something else. Cyberbullying, or "online" bullying, is bullying that happens on the internet or on a mobile device. As technologies grow and change over time, so do new ways to use them to carry out acts of bullying. Bullying and cyberbullying aren't totally different things. They're examples of the same kind of behavior-with the same social, cultural, and human roots-in different contexts. It's often hard to separate bullying and cyberbullying, since your experience of the online and offline worlds are so closely connected. For example, many online victims know their bullies in real life (although they might only be bullied by him or her online). Conflicts dealing with relationships that start out offline may carry over to social media sites, cell phones, etc., or vice versa, and escalate or turn into cyberbullying. For example: A prank pulled in the locker room at school results in ongoing social humiliation through pictures shared via cell phones and online. Another example could start with a nasty Facebook post which leads to bullying at school. So what counts as bullying and cyberbullying, and not other kinds of meanness? Some examples include: - Regularly insulting someone for their sense of style, or bad grades - Someone repeatedly posts unflattering photos or insults about a person on his/her Facebook timeline, or encourages others to do so. - Someone creates a new website or online group meant specifically to insult another person or group of classmates. - Someone shares texts of private photos of someone else without their permission Bullying and cyberbullying are not: - Once in a while, or a one-time event, where a person puts someone else down. - Posting a rude comment to someone else's Facebook timeline when both people are in a fight. - Disagreements, fights, or mean words that result from misunderstandings. Still, things aren't always crystal clear. It's important to keep in mind that conflicts that start out as jokes, misunderstandings, or arguments can escalate and lead to ongoing bullying situations. What's more common - bullying offline, or online? Offline bullying remains more common than cyberbullying. Offline bullying is more common among middle-schoolers, but online bullying tends to be most common among high-school students. What roles do people play in online bullying? Different roles include the bully; victim; bully-victim; and bystander. However, these roles aren't always clear-cut. For example, bully-victims are people who are victims, but also bully others. Bystanders observe an act of bullying happening to someone else. Regardless of what role a person plays, being involved in bullying both online and offline is connected with certain negative psychological, social, and academic consequences. 
Examples include: low self-esteem, trouble with relationships, and less success in school. Other things to know about bullying at school: Your relationship with your friends and others at school can affect the way you feel about bullying and can also affect your involvement in it. If you're comfortable in your social group and your friends don't support bullying, you likely won't either. People who have friends who don't support bullying are also more likely to stand up for the victim if they witness bullying. However, if your friends are bullying others, it can be easy to give in to peer pressure and take part in bullying with them. What should I do if I'm being cyberbullied? Instead of responding to threatening messages, save the messages or inappropriate pictures in a folder and get off the site or chat room, or close out of the IM right away. If you're being cyberbullied on a social networking site, take a screen shot while the bullying is going on, because the bully may be able to delete the offending message or picture at any point. Tell an adult what happened or seek support from a close, trusted friend. In extreme cases, it may be necessary for you to report a bullying/cyberbullying situation to school officials and/or the police. If this happens, you should be ready to answer the following questions:
- What exactly was said? (Print out a copy of the message/post/picture, or show the saved text message.)
- What type of technology was used to make the threat? (IM, text, cell phone, other hand-held device, etc.)
- How often has the threat occurred?
- Do you know who is responsible for the threats? (Answer this question honestly: Do you know exactly who it is? Do you think you know who is doing it? Or do you have no clue who is making the threats?)
You can also report abuse to the online application that was used to deliver the harassing behavior. For instance, if someone is repeatedly posting mean comments about you on Facebook, you can click the "Report/Mark as Spam" option that comes up on the right side of the post if you mouse over the pencil icon. If someone is sending e-mails to your Gmail account that violate Gmail policies, fill out the form to report the abuse. Remember that you can usually block and filter users from contacting you by adjusting your privacy settings. This Cyberbullying health guide is made possible by a grant from The Comcast Foundation.
http://www.youngwomenshealth.org/cyberbullying.html
4.03125
A circle is a simple shape of Euclidean geometry consisting of the set of points in a plane that are at a given distance from a given point, the centre. The distance between any of the points and the centre is called the radius. Circles are simple closed curves which divide the plane into two regions: an interior and an exterior. In everyday use, the term "circle" may be used interchangeably to refer to either the boundary of the figure, or to the whole figure including its interior; in strict technical usage, the circle is the former and the latter is called a disk. A circle is a special ellipse in which the two foci are coincident and the eccentricity is 0. Circles are conic sections attained when a right circular cone is intersected by a plane perpendicular to the axis of the cone. Area of the circle = π x area of the shaded square (a square whose side equals the radius). As proved by Archimedes, the area enclosed by a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared: A = πr². (Our solved example in mathguru.com uses this concept.) Equivalently, denoting the diameter by d, A = (π/4)d² ≈ 0.79d², that is, approximately 79 percent of the circumscribing square (whose side is of length d). The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality. In Euclidean plane geometry, a rectangle is any quadrilateral with four right angles. The term "oblong" is occasionally used to refer to a non-square rectangle. A rectangle with vertices ABCD would be denoted as ABCD. A so-called crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals. Its angles are not right angles. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles. If a rectangle has length l and width w, it has area A = lw. (Our solved example in mathguru.com uses this concept.) The above explanation is copied from Wikipedia, the free encyclopedia, and is remixed as allowed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.
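A quick numerical check of the relationships above (A = πr², the roughly 79 percent ratio to the circumscribing square, and A = lw for a rectangle), using an arbitrary radius chosen for illustration:

```python
import math

def circle_area(radius):
    """Area enclosed by a circle: pi * r^2."""
    return math.pi * radius ** 2

def rectangle_area(length, width):
    """Area of a rectangle: length * width."""
    return length * width

r = 3.0        # arbitrary example radius
d = 2 * r      # diameter, side of the circumscribing square
print(circle_area(r))                          # 28.27...
print(circle_area(r) / rectangle_area(d, d))   # ~0.785, i.e. ~79% of the square
print(math.pi / 4)                             # the same ratio, pi/4
```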
http://www.mathguru.com/level2/areas-related-to-circles-2009110500046093.aspx
4
Some computer knowledge, reading skills appropriate for the website, experience with PowerPoint a plus. The goal of this lesson is to use interactive processes to discover more about the human nervous system and the effect that it has on our everyday lives. The lesson focuses on the autonomic nervous system and how our organs individually react to both sympathetic and parasympathetic reactions. Text of Learning Exercise: Go to http://faculty.washington.edu/chudler/neurok.html Click on "Explore the Nervous System" Click on "The Peripheral Nervous System" at the top of the page Click on "The Autonomic Nervous System" Answer the following questions: 1. What does ANS stand for? 2. How does ANS cause us to respond in emergencies? In non-emergencies? 3. What 3 parts is the ANS divided into? Read about the Sympathetic and Parasympathetic Nervous Systems 1. What region of the spinal cord are the cell bodies of the sympathetic nervous system located? The parasympethetic? Submit answers to these questions via email 1. Choose 5 organs and diagram their responses to sympathetic and parasympathetic stimulation. 2. Over the next 3 days, create a PowerPoint telling a story in which the main character encounters a situation where he/she must use the sympathetic or parasympathetic nervous system. Mention the responses of the 5 organs you chose and, if you can, use the Internet to find pictures of the 5 organs and bring them into your PowerPoint at the pages where you talk about them. Be creative! Submit your PowerPoint. Extra Credit Option: At the bottom of the ANS web page, click on "Interactive Word Search Puzzle on the Autonomic Nervous System" Complete the puzzle Print the puzzle Return the completed puzzle to me along with the PowerPoint
http://www.merlot.org/merlot/viewAssignment.htm?id=91669&backPage=%0A%0A%0A%0A%2Fassignments.htm%3Fmaterial%3D88036%0A&hitlistPage=%0A%0A%0A%0A%2Fassignments.htm%3Fmaterial%3D88036
4.21875
Think of Antarctica – and if you're like many people, you will think of a blank canvas. Anyone familiar with the tales of Scott, Shackleton, and Amundsen will know that they had to traverse glaciers and mountain ranges, but the interior of the continent remains, in the popular imagination at least, a vast, flat, white desert. Yet the ground below the ice sheet is anything but smooth – and that has had, and continues to have, implications for both climate and sea level change.
Antarctica: What's Under the Ice?
Logic, along with just a little knowledge of earth processes, indicates that there will be a landscape buried under the ice even if we can't see any of it (and of course, with peaks appearing through the ice, we can). It's a landscape that has mountains and valleys, lakes and streams – and which, though currently subjected to alteration by ice, is influenced by other dynamic forces as well. Now, new research undertaken by the British Antarctic Survey (BAS) and the University of Aberdeen has not only revealed a major new feature of that landscape, but also sheds some light on how the topography interacts with the glacial processes – and, most crucially, the wasting (melting) of the Antarctic ice sheet. Over the last 50 years or so, new technologies such as radar and remote sensing have allowed scientists to begin to 'see' the shape of the Antarctic landscape. 'We know more than we ever have done about the underlying topography,' the study's lead author, Dr. Rob Bingham, told Decoded Science. 'This is not to say we're done – there are still large, unexplored areas about which we know little.'
Antarctic Ice Sheet: Topography and Mass Wasting
The study focused on a particularly remote area of West Antarctica known as the Ferrigno Ice Stream. Measurements indicated that the ice stream overlies a rift valley, a major feature which is the result of two of the earth's tectonic plates moving apart. Researchers were able to link the presence of this rift, and warming of the ocean, to increasing flow of the ice stream.
http://www.decodedscience.com/ice-melt-ocean-warming-antarctic-rift/16302
4.34375
Learners will be able to identify a variety of factors and processes pertaining to rock formation, rock types, and the rock cycle. The overall instructional goal of The Rock Cycle Race is to provide practice and reinforcement for sixth-grade science students regarding this instructional objective. In the state of California, curriculum frameworks focus on teaching Earth Science at the sixth-grade level (http://www.cde.ca.gov/board/pdf/science.pdf). The learning standards established for sixth-grade students in San Diego City Schools (http://www-internal.sandi.net/standards/HTML/SciK12.html) are typical of many districts and include developing an understanding of the earth's structure as well as the processes that shape the earth. The Rock Cycle Race supports these standards. Students play this game by following paths that replicate the processes of the rock cycle. Students advance on the board by successfully answering questions designed to test and reinforce their understanding of these processes and the rock types created from these processes. The game targets middle-school students taking integrated physical science, which includes a unit on Earth materials, specifically rocks and the rock cycle. In California schools, this content is presented to sixth-grade students ranging in age from 11 to 12 years. Generally, students will have had little prior knowledge regarding rocks and the rock cycle. The game is designed to be played in a middle-school science classroom. Few accommodations are needed to play The Rock Cycle Race. Table or floor space is needed to set up and play the game. The game can be played once during a 50-minute middle-school class period by three players, or by six players with three teams of two students. Students will need approximately five minutes to set up the board and five minutes to clean up, with actual play time lasting between 30 and 35 minutes. The Rock Cycle Race is a race board game in which players move along the path of the rock cycle. All players begin at "Start" and the first to reach "Finish" wins the game. The playing path is divided into board spaces that provide players with a choice in the direction of the path taken as well as shortcuts and obstacles. Players advance along the board by correctly answering question cards. Each question card has two levels of questions: "Rock Questions" and the more difficult "Boulder Questions". The Boulder level questions allow players to advance more board spaces. Before the question is read, the player chooses the level of question she/he will try to answer. Inside the game box, you will find the following objects.
The Playing Board
The path for The Rock Cycle Race board is divided into three colors corresponding to the three rock categories - igneous is red, sedimentary is blue, and metamorphic is green. There are also three types of question cards corresponding to the three kinds of rocks - igneous, sedimentary, and metamorphic. The three types of cards use the same color scheme as the board path. Players answer questions that match the path section they are in. For example, if a player is in the Metamorphic section (green), they answer questions from the Metamorphic Rock Question cards (also green). Each card contains two questions: a "Rock Question" (an easy level) and a "Boulder Question" (a more difficult level). The "Rock Question" allows the player to advance 1 to 3 spaces if answered correctly. The "Boulder Question" allows the player to advance 4 to 5 spaces.
If the player answers the question incorrectly, she/he will be penalized by the same number of spaces she/he would have been rewarded. A typical game can be played once during a 50-minute middle-school class period by three players, or alternatively, six players with three teams of two players. Students will need approximately five minutes to set up the board and five minutes to clean up. Actual play will last 30 to 35 minutes. Open the board and place the three types of question cards, Igneous, Sedimentary, and Metamorphic, face down on the designated areas on the board. We designed the game to accommodate various numbers of players and types of students. Ellington, Addinall, and Percival (1982) discuss the 'snakes and ladders' class of board games and point to The Great Blood Race as a good example of a game that teaches students about the human circulatory system. It occurred to us that such a format could easily support the learning of systems, processes, and cycles found in science. We decided to focus on the rock cycle since one of our team members had a background in earth science and could serve as a subject matter expert. We gathered background information on the game's content through web searches and grade-appropriate textbooks used in local San Diego schools. The game board was based on an image we found at http://duke.usask.ca/~reeves/prog/geoe118/geoe118.011.html. We also contacted an earth science teacher to review the questions for content and appropriateness. To determine the conduct of play, we held brainstorming and play trial sessions. Initially, we intended for players to use a spinner to determine the level of difficulty for each question. However, we soon realized that this was contrary to the literature regarding motivational theory. Keller & Suzuki (1988) point out that one element critical to motivating students is their confidence. Although there are many dimensions to confidence, they state that three of the most important are perceived competence, perceived control, and expectancy for success. Allowing players to select the difficulty of each question supports these dimensions, particularly the latter two. In addition, allowing players to choose levels of questions, and consequently the number of spaces they can move, increases the risk factor for players. Players must strategize. Also, by basing the movement of their pieces on questions about the rock cycle, players are encouraged/motivated to increase their knowledge of the rock cycle in order to win.
Books & Journals
Blaustein, D., Butler, L., Hixson, B. & Matthias, W. (1999). Glencoe science: An introduction to the life, earth, and physical sciences. Woodland Hills, CA: Glencoe/McGraw-Hill.
Carlson, G. R. (1990). The catalyst collection: outstanding earth/space science activities. Fullerton, CA: The National Science Foundation.
Feather, Jr., R. M. & Snyder, S. L. (1997). Glencoe earth science (Teacher wraparound ed.). New York: Glencoe/McGraw-Hill.
Keller, J. M., & Suzuki, K. (1988). Use of the ARCS motivation model in courseware design. In D. H. Jonassen (Ed.), Instructional designs for microcomputer courseware. Hillsdale, NJ: Lawrence Erlbaum.
Last updated October 21, 2000
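The movement rules described above (a "Rock" question worth 1 to 3 spaces, a "Boulder" question worth 4 to 5, with an equal penalty for a wrong answer) can be sketched as a quick simulation to compare the two strategies. The probabilities of answering correctly are invented assumptions about player skill, not figures from the game.

```python
import random

# Sketch of one turn under the movement rules described above. The chance of
# answering correctly at each level is an invented assumption for illustration.
REWARD = {"rock": (1, 3), "boulder": (4, 5)}
P_CORRECT = {"rock": 0.8, "boulder": 0.5}   # hypothetical player skill levels

def take_turn(level, rng=random):
    """Return spaces moved this turn: positive if correct, negative penalty if not."""
    spaces = rng.randint(*REWARD[level])
    correct = rng.random() < P_CORRECT[level]
    return spaces if correct else -spaces   # penalty equals the forfeited reward

# Compare expected movement per turn for each question level over many turns
for level in ("rock", "boulder"):
    avg = sum(take_turn(level) for _ in range(100_000)) / 100_000
    print(f"{level}: average move per turn is roughly {avg:+.2f} spaces")
```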
http://edweb.sdsu.edu/courses/edtec670/Cardboard/Board/R/RockCycleRace/index.htm
4.03125
Everyone needs play. It is essential to learning, creativity, and discovery. It guides physical, intellectual, and social development. It drives innovation, increases productivity, and contributes to healthier lives. Children playing on playgrounds learn to incorporate found objects and put them to novel uses, develop creative pretend and dramatic play scenarios, and build on the ideas of others. Inventors draw on these same skills to make imaginative and unlikely connections that lead to exciting new products or important medical and technical advances. Collectors play at acquiring their favorite things and, in doing so, help document important cultural trends. Play is critical to human development Research proves that play: - builds ability to solve problems, negotiate rules, and resolve conflicts; - develops confident, flexible minds that are open to new possibilities; - develops creativity, resilience, independence, and leadership; - strengthens relationships and empathy; and - helps grow strong healthy bodies and reduces stress. Children who play do better in school and become more successful adults Through play children learn to: - question, predict, hypothesize, evaluate, and analyze; - form and substantiate opinions; and - persist through adversity. Studying play increases knowledge of history - reveals individual, group, and national identity, opinions, and preferences; - illuminates the role of children and childhood in society; and - documents technological advancements.
http://www.thestrong.org/about-play
4.15625
In 2008, 62% of students with LD spent 80% or more of their in-school time in general education classrooms. In 2000, that figure was just 40%.* We've made significant progress in using the inclusion model in our nation's schools. Inclusion is another term for "mainstreaming," or merging special education with regular education classrooms. There are many benefits to this model for children with learning disabilities. They have the benefit of learning in the "least restrictive environment" and the opportunity to be with peers and create bonds and friendships. Teaching to a room full of unique students, however, is definitely a challenge. It is true that the inclusion classroom often has a minimum of two teachers -- a regular educator and a special educator. (The special educator role may often be filled by a paraprofessional or special education assistant.) But it isn't always easy for educators to share the classroom with another teacher, not to mention the daunting task of teaching to students with a wide range of skills and learning styles. The results, however, are priceless. James Wendorf, NCLD's Executive Director, states, "We've seen graduation rates and classroom inclusion rise more than 15 percent over the past 10 years. But we need to continue to empower parents and teachers, reduce stigma among kids, and keep education funding on the top of the education agenda if we are going to see those numbers increase, not decrease." The following are some strategies that parents can request for their child and teachers can implement to make the inclusion learning environment comfortable and successful for all the students.
Eliminate all unnecessary materials from the student's desk to reduce distractions. Use a checklist to help the student get organized. Keep an extra supply of pencils, pens, books and paper in the classroom. Have an agreed-upon cue for the student to leave the classroom. Reduce visual distractions in the classroom.
Time Management and Transitions
Space short work periods with breaks. Provide additional time to complete the assignment. Inform the student with several reminders, several minutes apart, before changing from one activity to the next. Provide a specific place for turning in assignments. Break regular assignments into segments of shorter tasks. And, break long assignments into small sequential steps, monitoring each step.
Presentation of Materials
Provide a model of the end product. Provide written and verbal directions with visuals, if possible. Explain learning expectations to the student before beginning a lesson. Allow the student to use tape recorders, computers, calculators and dictation to obtain and retain assignment success. Limit the number of concepts presented at one time.
Assessment, Grading and Testing
Allow tests to be scribed if necessary and/or allow for oral responses. Divide tests into small sections. Grade spelling separately from content. Avoid timed tests. Permit retaking the test. Provide an appropriate peer role model. Develop a system or code that will let the student know when behavior is not appropriate. Ignore attention-seeking behaviors that are not disruptive to the classroom. Develop a code of conduct for the classroom, visually display it in an appropriate place where all students can see it, and review it frequently.
http://www.ncld.org/ld-insights/entry/1/212
4.09375
Properties of Water
Water is so important because of its molecular structure. Without its unique characteristics, water would not be able to serve the functions it does. The molecular formula of water is H2O. A central oxygen atom shares two of the electrons in its outer layer with one electron each from two hydrogen atoms. Buoyancy is the upward force that water exerts. It is what keeps ice cubes, which are less dense than liquid water, floating in a glass. Density is the mass of an object per unit of volume, and buoyancy is the ability of water to let objects (or liquids) that are less dense float in it. This is important because it provides physical support in aquatic environments. Fish and other organisms have developed structures based on the fact that water reduces the pull of gravity. This also minimizes the need for large supporting structures in order to move around. Organisms move through water differently than they move on land, and their outer appearances reflect this. Buoyancy also plays a role in helping animals move vertically. Because these creatures are surrounded by water, they can move in any direction in three dimensions, including vertically. On land, this can only be accomplished with wings or especially strong legs (for short periods of time). Fish only need to develop fins (no need for wings) to take advantage of increased mobility. Heat capacity is the capability of water to absorb heat without undergoing a large increase in temperature. Water has a high specific heat capacity of about 4.18 joules per gram per degree Celsius. This means that even though heat energy is entering a body of water, that energy will not be reflected in the measured temperature of the water until more heat has been added. High heat capacity plays a vital role in regulating global climate. A good example of this is that coastal regions typically have cooler climates than regions that are further inland. This is because the ocean water absorbs heat from the land, which has a lower heat capacity, and the temperatures increase at a slower rate. This effect of water is extended across the globe because roughly 71% of Earth's surface is covered by water. In addition to a widespread effect on the planet, water's heat capacity helps animals regulate their internal temperatures, which must be sustained within a certain range. For aquatic organisms, the external environment is subject to fewer extreme temperature changes, so internal heat is easier to maintain constant. Surface tension helps water molecules hold together. This allows the formation of rain, which dissolves nutrients in soil. Water filters ultraviolet rays that penetrate the Earth's atmosphere in the form of sunlight. A polar molecule is one in which the electric charge is not evenly distributed between the atoms. The oxygen atom in a molecule of water attracts electrons with a negative charge more strongly than the two hydrogen atoms. This means that one end of the molecule has a slightly negative charge while the other end has a slightly positive charge. Therefore, water is a good solvent, meaning that many substances can dissolve in it, since polar and ionic substances tend to dissolve in polar solvents like water. Because of this, aquatic plants can obtain nutrients directly from the water (unlike terrestrial plants, which must obtain them from a combination of air and soil). Water can also transport molecules throughout organisms and throughout the Earth, and it dilutes and dissolves pollutants.
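To make the heat-capacity idea concrete, here is a small illustrative sketch (added here, not part of the original page). It applies the relation Q = mcΔT to compare how much water and dry rock warm when they absorb the same amount of heat; the specific heat values are approximate textbook figures, and the heat and mass inputs are arbitrary example numbers.

```python
# Illustrative sketch (added; not from the original page): compare how much the
# temperature of water and of dry rock rises when each absorbs the same heat,
# using Q = m * c * dT. The specific heat values are approximate textbook figures.

SPECIFIC_HEAT_WATER = 4.18  # joules per gram per degree Celsius (approximate)
SPECIFIC_HEAT_ROCK = 0.80   # rough value for dry rock or soil

def temperature_rise(heat_joules, mass_grams, specific_heat):
    """Temperature increase from Q = m * c * dT, solved for dT."""
    return heat_joules / (mass_grams * specific_heat)

heat = 100_000.0  # joules of absorbed energy (arbitrary example value)
mass = 1_000.0    # grams of material

print(f"Water warms by {temperature_rise(heat, mass, SPECIFIC_HEAT_WATER):.1f} degrees C")
print(f"Rock warms by {temperature_rise(heat, mass, SPECIFIC_HEAT_ROCK):.1f} degrees C")
# Water: about 24 degrees; rock: about 125 degrees for the same energy input,
# which is why large bodies of water moderate nearby climates.
```

With these rough values, the same absorbed energy raises the rock's temperature about five times as much as the water's, which is the basic reason large bodies of water moderate coastal climates.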
http://library.thinkquest.org/04oct/01590/intro/properties.html
4.53125
X-rays and gamma rays differ only in their source of origin. X-rays are produced by an x-ray generator and gamma radiation is the product of radioactive atoms. They are both part of the electromagnetic spectrum. They are waveforms, as are light rays, microwaves, and radio waves. X-rays and gamma rays cannot be seen, felt, or heard. They possess no charge and no mass and, therefore, are not influenced by electrical and magnetic fields and will generally travel in straight lines. However, they can be diffracted (bent) in a manner similar to light. Both X-rays and gamma rays can be characterized by frequency, wavelength, and velocity. However, they act somewhat like a particle at times in that they occur as small "packets" of energy and are referred to as "photons." Electromagnetic radiation has also been described in terms of a stream of photons (massless particles) each traveling in a wave-like pattern and moving at the speed of light. Each photon contains a certain amount (or bundle) of energy, and all electromagnetic radiation consists of these photons. The only difference between the various types of electromagnetic radiation is the amount of energy found in the photons. Due to their short wavelengths, X-rays and gamma rays have more energetic photons than the other forms of electromagnetic radiation, which allows them to pass through matter. As they pass through matter, they are scattered and absorbed and the degree of penetration depends on the kind of matter and the energy of the rays.
Properties of X-Rays and Gamma Rays
- They are not detected by human senses (cannot be seen, heard, felt, etc.).
- They travel in straight lines at the speed of light.
- Their paths cannot be changed by electrical or magnetic fields.
- They can be diffracted to a small degree at interfaces between two different materials.
- They pass through matter until they have a chance encounter with an atomic particle.
- Their degree of penetration depends on their energy and the matter they are traveling through.
- They have enough energy to ionize matter and can damage or destroy living cells.
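The link between short wavelength and high photon energy can be illustrated with the relation E = hc/λ. The sketch below is an added illustration, not part of the original page; the constants and example wavelengths are standard approximate values chosen only for comparison.

```python
# Added illustration (not from the source page): photon energy follows
# E = h * c / wavelength, so shorter wavelengths mean more energetic photons.
# The constants and example wavelengths are standard approximate values.

PLANCK = 6.626e-34   # joule-seconds
LIGHT_SPEED = 3.0e8  # meters per second
JOULES_PER_EV = 1.602e-19

def photon_energy_ev(wavelength_m):
    """Photon energy in electron volts for a wavelength given in meters."""
    return PLANCK * LIGHT_SPEED / wavelength_m / JOULES_PER_EV

for name, wavelength in [("visible light", 500e-9),
                         ("typical X-ray", 0.1e-9),
                         ("gamma ray", 0.001e-9)]:
    print(f"{name:13s}: {photon_energy_ev(wavelength):,.0f} eV")
# Visible light: ~2 eV; X-ray: ~12,000 eV; gamma ray: ~1,200,000 eV.
```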
http://www.ndt-ed.org/EducationResources/CommunityCollege/Radiography/Physics/nature.htm
4.1875
Dust is everywhere in space, but the pervasive stuff is one thing astronomers know little about. Cosmic dust is also elusive, as it lasts only about 10,000 years, a brief period in the life of a star. "We not only do not know what the stuff is, but we do not know where it is made or how it gets into space," said Donald York, a professor at the University of Chicago. But now York and a group of collaborators have observed a double-star system, HD 44179, that may be creating a fountain of dust. The discovery has wide-ranging implications, because dust is critical to scientific theories about how stars form. The double star system sits within what astronomers call the Red Rectangle, a nebula full of gas and dust located approximately 2,300 light years from Earth. One of the double stars is a post-asymptotic giant branch (post-AGB) star, a type of star astronomers regard as a likely source of dust. These stars, unlike the sun, have already burned all the hydrogen in their cores and have collapsed, burning a new fuel, helium. During the transition between burning hydrogen and helium, which takes place over tens of thousands of years, these stars lose an outer layer of their atmosphere. Dust may form in this cooling layer; radiation pressure coming from the star's interior then pushes the dust away from the star, along with a fair amount of gas. In double-star systems, a disk of material from the post-AGB star may form around the second smaller, more slowly evolving star. "When disks form in astronomy, they often form jets that blow part of the material out of the original system, distributing the material in space," York explained. "If a cloud of gas and dust collapses under its own gravity, it immediately gets hotter and starts to evaporate," York said. Something, possibly dust, must immediately cool the cloud to prevent it from reheating. The giant star sitting in the Red Rectangle is among those that are far too hot to allow dust condensation within their atmospheres. And yet a giant ring of dusty gas encircles it. Witt's team made approximately 15 hours of observations on the double star over a seven-year period with the 3.5-meter telescope at Apache Point Observatory in New Mexico. "Our observations have shown that it is most likely the gravitational or tidal interaction between our Red Rectangle giant star and a close sun-like companion star that causes material to leave the envelope of the giant," said collaborator Adolph Witt, from the University of Toledo. Some of this material ends up in a disk of accumulating dust that surrounds that smaller companion star. Gradually, over a period of approximately 500 years, the material spirals into the smaller star. Just before this happens, the smaller star ejects a small fraction of the accumulated matter in opposite directions via two gaseous jets, called "bipolar jets." Other quantities of the matter pulled from the envelope of the giant end up in a disk that skirts both stars, where it cools. "The heavy elements like iron, nickel, silicon, calcium and carbon condense out into solid grains, which we see as interstellar dust, once they leave the system," Witt explained. Cosmic dust production has eluded telescopic detection because it only lasts for perhaps 10,000 years—a brief period in the lifetime of a star. Astronomers have observed other objects similar to the Red Rectangle in Earth's neighborhood of the Milky Way.
This suggests that the process Witt’s team has observed is quite common when viewed over the lifetime of the galaxy. “Processes very similar to what we are observing in the Red Rectangle nebula have happened maybe hundreds of millions of times since the formation of the Milky Way,” said Witt, who teamed up with longtime friends at Chicago for the study. The team had set out to achieve a relatively modest goal: find the Red Rectangle’s source of far-ultraviolet radiation. The Red Rectangle displays several phenomena that require far-ultraviolet radiation as a power source. “The trouble is that the very luminous central star in the Red Rectangle is not hot enough to produce the required UV radiation,” Witt said, so he and his colleagues set out to find it. It turned out neither star in the binary system is the source of the UV radiation, but rather the hot, inner region of the disk swirling around the secondary, which reaches temperatures near 20,000 degrees. Their observations, Witt said, “have been greatly more productive than we could have imagined in our wildest dreams.” Source: University of Chicago
http://www.universetoday.com/24699/astronomers-find-cosmic-dust-fountain/
4.125
Submitted by: Rachael Tarshes In this lesson plan, which is adaptable for grades 6-12, students use BrainPOP resources to explore the role of a video game tester. Students will create their own evaluations for video games and test various games out to see how the games match up to pre-determined criteria. Finally, students will debate the pros and cons of using video games in the classroom.
http://www.brainpop.com/educators/community/lesson-plan/student-gamers-lesson-plan/
4.03125
Christmas Trolls Lesson Plan - Grades: PreK–K, 1–2 About this book - Christmas Trolls, by Jan Brett Set Up and Prepare Tell the students that you are going to read the words on a page then give them a chance to discuss the illustrations and notice the details. Be sure to have the students talk about the story that is happening at the side of each page. Allow time for the students to notice the details in the illustrations on each page. Have the students predict what will happen next in the story. The students can compare Christmas Trolls to Trouble with Trolls. The students could visit Jan Brett's website for more Christmas Trolls activities and coloring pages. The students could learn more about Norway by visiting the Internet links listed in the resources. The students could learn more about rosemaling by visiting the links listed in the resources. Christmas Trolls comprehension questions Christmas Trolls coloring page Map - http://drift.uninett.no/kart/norgeskartet/index.php Flag - http://fotw.fivestarflags.com/no.html The World Fact book: Norway - https://www.cia.gov/cia/publications/factbook/geos/no.html The Royal Family - http://www.kongehuset.no/default.asp?lang=eng Norway photos - http://www.jorgetutor.com/noruega/noruega.htm Christmas in Norway - http://www.californiamall.com/holidaytraditions/traditions-Norway.htm Rosemaling - http://www.elca.org/countrypackets/norway/crafts.html Rosemaling styles - http://www.rosemaling.org/styles.htm Rosemaling samples - http://www.twinportsrosemaling.org/new-gallery.htm Norwegian trunk - http://www.state.nd.us/hist/forms/olsontrunkletter.pdf Scandinavian festival - http://www.twinportsrosemaling.org/ Discuss the behaviors of the trolls. How did Treva help the trolls?
http://www.scholastic.com/teachers/lesson-plan/christmas-trolls-lesson-plan
4.09375
- Experiment With Transformations In The Plane
G.CO.1 Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc.
G.CO.2 Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
G.CO.3 Given a rectangle, parallelogram, trapezoid, or regular polygon, describe the rotations and reflections that carry it onto itself.
G.CO.4 Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments.
G.CO.5 Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of transformations that will carry a given figure onto another.
- Understand Congruence In Terms Of Rigid Motions
G.CO.6 Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent.
G.CO.7 Use the definition of congruence in terms of rigid motions to show that two triangles are congruent if and only if corresponding pairs of sides and corresponding pairs of angles are congruent.
G.CO.8 Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions.
- Prove Geometric Theorems
G.CO.9 Prove theorems about lines and angles. Theorems include: vertical angles are congruent; when a transversal crosses parallel lines, alternate interior angles are congruent and corresponding angles are congruent; points on a perpendicular bisector of a line segment are exactly those equidistant from the segment's endpoints.
G.CO.10 Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point.
G.CO.11 Prove theorems about parallelograms. Theorems include: opposite sides are congruent, opposite angles are congruent, the diagonals of a parallelogram bisect each other, and conversely, rectangles are parallelograms with congruent diagonals.
- Make Geometric Constructions
G.CO.12 Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.). Copying a segment; copying an angle; bisecting a segment; bisecting an angle; constructing perpendicular lines, including the perpendicular bisector of a line segment; and constructing a line parallel to a given line through a point not on the line.
G.CO.13 Construct an equilateral triangle, a square, and a regular hexagon inscribed in a circle.
Major clusters will be a majority of the assessment, Supporting clusters will be assessed through their success at supporting the Major Clusters, and Additional Clusters will be assessed as well. The assessments will strongly focus where the standards strongly focus.
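Standard G.CO.2 above describes transformations as functions that take points in the plane as inputs and return other points, and asks students to compare transformations that preserve distance with those that do not. The short sketch below is an added illustration of that idea (it is not part of the standards text); the function names and sample points are arbitrary.

```python
# Added illustration (not part of the standards text): transformations as
# functions on points. A rotation is a rigid motion and preserves distance;
# a horizontal stretch is not and does not. Point values are arbitrary.
import math

def rotate_90(point):
    """Rotate (x, y) by 90 degrees counterclockwise about the origin."""
    x, y = point
    return (-y, x)

def stretch_x(point, factor=2.0):
    """Stretch (x, y) horizontally by the given factor (not a rigid motion)."""
    x, y = point
    return (factor * x, y)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (1.0, 2.0), (4.0, 6.0)
print(distance(a, b))                        # 5.0
print(distance(rotate_90(a), rotate_90(b)))  # 5.0  -> distance preserved
print(distance(stretch_x(a), stretch_x(b)))  # ~7.2 -> distance not preserved
```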
http://powermylearning.com/zh-hans/directory/math/high-school/geometry/congruence
4.1875
Endocarditis (EN-do-kar-DI-tis) is an infection of the inner lining of the heart chambers and valves. This lining is called the endocardium (en-do-KAR-de-um). The condition also is called infective endocarditis (IE). The term "endocarditis" also is used to describe an inflammation of the endocardium due to other conditions. This article only discusses endocarditis related to infection. IE occurs if bacteria, fungi, or other germs invade your bloodstream and attach to abnormal areas of your heart. The infection can damage your heart and cause serious and sometimes fatal complications. IE can develop quickly or slowly; it depends on what type of germ is causing it and whether you have an underlying heart problem. When IE develops quickly, it's called acute infective endocarditis. When it develops slowly, it's called subacute infective endocarditis. IE mainly affects people who have abnormal or damaged heart valves or other underlying heart problems. People who have normal heart valves also can have IE. However, the condition is much more common in people who have abnormal hearts. Certain factors make it easier for bacteria to enter your bloodstream. These factors put you at higher risk for IE. For example, poor dental hygiene and unhealthy teeth and gums increase your risk for the infection. Other risk factors include using intravenous (IV) drugs, having a catheter (tube) or another medical device in your body for long periods, and having a history of IE. Common symptoms of IE are fever and other flu-like symptoms. Because the infection can affect people in different ways, the signs and symptoms vary. IE also can cause problems in many other parts of the body besides the heart. If you're at high risk for IE, seek medical care if you have signs or symptoms of the infection, especially a fever that persists or unexplained fatigue (tiredness). IE is treated with antibiotics for several weeks. You also may need heart surgery to repair or replace heart valves or remove infected heart tissue. Most people who are treated with the proper antibiotics recover. But if the infection isn't treated, or if it persists despite treatment (for example, if the bacteria are resistant to antibiotics), it's usually fatal. If you have signs or symptoms of IE, see your doctor as soon as you can, especially if you have abnormal heart valves. Clinical trials are research studies that explore whether a medical strategy, treatment, or device is safe and effective for humans.
http://www.nhlbi.nih.gov/health/health-topics/topics/endo/
4.09375
July 22, 2011 Water really is everywhere. A team of astronomers have found the largest and farthest reservoir of water ever detected in the universe -- discovered in the central regions of a distant quasar. Quasars contain massive black holes that are steadily consuming a surrounding disk of gas and dust; as it eats, the quasar spews out huge amounts of energy. The energy from this particular quasar was released some 12 billion years ago, only 1.6 billion years after the Big Bang and long before most of the stars in the disk of our Milky Way galaxy began forming. The research team includes Carnegie's Eric Murphy, as well as scientists from the Jet Propulsion Laboratory, the California Institute of Technology, University of Maryland, University of Colorado, University of Pennsylvania, and the Institute for Space and Astronautical Science in Japan. Their research will be published in Astrophysical Journal Letters. The quasar's newly discovered mass of water exists in gas, or vapor, form. It is estimated to be at least 100,000 times the mass of the Sun, equivalent to 34 billion times the mass of Earth or 140 trillion times the mass of water in all of Earth's oceans put together. Since astronomers expected water vapor to be present even in the early universe, the discovery of water is not itself a surprise. There is water vapor in the Milky Way, although the amount is 4,000 times less massive than in the quasar. There is other water in the Milky Way, but it is frozen and not vaporous. Nevertheless water vapor is an important trace gas that reveals the nature of the quasar. In this particular quasar, the water vapor is distributed around the black hole in a gaseous region spanning hundreds of light years in size (a light year is about six trillion miles). The gas is unusually warm and dense by astronomical standards. It is five times hotter and 10 to 100 times denser than what is typical in galaxies like the Milky Way. The large quantity of water vapor in the quasar indicates that the quasar is bathing the gas in both X-rays and infrared radiation. The interaction between the radiation and water vapor reveals properties of how the gas is influenced by the quasar. For example, analyzing the water vapor shows how the radiation heats the rest of the gas. Furthermore, measurements of the water vapor and of other molecules, such as carbon monoxide, suggest that there is enough gas to enable the black hole to grow to about six times its size. Whether or not this has happened is unclear, the astronomers say, since some of the gas could condense into stars or be ejected from the quasar. A major new telescope in the design phase called CCAT will allow astronomers to measure the abundance of water vapor in many of the early Universe's galaxies. Funding for Z-Spec was provided by NSF, NASA, the Research Corporation, and the partner institutions. The Caltech Submillimeter Observatory is operated by the California Institute of Technology under a contract from the NSF. CARMA was built and is operated by a consortium of universities -- The California Institute of Technology, University of California Berkeley, University of Maryland College Park, University of Illinois Urbana-Champaign, and the University of Chicago -- with funding from a combination of state and private sources, as well as the NSF and its University Radio Observatory program.
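The equivalences quoted above can be checked with rough reference masses. The sketch below is an added illustration rather than part of the article; the solar, Earth, and ocean masses used are approximate values, so the results only need to land near the article's figures of roughly 34 billion Earth masses and 140 trillion ocean masses.

```python
# Added sanity check (not from the article) of the quoted mass comparisons,
# using approximate reference masses.

SOLAR_MASS = 1.989e30      # kg (approximate)
EARTH_MASS = 5.972e24      # kg (approximate)
OCEAN_WATER_MASS = 1.4e21  # kg, rough total mass of water in Earth's oceans

water_vapor_mass = 100_000 * SOLAR_MASS  # "at least 100,000 times the mass of the Sun"

print(f"In Earth masses: {water_vapor_mass / EARTH_MASS:.2e}")        # ~3.3e10, tens of billions
print(f"In ocean masses: {water_vapor_mass / OCEAN_WATER_MASS:.2e}")  # ~1.4e14, ~140 trillion
```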
http://www.sciencedaily.com/releases/2011/07/110722142058.htm
4
Our definition of play is 'A physical or mental leisure activity that is undertaken purely for enjoyment or amusement and has no other objective'. There are other areas of human activity that may also be defined in this way, hence the need for contextual elaboration. For our purposes 'play' may assist learning and self development. It can be undertaken by individuals or groups of children spontaneously or as part of a planned activity. There isn't any intervention so there is no need for clinical supervision, quality management, code of ethics or adult training. The only concern is that there is a physically safe environment. A question often raised today is whether children know how to play. This is probably a philosophical question since children's play is a natural activity essential for their healthy development. It may be that children play differently from their forebears. Neuroscience research confirms the importance of play for infants in developing children's brains and minds. It has also been shown that exposure to metaphor and symbols, as used in play, has a beneficial effect upon the development of the brain.
http://www.playtherapy.org.uk/AboutPlayTherapy/PlayDefinition1.htm
4.1875
The human body is made up of cells. For example, when your skin peels after a sunburn, you are shedding skin cells. In the center of each cell is an area called the nucleus. Human chromosomes are located inside the nucleus of the cell. A chromosome is a structure that contains your genes. Your genes determine your traits, such as eye color and blood type. The usual number of chromosomes inside every cell of your body is 46 total chromosomes, or 23 pairs. You inherit half of your chromosomes (one member of each pair) from your biological mother, and the other half (the matching member of each pair) from your biological father. Scientists have numbered the chromosome pairs from 1 to 22, with the 23rd pair labeled as X's or Y's, depending on the structure. The first 22 pairs of chromosomes are called "autosomes," which are the body chromosomes. The 23rd pair of chromosomes are known as the "sex chromosomes," because they determine whether someone will be born male or female. Females have two "X" chromosomes, while males have one "X" and one "Y" chromosome. A picture of all 46 chromosomes, in their pairs, is called a karyotype. A normal female karyotype is written 46, XX, and a normal male karyotype is written 46, XY.
http://www.nyhq.org/diw/Content.asp?PageID=DIW007176&More=WTN
4.03125
VILLANOVA, Pa. – Geckos are known for sticky toes that allow them to climb up walls and even hang upside down on ceilings. A new study shows that geckos have gained and lost these unique adhesive structures multiple times over the course of their long evolutionary history in response to habitat changes. The findings are published in the most recent edition of PLoS ONE. Aaron Bauer, PhD, professor and Gerald M. Lemole Endowed Chair in Integrative Biology at Villanova University, is the study’s senior author. The research is part of a long-standing collaboration on gecko evolution among biologists at Villanova University, the University of Minnesota and the University of Calgary. Geckos, a type of lizard, are found in tropical and semitropical regions around the world. About 60 percent of the approximately 1,400 gecko species have adhesive toepads. Remaining species lack the pads and are unable to climb smooth surfaces. Geckos with these toepads are able to exploit vertical habitats on rocks and boulders that many other kinds of lizards can’t easily get to. This advantage gives them access to food in these environments, such as moths and spiders. Climbing also helps geckos avoid predators. The researchers found that sticky toes evolved independently in about 11 different gecko groups. In addition, they were lost in at least nine different gecko groups. The gain and subsequent loss of adhesive toepads seems associated with habitat changes; e.g., living on boulders and in trees versus living on the ground, often in sand dunes, where the feature could be a hindrance rather than an advantage. “The loss of adhesive pads in dune-dwelling species is an excellent example of natural selection in action,” says Bauer. Repeated evolution is a key phenomenon in the study of evolutionary biology. A classic example is the independent evolution of wings in birds, bats and pterosaurs. It represents a shared solution that organisms arrived at separately to overcome common problems. “Scientists have long thought that adhesive toepads originated just once in geckos, twice at the most,” added University of Minnesota postdoctoral researcher Tony Gamble, a coauthor of the study. “To discover that geckos evolved sticky toepads again and again is amazing.” In order to understand how the toepads evolved, the research team produced the most complete gecko family tree ever constructed, including representatives of more than 100 genera (closely related groups of species) from around the world. This family tree can serve as the basis for answering many other questions, such as how and when did live birth, temperature-dependent sex determination, and night color vision evolve in geckos? The family tree will also allow the authors to revise gecko taxonomy to best reflect the group’s evolutionary history. Gecko toepads adhere through a combination of weak intermolecular forces, called van der Waals forces, and frictional adhesion. Hundreds to hundreds of thousands of hair-like bristles, called setae, line the underside of a gecko’s toes. The large surface area created by this multitude of bristles generates enough weak intermolecular forces to support the whole animal. The amazing clinging ability of Gecko toes has inspired engineers to develop biomimetic technologies ranging from dry adhesive bandages to climbing robots. 
“Gaining a better understanding of the complex evolutionary history of gecko toepads allows bio-inspired engineers to learn from these natural designs and develop new applications,” added co-author Anthony Russell, of the University of Calgary. While scientists have a good understanding of how geckos stick at the microscopic level, they are just beginning to understand how geckos use their adhesive toepads to move around complex environments in the wild. Learning how gecko toepads have evolved to move in nature is an important step in developing robotic technologies that can do similar things. Examining the repeated evolution of gecko toepads will let scientists find common ways natural selection solved these problems and focus on the characteristics shared across different gecko species. Authors of the study included scientists from Villanova University, University of Calgary, the University of Texas at El Paso and the University of Minnesota. Funding was provided by the National Science Foundation of the United States and the Natural Sciences and Engineering Research Council of Canada. About Villanova University: Since 1842, Villanova University’s Augustinian Catholic intellectual tradition has been the cornerstone of an academic community in which students learn to think critically, act compassionately and succeed while serving others. There are more than 10,000 undergraduate, graduate and law students in the University's five colleges – the College of Liberal Arts and Sciences, the Villanova School of Business, the College of Engineering, the College of Nursing and the Villanova University School of Law. As students grow intellectually, Villanova prepares them to become ethical leaders who create positive change everywhere life takes them.
http://www1.villanova.edu/villanova/media/pressreleases/2012/0629.html
4.15625
Canada's diamonds face old age
During the last few decades, prospectors and exploration companies have succeeded in locating diamonds in the extreme environment of Canada's Arctic, and the country has since seen a diamond rush as mining companies have moved in to stake their claims (see Geotimes, April 2006). As geologists get in on the rush, they are uncovering the unique origins of Canadian diamonds, and finding not only that they are surprisingly old, but also that they have implications for the timing of Earth's early tectonic processes. A sulfide inclusion is visible inside a diamond from the Jwaneng Mine in Botswana. Similar inclusions found within Canadian diamonds helped researchers date the diamonds to be among the oldest on Earth. Photograph is courtesy of J.W. Harris. Scientists have long known that diamonds are old, says Steve Shirey, a geochemist at the Carnegie Institution's Department of Terrestrial Magnetism in Washington, D.C. Diamonds from Africa and Siberia, for example, have been dated at about 3 billion years old, he says. But now, Shirey and Kalle Westerlund, a visiting graduate student from the University of Cape Town in South Africa, and their colleagues have used a precise dating method to reveal that some Canadian diamonds actually formed about 500 million years earlier. Before the team could date Canada's diamonds, they had to wait for mining companies to move in with elaborate rigs to pull the diamond-bearing rocks from the ground. Ekati Mine in the Northwest Territories, the first operational diamond mine in Canada, supplied the team with samples. They then measured the isotopes rhenium and osmium, and knowing that rhenium decays to osmium, the team calculated that the diamonds formed 3.52 billion years ago. To find out if they could learn more about the rocks than just their age, the team next looked at iron sulfide inclusions inside the diamonds that contain a mineral record of how the diamond formed. "Like anything in geology, you look for what is special, and might tell you something new," Shirey says. Mineralogists have shown that diamonds form from similar elements, including carbon, hydrogen, oxygen and sulfur, but the origin of that material is not always the same. Diamonds have previously been grouped into two categories: "peridotitic" diamonds, which formed from material hundreds of kilometers below Earth's surface, and "eclogitic" diamonds, which formed from materials in Earth's crust that were recycled into the planet via plate tectonics. Scientists can tell a diamond's origin by looking at its composition and the chemistry of its inclusions. Surprisingly, the team found that while inclusion composition clearly indicated that the diamonds formed deep in Earth as peridotitic diamonds, osmium isotopes indicated involvement from surface-recycled materials, similar to eclogitic diamonds. The fact that surface recycling contributed to the diamonds' formation 3.52 billion years ago implies that some sort of subduction process likely existed at that time, the researchers reported in the September Contributions to Mineralogy and Petrology. According to Thomas Stachel, a geologist with the diamond research lab at the University of Alberta in Edmonton, the old age is significant in that it suggests that the formation of diamonds does not require Earth's early continents to have been stable, as some scientists previously thought.
That stabilization did not occur until about 2.5 billion years ago, but this study shows that diamonds started growing 1 billion years before that, he says. “It’s a great piece of work.” The next step, Shirey says, is to analyze Canadian diamonds from the Diavik mine, also in the Northwest Territories. That information will help determine if the Ekati Mine’s diamonds are truly anomalous in their age.
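As a rough illustration of how a rhenium-osmium age such as 3.52 billion years can be computed, the sketch below applies the simple parent-daughter decay relation. It is not the researchers' actual procedure, which relies on isochrons and corrections for initial osmium, and the rhenium-187 half-life used here is an approximate literature value.

```python
# Hedged sketch (not the researchers' actual method): the simple parent-daughter
# decay-age relation t = ln(1 + D/P) / lambda behind rhenium-osmium dating.
# Real diamond-inclusion work uses isochrons and corrections for initial osmium.
# The rhenium-187 half-life below is an approximate literature value.
import math

RE187_HALF_LIFE_YR = 4.16e10  # about 41.6 billion years (approximate)
DECAY_CONST = math.log(2) / RE187_HALF_LIFE_YR

def ratio_from_age(age_yr):
    """Radiogenic daughter/parent ratio expected after a given age."""
    return math.exp(DECAY_CONST * age_yr) - 1.0

def age_from_ratio(daughter_over_parent):
    """Age in years implied by a radiogenic daughter/parent ratio."""
    return math.log(1.0 + daughter_over_parent) / DECAY_CONST

ratio = ratio_from_age(3.52e9)  # ratio implied by a 3.52-billion-year age
print(f"Radiogenic 187Os/187Re after 3.52 billion years: {ratio:.4f}")
print(f"Age recovered from that ratio: {age_from_ratio(ratio) / 1e9:.2f} billion years")
```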
http://www.geotimes.org/nov06/NN_Diamonds.html
4.25
Standard Deviation and Variance
Deviation just means how far from the normal. The Standard Deviation is a measure of how spread out numbers are. Its symbol is σ (the greek letter sigma). The formula is easy: it is the square root of the Variance. So now you ask, "What is the Variance?" The Variance is defined as: The average of the squared differences from the Mean. To calculate the variance follow these steps: You and your friends have just measured the heights of your dogs. The heights (at the shoulders) are: 600mm, 470mm, 170mm, 430mm and 300mm. Find out the Mean, the Variance, and the Standard Deviation.
Your first step is to find the Mean: (600 + 470 + 170 + 430 + 300) / 5 = 1970 / 5 = 394, so the mean (average) height is 394 mm. Let's plot this on the chart: Now, we calculate each dog's difference from the Mean: 206, 76, -224, 36 and -94. To calculate the Variance, take each difference, square it, and then average the result: (206² + 76² + (-224)² + 36² + (-94)²) / 5 = (42,436 + 5,776 + 50,176 + 1,296 + 8,836) / 5 = 108,520 / 5 = 21,704. So, the Variance is 21,704. And the Standard Deviation is just the square root of Variance, so: Standard Deviation: σ = √21,704 = 147.32... = 147 (to the nearest mm) And the good thing about the Standard Deviation is that it is useful. Now we can show which heights are within one Standard Deviation (147mm) of the Mean: that is, between 394 - 147 = 247 mm and 394 + 147 = 541 mm. So, using the Standard Deviation we have a "standard" way of knowing what is normal, and what is extra large or extra small. Rottweilers are tall dogs. And Dachshunds are a bit short ... but don't tell them! Now try the Standard Deviation Calculator.
But ... there is a small change with Sample Data. Our example was for a Population (the 5 dogs were the only dogs we were interested in). But if the data is a Sample (a selection taken from a bigger Population), then the calculation changes! When you have "N" data values that are:
- The Population: divide by N when calculating Variance (like we did)
- A Sample: divide by N-1 when calculating Variance
All other calculations stay the same, including how we calculated the mean. Example: if our 5 dogs were just a sample of a bigger population of dogs, we would divide by 4 instead of 5 like this: Sample Variance = 108,520 / 4 = 27,130, and Sample Standard Deviation = √27,130 = 164.7 (approximately). Think of it as a "correction" when your data is only a sample.
Here are the two formulas, explained at Standard Deviation Formulas if you want to know more: the "Population Standard Deviation" is σ = √( Σ(x - mean)² / N ), and the "Sample Standard Deviation" is s = √( Σ(x - mean)² / (N - 1) ). Looks complicated, but the important change is to divide by N-1 (instead of N) when calculating a Sample Variance.
If we just added up the differences from the mean ... the negatives would cancel the positives: So that won't work. How about we use absolute values? That looks good, but what about this case: Oh No! It also gives a value of 4, even though the differences are more spread out! So let us try squaring each difference (and taking the square root at the end): That is nice! The Standard Deviation is bigger when the differences are more spread out ... just what we want! In fact this method is a similar idea to distance between points, just applied in a different way. And it is easier to use algebra on squares and square roots than absolute values, which makes the standard deviation easy to use in other areas of mathematics.
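For readers who want to check the dog-height arithmetic, here is a minimal verification (not part of the original lesson) using Python's standard statistics module, which provides both the population and the sample versions of the variance and standard deviation.

```python
# A short check of the worked example above, using Python's standard library.
# pstdev/pvariance treat the data as the whole population (divide by N);
# stdev/variance treat it as a sample (divide by N - 1).
import statistics

heights_mm = [600, 470, 170, 430, 300]

mean = statistics.mean(heights_mm)                       # 394
population_variance = statistics.pvariance(heights_mm)   # 21704
population_sd = statistics.pstdev(heights_mm)            # 147.32...
sample_sd = statistics.stdev(heights_mm)                 # 164.71... (divides by N - 1 = 4)

print(mean, population_variance, round(population_sd, 2), round(sample_sd, 2))
```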
http://www.mathsisfun.com/data/standard-deviation.html
4.09375
Novell's Networking Primer Although we routinely use the terms "data" and "information" interchangeably, they are not technically the same thing. Computer data is a series of electrical charges arranged in patterns to represent information. In other words, the term "data" refers to the form of the information (the electrical patterns), not the information itself. Conversely, the term "information" refers to data that has been decoded. In other words, information is the real-world, useful form of data. For example, the data in an electronic file can be decoded and displayed on a computer screen or printed onto paper as a business letter. Encoding and Decoding Data To store meaningful information as data and to retrieve the information, computers use encoding schemes: series of electrical patterns that represent each of the discrete pieces of information to be stored and retrieved. For example, a particular series of electrical patterns represents the alphabetic character "A." There are many encoding schemes in use. One common data-encoding scheme is American Standard Code for Information Interchange (ASCII). To encode information into data and later decode that data back into information, we use electronic devices, such as the computer, that generate electronic signals. Signals are simply the electric or electromagnetic encoding of data. Various components in a computer enable it to generate signals to perform encoding and decoding tasks. To guarantee reliable transmission of this data across a network, there must be an agreed-on method that governs how data is sent, received, and decoded. That method must address questions such as: How does a sending computer indicate to which computer it is sending data? If the data will be passed through intervening devices, how are these devices to understand how to handle the data so that it will get to the intended destination? What if the sending and receiving computers use different data formats and data exchange conventions—how will data be translated to allow its exchange? In response to these questions, a communication model known as the OSI model was developed. It is the basis for controlling data transmission on computer networks. Understanding the OSI model will allow you to understand how data can be transferred between two networked computers. ISO and the OSI Model The OSI model was developed by the International Organization for Standardization (ISO) as a guideline for developing standards to enable the interconnection of dissimilar computing devices. It is important to understand that the OSI model is not itself a communication standard. In other words, it is not an agreed-on method that governs how data is sent and received; it is only a guideline for developing such standards. The Importance of the OSI Model It would be difficult to overstate the importance of the OSI model. Virtually all networking vendors and users understand how important it is that network computing products adhere to and fully support the networking standards this model has generated. When a vendor's products adhere to the standards the OSI model has generated, connecting those products to other vendors' products is relatively simple. Conversely, the further a vendor departs from those standards, the more difficult it becomes to connect that vendor's products to those of other vendors. 
In addition, if a vendor were to depart from the communication standards the model has engendered, software development efforts would be very difficult because the vendor would have to build every part of all necessary software, rather than being able to build on the existing work of other vendors. The first two problems give rise to a third significant problem for vendors: a vendor's products become less marketable as they become more difficult to connect with other vendors' products. The Seven Layers of the OSI Model Because the task of controlling communications across a computer network is too complex to be defined by one standard, the ISO divided the task into seven subtasks. Thus, the OSI model contains seven layers, each named to correspond to one of the seven defined subtasks. Each layer of the OSI model contains a logically grouped subset of the functions required for controlling network communications. The seven layers of the OSI model and the general purpose of each are shown in Figure 2. Figure 2: The OSI model Network Communications through the OSI Model Using the seven layers of the OSI model, we can explore more fully how data can be transferred between two networked computers. Figure 3 uses the OSI model to illustrate how such communications are accomplished. Figure 3: Networked computers communicating through the OSI model The figure represents two networked computers. They are running identical operating systems and applications and are using identical protocols (or rules) at all OSI layers. Working in conjunction, the applications, the OS, and the hardware implement the seven functions described in the OSI model. Each computer is also running an e-mail program that is independent of the OSI layers. The e-mail program enables the users of the two computers to exchange messages. Our figure represents the transmission of one brief message from Sam to Charlie. The transmission starts when Sam types in a message to Charlie and presses the "send" key. Sam's operating system appends to the message (or "encapsulates") a set of application-layer instructions (OSI Layer 7) that will be read and executed by the application layer on Charlie's computer. The message with its Layer 7 header is then transferred to the part of the operating system that deals with presentation issues (OSI Layer 6) where a Layer 6 header is appended to the message. The process repeats through all the layers until each layer has appended a header. The headers function as an escort for the message so that it can successfully negotiate the software and hardware in the network and arrive intact at its destination. When the data-link-layer header is added at Layer 2, the data unit is known as a "frame." The final header, the physical-layer header (OSI Layer 1) tells the hardware in Sam's computer the electrical specifics of how the message will be sent (which medium, at which voltage, at which speed, etc.). Although it is the final header to be added, the Layer 1 header is the first in line when the message travels through the medium to the receiving computer. When the message with its seven headers arrives at Charlie's computer, the hardware in his computer is the first to handle the message. It reads the instructions in the Layer 1 header, executes them, and strips off the header before passing the message to the Layer 2 components. These Layer 2 components execute those instructions, strip off the header, and pass the message to Layer 3, and so on. 
Each layer's header is successively stripped off after its instructions have been read so that by the time the message arrives at Charlie's e-mail application, the message has been properly received, authenticated, decoded, and presented. Commonly Used Standards and Protocols National and international standards organizations have developed standards for each of the seven OSI layers. These standards define methods for controlling the communication functions of one or more layers of the OSI model and, if necessary, for interfacing those functions with the layers above and below. A standard for any layer of the OSI model specifies the communication services to be provided and a protocol that will be used as a means to provide those services. A protocol is a set of rules network devices must follow (at any OSI layer) to communicate. A protocol consists of the control functions, control codes, and procedures necessary for the successful transfer of data. More than one protocol standard exists for every layer of the OSI model. This is because a number of standards were proposed for each layer, and because the various organizations that defined those standards—specifically, the standards committees inside these organizations—decided that more than one of the proposed standards had real merit. Thus, they allowed for the use of different standards to satisfy different networking needs. As technologies develop and change, some standards win a larger share of the market than others, and some dominate to the point of becoming "de facto" standards. To understand the capabilities of computer networking products, it will help to know the OSI layer at which particular protocols operate and why the standard for each layer is important. By converting protocols or using multiple protocols at different layers of the OSI model, it becomes possible for different computer systems to share data, even if they use different software applications, operating systems, and data-encoding techniques. Figure 4 shows some commonly used standards and the OSI layer at which they operate. Figure 4: Important standards at various OSI layers Layer 7 and Layer 6 Standards: Application and Presentation The application layer performs high-level services such as making sure necessary resources are present (such as a modem on the receiving computer) and authenticating users when appropriate (to authenticate is to grant access after verifying that you are who you say you are). The presentation layer, usually part of an operating system, converts incoming and outgoing data from one presentation format to another. Presentation-layer services include data encryption and text compression. Most standards at this level specify Layer 7 and Layer 6 functions in one standard. The predominant standards at Layer 7 and Layer 6 were developed by the Department of Defense (DoD) as part of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite. This suite consists of the following protocols, among others: File Transfer Protocol (FTP), the protocol most often used to download files from the Internet; Telnet, which enables you to connect to mainframe computers over the Internet; HyperText Transfer Protocol (HTTP), which delivers Web pages; and Simple Mail Transfer Protocol (SMTP), which is used to send e-mail messages. These are all Layer 7 protocols; the TCP/IP suite consists of more than 40 protocols at several layers of the OSI model.
X.400 is an International Telecommunication Union (ITU) standard that encompasses both the presentation and application layers. X.400 provides message handling and e-mail services. It is the basis for a number of e-mail applications (primarily in Europe and Canada) as well as for other messaging products. Another ITU standard in the presentation layer is the X.500 protocol, which provides directory access and management. File Transfer, Access, and Management (FTAM) and Virtual Terminal Protocol (VTP) are ISO standards that encompass the application layer. FTAM provides user applications with useful file transfer and management functions. VTP is similar to Telnet; it specifies how to connect to a mainframe over the Internet via a "virtual terminal" or terminal emulation. In other words, you can see and use a mainframe's terminal display on your own PC. These two standards have been largely eclipsed by the DoD standards. Compact HTML is defined by the World Wide Web Consortium (W3C) and is a subset of HTML protocols. Like WAP, it addresses small-client limitations by excluding functions such as JPEG images, tables, image maps, multiple character fonts and styles, background colors and images, and frame style sheets. Layer 5 Standards: Session As its name implies, the session layer establishes, manages, and terminates sessions between applications. Sessions consist of dialogue between the presentation layer (OSI Layer 6) of the sending computer and the presentation layer of the receiving computer. The session layer synchronizes dialogue between these presentation layer entities and manages their data exchange. In addition to basic regulation of conversations (sessions), the session layer offers provisions for data expedition, class of service, and exception reporting of problems in the session, presentation, and application layers. Transmission Control Protocol (TCP)—part of the TCP/IP suite—performs important functions at this layer as does the ISO session standard, named simply "session." In a NetWare environment the NetWare Core Protocol™ (NCP™) provides most of the necessary session-layer functions. The Service Advertising Protocol (SAP) also provides functions at this layer. Both NCP and SAP are discussed in greater detail in the "Internetworking" section of this primer. Wireless Session Protocol (WSP), part of the WAP suite, provides WAE with two session services: a connection-oriented session over Wireless Transaction Protocol (WTP) and a connectionless session over Wireless Datagram Protocol (WDP). Wireless Transaction Protocol (WTP), also part of the WAP suite, runs on top of UDP and performs many of the same tasks as TCP but in a way optimized for wireless devices. For example, WTP does not include a provision for rearranging out-of-order packets; because there is only one route between the WAP proxy and the handset, packets will not arrive out of order as they might on a wired network. Layer 4 Standards: Transport Standards at this OSI layer work to ensure that all packets have arrived. This layer also isolates the upper three layers—which handle user and application requirements—from the details that are required to manage the end-to-end connection. IBM's Network Basic Input/Output System (NetBIOS) protocol is an important protocol at this layer and at the session layer. However, designed specifically for a single network, this protocol does not support a routing mechanism to allow messages to travel from one network to another. 
For routing to take place, NetBIOS must be used in conjunction with another "transport mechanism" such as TCP. TCP provides all functions required for the transport layer. WDP is the transport-layer protocol for WAP that allows WAP to be bearer-independent; that is, regardless of which protocol is used for Layer 3—USSD, SMS, FLEX, or CDMA—WDP adapts the transport-layer protocols so that WAP can operate on top of them. Layer 3 Standards: Network The function of the network layer is to manage communications: principally, the routing and relaying of data between nodes. (A node is a device such as a workstation or a server that is connected to a network and is capable of communicating with other network devices.) Probably the most important network-layer standard is Internet Protocol (IP), another part of the TCP/IP suite. This protocol is the basis for the Internet and for all intranet technology. IP has also become the standard for many LANs. The ITU X.25 standard has been a common fixture in the network layer, but newer, faster standards are quickly replacing it, especially in the United States. It specifies the interface for connecting computers on different networks by means of an intermediate connection made through a packet-switched network (for example, a common carrier network such as Tymnet). The X.25 standard includes X.21, the physical-layer protocol and link access protocol balanced (LAPB), the data-link-layer protocol. Layer 2 Standards: Data-Link (Media Access Control and Logical Link Control) The most commonly used Layer 2 protocols are those specified in the Institute of Electrical and Electronics Engineering (IEEE): 802.2 Logical Link Control, 802.3 Ethernet, 802.4 Token Bus, and 802.5 Token Ring. Most PC networking products use one of these standards. A few Layer 2 standards under development or that have recently been proposed to IEEE are 802.1P Generic Attribute Registration Protocol (GARP) for virtual bridge LANs, 802.1Q Virtual LAN (VLAN), and 802.15 Wireless Personal Area Network (WPAN), which will define standards used to link mobile computers, mobile phones, and other portable handheld devices, and to provide connectivity to the Internet. Another Layer 2 standard is Cells In Frames (CIF), which provides a way to send Asynchronous Transfer Mode (ATM) cells over legacy LAN frames. ATM is another important technology at Layer 2, as are 100Base-T (IEEE 802.2u), and frame relay. These technologies are treated in greater detail in the "Important WAN and High-Speed Technologies" section. Layer 2 standards encompass two sublayers: media access control (MAC) and logical link control. Media Access Control The media access control protocol specifies how workstations cooperatively share the transmission medium. Within the MAC sublayer there are several standards governing how data accesses the transmission medium. The IEEE 802.3 standard specifies a media access method known as "carrier sense multiple access with collision detection" (CSMA/CD), and the IEEE 802.4, 802.5, and fiber distributed data interface (FDDI) standards all specify some form of token passing as the MAC method. These standards are discussed in greater detail in the "Network Topologies" section. The token-ring MAC method is not as prominent in computer networks as it once was: Ethernet, which uses CSMA/CD, has become the more popular networking protocol for linking workstations and servers. 
The token-ring technology of ARCnet (Attached Resource Computer network), however, has become the preferred method for embedded and real-time systems such as automobiles, factory control systems, casino games, and heating, ventilation, and cooling systems. Logical Link Control The function of the logical link control sublayer is to ensure the reliability of the physical connection. The IEEE 802.2 standard (also called Logical Link Control or LLC) is the most commonly used logical link control standard because it works with either the CSMA/CD or token-ring standards. The Point-to-Point Protocol (PPP) is another standard at this OSI level. This protocol is typically used to connect two computers through a serial interface, such as when connecting a personal computer to a server through a phone line or a T1 or T3 line. PPP encapsulates TCP/IP packets and forwards them to a server, which then forwards them to the Internet. The advantage to using PPP is that it is a "full-duplex" protocol, which means that it can carry a sending and a receiving signal simultaneously over the same line. It can also be used over twisted-pair wiring, fiber optic cable, and satellite transmissions. Layer 1 Standards: Physical Standards at the physical layer include protocols for transmitting a bitstream over media such as baseband coaxial cable, unshielded twisted-pair wiring, optical fiber cable, or through the air. The most commonly used are those specified in the IEEE 802.3, 802.4, and 802.5 standards. Use of the American National Standards Institute (ANSI) FDDI standard has declined as Ethernet has replaced token-ring technologies. Much of the FDDI market has largely been replaced by Synchronous Optical Network (SONET) and Asynchronous Transfer Mode (ATM). The different types of network cable and other network hardware will be discussed in greater detail in the "Hardware Technology" section. Further Perspective: Standards and Open Systems You probably noticed from looking at Figure 4 that most accepted standards do not include all (and only) those services specified for any OSI layer. In fact, most common standards encompass parts of multiple OSI layers. Product vendors' actual implementation of OSI layers is divided less neatly. Vendors implement accepted standards—which already include mixed services from multiple layers—in different ways. The OSI model was never intended to foster a rigid, unbreakable set of rules: it was expected that networking vendors would be free to use whichever standard for each layer they deemed most appropriate. They would also be free to implement each standard in the manner best suited to the purposes of their products. However, it is clearly in a vendor's best interest to manufacture products that conform to the intentions behind the OSI model. To do this, a vendor must provide the services required at each OSI model layer in a manner that will enable the vendor's system to be connected to the systems of other vendors easily. Systems that conform to these standards and offer a high degree of interoperability with heterogeneous environments are called open systems. Systems that provide interoperability with components from only one vendor are called proprietary systems. These systems use standards created or modified by the vendor and are designed to operate in a homogeneous or single-vendor environment.Return to Primer Index | Next Section
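The Sam-and-Charlie walkthrough earlier in this primer excerpt can be summarized in a toy sketch: each layer adds its header on the way down the stack and strips it on the way back up. The code below is an added illustration only; real protocol headers are binary structures, and the layer names are used here simply as text tags.

```python
# Toy sketch (not from the primer) of the encapsulation idea in the Sam-and-Charlie
# walkthrough: each layer prepends its header on the way down the stack, and each
# layer strips it on the way back up. Real headers are binary, not text tags.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def encapsulate(message):
    """Wrap the message with one header per OSI layer, top layer first."""
    data = message
    for layer in LAYERS:            # the Layer 7 header is added first...
        data = f"[{layer}-hdr]{data}"
    return data                     # ...so the physical-layer header ends up outermost

def decapsulate(frame):
    """Strip the headers in reverse order, bottom layer first."""
    data = frame
    for layer in reversed(LAYERS):  # the physical-layer header is read first
        header = f"[{layer}-hdr]"
        assert data.startswith(header), f"missing {layer} header"
        data = data[len(header):]
    return data

frame = encapsulate("Hi Charlie -- Sam")
print(frame)               # the message wrapped in seven nested headers
print(decapsulate(frame))  # the original message recovered at the receiving side
```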
http://www.novell.com/info/primer/prim05.html
4.15625
Atomism was one of the theories the ancient Greek natural philosophers devised to explain the universe. The atoms, from the Greek for "not cut," were indivisible. They had few innate properties (size, shape, order, and position) and could hit each other in the void. By hitting one another and locking together, they became something else. This philosophy explained the material of the universe and is called a materialist philosophy. Atomists also developed ethics, epistemology, and political philosophy based on atomism.
Leucippus and Democritus: Leucippus (c. 480 - c. 420 B.C.) is credited with coming up with atomism, although sometimes this credit is extended equally to Democritus of Abdera, the other main early atomist. Another (earlier) candidate is Moschus of Sidon, from the Trojan War era. Leucippus and Democritus (c. 460-370 B.C.) posited that the natural world consists of only two things: the void and indivisible atoms. Atoms move continually through the void, colliding with one another and bouncing apart. This movement explains how things change.
Motivation for Atomism: Aristotle (384-322 B.C.) wrote that the idea of indivisible bodies came in response to the teaching of another Pre-Socratic philosopher, Parmenides, who argued that change is impossible, since it would require that what is not either really is or comes into being from nothing. The atomists are also thought to have been countering the paradoxes of Zeno, who argued that if objects can be infinitely divided, then motion should be impossible, because otherwise a body would have to cover an infinite number of spaces in a finite amount of time. The atomists believed we see objects because a film of atoms drops off the surface of the objects we see. Color is produced by the position of these atoms. Early atomists thought perceptions exist "by convention," while atoms and the void exist in reality. Later atomists rejected this distinction. A few hundred years after Democritus, the Hellenistic era revived the atomist philosophy. Epicurus (341-270 B.C.) and his followers, the Epicureans, formed a community applying atomism to a philosophy of living a pleasant life. Their community included women, and some women raised children there. Epicureans sought pleasure by getting rid of things like fear. Fear of the gods and of death is inconsistent with atomism, and if we can get rid of such fears, we will be free of mental anguish.
Source: Berryman, Sylvia, "Ancient Atomism", The Stanford Encyclopedia of Philosophy (Winter 2005 Edition), Edward N. Zalta (ed.)
http://ancienthistory.about.com/od/presocraticphiloso/p/Atomism.htm
4
Martian Cataclysm: Impact energy analysis in support of the origin of multiple anomalies on Mars (cont.) by Gary R. Spexarth
9. Massive Cratering in the Southern Hemisphere. The previous sections dealt only with the Hellas crater. But what if the entire Southern Hemisphere experienced massive bombardment in a relatively short period of time? 93% of craters greater than 30 km lie in the Southern Hemisphere! Why is it that most of the craters lie in the Southern Hemisphere? Current theory suggests mantle convection as the cause. In other words, it is believed that there used to be about the same density of craters in the North, just as in the South, but the mantle overturned due to convection and the craters were lost forever.
Figure-10: This image, taken in early September 2000 by the Mars Global Surveyor's Mars Orbiter Camera, shows a group of sand dunes at the edge of a much larger field of dark-toned dunes in Proctor Crater. Such sand formations and erosion patterns would make it difficult to distinguish between the ages of craters based only on the "sharpness" of their rim features.
However, an alternate option must be considered as well. Recently, it has been proposed that the Southern Hemisphere was bombarded in a relatively short period of time due to an object that broke up prior to impact. It is interesting to note that the two moons of Mars are not spherical, but rather potato shaped. Are they remnants of the object that broke up and impacted the Southern Hemisphere? Also, the boundary of the Southern Craters forms a great circle (a circle whose circumference is equal to the distance around the equator). In other words, if Mars were a wet beach ball and a handful of sand was thrown at it, the sand would stick on only one side of Mars. The boundary of the sand would form a great circle. In fact, the boundary of the Southern Craters forms a great circle! This can partially be seen in Figure-1. In addition, Mars has over 170 elliptical craters greater than 3 km (compared to only 4 or 5 on the Moon). Elliptical craters are formed when an impact occurs at a shallow angle. Shallow-angle impacts would be expected if the Southern Hemisphere was bombarded during one massive "blast" of impacts. In fact, many of the elliptical craters are oriented along great circles. Critics argue that the Southern Hemisphere could not have been bombarded at the same time because some of the crater rims are sharp, while others are eroded. It is thought that this is a clear sign that the impacts occurred at vastly different times, since the older craters would have more of an eroded rim than newly formed craters. However, shouldn't different rim patterns be expected on a planet that has dust storms, even if the craters are the same age? The Mars dust storms would surely form all kinds of different erosion formations along crater rims over thousands of years, independent of their true age. In fact, recent Mars photos have shown various erosion patterns and "sand dunes" due to the Mars winds, as shown in Figure-10. This type of surface process could conceal the actual age of the craters. The ages of planetary surfaces are estimated based on the number of craters that are counted. In other words, surfaces with more craters are older. By counting craters on the moon and measuring the amount of solar radiation in the craters during the Apollo missions, the age of the lunar surface was estimated.
By assuming that projectiles impacted Mars with the same frequency that they hit the moon, it has been estimated that the surface of Mars is over 3 billion years old. But what if it isn't 3 billion years old? What if the impacts did not occur slowly throughout the life of the planet, but rather occurred suddenly, in a short amount of time, and the surface erosion is masking their true age to us distant observers? Then we have no way of dating the surface without actually going there and investigating. We may find that the current surface of Mars may only be thousands of years old instead of billions! Maybe in the near past it still had running water and an atmosphere! The implications would be astounding!
http://www.grahamhancock.com/forum/SpexarthG1.php?p=7
4.125
Spanning virtually all subjects, this digital approach to Venn diagrams gives compare and contrast a 21st century spin! Written specifically as a guide to Inspiration Software, but the visuals make it useful even without access to this program. Using a prepared comparison template, learners create informative diagrams with graphics, text, hyperlinks, and more. Prepared examples make modelling easy. Finally, pupils can create a multimedia presentation with the program. Second graders explore today's jobs and use graphic organizers to classify them as providing a good or a service. A variety of resources, such as fiction and nonfiction literature, artifacts, photographs and diaries, are used to acquire information about daily life and the types of jobs that existed in the past. Photosynthesis is such an important process for young biologists to understand. By implementing this incredible, six-lesson packet with your learners, they should have a thorough understanding of this important process, as well as an enhanced appreciation for the complex structures of living things. The activities described in the plans are all supported by fabulous worksheets, lab sheets, graphic organizers, photos, and a quiz at the end. Highly recommended! Tenth graders examine the impact of the Great Depression on the United States. In groups, they use the internet to research the causes of the Great Depression and the effects of the Dust Bowl. To end the activity, they compare and contrast the federal government's role before and after the Great Depression. Young readers use graphic organizers, such as Venn diagrams and story maps, to analyze a variety of folktales and the elements of a story. They use writing, sequencing activities, and creative art to identify the morals learned from a read aloud. This is a unit with at least eight lessons, and handouts are included. Participate in a life science unit that examines the relationships of living organisms to each other and to their environment as well as the student's role in the cycle of life. Through hands-on activities, research, and scientific investigations they explore the problem of persistent pollutants and their harmful effects on both humans and ecosystems. What influence did the arrival of the Europeans have on the Cherokee nation? That's what the historical fiction story The Trail of Tears by Joseph Bruchac helps readers to understand. Reading comprehension is the focus as the class completes a variety of activities such as restating key details, comparing and contrasting information, and looking at vocabulary in context. Scaffolding is included for English language learners.
http://www.lessonplanet.com/lesson-plans/compare-and-contrast/4
4
MELBOURNE, FLA. —A paper published this week in Science provides the most nuanced view to date of the small, shifting human populations in much of the Amazon before the arrival of Europeans. The research, which includes the first landscape-scale sampling of central and western Amazonia, finds that early inhabitants were concentrated near rivers and lakes but actually had little long-term impact on the outlying forests, as if they merely tiptoed around the land far from natural sources of water. In doing so, the new study overturns the currently popular idea that the Amazon was a cultural parkland in pre-Columbian times. The Amazon Basin is one of Earth's areas of highest biodiversity. Therefore, understanding how Amazonia was modified by humans in the past is important for conservation and understanding the ecological processes of tropical rainforests. Researchers at Florida Institute of Technology, the Smithsonian Institution, Wake Forest University, and the University of Florida looked at how widespread human impacts were in Amazonia before the Europeans arrived. If the Pre-Columbian Amazon was a highly altered landscape, then most of the Amazon's current biodiversity could have come from human effects. The research team, led by Florida Tech's Crystal McMichael and Mark Bush, retrieved 247 soil cores from 55 locations throughout the central and western Amazon, sampling sites that were likely disturbed by humans, like river banks and areas known from archeological evidence to have been occupied by people. They also collected cores farther away from rivers, where human impacts were unknown, and used markers in the cores to track the histories of fire, vegetation and human alterations of the soil. The eastern Amazon has already been studied in detail. McMichael, Bush, and their colleagues conclude that people in the central and western Amazon generally lived in small groups, with larger populations on some rivers. "There is strong evidence of large settlements in eastern Amazonia, but our data point to different cultural adaptations in the central and western Amazon, which left vast areas with very little human imprint," said Bush. They did not live in large settlements throughout the basin as was previously thought. Even sites of supposedly large settlements did not show evidence of high population densities and large-scale agriculture. All the signs point to smaller, mobile populations before Europeans arrived. The impacts of these small populations were largely limited to river banks. "The amazing biodiversity of the Amazon is not a byproduct of past human disturbance," said McMichael. "We also can't assume that these forests will be resilient to disturbance, because many have never been disturbed, or have only been lightly disturbed in the past. Certainly there is no parallel in western Amazonia for the scale of modern disturbance that accompanies industrial agriculture, road construction, and the synergies of those disturbances with climate change."
http://www.sciencecodex.com/scientists_dispel_myths_provide_new_insight_into_human_impact_on_precolumbian_amazon_river_basin-93370
4.125
Blast From The Past! — Part 1
A recently recovered deep-sea core contains convincing new evidence of an asteroid impact 65 million years ago, when dinosaurs went extinct. A section of the core is the centerpiece of a multimedia exhibit, "Blast from the Past," currently on display at the Smithsonian Institution's National Museum of Natural History.
The Asteroid Hypothesis
The conceptual sketch above depicts the asteroid moments before impact, as it takes aim on Mexico's Yucatan coastline. Approaching at an angle from the southeast, it will send the main force of its impact northward in a fire storm over North America. The evidence has grown so overwhelming that few scientists dispute that an asteroid nearly 10 km (6 mi) wide slammed into what is now Mexico's Yucatan Peninsula. The impact blasted a 180 kilometer-wide (100 miles) crater many kilometers deep into the Earth. The heat of impact sent a searing vapor cloud speeding northward which, within minutes, set the North American continent aflame. This fireball and the darkness that followed caused major plant extinctions in North America. Environmental consequences led to global extinction of many plants and animals, including the dinosaurs. Lingering airborne debris is believed to have triggered darkness and a decline in the global temperature, making Earth uninhabitable not only for dinosaurs but also for many other plants and animals. Dinosaurs, like Triceratops depicted in this sketch, passed into extinction, ending more than 150 million years of evolution and dominance over life on Earth. Other organisms, including mammals like those shown in the lower left of this sketch, somehow survived. In the words of Dr. Brian Huber: "This event profoundly changed the course of life on Earth. If it had not happened, evolution would have followed a different path and in all likelihood we would not be here today."
Map of the Earth as it was 65 million years ago, when the impact occurred. The drill ship JOIDES Resolution obtained the core 480 km (350 mi) east of Florida, more than 1,920 km (1,200 mi) from the now buried impact crater. The core was drilled 2,658 m (8,860 ft) below the ocean surface and 128 m (427 ft) below the ocean floor. Dust and ash fallout as well as material blasted from the crater are clearly evident in the deep-sea core, which scientists recovered more than 1,920 km (1,200 miles) from the impact site off the east coast of Florida. Images by Mary Parrish, Map by My Le Ducharme.
http://paleobiology.si.edu/blastPast/index.html
4.09375
In 1899, after Wilbur Wright had written a letter of request to the Smithsonian Institution for information about flight experiments, the Wright Brothers designed their first aircraft: a small, biplane glider flown as a kite to test their solution for controlling the craft by wing warping. Wing warping is a method of arching the wingtips slightly to control the aircraft's rolling motion. The brothers spent a great deal of time observing birds in flight. They noticed that birds soared into the wind and that the air flowing over the curved surface of their wings created lift. Birds change the shape of their wings to turn and maneuver. The Wrights believed that they could use this technique to obtain roll control by warping, or changing the shape, of a portion of the wing. Over the next three years, Wilbur and his brother Orville would design a series of gliders which would be flown in both unmanned (as kites) and piloted flights. They read about the works of Cayley and Langley, and the hang-gliding flights of Otto Lilienthal. They corresponded with Octave Chanute concerning some of their ideas. They recognized that control of the flying aircraft would be the most crucial and hardest problem to solve. After this successful test, the Wrights built and tested a full-size glider. They selected Kitty Hawk, North Carolina, as their test site because of its wind, sand, hilly terrain and remote location. In 1900, the Wrights successfully tested their new 50-pound biplane glider with its 17-foot wingspan and wing-warping mechanism at Kitty Hawk, in both unmanned and piloted flights. In fact, it was their first piloted glider. Based upon the results, the Wright Brothers planned to refine the controls and landing gear, and build a bigger glider. In 1901, at Kill Devil Hills, North Carolina, the Wright Brothers flew the largest glider ever flown, with a 22-foot wingspan, a weight of nearly 100 pounds and skids for landing. However, many problems occurred: the wings did not have enough lifting power; the forward elevator was not effective in controlling the pitch; and the wing-warping mechanism occasionally caused the airplane to spin out of control. In their disappointment, they predicted that man would probably not fly in their lifetime. In spite of the problems with their last attempts at flight, the Wrights reviewed their test results and determined that the calculations they had used were not reliable. They decided to build a wind tunnel to test a variety of wing shapes and their effect on lift. Based upon these tests, the inventors had a greater understanding of how an airfoil (wing) works and could calculate with greater accuracy how well a particular wing design would fly. They planned to design a new glider with a 32-foot wingspan and a tail to help stabilize it. The following year, the brothers flew numerous test glides using their new glider. Their studies showed that a movable tail would help balance the craft, and the Wright Brothers connected a movable tail to the wing-warping wires to coordinate turns. With successful glides to verify their wind tunnel tests, the inventors planned to build a powered aircraft. After months of studying how propellers work, the Wright Brothers designed a motor and a new aircraft sturdy enough to accommodate the motor's weight and vibrations. The craft weighed 700 pounds and came to be known as the Flyer. The brothers built a movable track to help launch the Flyer. This downhill track would help the aircraft gain enough airspeed to fly.
After two attempts to fly this machine, one of which resulted in a minor crash, Orville Wright took the Flyer for a 12-second, sustained flight on December 17, 1903. This was the first successful, powered, piloted flight in history. In 1904, the first flight lasting more than five minutes took place on November 9. The Flyer II was flown by Wilbur Wright. In 1908, passenger flight took a turn for the worse when the first fatal air crash occurred on September 17. Orville Wright was piloting the plane. Orville Wright survived the crash, but his passenger, Signal Corps Lieutenant Thomas Selfridge, did not. The Wright Brothers had been allowing passengers to fly with them since May 14, 1908. In 1909, the U.S. Government bought its first airplane, a Wright Brothers biplane, on July 30. The airplane sold for $25,000 plus a bonus of $5,000 because it exceeded 40 mph. In 1911, the Wrights' Vin Fiz was the first airplane to cross the United States. The flight took 84 days, stopping 70 times. It crash-landed so many times that little of its original building materials were still on the plane when it arrived in California. The Vin Fiz was named after a grape soda made by the Armour Packing Company. In 1912, a Wright Brothers plane, the first airplane armed with a machine gun, was flown at an airport in College Park, Maryland. The airport had existed since 1909, when the Wright Brothers took their government-purchased airplane there to teach Army officers to fly. On July 18, 1914, an Aviation Section of the Signal Corps (part of the Army) was established. Its flying unit contained airplanes made by the Wright Brothers as well as some made by their chief competitor, Glenn Curtiss. That same year, the U.S. Court decided in favor of the Wright Brothers in a patent suit against Glenn Curtiss. The issue concerned lateral control of aircraft, for which the Wrights maintained they held patents. Although Curtiss's invention, ailerons (French for "little wing"), was far different from the Wrights' wing-warping mechanism, the Court determined that use of lateral controls by others was "unauthorized" by patent law.
http://theinventors.org/library/inventors/bl_wright_brothers.htm
4.125
© NASA/Zuber, M.T. et al., Nature, 2012. Elevation (left) and shaded relief (right) image of Shackleton, a 21-km-diameter (12.5-mile-diameter) permanently shadowed crater adjacent to the lunar south pole. The structure of the crater's interior was revealed by a digital elevation model constructed from over 5 million elevation measurements from the Lunar Orbiter Laser Altimeter.
NASA said its Lunar Reconnaissance Orbiter (LRO) spacecraft has found a crater, dubbed Shackleton, at the south pole of the moon that may have as much as 22% of its surface covered in ice. Shackleton, named after the Antarctic explorer Ernest Shackleton, is two miles deep and more than 12 miles wide and, because of the Moon's tilt, is always in the dark. Using laser light from LRO's laser altimeter, NASA said it found that the crater's floor is brighter than those of other nearby craters, which is consistent with the presence of small amounts of ice. This information will help researchers understand crater formation and study other uncharted areas of the Moon, NASA said. NASA said the LRO mapped Shackleton crater with unprecedented detail, and the laser light measured to a depth comparable to its wavelength, or about a micron. That represents a millionth of a meter, or less than one ten-thousandth of an inch. The team also used the instrument to map the relief of the crater's terrain based on the time it took for laser light to bounce back from the Moon's surface. The longer it took, the lower the terrain's elevation, NASA said. NASA said that in addition to the possible evidence of ice, the study of Shackleton revealed a remarkably preserved crater that has remained relatively unscathed since its formation more than three billion years ago. The crater's floor is itself pocked with several small craters, which may have formed as part of the collision that created Shackleton. Maria Zuber, the team's lead investigator from the Massachusetts Institute of Technology, said that while the crater's floor was relatively bright, its walls were even brighter. The finding was at first puzzling because scientists had thought that if ice were anywhere in a crater, it would be on the floor, where no direct sunlight penetrates. The upper walls of Shackleton crater are occasionally illuminated, which could evaporate any ice that accumulates. A theory offered by the team to explain the puzzle is that "moonquakes" -- seismic shaking brought on by meteorite impacts or gravitational tides from Earth -- may have caused Shackleton's walls to slough off older, darker soil, revealing newer, brighter soil underneath. Zuber's team's ultra-high-resolution map provides strong evidence for ice on both the crater's floor and walls. "There may be multiple explanations for the observed brightness throughout the crater," Zuber said in a statement. "For example, newer material may be exposed along its walls, while ice may be mixed in with its floor." This is not the first time NASA has found ice on the moon. The space agency's Lunar CRater Observation and Sensing Satellite (LCROSS), working in tandem with the Lunar Reconnaissance Orbiter, slammed into the Moon in 2009 as part of an experiment to find out what the orb was really made of and found ice in the debris plume kicked up by the impact. NASA said the mission found evidence that the lunar soil within craters is rich in useful materials, and the moon is chemically active and has a water cycle. Scientists also confirmed the water was in the form of mostly pure ice crystals in some places.
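As a rough illustration of the time-of-flight principle mentioned above, the altimeter's range measurement comes down to halving the round-trip travel time of a light pulse. The sketch below is an added illustration with an invented pulse time; it is not NASA's actual processing.
# Illustrative only: the basic time-of-flight relation a laser altimeter relies on.
C = 299_792_458.0  # speed of light, in meters per second

def range_from_round_trip(seconds: float) -> float:
    """One-way distance to the surface, given the round-trip time of a laser pulse."""
    return C * seconds / 2.0

# A pulse returning after roughly 0.000334 s corresponds to about 50 km of range;
# timing differences of a few nanoseconds resolve metre-scale elevation changes.
print(f"{range_from_round_trip(0.000334):,.0f} m")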
In 2010, using data from a NASA radar that flew aboard India's Chandrayaan-1 spacecraft, scientists detected ice deposits near the moon's north pole. NASA's Mini-SAR instrument found more than 40 small craters with water ice. The craters range in size from 1 to 9 miles (2 to 15 km) in diameter. Although the total amount of ice depends on its thickness in each crater, it's estimated there could be at least 1.3 trillion pounds (600 million metric tons) of water ice.
http://www.sott.net/article/246974-NASA-finds-major-ice-source-in-Moon-crater
4.625
This section contains 30 daily lessons. Each one has a specific objective and offers at least three (often more) ways to teach that objective. Lessons include classroom discussions, group and partner activities, in-class handouts, individual writing assignments, at least one homework assignment, class participation exercises and other ways to teach students about the text in a classroom setting. Use some or all of the suggestions provided to work with your students in the classroom and help them understand the text. Objective: Setting. Where a play, book or story takes place often affects the characters' personalities and the possibilities for plot. The objective of this lesson is to look at setting. 1. Homework. Students will rewrite the basic plot of "Waiting for Normal" and set it in another era, explaining how the different setting changes the work. For example, what would be different if it was... This section contains 6,597 words (approx. 22 pages at 300 words per page)
http://www.bookrags.com/lessonplan/waiting-for-normal/lessons.html
4.125
Sustainable Waste Management
Waste disposal is a major environmental concern. It is one of the main sources of local pollution, a threat to local ecosystems, a contributor to the greenhouse effect, and a threat to public health. In addition, a recent review of the waste disposal problem has revealed that there are only a few years of landfill capacity left in some parts of the United Kingdom. Sustainable waste management, however, is not only about responsible waste disposal and the search for proper locations. In fact, sustainable waste management treats waste disposal as the last resort. Waste reduction is the very first priority of sustainable waste management. It is impossible not to produce any waste, but there are ways to create less waste, and this applies to both households and businesses. Examples include avoiding disposable products such as plastic carrier bags, donating old clothing and toys which are still useful to charity organisations, etc. Many objects which end up in trash bins can also be reused, while even more can be recycled and reprocessed for further use, which has been shown to be one of the most effective ways of both waste minimisation and sustainable waste management. Most products, including those that are eventually discarded such as metal, glass, paper and even plastic, can be reprocessed and reused. In addition to reducing pressure on landfills, recycling also reduces the pressure on natural resources such as forests, for example, as most paper is produced from wood. But it also significantly reduces carbon dioxide emissions, because reprocessing uses considerably less energy than virgin material production. Waste recycling, however, does not only reduce consumption of energy; it can also be used to produce it. Examples include biogas plants, which use organic waste to produce biogas that can be used to generate electricity, heat water, and fuel vehicles. But in order to be able to recover recyclable materials and use organic waste to produce energy, it is crucial for everyone to cooperate and sort their waste. The system for recycling waste was established long ago; however, the percentage of recycled waste is still unsatisfactory. This implies that the public is not adequately educated about the benefits of waste sorting or about which materials are recyclable, or both. Although sustainable waste management is primarily focused on alternative solutions to the existing waste management methods, it puts a major emphasis on responsible waste disposal as well, especially of hazardous materials. Many products that are used by households on a virtually daily basis contain highly toxic chemicals, while the majority of the population tends to be unaware of the harmful impact they have on the environment because they are easily available in supermarkets and have been in use for decades. Local authorities should therefore put more emphasis on raising awareness about dangerous household products and promoting environmentally friendly alternatives.
http://www.sustainability4yorkshire.org.uk/sustainable-waste-management.html
4.28125
OPENING LECTURE ENGLISH 4610
One question this course seeks to answer is: What is a hero? Joseph Campbell contends that every hero undertakes a journey that resembles that of all other heroes. He then tells us about the journey of the hero. Our challenge in our readings is to perceive the qualities in a hero within a culture that we do not understand because it is so long ago or far away. Reading about Old and Middle English heroes like Beowulf or the leader in the Battle of Maldon or Gawain or King Arthur is difficult because they do not have the same values we do. It is like reading The Power and the Glory if one is not LDS, or The Chronicles of Narnia or The Lord of the Rings if one is not Christian, or Father Dowling or Brother Cadfael mysteries if one is not Catholic. One understands the plot but not the attitudes of the characters toward different events. (Can you think of other fiction that one understands better if one has certain background?) To get at certain attitudes and values that underlie the readings we will do first, I have linked the Consolation of Philosophy by the ancient philosopher Boethius to this site. You can print it from your computer or just read it on the computer, but do not get out of joint if you do not understand it that well on the first reading. You might do well to read only Books 4 and 5. Boethius was a late 5th- and early 6th-century philosopher and statesman. Due to what was probably political enmity, he was accused of betraying his emperor and thrown into prison. He wrote his Consolation while in jail and shortly thereafter was tortured and executed. His work had enormous influence on English women and men from the 9th century through the Reformation (1534).
http://faculty.weber.edu/dkrantz/en4610web/Opening%20lecture.html
4.21875
Benjamin Franklin was appointed the first Post Master General in 1775, but the United States Postal Service was officially created in 1794. It initially spanned from Georgia to Maine and covered 195 postal offices. By 1800 there were 903 offices. Stagecoaches carried a majority of the mail in leather chests. The chests were opened by postal personnel at each stop. The personnel would take out the mail intended for their jurisdiction and add anything outgoing to the chest before it was sent back out on the stage. Smaller routes were covered by men on horseback who carried the mail in their saddlebags. Stamps were not created until 1847. Before that, a fee for the letter was paid directly to the Post Master, who passed on the profit to the Government. The Post Master was paid no base salary. He was paid a percentage of the profit for his jurisdiction. Envelopes were not available until the late 1840s. Before then, people simply folded their letters inward and addressed the blank side of the page. During the Civil War, when paper was not readily available, people would make their own envelopes out of wallpaper, brown paper or maps. Pretty much anything they could find that was not needed elsewhere. An existing envelope was used as a template for the homemade envelopes. In the 1830s, postal rates were often priced by distance as well as per sheet of paper. A single sheet of paper traveling up to 30 miles could cost 6 cents, while the same sheet of paper traveling over 400 miles could cost as much as 25 cents. If you had two pieces of paper, you would double that cost; three pieces would triple it, and so on. It could get costly if you were long winded. As the post office developed, mail was carried by steamboats, railroads, etc. Whatever means available to move the increasing amounts of mail at an even faster pace was used. During the Civil War the post office was severed in two. The Post Office Department of the Confederate States was established on February 21, 1861. The post office lacked funds and stamps as well as steady personnel. The Northern blockades created disruption in service, making it hard to get mail delivered. By November 1865, after the war's end, the Postal Service had resumed service throughout all the states.
http://throughhiseyestoo.blogspot.com/2011/04/united-states-post-office-1800.html
4.0625
Try this for a fun, visual way to start off your math work. Discuss just how large a hundred, thousand, million and billion are. Now ask if they know how large a googol is. A googol is a 1 followed by 100 zeroes. To illustrate this, use a package of 100 paper plates. First, place a large "1" on the wall or sidewalk. Now, lay down all 100 plates, placing a comma (written on a post-it note) between every three plates. Now your child can see just how large a googol really is! For a bit of word fun, scroll down a bit and have fun trying to say the big number word names on this chart. Have you heard of a googolplex or a googolplexplex? (I was told that duotrigintillion was the largest number. It doesn't look like that's the case, though.) As a side note, you might discuss how the folks at the popular search engine, Google, chose their name. How did Google get its name? According to Professor "X" on Questions.com (and multiple other sources): Google derived its name from the word "googol", a term coined by then nine-year-old Milton Sirotta, nephew of the American mathematician Edward Kasner. The story goes, Kasner had asked his nephew to invent a name for a very large number – ten to the power of one hundred (the numeral one followed by 100 zeros), and Milton called it a googol. The term was later made popular in Kasner's book, Mathematics and the Imagination, which he co-authored with James Newman. Later, another mathematician invented the term "googolplex", which represents ten to the power of a googol – a substantially larger number. Here's a bit more about the history of Google (the company, not the number), if you're interested.
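If you would rather let a computer do the counting, here is a small Python check of the numbers above (an added illustration, not part of the original post):
# Python integers have arbitrary precision, so a googol can be written out exactly.
googol = 10 ** 100
print(len(str(googol)) - 1)      # 100 -- the zeros after the leading 1
print(str(googol).count("0"))    # also 100

# A googolplex is 10 ** googol. Printing it is hopeless (it has a googol zeros),
# but its size is easy to reason about: 10**n has n + 1 decimal digits.
digits_in_googolplex = googol + 1
print(digits_in_googolplex)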
http://pjsallday.com/2012/04/googols-and-other-big-numbers/
4.25
England during the Price Revolution: cause & effect
The Golden Age of the English Peasant came to an end in the sixteenth century. The population rose sharply and so did prices, with prices something like five and a half times higher by the end of the Tudor century than the start. In many ways it could be argued that economic life in England during the Tudor period was more expansive and healthier than at any time since Roman times. The revitalization of a pre-industrial economy is essentially a matter of recovering population, and this is something that England had failed to do on a large scale ever since the Black Death. After 1525 the population finally started to rise sharply - it was a mere 2.26 million in 1525, but it was 4.10 million by 1601 [1]. This steep rise in population was the result of a complex and in part unknowable process, of which three factors may be highlighted:
- Decline in disease. Population had stagnated in the fifteenth century largely due to disease in both town and countryside. England was generally healthier during the sixteenth century. By the reign of Elizabeth I the annual death rate was never more than 2.68% of the population.
- Higher fertility rates. During the fifteenth century many people were dying unmarried or without male heirs. Because of the increased prosperity this afforded the peasantry (supply and demand applies to labour as well: when there are fewer workers they can demand better terms) fertility rates may well have risen.
- Earlier marriage. Another result of people being more prosperous was that they could marry earlier. This meant they were much more likely to have children, or more children. Demographers have also calculated that the chance of survival at birth was getting higher (again, perhaps down to the increased prosperity).
You might well ask what any of this has to do with a "price revolution". Well, when aggregate demand increases sharply across an economy and output remains fairly static, demand-pull inflation [2] occurs. Grain prices increased between five and six times over the Tudor century. The cost of living rose and people found themselves with an increasingly low standard of living. Wage rates went down as the available labour pool steadily rose and inevitably there were more people looking for work on the land than there were jobs for them to take. In part this fuelled the growth of the cities, where people would go to seek work, and the growth of "cottage industry" (often outside the traditional control of the guilds). Yet one of the remarkable things about Tudor England was its ability to feed itself amidst the general decline in standards of living. There was never an instance of mass mortality and Malthusian checks failed to kick in to keep the population low. Commercial farming by and large rose to meet the challenge. Other factors than population growth contributed to the price rise, but the extent of their effect is much debated. The debasement of the coinage to pay for Continental war, especially that carried out by Henry VIII and Protector Somerset, decreased the value of coin in circulation (Gresham's Law, bad coin drives out the good: people hoard the valuable coin and only use the least valuable) [3]. Inflation was certainly Europe-wide (which is what made people eventually realise something so massive in scale could not be caused by trivial things like enclosure), and this has been put down to bullion flowing in from the New World and increased output from the silver mines of Bohemia.
The enclosure movement was much resented by contemporaries, and many blamed it for the general decline in their standard of living. In fact, the term "enclosure" is a bit of a blanket description for a few distinct practices, some of which were beneficial to the rural economy, and some beneficial to the national economy. Sometimes a small freeholder, often of villein stock, would buy up strips of land adjacent to his own and put a hedge around them, thus cutting them off from the open land, in a process called engrossing. This was not a new practice and it could increase the productivity of this land considerably. But the gentry began to engage in a practice that was more odious for their locale, but one which they were driven to by the price rise (their costs were rising but rents were static, so they had to do something to avoid going bankrupt). A gentleman might buy, as an outside speculator, large estates or stretches of open pasture, evict the tenants, and engross them. This depopulated the area and increased local unemployment, although the national economy benefitted from this consolidation. Finally, in very specific areas, a gentleman might enclose the common land, thus depriving everyone else of its benefit. This didn't happen very often, but it was one of the specific agrarian grievances of Ket's rebellion. A less enterprising gentleman who did not wish to engage in the vicissitudes of commercial farming could try that trusty old expedient of rack-renting (raising rents or increasing the fine charged when one tenant succeeded the next). His ability to do this varied greatly on a case by case basis because there were many different types of agreements between landlord and tenant. A tenant-at-will was most vulnerable, because he essentially had no legal rights, and only held his land so long as his Lord willed it so. A "customary tenant" had specific rights and obligations as laid out in the manorial court roll, whereas a copyholder had a copy of his rights and obligations which he could produce in the King's courts. He was the most secure, but unless he possessed an 'estate of inheritance' then the Lord could impose an arbitrary fine on his heir when he wished to succeed to the land. This made it easy for a Lord to force the tenant out and switch to commercial farming practices if he wished, or force the tenant out and sell to an outside speculator. The fluidity of the land market produced what amounted to a revolution in the agrarian life of England. There was great wealth in some areas and great poverty in others. Overall the national life prospered and the nation became wealthier. The revolution was certainly needed to lay down the path for easier times to come, and the stimulation provided by the high inflation - it stimulated because it created hardship - led to vital structural change and the final collapse of the feudal order in the South (there were instances in the conservative North of peers going bankrupt rather than give up the established order!). As in all other spheres of national life, the Tudor century contained much of the turmoil needed to consolidate things for the stability ahead.
1. The source for this is the Wrigley-Schofield Index from The Population History of England, 1541 - 1871: A reconstruction
2. It's a pretty primitive form of demand-pull inflation. In an industrial economy this sort of inflation is accentuated greatly because as prices rise, costs rise, which forces prices to rise, and so on... if you don't understand this, don't worry too much.
In an agrarian economy, greater aggregate demand for a static amount of grain is clearly going to push grain prices up as it becomes scarcer. 3. Cardinal Wolsey, who started this, was in a way just following in the rest of Europe's footsteps. English coin contained much more silver than Continental coin at the start of the sixteenth century, the result being an inequity when trading it (Continental coin was worth less). Elton, G. R. England Under the Tudors 2nd. ed.: Methuen & Co, 1974. Guy, John. Tudor England: Oxford University Press, 1988. Helm, P. J. England under the Tudors and Yorkists: 1471-1603: Bell & Hyman, 1968. Lotherington, John. The Tudor Years: Hodder & Stoughton, 1994.
http://everything2.com/user/Noung/writeups/The+Price+Revolution+in+Tudor+England
4.09375
In computer science and technology, a database cursor is a control structure that enables traversal over the records in a database. Cursors facilitate subsequent processing in conjunction with the traversal, such as retrieval, addition and removal of database records. The database cursor characteristic of traversal makes cursors akin to the programming language concept of iterator. Cursors are used by database programmers to process individual rows returned by database system queries; rather than manipulating a whole result set at once, a cursor enables the rows in a result set to be processed sequentially. In SQL procedures, a cursor makes it possible to define a result set (a set of data rows) and perform complex logic on a row by row basis. By using the same mechanics, an SQL procedure can also define a result set and return it directly to the caller of the SQL procedure or to a client application. A cursor can be viewed as a pointer to one row in a set of rows. The cursor can only reference one row at a time, but can move to other rows of the result set as needed.
To use cursors in SQL procedures, you need to do the following:
- Declare a cursor that defines a result set.
- Open the cursor to establish the result set.
- Fetch the data into local variables as needed from the cursor, one row at a time.
- Close the cursor when done.
To work with cursors you must use the corresponding SQL statements: DECLARE CURSOR, OPEN, FETCH, and CLOSE.
This section introduces the ways the SQL:2003 standard defines how to use cursors in applications in embedded SQL. Not all application bindings for relational database systems adhere to that standard, and some (such as CLI or JDBC) use a different interface. A programmer makes a cursor known to the DBMS by using a DECLARE CURSOR statement and assigning the cursor a (compulsory) name:
DECLARE cursor_name CURSOR FOR SELECT ... FROM ...
Before code can access the data, it must open the cursor with the OPEN statement. Directly following a successful opening, the cursor is positioned before the first row in the result set. Programs position cursors on a specific row in the result set with the FETCH statement. A fetch operation transfers the data of the row into the application.
FETCH cursor_name INTO ...
Once an application has processed all available rows or the fetch operation is to be positioned on a non-existing row (compare scrollable cursors below), the DBMS returns a SQLSTATE '02000' (usually accompanied by an SQLCODE +100) to indicate the end of the result set. The final step involves closing the cursor using the CLOSE statement:
CLOSE cursor_name
After closing a cursor, a program can open it again, which implies that the DBMS re-evaluates the same query or a different query and builds a new result set.
Programmers may declare cursors as scrollable or not scrollable. The scrollability indicates the direction in which a cursor can move. With a non-scrollable (or forward-only) cursor, you can FETCH each row at most once, and the cursor automatically moves to the next row. After you fetch the last row, if you fetch again, you will put the cursor after the last row and get the following code: SQLSTATE 02000 (SQLCODE +100). A program may position a scrollable cursor anywhere in the result set using the FETCH SQL statement. The keyword SCROLL must be specified when declaring the cursor. The default is NO SCROLL, although different language bindings like JDBC may apply a different default.
DECLARE cursor_name sensitivity SCROLL CURSOR FOR SELECT ... FROM ...
The target position for a scrollable cursor can be specified relatively (from the current cursor position) or absolutely (from the beginning of the result set).
FETCH [ NEXT | PRIOR | FIRST | LAST ] FROM cursor_name
FETCH ABSOLUTE n FROM cursor_name
FETCH RELATIVE n FROM cursor_name
Scrollable cursors can potentially access the same row in the result set multiple times. Thus, data modifications (insert, update, delete operations) from other transactions could have an impact on the result set. A cursor can be SENSITIVE or INSENSITIVE to such data modifications. A sensitive cursor picks up data modifications impacting the result set of the cursor, and an insensitive cursor does not. Additionally, a cursor may be ASENSITIVE, in which case the DBMS tries to apply sensitivity as much as possible.
Cursors are usually closed automatically at the end of a transaction, i.e. when a COMMIT or ROLLBACK (or an implicit termination of the transaction) occurs. That behavior can be changed if the cursor is declared using the WITH HOLD clause. (The default is WITHOUT HOLD.) A holdable cursor is kept open over COMMIT and closed upon ROLLBACK. (Some DBMS deviate from this standard behavior and also keep holdable cursors open over ROLLBACK.)
DECLARE cursor_name CURSOR WITH HOLD FOR SELECT ... FROM ...
When a COMMIT occurs, a holdable cursor is positioned before the next row. Thus, a positioned UPDATE or positioned DELETE statement will only succeed after a FETCH operation occurred first in the transaction. Note that JDBC defines cursors as holdable by default. This is done because JDBC also activates auto-commit by default. Due to the usual overhead associated with auto-commit and holdable cursors, both features should be explicitly deactivated at the connection level.
Positioned update/delete statements
Cursors can not only be used to fetch data from the DBMS into an application but also to identify a row in a table to be updated or deleted. The SQL:2003 standard defines positioned update and positioned delete SQL statements for that purpose. Such statements do not use a regular WHERE clause with predicates. Instead, a cursor identifies the row. The cursor must be opened and already positioned on a row by means of the FETCH statement.
UPDATE table_name SET ... WHERE CURRENT OF cursor_name
DELETE FROM table_name WHERE CURRENT OF cursor_name
The cursor must operate on an updatable result set in order to successfully execute a positioned update or delete statement. Otherwise, the DBMS would not know how to apply the data changes to the underlying tables referred to in the cursor.
Cursors in distributed transactions
Using cursors in distributed transactions (X/Open XA Environments), which are controlled using a transaction monitor, is no different from using cursors in non-distributed transactions. One has to pay attention when using holdable cursors, however. Connections can be used by different applications. Thus, once a transaction has been ended and committed, a subsequent transaction (running in a different application) could inherit existing holdable cursors. Therefore, an application developer has to be aware of that situation.
Cursors in XQuery
The XQuery language allows cursors to be created using the subsequence() function. The format is:
let $displayed-sequence := subsequence($result, $start, $item-count)
Where $result is the result of the initial XQuery, $start is the item number to start and $item-count is the number of items to return.
Equivalently this can also be done using a predicate:
let $displayed-sequence := $result[$start to $end]
Where $end is the end of the sequence. For complete examples, see the XQuery Wikibook.
Disadvantages of cursors
The following information may vary depending on the specific database system. Fetching a row from the cursor may result in a network round trip each time. This uses much more network bandwidth than would ordinarily be needed for the execution of a single SQL statement like DELETE. Repeated network round trips can severely impact the speed of the operation using the cursor. Some DBMSs try to reduce this impact by using block fetch. Block fetch implies that multiple rows are sent together from the server to the client. The client stores a whole block of rows in a local buffer and retrieves the rows from there until that buffer is exhausted. Cursors allocate resources on the server, for instance locks, packages, processes, temporary storage, etc. For example, Microsoft SQL Server implements cursors by creating a temporary table and populating it with the query's result set. If a cursor is not properly closed (deallocated), the resources will not be freed until the SQL session (connection) itself is closed. This wasting of resources on the server can not only lead to performance degradations but also to failures.
EMPLOYEES TABLE
SQL> DESC EMPLOYEES_DETAILS;
Name            NULL?     TYPE
--------------- --------- -------------
EMPLOYEE_ID     NOT NULL  NUMBER(6)
FIRST_NAME                VARCHAR2(20)
LAST_NAME       NOT NULL  VARCHAR2(25)
EMAIL           NOT NULL  VARCHAR2(30)
PHONE_NUMBER              VARCHAR2(20)
HIRE_DATE       NOT NULL  DATE
JOB_ID          NOT NULL  VARCHAR2(10)
SALARY                    NUMBER(8,2)
COMMISSION_PCT            NUMBER(2,2)
MANAGER_ID                NUMBER(6)
DEPARTMENT_ID             NUMBER(4)
SAMPLE CURSOR KNOWN AS EE
CREATE OR REPLACE PROCEDURE EE AS
BEGIN
  DECLARE
    v_employeeID EMPLOYEES_DETAILS.EMPLOYEE_ID%TYPE;
    v_FirstName  EMPLOYEES_DETAILS.FIRST_NAME%TYPE;
    v_LASTName   EMPLOYEES_DETAILS.LAST_NAME%TYPE;
    v_JOB_ID     EMPLOYEES_DETAILS.JOB_ID%TYPE := 'IT_PROG';
    CURSOR c_EMPLOYEES_DETAILS IS
      SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME
      FROM EMPLOYEES_DETAILS
      WHERE JOB_ID = v_JOB_ID;  -- compare against the variable, not the string literal 'v_JOB_ID'
  BEGIN
    OPEN c_EMPLOYEES_DETAILS;
    LOOP
      FETCH c_EMPLOYEES_DETAILS INTO v_employeeID, v_FirstName, v_LASTName;
      EXIT WHEN c_EMPLOYEES_DETAILS%NOTFOUND;  -- test immediately after the fetch
      DBMS_OUTPUT.put_line(v_employeeID);
      DBMS_OUTPUT.put_line(v_FirstName);
      DBMS_OUTPUT.put_line(v_LASTName);
    END LOOP;
    CLOSE c_EMPLOYEES_DETAILS;
  END;
END;
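For comparison, the same declare/open/fetch/close pattern can be sketched with Python's DB-API, whose cursors are forward-only by default, much like the NO SCROLL default described above. This is an added illustration, not part of the article; the table and rows are invented.
# Cursor traversal with Python's built-in sqlite3 module (DB-API 2.0).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_id INTEGER, first_name TEXT, job_id TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, "Ada", "IT_PROG"), (2, "Grace", "IT_PROG"), (3, "Edgar", "SA_REP")],
)

cur = conn.cursor()                                    # roughly: DECLARE
cur.execute(
    "SELECT employee_id, first_name FROM employees WHERE job_id = ?",
    ("IT_PROG",),
)                                                      # roughly: OPEN
row = cur.fetchone()                                   # FETCH, one row at a time
while row is not None:
    print(row)
    row = cur.fetchone()
cur.close()                                            # CLOSE releases cursor resources
conn.close()
As in the SQL example, the loop fetches one row at a time and the cursor is closed explicitly so that any resources held for the result set are released.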
http://en.wikipedia.org/wiki/Cursor_(databases)
4.0625
A hearing impairment is a decrease in one's ability to hear (i.e. perceive auditory information). While some cases of hearing loss are reversible with medical treatment, many lead to a permanent disability (often called deafness). If the hearing loss occurs at a young age, it may interfere with the acquisition of spoken language and social development. Hearing aids and cochlear implants may alleviate some of the problems caused by hearing impairment, but are often insufficient. People who have hearing impairments, especially those who develop a hearing problem later in life, often require support and technical adaptations as part of the rehabilitation process. There are four major causes of hearing loss: genetic, disease processes affecting the ear, medication and physical trauma.
Hearing loss can be inherited. Both dominant and recessive genes exist which can cause mild to profound impairment. If a family has a dominant gene for deafness it will persist across generations because it will manifest itself in the offspring even if it is inherited from only one parent. If a family had genetic hearing impairment caused by a recessive gene it will not always be apparent, as it will have to be passed on to offspring from both parents. Dominant and recessive hearing impairment can be syndromic or nonsyndromic. Recent gene mapping has identified dozens of nonsyndromic dominant (DFNA#) and recessive (DFNB#) forms of deafness.
- The most common type of congenital hearing impairment in developed countries is DFNB1, also known as Connexin 26 deafness or GJB2-related deafness.
- The most common dominant syndromic forms of hearing impairment include Stickler syndrome and Waardenburg syndrome.
- The most common recessive syndromic forms of hearing impairment are Pendred syndrome, Large vestibular aqueduct syndrome and Usher syndrome.
Disease or illness
- Measles may result in auditory nerve damage
- Meningitis may damage the auditory nerve or the cochlea
- Autoimmune disease has only recently been recognised as a potential cause for cochlear damage. Although probably rare, it is possible for autoimmune processes to target the cochlea specifically, without symptoms affecting other organs. Wegener's granulomatosis is one of the autoimmune conditions that may precipitate hearing loss.
- Presbyacusis is deafness due to loss of perception of high tones, mainly in the elderly. It is considered a degenerative process, and it is poorly understood why some elderly people develop presbyacusis while others do not.
- Mumps (Epidemic parotitis) may result in profound sensorineural hearing loss (90 dB or more), unilateral (one ear) or bilateral (both ears).
- Adenoids that do not disappear by adolescence may continue to grow and may obstruct the Eustachian tube, causing conductive hearing impairment and nasal infections that can spread to the middle ear.
- AIDS and ARC patients frequently experience auditory system anomalies.
- HIV (and subsequent opportunistic infections) may directly affect the cochlea and central auditory system.
- Chlamydia may cause hearing loss in newborns to whom the disease has been passed at birth.
- Fetal alcohol syndrome is reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake.
- Premature birth results in sensorineural hearing loss approximately 5% of the time.
- Syphilis is commonly transmitted from pregnant women to their fetuses, and about a third of the infected children will eventually become deaf. - Otosclerosis is a hardening of the stapes (or stirrup) in the middle ear and causes conductive hearing loss. Read more at Wikipedia.org
http://the-medical-dictionary.com/hearing_impairment.htm
4.0625
Fewer than one percent of the roughly 100,000 thunderstorms that occur each year in the United States spawn tornadoes. They tend to appear on the trailing edge of a storm, beginning high off the ground as intricate combinations of three ingredients: wind, temperature, and moisture. The process begins with the storm drawing warm, humid air into itself. The warm air rises to a point where the moisture condenses into rain. This sets up an opposing motion: a cool downdraft. Changes in wind speed or direction at higher altitudes, called wind shear, can knit the cool air and warm air together into a horizontally spinning tube, like a giant invisible steamroller. If further wind shifts tilt this tube so that one end touches the ground, a tornado is born.
http://www.nationalgeographic.com/eye/tornadoes/phenomena.html
4.03125
Woodson chose the second week of February for Negro History Week because it marks the birthdays of Frederick Douglass and Abraham Lincoln. The celebration is still sponsored by the Association for the Study of African American Life and History. In 1976, the Association expanded the celebration from one week to a month-long event.

Timeline of Black Political History
- 1787: The U.S. Constitution states that Congress may not ban the slave trade until 1808.
- 1808: Congress bans the importation of slaves from Africa.
- 1820: The Missouri Compromise bans slavery north of the southern boundary of Missouri.
- 1831: Nat Turner, an enslaved African-American preacher, leads a slave uprising in Southampton County, Virginia. The militia quells the rebellion; Turner is hanged; and Virginia tightens its slave laws.
- 1847: Frederick Douglass launches his abolitionist newspaper.
- 1852: Harriet Beecher Stowe publishes Uncle Tom's Cabin, one of the most influential anti-slavery publications.
- 1854: Congress passes the Kansas-Nebraska Act, repealing the Missouri Compromise of 1820 and renewing tensions between anti- and pro-slavery political factions.
- 1857: In the Dred Scott decision, the U.S. Supreme Court holds that the Missouri Compromise was unconstitutional and that Congress did not have the right to ban slavery. Three of the justices also held that a black "whose ancestors were … sold as slaves" was not entitled to the rights of a federal citizen and therefore had no standing in court.
- 1863, January 1: President Lincoln issues the Emancipation Proclamation, declaring "that all persons held as slaves" within the Confederate states "are, and henceforward shall be free."
- 1865, June 19: Slavery in the United States finally ends when 250,000 slaves in Texas are informed that the Civil War had ended two months earlier.
- 1865, December 6: The 13th Amendment to the Constitution is ratified, prohibiting slavery.
- 1868: The 14th Amendment to the Constitution defines citizenship as individuals born or naturalized in the United States, including those born as slaves. This nullifies the Dred Scott decision.
- 1869: Howard University's law school becomes the country's first black law school.
- 1870, February 3: The 15th Amendment is ratified, granting blacks the right to vote.
- 1870, February 25: The first black U.S. senator, Hiram R. Revels (1822-1901), takes the oath of office.
- 1896: In Plessy v. Ferguson, the Supreme Court rules that racial segregation is constitutional.
- 1909: The National Association for the Advancement of Colored People is founded in New York.
- 1947: Jackie Robinson breaks Major League Baseball's color barrier when he signs a contract with the Brooklyn Dodgers.
- 1948: President Harry S. Truman issues an executive order integrating the U.S. armed forces.
- 1954, May 17: In Brown v. Board of Education of Topeka, KS, the U.S. Supreme Court declares that racial segregation in schools is unconstitutional.
- 1957: Martin Luther King, Charles K. Steele, and Fred L. Shuttlesworth found the Southern Christian Leadership Conference (SCLC), a civil rights group.
- 1964, July 2: President Johnson signs the Civil Rights Act, the most sweeping civil rights legislation since Reconstruction. It prohibits discrimination of all kinds based on race, color, religion, or national origin.
- 1968, April 4: Martin Luther King, Jr., is assassinated in Memphis, TN.
- 1968, April 11: President Johnson signs the Civil Rights Act of 1968.
- 1978, June 28: In Regents of the University of California v. Bakke, the Supreme Court upholds the constitutionality of affirmative action.
- 2008, November 4: Barack Obama becomes the first bi-racial American to be elected president of the United States.
http://uspolitics.about.com/od/politicaljunkies/a/black_history_m.htm
4
The arctic tundra is at the top of the world -- around the North Pole. Animals are adapted to handle cold winters and to breed and raise young quickly in the very short and cool summers. Temperatures during the arctic winter can dip to -60 F (-51 C)! The average temperature of the warmest month is between 50 F (10 C) and 32 F (0 C). Sometimes as few as 55 days per year have a mean temperature higher than 32 F (0 C). The average annual temperature is only 10 to 20 F (-12 C to -6 C). The soil is often frozen. Permafrost, or permanently frozen ground, usually exists within a meter of the surface. Water is unavailable during most of the year. Annual precipitation is very low, usually less than 10 inches (25 centimeters).
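The Celsius values in parentheses follow from the standard conversion C = (F - 32) * 5/9. A quick Python sketch (the inputs are simply the Fahrenheit figures quoted above) reproduces them:

def fahrenheit_to_celsius(f):
    # Standard conversion formula: C = (F - 32) * 5/9.
    return (f - 32) * 5.0 / 9.0

for f in (-60, 32, 50, 10, 20):
    print("%6.1f F = %6.1f C" % (f, fahrenheit_to_celsius(f)))
# Prints -51.1, 0.0, 10.0, -12.2 and -6.7 C, in line with the rounded figures quoted above.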
http://www.mbgnet.net/sets/tundra/facts.htm
4.1875
Oct. 17, 2007 With the goal of saving lives and preventing environmental and structural damage during real tsunamis, Princeton Engineering researchers created experimental mini-tsunamis in Oregon this summer. Existing models for predicting the impact of tsunamis focus on the incoming rush of water while largely ignoring the effect of the powerful forces that a tsunami wave can exert on the earth beneath when it draws back into the ocean. “This was the first experiment of this kind and it will allow us to develop a realistic model to show us what really happens to the sand during a tsunami,” said Yin Lu “Julie” Young, an assistant professor of civil and environmental engineering at Princeton University’s School of Engineering and Applied Science. Young said that knowing how to construct buildings that stay in place during a tsunami would be especially crucial to survival in certain locations, such as the Waikiki Beach in Hawaii. “This is absolutely necessary in a place like Waikiki because in the event of a tsunami there is no place to run,” she said. “It is too populated and the near-shore bathymetry [the topography of the ocean bed] is too flat. The building has to stay intact so that people can evacuate vertically.” Young and her colleagues created model-scale tsunamis at Oregon State University’s Tsunami Wave Basin, the largest experimental facility dedicated to the study of tsunamis in North America. She is the lead investigator from Princeton on the study of tsunami-induced sediment transport, part of the larger NSF-sponsored Network for Earthquake Engineering Simulation (NEES) program. The experimental wave bed consisted of two flumes, each about 7 feet wide with a base of natural Oregon beach sand. It took three weeks of hard work to set up an experimental mini-tsunami, where each wave lasted only a few minutes, according to Young. “It is a difficult and time-consuming experiment to run due to the difficulty with sand, which changes the bathymetry with every wave,” she said. The OSU wave generator produced large waves that – like a tsunami -- had only a crest and no trough. The concrete walls of the flumes had built-in windows that allowed Young and fellow researchers to observe and videotape the action underwater. Four cameras perched overhead also recorded the experiments. The ultimate goal of Young’s experiments this summer is to establish “performance-based tsunami engineering” – basically guidelines for building structures that will withstand tsunamis. Young and her colleagues are particularly interested in the study of enhanced sediment transport and potential “liquefaction” of the soil, which occurs when a tsunami wave recedes and exerts a sudden decrease in downward pressure on the saturated land; this in turn can cause the sand to liquefy and to flow out as a heavy slurry. Liquefaction can lead to the eventual collapse of buildings, highways or bridge abutments. Tsunamis can also cause landslides and the formation of gigantic potholes called “scours,” which can force underground oil pipelines to pop, resulting in environmental damage. Most previous tsunami experiments have taken place over smooth, rigid, impervious bases such as glass, steel or concrete and thus have failed to take into account how the wave can profoundly alter the ground beneath it. The problem of sediment transport is especially complex because of so many variables in the dynamics of sand and water, according to Young. “Sediment transport during tsunamis hasn’t been studied well at all,” said Young. 
“We plan to use this research to create a benchmark test that everyone can use to compare their numerical predictions. Ultimately we want to come up with a design procedure that can give a sense of the risk and the reliability of a structure and its foundation.”

Young’s research this summer was part of a larger NSF-funded project known as NEESR-SG: Development of Performance Based Tsunami Engineering (PBTE). Young is co-principal investigator on that project and her collaborators include Ron Riggs, Ian Robertson, and Kwok Fai Cheung of University of Hawaii at Manoa, and Solomon Yim of Oregon State University. Young was assisted in her research this summer by Princeton Engineering graduate students Xiao Heng, Tan Ting, and Sun Waiching. Adedotun Moronkeji, an undergraduate from the University of Missouri-Rolla, also assisted in the research as part of the NSF-funded Research Experience for Undergraduates program. Trevor Clark, a high school student from Oregon, helped with the experiments and processing the video of the experimental waves.
http://www.sciencedaily.com/releases/2007/10/071014173915.htm
4.28125
The Constitution of the Commonwealth of Kentucky is the document that governs the Commonwealth of Kentucky. It was first adopted in 1792 and has since been rewritten three times and amended many more. The later versions were adopted in 1799, 1850 and 1891.

The 1792 Constitution

The first constitutional convention of Kentucky was called by Colonel Benjamin Logan on December 27, 1784 in Danville, the capital of Kentucky County, Virginia. Over the next eight years, a total of ten constitutional conventions were called, each making some progress toward a viable constitution. The state's first constitution was accepted by the United States Congress on June 1, 1792, making Kentucky the fifteenth state.

The 1792 Constitution had several similarities to the United States Constitution in that it provided for three branches of government – legislative, executive, and judicial – and a bicameral legislature called the General Assembly. The document contained a bill of rights, and called for an electoral college to elect senators and the state's governor. (Representatives were chosen by popular election.) Some relatively new ideas were included in the 1792 Constitution. One was the stipulation that the General Assembly vote by ballot instead of by voice. There was also a requirement that representation in the General Assembly be based on population, not geography. The 1792 Constitution was seen as an experiment and called for a re-evaluation of the document at the end of the century.

The 1799 Constitution

A second constitutional convention was called for by the voters of Kentucky in 1799. The 1799 Constitution abolished the electoral college, allowing senators, representatives, the governor, and the newly created office of lieutenant governor to be directly elected. In addition to appointing judges, the governor was given the power to appoint a number of local offices including sheriffs, coroners, and justices of the peace. With all of these expansions in the governor's power, the 1799 Constitution also placed term limits on the governor, stipulating that a governor could not succeed himself in office for a period of seven years. Membership in both houses of the General Assembly was also limited.

In some ways, the 1799 Constitution was a regression. The progressive idea of voting by ballot in the General Assembly was removed. Neither of the first two Kentucky constitutions provided a method of amendment, and the 1799 Constitution made it even more difficult to call a constitutional convention. The 1799 Constitution was regressive for blacks as well. It retained the pro-slavery provisions of the original constitution untouched, but went a step further by disenfranchising free blacks.

The 1850 Constitution

It was not long before some of the weaknesses in the 1799 Constitution were exposed. As early as 1828, some in the General Assembly began calling for a new constitutional convention. However, because the 1799 Constitution made the calling of a convention such an arduous task, it took more than twenty years to call the convention, which finally convened in Frankfort on October 1, 1849. One major item of dissatisfaction with the 1799 Constitution was the appointment of so many officials by the governor. This was addressed in the 1850 Constitution by making all state officials, even judges, popularly elected and imposing term limits on these offices.
While the Kentucky Constitution had always provided for protection of slave property, pro-slavery forces sought and received even greater protections in the 1850 Constitution. Among the new provisions were a requirement that slaves and their offspring remain in the state, and that ministers of religion – thought to be largely anti-slavery – were prohibited from holding the office of governor or seats in the General Assembly.

The bulk of the reforms in the 1850 Constitution, however, were reserved for the General Assembly, whose spending had spiraled out of control. Membership in the Senate was fixed at 38; in the House the number was fixed at 100. Sessions of the General Assembly were limited to sixty days biennially, requiring a two-thirds majority to extend them. The 1850 Constitution also created a sinking fund for the liquidation of the state's debt, which had climbed to $4.5 million. To prevent the debt from climbing too high in the future, the 1850 Constitution mandated a maximum of $500,000 of indebtedness for the state. At the time, this represented about a year's worth of revenue for the state, but this provision remains in the current Kentucky Constitution, even though receipts in the 2001-02 fiscal year were approximately $6.5 billion. Another dated provision of the 1850 Constitution that survives in the present Constitution is the ineligibility for public office of anyone who had participated in a duel since the ratification of the 1850 Constitution. While the relevance of this prohibition may be disputed now, it could potentially have derailed Governor William Goebel's eligibility for public office in the 1890s.

The 1891 Constitution

Ratification of the Thirteenth, Fourteenth, and Fifteenth Amendments to the U.S. Constitution following the Civil War provided the impetus for another constitutional convention, since many of the existing constitution's provisions protected slave property and were now at odds with the Federal Constitution. However, calling a convention required a majority of the voters in the previous two elections to vote in favor of one, a measure that failed every two years from 1873 to 1885, finally receiving the necessary majority in 1888 and 1889 after the General Assembly called for a registration of all eligible voters in 1887. The convention began September 8, 1890.

For the first time, ratification of the constitution required a referendum of the citizens. At the same time, the 1891 Constitution did little to ameliorate the difficult process of calling a constitutional convention. This resulted in failed calls for subsequent constitutional conventions in 1931, 1947, 1960, and 1977. The 1891 Constitution did, for the first time, provide a means of amending itself, which has been used by the General Assembly to keep a century-old document somewhat current. Judicial decisions have also helped to adapt the current constitution to modern times. For example, the 1891 Constitution limited state officials' salaries to $5000. A 1949 amendment raised this number to $12,000, but the difficulty of keeping the number up-to-date quickly became apparent. The 1962 Kentucky Supreme Court case Matthews v. Allen addressed this problem by opining that the only way to keep circuit judges' salaries adequate, as required by Section 133 of the constitution, was to allow the General Assembly to adjust the $12,000 figure in Section 246 to account for the value of a dollar in 1949.
Despite provisions that some claim are antiquated, the 1891 Constitution (as amended) remains the constitution that governs the Commonwealth today.

Notable amendments

Several amendments to the Kentucky Constitution were enacted in 1992. One important amendment lifted the restriction that the Governor could not succeed himself or herself in office. Per the 1992 amendment, the incumbent can seek one additional term before becoming ineligible for four years. The amendment was drafted so that it did not apply to the then-current holder of the office (Brereton Jones), which meant that the first Governor to whom the amendment applied was elected in 1995 (Paul Patton).

The 1992 amendments to Kentucky's Constitution significantly changed the office of Lieutenant Governor. Previously, the Lieutenant Governor became acting Governor whenever the Governor was out of state. Since the amendments took effect, the Lieutenant Governor only takes over gubernatorial powers when the Governor is incapacitated. The amendments also removed the Lieutenant Governor's duties in the Senate — previously, the Lieutenant Governor had cast the tie-breaking vote in the Senate. Finally, the amendments allow candidates for Governor and Lieutenant Governor to run on a single ticket. Prior to the amendments, the two offices were sometimes inhabited by members of different parties.

See also: Lieutenant Governor of Kentucky

In 2004, Kentucky became the fourth state to send a constitutional amendment banning same-sex unions to the state's voters. On Election Day of that year, Kentucky joined 10 other states in passing such an amendment, with voters passing it by a 3-to-1 margin. The text of the amendment reads: "Only a marriage between one man and one woman shall be valid or recognized as a marriage in Kentucky. A legal status identical or substantially similar to that of marriage for unmarried individuals shall not be valid or recognized."

- "Constitutional Background". Kentucky Government: Informational Bulletin No. 137 (Revised). Frankfort, Kentucky: Kentucky Legislative Research Commission. February 2003.
- Kleber, John E., ed. (1992). "Constitutions". The Kentucky Encyclopedia. Associate editors: Thomas D. Clark, Lowell H. Harrison, and James C. Klotter. Lexington, Kentucky: The University Press of Kentucky. ISBN 0-8131-1772-0.
- McQueen, Keven (2001). "William Goebel: Assassinated Governor". Offbeat Kentuckians: Legends to Lunatics. Ill. by Kyle McQueen. Kuttawa, Kentucky: McClanahan Publishing House. ISBN 0-913383-80-5.
- Foust, Michael (2004-04-14). "Reversal: Ky. lawmakers send marriage amendment to voters". Baptist Press. Retrieved 2007-03-08.
- Peterson, Kavan (2004-11-03). "50-state rundown on gay marriage laws". StateLine.org. Retrieved 2007-03-08.
- "Election 2004 - Ballot Measures". CNN.com. Retrieved 2007-03-08.
- "Kentucky Constitution: Section 233A". Legislative Research Commission. Retrieved 2007-03-08.
- Full texts of the various versions of the Kentucky Constitution
- Report of the debates & proceedings of the Convention for the revision of the Constitution of the state of Kentucky (Frankfort, A.G. Hodges & Co., 1849)
http://en.wikipedia.org/wiki/Kentucky_Constitution
4.09375
Complete Neanderthal Genome Sequenced

Initial analysis suggests that up to two percent of the DNA in the genome of present-day humans outside of Africa originated in Neanderthals or in Neanderthals' ancestors. This new data suggests evolution did not proceed in a straight line. Rather, evolution appears to be a messier process, with emerging species merging back into the lines from which they diverged.

Goers: Researchers have produced the first whole-genome sequence of the Neanderthal. Neanderthals diverged from the primate line that led to present-day humans some 400,000 years ago in Africa. Dr. Jim Mullikin is the Acting Director of the NIH Intramural Sequencing Center and a computational geneticist. He recently worked with an international team of scientists to sequence the complete genome of the Neanderthal.

Mullikin: It has been an amazing journey with the group, led by Svante Pääbo at the Max Planck Institute in Leipzig, Germany.

Goers: The researchers compared DNA samples from the bones of three female Neanderthals who lived some 40,000 years ago.

Mullikin: And they have been in these caves, buried under layers and layers of sediment for all those tens of thousands of years. And, when they unearthed those bones, they used a clean room environment, they were able to take some of the bones and take small bits of the bones and make it into a powder. And from that, they could extract the DNA from the powder material, and some of it, not much of it, but some fraction of it, was actually from the Neanderthal that died in that position.

Goers: Although Neanderthals are the closest distinct relatives of modern humans, one of the obstacles in decoding the Neanderthal genome was the possibility of contamination of the sample with modern human DNA. Dr. Mullikin explains:

Mullikin: So they had to take extra special care in handling the DNA, prior to sequencing it. They were short fragments of DNA that had been altered by the eons of time that they had been underground. They change in a distinctive pattern so that we know which ones are real, that are the old pieces, from the ones that might be contamination. But contamination was one of the main focuses of understanding what might be contamination and what was real Neanderthal DNA. We tried to remove as much of the contamination as possible from the sequence. We got it down to below 1 percent.

Goers: The researchers compared DNA samples from the Neanderthal bones to samples from five present-day humans from China, France, New Guinea, southern Africa and western Africa.

Mullikin: After we were able to sequence it and essentially map it back to other closest living relatives, Homo sapiens and chimpanzees, we aligned the sequence back to both of those species and were able to find the differences that are unique to the Neanderthal, and more important, what are similar between Neanderthal and modern humans and modern chimpanzees.

Goers: Dr. Mullikin and his fellow researchers found that the modern humans in the study who were not from Africa share about two percent of their genome with Neanderthals. They hypothesize that humans migrating from Africa bred with Neanderthals in Europe, leading to the Neanderthal DNA present in some humans today. For more information on the Neanderthal genome, visit www.genome.gov. This is Elizabeth Goers, National Institutes of Health, Bethesda, Maryland.
http://www.nih.gov/news/radio/jun2010/20100608neanderthal.htm
4.3125
Watch the grey wolves. See the climate changes.

Lesson Overview
The grey wolf may be the next “Canary in the Coal Mine” for climate change.

Grade Level
Middle school – Grade 7

Time Required
One or two classes

Curriculum Connection (Province/Territory and course)
The Ontario Curriculum: History and Geography, Grades Seven and Eight

Link to the Return to the Wild Virtual Exhibition
www.canadiangeographic.ca/vmc

Additional Resources, Materials and Equipment Required
- Canadian Atlas Online www.canadiangeographic.ca/atlas/
- Hinterland Who’s Who www.hww.ca
- Haliburton Forest Wolf Centre www.haliburtonforest.com/wolf.html
- University of California Berkeley press release, March 2005: “Wolves alleviate impact of climate change on food supply, finds new study” by Sarah Yang (http://berkeley.edu/news/media/releases/2005/03/21_wolvesclimate.shtml)

Under the Theme of Interaction, students relate the effects of climate change to the habitat and patterns of the grey wolf.

Learning Outcomes
By the end of the lesson, students will be able to:
- describe the habitat and range of the grey wolf populations of North America
- explain adaptations of the wolves to their habitat and ecosystem
- relate climate change to the impact of the grey wolves on their territories and ecosystems

The Lesson (Teacher Activity/Student Activity)

Introduction
Describe to students the tradition of using canaries in coal mines as a way to indicate air quality in mine shafts. Review the concept of climate change and how it affects habitats. Ask students to describe how other animals can be used to predict natural events. Give some examples of climate change.

Lesson Development
Have students research the habitat and range of the grey wolf populations of North America. Review terminology such as: ecosystem, climate, climate change, predator, scavenger, range, habitat. Hand out the grey wolf Work Sheet provided to gather information. Complete the work sheet using the web sites, Hinterland Who’s Who references and other material available.

Conclusion
Collect the Work Sheet answers and correct them. Lead a summary discussion of how wolf activities and the scavengers that follow them can indicate changes in climate patterns and ecosystems. Students complete the work sheet and submit it. Discuss how the activities of the wolf packs are indicators of climate change.

Lesson Extension
Use the Hinterland Who’s Who web site video guide to have interested students make a one-minute video of the grey wolf (Make your own HWW: www.hww.ca/hww.asp?id=53&pid=3). Visit or contact the Haliburton Forest Wolf Centre for additional information on the habitat, activities and distribution of grey wolf populations in North America. Watch the movie based on Canadian writer Farley Mowat’s autobiographical book ”Never Cry Wolf.”

Assessment of Student Learning
Grade the worksheet. Include a question on the grey wolf and climate change in the next test.

Further Reading
“Never Cry Wolf” by Farley Mowat (1963; Back Bay Books, 2001)

Link to Canadian National Standards for Geography
Essential Element #5: Environment and Society – Environmental Issues. The effect of climate change on the ecosystem and adaptation of grey wolf populations.
Geographic Skill #1 – Asking Geographic Questions. Plan how to answer geographic questions.

STUDENT ACTIVITY WORKSHEET
1a. Look at the map “Range of the Grey Wolf” on the Return to the Wild web site. Describe the regions of North America where Grey Wolves live.
1b. What is a “habitat”?
1c. What type of habitat do Grey Wolves need?
2a. What food does the Grey Wolf usually eat?
2b. Why do Wolves hunt in packs?
2c. Why are packs so effective at getting their food?
3a. What are scavengers?
3b. What types of scavenger follow wolves when they hunt?
4a. How does a good supply of game to hunt indicate climate change?
4b. How do scavengers benefit from increased wolf kills?
5. Explain a connection between the “Canary in the Coal Mine” and the wolves’ successful hunt for food as environmental indicators.

© 2010 Royal Canadian Geographical Society. All Rights Reserved.
http://www.museevirtuel-virtualmuseum.ca/edu/ViewLoitDa.do?method=preview&lang=EN&id=16554
4.0625
Back | Next STUDENTS' PROJECTS > GANDHI - A BIOGRAPHY FOR CHILDREN AND BEGINNERS > Chapter 15 Meanwhile, there were developments in the field of constitutional reform. The Simon Commission which had been appointed to review the Act of 1919 had submitted its report. But no action had been taken. In 1935 the British Parliament passed a new Constitution for India, and it came into force in 1937. Nationalist India was totally disappointed. There was no real transfer of power even in the States, (then called Provinces) not to speak of the Centre. Some powers in the States were transferred to a Council of Ministers. But even these were subject to the veto of the British Governor. Important subjects were reserved for the Governor. Franchise was limited. All the same, the State Assemblies were to be elected. It gave the Congress an opportunity to prove its public support. If the Congress kept away from the polls the Assemblies and Governments would be formed by elements that were keeping away from the national struggle. The Congress was in a dilemma. Though Gandhi was not a member or office bearer of the Congress, his advice was important for the Congress. He was the one who had his finger on the pulse of the masses. He alone could lead the country if the experiment failed, and it came to a struggle again. Gandhi was not against participation in the Assemblies if the Congress could use them to solve the crying problems of the people, like drinking water, sanitation, welfare of the Harijans and tribals, primary and secondary education, alcoholism and so on. The Congress decided to contest the elections. It won massive majorities in many States, and was in a position to form Governments in seven out of the eleven States. But it would form Governments only if the Governor gave an assurance that they would not intervene or use his overriding powers to thwart the policies and decisions of the people's representatives. After long discussions, the Congress felt assured that the Governors would act as constitutional heads. Congress Ministries were formed in most States, with leaders like Rajagopalachari, Govind Ballabh Pant, B. G. Kher, Srikrishna Sinha, Gopinath Bordoloi, Dr. Khan Saheb and others becoming "Prime Ministers" in the States (Chief Ministers were called Prime Ministers at that time). The Governments set examples in probity, accountability, austerity and concern for the problems of the people. But the Governments could not remain in office for long. On the 3rd of September 1939, the Second World War broke out. As soon as Britain declared war on Germany, the Viceroy too declared that India was at war with Germany. There was not even the semblance of consultation with the Prime Ministers in the States or the representatives of the people. The Congress Ministries resigned declaring that the hollowness of the claims of the new Constitution had been exposed. What was the Congress to do during the war? Were they to help actively in the war effort? Prominent leaders of the Congress like Jawaharlal Nehru, Maulana Azad, Rajagopalachari and others were supporters of the Allies. They were totally against Hitler and Mussolini, against Nazism and Fascism. They supported Britain and the Allies because they were fighting for democracy— against dictatorship. They wanted India to take full part in the fight for democracy. But how could India do so, how could the leaders enthuse the people of India to cooperate in the war, if democracy was meant only for Britain, and not meant for India as well. 
They wanted the Congress to tell the Viceroy that the Congress would support Indian participation in the War effort if the British Government would declare that at the end of the war, India would attain full freedom. They would participate in a national Government if it was set up on these terms. Gandhi himself supported the Allied cause. He met the Viceroy. He could not help shedding tears when he thought of the destruction of the historic city of London or of the woes of the people. But he was against all wars. The British attitude to India had disillusioned him. Yet his sympathy for the Allied cause and people who had to suffer the terrible consequences of the war came from his heart. However he was a votary of non-violence. He believed that all wars were ruinous. They would cause suffering, but would not solve any issue. He wanted to work for a world without wars. Only non-violence could save humanity and secure justice. He could act as an advisor of the Government and of the Congress if they wanted him to lead them to a world without war. The Congress was not willing to accept this position. It had not accepted 'pacifism'. It had never accepted the view that Independent India would have no army, and would not use arms in self-defence. It, therefore, reluctantly and respectfully decided to differ from Gandhi and offer co-operation to the Government in its war effort if a provisional Government was set up. The Government did not care to accept the offer of the Congress. It made a statement which was a virtual incitement to communal and obstructive elements to persist in obstruction. It virtually assured them that the progress towards self-government would depend on their consent. Congress felt insulted and humiliated. The country too felt that its hand of friendship and co-operation had been rejected. Some kind of protest was called for, even to protect national honour. They did not want to disrupt the war effort. Nor did Gandhi want to embarrass the Government when it was fighting for the survival of Britain and the Allies. Congress turned or returned to Gandhi and asked him to resume leadership. Gandhi hit upon a new form of Civil Disobedience, — Individual Civil Disobedience. Individuals chosen or approved by Gandhi would defy the orders of the Government by notifying the Government of their intention to do so. They would address the public and declare that India had not been consulted before the Government proclaimed that India was at war. Vinoba Bhave was chosen as the first Satyagrahi. In phases, members of the Working Committee, Legislators, Office bearers of the party and others offered Satyagraha in this manner. Tens of thousands were lodged in prison. Meanwhile, the war was going against the Allies. Country after country had been overrun in Europe. The soldiers of the Axis powers — Germany and Italy — were on the shores of the Mediterranean. Britain was fighting a heroic battle for survival. Japan had entered the war, and had made spectacular gains, sweeping down the Asian coast. America had rallied to the defence of the Allies. President Roosevelt of America felt that some move should be made to solve the "Indian problem" and induct the Indian leaders into the struggle against the Axis powers. The pressure of circumstances was too much even for Churchill, the war time Prime Minister of Britain, who was a known opponent of Indian independence. 
The British War Cabinet drafted its proposals for future constitutional change in India, and sent Sir Stafford Cripps, a well-known friend of India, to persuade Indian leaders to accept the proposals. The proposals were in two parts. The long-term proposals visualized that after the war, India would acquire the right to be a full Dominion (with the right to opt out of the Empire). But the States or provinces and the Rulers would be free to remain out of the new Dominion and retain direct relationship with the British Crown. In the immediate present, there would be a new Executive Council to assist the Viceroy, but it will not have the rights of a cabinet of the type that ruled in England. Gandhi who was summoned by Sir Stafford looked at the proposals and advised Sir Staffard to take the next plane home as the proposals were not acceptable to India. He returned to his Ashram at Wardha. The Congress leaders had long discussions with Sir Stafford, and finally rejected the proposal because it would pave the way for a fragmentation of India. In the immediate present, it would only enable the Government to put up a facade that Indians were part of the Government. Sir Stafford's mission was a failure. He returned to England, and blamed Gandhi, although Gandhi had taken no part in the negotiations between the Congress and the British Government. India felt frustrated. There was a mood of indignation and anxiety. The war was no longer distant for India. The Japanese had overrun the entire Asian coast and Singapore. They had occupied Burma and were knocking at the door of India at Manipur. It looked as though British invincibility was a myth. The British Army was being forced to withdraw from country after country. It was withdrawing after destroying crops and other materials to ensure that the Japanese did not have access to them. What was to happen to the people of these countries? They could not run away. Even their food was being destroyed. How would they survive? Who would defend them? Britain had surrendered its responsibility. Who would defend India? What would happen if people lost the will to defend? The situation called for a drastic remedy. Britain might leave India and go as it had left other countries, leaving people defenseless and hopeless. India had to be taken out of this morass of helplessness and fear. No people can become free or remain free without the will to resist. India should discover its will to resist. Who can help the country to do this, without losing time? The Congress and the country turned to Gandhi.
http://mkgandhi.org/beginnersbio/chapter15.htm
4.09375
Conservation Planning (Systematic) Author: Gillian Maree and team ~ CSIR ( Article Type: Explanation ) The science of systematic conservation planning aims to identify and set aside representative examples of all biodiversity to ‘biodiversity banks’ as proactive protection against future modifications. Such conserved areas become heritage resources for sharing our biodiversity heritage with future generations as well as benchmarks against which human modification of ecosystems can be measured in the long term. Biodiversity pattern and the ecological and evolutionary processes that maintain and generate species are the primary considerations of the planning process. Loss of biodiversity inevitably leads to ecosystem degradation and the subsequent loss of important ecosystem services. Moreover, this tends to harm poor rural communities more directly, since poor people have limited assets and infrastructure and are more dependent on common property resources for their livelihoods, while the wealthy are buffered against loss of ecosystem services by being able to purchase basic necessities and scarce commodities. Our path towards sustainable development, poverty alleviation and enhanced human wellbeing for all is therefore completely dependent on how effectively we manage and protect biodiversity. The current reality is that species are becoming extinct at a rate estimated to be 100 to 1 000 times greater than rates recorded through recent geological time. Amphibians and freshwater fish are thought to be, respectively, the world’s most and second-most threatened groups of vertebrates. Poor planning in the identification and designation of areas for biodiversity conservation has exacerbated the extinction problem. Historically, most areas that have been designated for conservation purposes were selected in an ad hoc manner and not specifically for conservation purposes. More recently, and partly in order to avoid conflicts, conservation efforts have focused on areas of relatively low human population or low economic potential – which in turn often results from factors such as unproductive soils, steep slopes or high altitudes. However, the majority of biological diversity (as measured by the number of species) tends to occur in lower elevations, warmer climates, and coastal areas that are more attractive to human occupation and use. These areas tend to have a much higher associated opportunity cost, and are generally disproportionately degraded in comparison to less-populated areas. Traditionally, reasons for conserving biodiversity focused on biodiversity pattern, emphasising the intrinsic importance that people place on species and habitats. Supporters of this approach believe that the future generations have the right to enjoy these species and habitats. Even more compelling arguments in recent times (such as the Millennium Ecosystem Assessment) have linked biodiversity to ecosystem services, which are strongly correlated to poverty reduction and quality of life. These arguments reason that our dependence on biodiversity is absolute: without it, human beings would not be able to survive. All our food is directly or indirectly obtained from plant species and other photosynthetic organisms. 
Apart from the direct benefits of biodiversity from the harvest of domesticated or wild species for food, fibres, fuel, pharmaceuticals and many other purposes, humans also derive benefit from its influence on climate regulation, water purification, soil formation, flood prevention and nutrient cycling; and the aesthetic and cultural impact is obvious. These benefits to people fall into the broad category of ‘ecosystem services’ (see also Ecosystem services and human wellbeing below), and can be summarised into provisioning, regulating and cultural services that affect people directly, and indirect supporting services which maintain the other services. These services affect human wellbeing through impacting security, quality of life, health and social relations, all of which influence freedom and choices available to people.

Ecosystem services and human wellbeing

Ecosystem services are the benefits that humans derive from functioning ecosystems. These include provisioning services, such as food and water; regulating services, such as water regulation and purification; supporting services required to maintain other services, such as nutrient cycling; and cultural services, such as recreation and spiritual services. Changes in these services affect human wellbeing through impacts on security, the basic material for a good life, health, and social and cultural relations. These constituents of wellbeing are, in turn, influenced by and have influence on the freedoms and choices available to people. When ecosystem services are impaired this inevitably leads to a narrowing of livelihood choices and an increase in the vulnerability of the poor.
http://www.enviropaedia.com/topic/default.php?topic_id=52
4.25
The recent media attention on bullying in the United States has brought a greater awareness to parents, teachers, and mental health professionals over the last couple of years. While schools continue to attempt to enforce a zero tolerance policy on bullying, this problem continues to persist. It is important to be aware of how this affects children, but we also need to equip them with the skills to deal with bullying effectively.

Skill 1: Keep Them Involved
A bully’s goal is to make their victim feel alone and powerless. Children can feel empowered when they make and maintain connections with loyal friends and supportive adults. Helping to identify a teacher, social worker, etc., in the school can help your child have a “go to” person at school.

Skill 2: Awareness
Oftentimes children refuse to tell adults about bullying because they are convinced “it won’t help.” Many times, bullying can occur when the adults are not aware of what is going on. Bullies often look for unsupervised opportunities. Children are left feeling isolated and alone, which can perpetuate bullying even more.

Skill 3: Act Quickly
The longer a bully has power over a victim, the stronger the hold becomes. Once the bully knows he has hooked you as his victim, he will do it more. It is important to recognize the signs and tell an adult. Ignoring the behavior often is not the answer to bullying. Helping your child feel comfortable talking to you or another adult makes it possible to act as quickly as possible.

Skill 4: Respond Assertively
The more a bully thinks he can pick on a victim without a response, the more he will do it. Children who master the skills of assertiveness are comfortable responding to the bully in a way that does not invite further abuse. Remember, passive responses invite further abuse.

Skill 5: Use Non-Verbal Communication
When teaching your child the skills of assertive communication, it is helpful to practice using body language to reinforce words.
• Maintain eye contact
• Keep your voice calm and even
• Stand an appropriate distance from the bully
• Use the bully's name when speaking to him
http://woodbury-middlebury.patch.com/groups/michael-stokess-blog/p/bp--five-skills-to-help-your-child-with-bullying
4.1875
Just because harmonics is becoming a more prevalent problem, that doesn't mean the subject is getting any easier to understand.

Harmonics are AC voltages and currents with frequencies that are integer multiples of the fundamental frequency. On a 60-Hz system, this could include 2nd order harmonics (120 Hz), 3rd order harmonics (180 Hz), 4th order harmonics (240 Hz), and so on. Normally, only odd-order harmonics (3rd, 5th, 7th, 9th) occur on a 3-phase power system. If you observe even-order harmonics on a 3-phase system, you more than likely have a defective rectifier in your system.

If you connect an oscilloscope to a 120V receptacle, the image on the screen usually isn't a perfect sine wave. It may be very close, but it will likely be different in one of several ways. It might be slightly flattened or dimpled as the magnitude approaches its positive and negative maximum values (Fig. 1). Or perhaps the sine wave is narrowed near the extreme values, giving the waveform a peaky appearance (Fig. 2 below). More than likely, random deviations from the perfect sinusoid occur at specific locations on the sine wave during every cycle (Fig. 3 below).

The flattened and dimpled sinusoid in Fig. 1 has the mathematical equation y = sin(x) + 0.25 sin(3x). This means a 60-Hz sinusoid (the fundamental frequency) added to a second sinusoid with a frequency three times that of the fundamental (180 Hz) and an amplitude one-quarter (0.25 times) that of the fundamental produces a waveform similar to the first part of Fig. 1. The 180-Hz sinusoid is called the third harmonic, since its frequency is three times that of the fundamental frequency.

Similarly, the peaky sinusoid in Fig. 2 has the mathematical equation y = sin(x) - 0.25 sin(3x). This waveform has the same composition as the first waveform, except the third harmonic component is out of phase with the fundamental frequency, as indicated by the negative sign preceding the "0.25 sin(3x)" term. This subtle mathematical difference produces a very different appearance in the waveform.

The waveform in Fig. 3 contains several other harmonics in addition to the third harmonic. Some are in phase with the fundamental frequency and others out of phase. As the harmonic spectrum becomes richer in harmonics, the waveform takes on a more complex appearance, indicating more deviation from the ideal sinusoid. A rich harmonic spectrum may completely obscure the fundamental frequency sinusoid, making a sine wave unrecognizable.

Analyzing harmonics

When the magnitudes and orders of harmonics are known, reconstructing the distorted waveform is simple. Adding the harmonics together, point by point, produces the distorted waveform. The waveform in Fig. 1 is synthesized in Fig. 4 by adding the magnitudes of the two components, the fundamental frequency (red waveform) and the third harmonic (blue waveform), for each value of x, which results in the green waveform.

Decomposing a distorted waveform into its harmonic components is considerably more difficult. This process requires Fourier analysis, which involves a fair amount of calculus. However, electronic equipment has been developed to perform this analysis on a real-time basis. One manufacturer offers a 3-phase power analyzer that can digitally capture 3-phase waveforms and perform a host of analysis functions, including Fourier analysis, to determine harmonic content. Another manufacturer offers similar capabilities for single-phase applications.
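To make the decomposition step concrete, here is a minimal sketch of the same Fourier analysis done numerically in Python (numpy is assumed; this is an illustration, not the method used by any particular analyzer), applied to the Fig. 1 waveform y = sin(x) + 0.25 sin(3x):

import numpy as np

# Sample exactly one period of the fundamental so each FFT bin lines up
# with an integer harmonic order (bin k = k-th harmonic).
N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
y = np.sin(x) + 0.25 * np.sin(3.0 * x)   # the distorted Fig. 1 waveform

spectrum = np.fft.rfft(y)
amplitudes = 2.0 * np.abs(spectrum) / N  # scale so a pure sine of amplitude A reports A

for order in range(1, 6):
    print("harmonic %d: amplitude %.3f" % (order, amplitudes[order]))
# Expected output: roughly 1.000 at the fundamental, 0.250 at the 3rd harmonic,
# and essentially zero everywhere else.

Dividing each harmonic amplitude by the fundamental gives the individual harmonic distortion directly; here the 3rd harmonic comes out at 25%.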
Easy-to-use analyzers of this kind can help detect and diagnose harmonic-related problems on most power systems.

What causes harmonics?

If harmonic voltages aren't generated intentionally, where do they come from? One common source of harmonics is iron core devices like transformers. The magnetic characteristics of iron are almost linear over a certain range of flux density, but quickly saturate as the flux density increases. This nonlinear magnetic characteristic is described by a hysteresis curve. Because of the nonlinear hysteresis curve, the excitation current waveform isn't sinusoidal. A Fourier analysis of the excitation current waveform reveals a significant third harmonic component, making it similar to the waveform shown in Fig. 2.

Core iron isn't the only source of harmonics. Generators themselves produce some 5th harmonic voltages due to magnetic flux distortions that occur near the stator slots and nonsinusoidal flux distribution across the air gap. Other producers of harmonics include nonlinear loads like rectifiers, inverters, adjustable-speed motor drives, welders, arc furnaces, voltage controllers, and frequency converters. Semiconductor switching devices produce significant harmonic voltages as they abruptly chop voltage waveforms during their transition between conducting and cutoff states. Inverter circuits are notorious for producing harmonics, and are in widespread use today. An adjustable-speed motor drive is one application that makes use of inverter circuits, often using pulse width modulation (PWM) synthesis to produce the AC output voltage. Various synthesis methods produce different harmonic spectra. Regardless of the method used to produce an AC output voltage from a DC input voltage, harmonics will be present on both sides of the inverter and must often be mitigated.

Effects of harmonics

Besides distorting the shape of the voltage and current sinusoids, what other effects do harmonics cause? Since harmonic voltages produce harmonic currents with frequencies considerably higher than the power system fundamental frequency, these currents encounter much higher impedances as they propagate through the power system than does the fundamental frequency current. This is due to "skin effect," which is the tendency for higher frequency currents to flow near the surface of the conductor. Since little of the high-frequency current penetrates far beneath the surface of the conductor, less cross-sectional area is used by the current. As the effective cross section of the conductor is reduced, the effective resistance of the conductor is increased. This is expressed in the following equation:

R = ρL / A

where R is the resistance of the conductor, ρ is the resistivity of the conductor material, L is the length of the conductor, and A is the cross-sectional area of the conductor.

The higher resistance encountered by the harmonic currents will produce a significant heating of the conductor, since heat produced — or power lost — in a conductor is I²R, where I is the current flowing through the conductor. This increased heating effect is often noticed in two particular parts of the power system: neutral conductors and transformer windings. Harmonics with orders that are odd multiples of the number three (3rd, 9th, 15th, and so on) are particularly troublesome, since they behave like zero-sequence currents. These harmonics, called triplen harmonics, are additive due to their zero-sequence-like behavior.
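To see why the triplen harmonics add rather than cancel, here is a small numeric sketch (Python with numpy is assumed; the 0.2 per-unit third-harmonic content is an arbitrary illustration, not a measured value):

import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 2000, endpoint=False)   # one 60-Hz cycle
w = 2.0 * np.pi * 60.0
shifts = [0.0, -2.0 * np.pi / 3.0, -4.0 * np.pi / 3.0]    # phase A, B, C displacements

# Each phase current: 1.0 per-unit fundamental plus 0.2 per-unit third harmonic.
phase_currents = [np.sin(w * t + s) + 0.2 * np.sin(3.0 * (w * t + s)) for s in shifts]
neutral = sum(phase_currents)

print("peak phase current:   %.2f per unit" % np.max(np.abs(phase_currents[0])))
print("peak neutral current: %.2f per unit" % np.max(np.abs(neutral)))
# The balanced fundamentals cancel in the neutral, but each third-harmonic term is
# shifted by 3 x 120 = 360 degrees, i.e. it is in phase with the others, so the neutral
# carries roughly 3 x 0.2 = 0.6 per unit of pure third-harmonic current.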
They flow in the system neutral and circulate in delta-connected transformer windings, generating excessive conductor heating in their wake.

Reducing the effects of harmonics

Because of the adverse effect of harmonics on power system components, the IEEE developed standard 519-1992 to define recommended practices for harmonic control. This standard also stipulates the maximum allowable harmonic distortion allowed in the voltage and current waveforms on various types of systems.

Two approaches are available for mitigating the effects of excessive heating due to harmonics, and a combination of the two approaches is often implemented. One strategy is to reduce the magnitude of the harmonic waveforms, usually by filtering. The other method is to use system components that can handle the harmonics more effectively, such as finely stranded conductors and k-factor transformers.

Harmonic filters can be constructed by adding an inductance (L) in series with a power factor correction capacitor (C). The series L-C circuit can be tuned for a frequency close to that of the troublesome harmonic, which is often the 5th. By tuning the filter in this way, you can attenuate the unwanted harmonic. Filtering isn't the only means of reducing harmonics. The switching angles of an inverter can be preselected to eliminate some harmonics in the output voltage. This can be a very cost-effective means of reducing inverter-produced harmonics.

Since skin effect is responsible for the increased heating caused by harmonic currents, using conductors with larger surface areas will lessen the heating effects. This can be done by using finely stranded conductors, since the effective surface area of the conductor is the sum of the surface area of each strand. Specially designed transformers called k-factor transformers are also advantageous when harmonic currents are prevalent. They parallel small conductors in their windings to reduce skin effect and incorporate special core designs to reduce the saturation effects at the higher flux frequencies produced by the harmonics.

You should also increase the size of neutral conductors to better accommodate triplen harmonics. Per the FPN in 210.4(A) and 220.22 of the 2002 NEC, "A 3-phase, 4-wire wye-connected power system used to supply power to nonlinear loads may necessitate that the power system design allow for the possibility of high harmonic neutral currents." And per 310.15(B)(4)(c), "On a 4-wire, 3-phase wye circuit where the major portion of the load consists of nonlinear loads, harmonic currents are present on the neutral conductor: the neutral shall therefore be considered a current-carrying conductor." It's important to note that the duct bank ampacity tables in B.310.5 through B.310.7 are designed for a maximum harmonic loading on the neutral conductor of 50% of the phase currents.

Harmonics will undoubtedly continue to become more of a concern as more equipment that produces them is added to electrical systems. But if adequately considered during the initial design of the system, harmonics can be managed and their detrimental effects avoided.

Fehr is an independent engineering consultant located in Clearwater, Fla.
http://ecmweb.com/print/archive/harmonics-made-simple
4.21875
In meteorology, precipitation (also known as one of the classes of hydrometeors, which are atmospheric water phenomena) is any product of the condensation of atmospheric water vapour that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail. Precipitation occurs when a local portion of the atmosphere becomes saturated with water vapour, so that the water condenses and "precipitates". Thus, fog and mist are not precipitation but suspensions, because the water vapour does not condense sufficiently to precipitate. Two processes, possibly acting together, can lead to air becoming saturated: cooling the air or adding water vapour to the air. Generally, precipitation will fall to the surface; an exception is virga, which evaporates before reaching the surface.

Precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud. Raindrops range in size from oblate, pancake-like shapes for larger drops, to small spheres for smaller drops. Unlike raindrops, snowflakes grow in a variety of different shapes and patterns, determined by the temperature and humidity characteristics of the air the snowflake moves through on its way to the ground. While snow and ice pellets require temperatures close to the ground to be near or below freezing, hail can occur during much warmer temperature regimes due to the process of its formation.

Moisture overriding associated with weather fronts is an overall major method of precipitation production. If enough moisture and upward motion is present, precipitation falls from convective clouds such as cumulonimbus and can organize into narrow rainbands. Where relatively warm water bodies are present, for example due to water evaporation from lakes, lake-effect snowfall becomes a concern downwind of the warm lakes within the cold cyclonic flow around the backside of extratropical cyclones. Lake-effect snowfall can be locally heavy. Thundersnow is possible within a cyclone's comma head and within lake effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes.

Precipitation is a major component of the water cycle, and is responsible for depositing the fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year; 398,000 cubic kilometres (95,000 cu mi) of it over the oceans and 107,000 cubic kilometres (26,000 cu mi) over land. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in), but over land it is only 715 millimetres (28.1 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes.

Any phenomenon which was at some point produced due to condensation or precipitation of moisture within the Earth's atmosphere is known as a hydrometeor. Particles composed of fallen precipitation which fell onto the Earth's surface can become hydrometeors if blown off the landscape by wind. Formations due to condensation such as clouds, haze, fog, and mist are composed of hydrometeors.
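The averaged depths quoted above can be sanity-checked by dividing annual precipitation volume by surface area. A rough Python sketch (the Earth surface area of about 510 million km² and land area of about 149 million km² are assumed here; they are not given in the text):

# Annual precipitation volumes quoted in the text, in cubic kilometres.
global_volume_km3 = 505_000
land_volume_km3 = 107_000

# Assumed reference areas in square kilometres.
earth_area_km2 = 510e6
land_area_km2 = 149e6

# depth (km) = volume / area; multiply by 1e6 to convert kilometres to millimetres.
print("global average: %.0f mm" % (global_volume_km3 / earth_area_km2 * 1e6))  # about 990 mm
print("land average:   %.0f mm" % (land_volume_km3 / land_area_km2 * 1e6))     # about 718 mm, quoted as 715 mm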
All precipitation types are hydrometeors by definition, including virga, which is precipitation that evaporates before reaching the ground. Particles removed from the Earth's surface by wind, such as blowing snow and blowing sea spray, are also hydrometeors.
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle". Frozen forms of precipitation include snow, ice needles, ice pellets, hail, and graupel.
How the air becomes saturated
Cooling air to its dew point
The dew point is the temperature to which a parcel of air must be cooled in order to become saturated; at that point (unless super-saturation occurs) the water vapour condenses to liquid water. Water vapour normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. An elevated portion of a frontal zone forces broad areas of lift, which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.
There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.
Adding moisture to the air
The main ways water vapour is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains.
Coalescence occurs when water droplets fuse to create larger water droplets, or when water droplets freeze onto an ice crystal, which is known as the Bergeron process.
The fall rate of very small droplets is negligible, hence clouds do not fall out of the sky; precipitation will only occur when these coalesce into larger drops. When air turbulence occurs, water droplets collide, producing larger droplets. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain. Raindrops have sizes ranging from 0.1 millimetres (0.0039 in) to 9 millimetres (0.35 in) mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Contrary to the cartoon pictures of raindrops, their shape does not resemble a teardrop. Intensity and duration of rainfall are usually inversely related; that is, high intensity storms are likely to be of short duration, while low intensity storms can have a long duration. Rain drops associated with melting hail tend to be larger than other rain drops. The METAR code for rain is RA, while the coding for rain showers is SHRA.
Ice pellets
Ice pellets or sleet are a form of precipitation consisting of small, translucent balls of ice. Ice pellets are usually (but not always) smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL. Ice pellets form when a layer of above-freezing air exists with sub-freezing air both above and below. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too small, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front.
Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted again. Hail has a diameter of 5 millimetres (0.20 in) or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least 6.4 millimetres (0.25 in). GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil. Stones just larger than golf ball-sized are one of the most frequently reported hail sizes. Hailstones can grow to 15 centimetres (6 in) and weigh more than 0.5 kilograms (1.1 lb). In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud.
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. Once a droplet has frozen, it grows in the supersaturated environment.
Because water droplets are more numerous than the ice crystals, the crystals are able to grow to hundreds of micrometers or millimeters in size at the expense of the water droplets. This process is known as the Wegener–Bergeron–Findeisen process. The corresponding depletion of water vapor causes the droplets to evaporate, meaning that the ice crystals grow at the droplets' expense. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Guinness World Records lists the world's largest snowflakes as those of January 1887 at Fort Keogh, Montana; allegedly one measured 38 cm (15 inches) wide. The exact details of the sticking mechanism remain a subject of research. Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white in color due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around −2 °C (28 °F), snowflakes can form in threefold symmetry: triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing. No two snowflakes are alike; they grow at different rates and in different patterns depending on the changing temperature and humidity within the atmosphere that the snowflake falls through on its way to the ground. The METAR code for snow is SN, while snow showers are coded SHSN.
Diamond dust
Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching −40 °F (−40 °C) due to air with slightly higher moisture from aloft mixing with colder, surface-based air. They are made of simple ice crystals that are hexagonal in shape. The METAR identifier for diamond dust within international hourly weather reports is IC.
Frontal activity
Stratiform or dynamic precipitation occurs as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as over surface cold fronts, and over and ahead of warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. Precipitation may also occur on celestial bodies other than Earth: when it gets cold, Mars has precipitation that most likely takes the form of ice needles, rather than rain or snow.
Convective rain, or showery precipitation, occurs from convective clouds, e.g., cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.
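The Wegener–Bergeron–Findeisen growth described above works because, at the same sub-freezing temperature, the saturation vapour pressure over ice is lower than over liquid water, so air that is merely saturated with respect to droplets is supersaturated with respect to ice crystals. A minimal sketch of that comparison is below; the Magnus-type fits used here are common textbook approximations, and the exact coefficients vary slightly between references, so treat the numbers as illustrative rather than authoritative.

import math

def sat_vapour_pressure_water(t_c):
    """Saturation vapour pressure over liquid water (hPa), Magnus-type fit."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def sat_vapour_pressure_ice(t_c):
    """Saturation vapour pressure over ice (hPa), Magnus-type fit."""
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

for t in (-5, -15, -25):
    e_w = sat_vapour_pressure_water(t)
    e_i = sat_vapour_pressure_ice(t)
    # Air saturated with respect to water is supersaturated with respect to ice
    # by this ratio, which is why ice crystals grow at the droplets' expense.
    print(f"{t:>4} degC  over water {e_w:5.2f} hPa  over ice {e_i:5.2f} hPa  ratio {e_w / e_i:4.2f}")

At -15 degC the ratio is roughly 1.15, i.e. about 15 percent supersaturation with respect to ice, which is ample for rapid crystal growth.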
Orographic effects
Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming leeward side, where a rain shadow is observed.
In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it has the second highest average annual rainfall on Earth, with 460 inches (12,000 mm). Storm systems affect the state with heavy rains between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover. In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America, forming the Great Basin and Mojave Deserts.
Extratropical cyclones can bring cold and dangerous conditions with heavy rain and snow, with winds exceeding 119 km/h (74 mph); such storms are sometimes referred to as windstorms in Europe. The band of precipitation that is associated with their warm front is often extensive, forced by weak upward vertical motion of air over the frontal boundary, which condenses as it cools and produces precipitation within an elongated band that is wide and stratiform, falling out of nimbostratus clouds. When moist air tries to dislodge an arctic air mass, overrunning snow can result within the poleward side of the elongated precipitation band. In the Northern Hemisphere, poleward is towards the North Pole, or north. Within the Southern Hemisphere, poleward is towards the South Pole, or south.
Southwest of extratropical cyclones, curved cyclonic flow bringing cold air across relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized snowfall, which can be understood as follows: large water bodies such as lakes efficiently store heat, which results in significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes.
In mountainous areas, heavy snowfall accumulates when air is forced to ascend the mountains and squeeze out precipitation along their windward slopes, which in cold conditions falls in the form of snow. Because of the ruggedness of terrain, forecasting the location of heavy snowfall remains a significant challenge.
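As a rough illustration of the lake-effect rule of thumb quoted above (a water-air temperature difference larger than about 13 °C), here is a tiny, hypothetical helper; the function name and the example temperatures are made up for illustration, and forecasters consider many more ingredients than this single threshold.

def lake_effect_potential(water_temp_c, air_temp_aloft_c, threshold_c=13.0):
    """Return True if the water-air temperature difference exceeds the
    ~13 degC rule of thumb for significant lake-effect snow."""
    return (water_temp_c - air_temp_aloft_c) >= threshold_c

# A 4 degC lake under a -12 degC airmass comfortably exceeds the threshold.
print(lake_effect_potential(4.0, -12.0))   # True
print(lake_effect_potential(4.0, -6.0))    # False (the difference is only 10 degC)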
Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough moves poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly. Soil nutrients diminish and erosion increases. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.
Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or a counterclockwise direction (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.
Large-scale geographical distribution
On the large scale, the highest precipitation amounts outside topography fall in the tropics, closely tied to the Intertropical Convergence Zone, itself the ascending branch of the Hadley cell. Mountainous locales near the equator in Colombia are amongst the wettest places on Earth. North and south of this are regions of descending air that form subtropical ridges where precipitation is low; the land surface underneath is usually arid, which forms most of the Earth's deserts. An exception to this rule is in Hawaii, where upslope flow due to the trade winds leads to one of the wettest locations on Earth. Otherwise, the flow of the Westerlies into the Rocky Mountains leads to the wettest, and at elevation snowiest, locations within North America. In Asia during the wet season, the flow of moist air into the Himalayas leads to some of the greatest rainfall amounts measured on Earth, in northeast India.
Measurement
The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100 mm (4 in) plastic and 200 mm (8 in) metal varieties. The inner cylinder is filled by 25 mm (1 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.01 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.01 in) markings.
After the inner cylinder is filled, the amount inside it is recorded and discarded; the cylinder is then refilled with the remaining rainfall from the outer cylinder, adding to the overall total, until the outer cylinder is empty. These gauges are used in the winter by removing the funnel and inner cylinder and allowing snow and freezing rain to collect inside the outer cylinder. Some add anti-freeze to their gauge so they do not have to melt the snow or ice that falls into the gauge. Once the snowfall/ice is finished accumulating, or as 300 mm (12 in) is approached, one can either bring the gauge inside to melt, or use lukewarm water to fill the inner cylinder in order to melt the frozen precipitation in the outer cylinder, keeping track of the warm fluid added, which is subsequently subtracted from the overall total once all the ice/snow is melted. Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge. The wedge and tipping bucket gauges have problems with snow. Attempts to compensate for snow/ice by warming the tipping bucket meet with limited success, since snow may sublimate if the gauge is kept much above freezing. Weighing gauges with antifreeze should do fine with snow, but again, the funnel needs to be removed before the event begins. For those looking to measure rainfall most inexpensively, a can that is cylindrical with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on what ruler is used to measure the rain. Any of the above rain gauges can be made at home, with enough know-how.
Once a precipitation measurement is made, it can be submitted through the Internet to networks that exist across the United States and elsewhere, such as CoCoRaHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather office will likely be interested in the measurement.
Return period
The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location. The term 1 in 10 year storm describes a rainfall event which is rare and is only likely to occur once every 10 years, so it has a 10 percent likelihood in any given year. The rainfall will be greater and the flooding will be worse than the worst storm expected in any single year. The term 1 in 100 year storm describes a rainfall event which is extremely rare and which will occur with a likelihood of only once in a century, so it has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding will be worse than that of a 1 in 10 year event. As with all probability events, it is possible to have multiple "1 in 100 year storms" in a single year; over 30 years, for example, the chance of seeing at least one such event is 1 − 0.99^30, or roughly 26 percent.
Role in Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar.
The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert. Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 millimetres (69 in) and 2,000 millimetres (79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 millimetres (30 in) and 1,270 millimetres (50 in) a year. Savannas are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. In the humid subtropical climate zone, winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east side of continents, roughly between latitudes 20° and 40° away from the equator. An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as in southeastern Australia, and is accompanied by plentiful precipitation year round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, southwestern South Africa and parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold, with continuous permafrost and little precipitation.
Effect on agriculture
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive; therefore rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive.
Changes due to global warming
Increasing temperatures tend to increase evaporation, which leads to more precipitation. Precipitation has generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter.
The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts, especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation, more evaporation, or both). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent per century since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (-9.25 percent).
Changes due to urban heat island
The urban heat island warms cities 0.6 °C (1.1 °F) to 5.6 °C (10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 20 miles (32 km) and 40 miles (64 km) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%.
Forecasting
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or the lowest levels of the atmosphere, which decreases with height. A QPF can be generated on a quantitative basis (forecasting amounts) or a qualitative basis (forecasting the probability of a specific amount). Radar imagery forecasting techniques show higher skill than model forecasts within six to seven hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
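As a minimal illustration of that verification step, the sketch below compares hypothetical forecast totals against gauge observations. The mean error (bias) and mean absolute error used here are only the simplest possible measures, not the specific skill scores used operationally, and the numbers are invented for the example.

def verify_qpf(forecast_mm, observed_mm):
    """Compare forecast precipitation totals (mm) against gauge observations (mm)."""
    pairs = list(zip(forecast_mm, observed_mm))
    bias = sum(f - o for f, o in pairs) / len(pairs)      # mean error: over- or under-forecasting
    mae = sum(abs(f - o) for f, o in pairs) / len(pairs)  # mean absolute error: typical miss
    return bias, mae

# Hypothetical 6-hour totals at five gauges.
forecast = [12.0, 8.0, 0.0, 25.0, 5.0]
observed = [10.0, 9.5, 1.0, 18.0, 4.0]
bias, mae = verify_qpf(forecast, observed)
print(f"bias {bias:+.1f} mm, MAE {mae:.1f} mm")   # bias +1.5 mm, MAE 2.5 mm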
http://en.wikipedia.org/wiki/Precipitation_(meteorology)
4.4375
The evolution of slavery is crucial to understanding the importance of currently standing issues. Slavery began in 1440 when Portugal started to trade slaves with West Africa. By the 16th century, Western Europeans had developed an organized system of trading slaves. However, the slave trade did not run as smoothly as expected. Slaves revolted and tried to flee the hardships of labor. Regardless of these attempts, slavery expanded, leading to the "Triangle Trade." This trade, between Europe, Africa and the Americas, is held responsible for the dispersal of Africans in the Western hemisphere. This organized system lasted until the 1800s. Shortly after the War of Independence there was an intended law to abolish slavery. This law was stalled when the United States allowed slavery to continue until 1800. A federal law passed in 1793, the Fugitive Slave Act, continued the slave trade and prohibited the freedom of the Africans.
In order to understand the origins of the Middle Passage one must know its purpose. The Middle Passage was a systematic process of retrieving Africans for the "Trans-Atlantic Slave Trade," as workers in the Atlantic world. This process combined the organization of voyages in Europe and the United States. In this time period, the slaves were transported to slave factories and held captive, robbed of their freedom. Before the Middle Passage began, a slave trade already existed in Africa, but this slave trade was much different than the one that Europe would create for the Africans as the Atlantic World developed. The difference was that in Europe the slaves were dehumanized and viewed as property, while in Africa, humans were still humans. Also, some of the reasons that Africans were enslaved in their own country traced back to their current status: if a person had committed a crime, was a prisoner of war, or had a debt that was unpaid, then they were enslaved by a greater force. David Brion Davis discusses this distinction in his book, The Problem of Slavery in Western Culture. The difference here is that the European nations captured innocent Africans for their own purposes. Unfortunately the Africans had adapted to the slave trade and begun to sell themselves to the European nations. The question is: did the Africans take part in their own downfall, or was it strictly the Europeans that conquered their freedom?
The slave trade was a very controversial issue. Many have argued and debated the purpose of the trade. The question raised is why Europeans needed this labor. The answer to this question is very broad and complex. One assumption is that Europeans needed this labor to flourish as nations. There was a vast amount of free land that needed to be tended to. Other assumptions made to validate the purpose of the slave trade were produced by Colin Palmer, professor of history at the New York Graduate Center. He proposes that Europeans could have gone back home, abandoning their conquests; they could have used paid labor, or they could have enslaved other Europeans. The best result, for them, came from free labor, which was forced on the Africans. Palmer also questions why Europeans used African labor in the Americas. ("Origins")
This large event raised ideas of racism and the rise of the "white supremacist." The idea which developed was that religion was a cornerstone to understanding the curse on Ham to be forever slaves. Whites developed the belief that Africans had no "souls." They were the symbol of indifference and hatred.
Whites saw the individual as inferior, creating stereotypes and hardships which were inflicted upon the Africans by the other nations because, in their theories, they were justifiable. It is no coincidence that these stereotypes allowed whites to ignore the finer points of the Christian religion.
Sugar cane was the number one crop that produced the growth for Europe. It was brought to the New World from Spain by Christopher Columbus, and the sugar produced there was later shipped to the rest of Europe. The growing sugar industry called for the usage of African slaves. The African slave labor and the plantations are what formed the Americas. The work performed on the plantations, which produced large quantities of sugar, created an even greater need for slaves: the enslaved Africans brought to the Atlantic World by the Middle Passage. Regardless of these facts, extensive research proved that the slave trade was productive to European growth. The sugar trade was extremely important to the development of the Atlantic world. The slave labor resulted in the plantations' production of large quantities of sugar. This resulted in the growth and flourishing of Europe.
The origins of the Middle Passage are connected to Europe's need for "free" labor. Without this labor, the European nations would not have flourished as they did. The issue at hand concerning the Middle Passage is why 25-30 million people were taken from their homeland to a foreign country to work. One of the questions raised is to what extent the Africans were involved in the slave trade. The Doloman tribe became wealthy from the traffic of slaves; their tribe's enemies were the subjects of the Transatlantic Slave Trade. Africans had become captives in their own country and later slaves in other nations. They were unjustly robbed of their freedom, dignity and happiness. These inevitable events were the factors that later created racism and the marginalization of black people.
Davis, David Brion. The Problem of Slavery in Western Culture. New York: Oxford University Press, 1966.
"Origins of the Middle Passage," The Middle Passage Web Site. http://www.3mill.com/middlepassage/origins.htm Accessed: 3-27-98
Related Web Sites
Slavery In Our Time
In the East, especially in the Arab-dominated nations of Sudan and Mauritania, slavery abounds. Tens, maybe hundreds, of thousands of black Africans have been captured by government troops and free-lance slavers and carried off into bondage. Often they are sold openly in "cattle markets," sometimes to domestic owners, sometimes to buyers from Chad, Libya, and the Persian Gulf states.
The Progress Report: Slavery Exists Even Today
An article originally appearing as an Op-Ed in the New York Times, July 13, 1994.
The American Anti-Slavery Group
The first national conference on the abolition of slavery since the Civil War meets February 25, 1999. Online petition to end slavery world-wide.
The Redemption of Slaves: Cash For Slaves - Photographic documentation of slaves bought back, out of slavery, by members of Christian Solidarity International from an Arab middleman who has secretly brought enslaved Africans back to their villages.
http://cghs.dadeschools.net/african-american/europe/slave_trade.htm
4.09375
Goal Directed Learning | Entry Level | Math
- Students will be able to increase their accuracy in adding and subtracting numbers.
- Using the interactive software Classworks, students will receive an initial math lesson, activities, a follow-up quiz and additional remediation if necessary.
- If students get a math problem wrong, they can click on the Help feature, which provides additional instruction.
Sunshine State Standards
NETS Profiles for Technology Literate Students
- Use keyboards and other common input and output devices (including adaptive devices when necessary) efficiently and effectively.
- Use general purpose productivity tools and peripherals to support personal productivity, remediate skill deficits, and facilitate learning throughout the curriculum.
- Use technology resources (e.g., calculators, data collection probes, videos, educational software) for problem solving, self-directed learning, and extended learning activities.
Grade Level: 3-5
http://www.fcit.usf.edu/matrix/lessons/goaldirected_entry_math.php
4.09375
Pronunciation refers to the way a word or a language is spoken, or the manner in which someone utters a word. If someone is said to have "correct pronunciation," this refers to both of these within a particular dialect. A word can be spoken in different ways by various individuals or groups, depending on many factors, such as the area in which they grew up, the area in which they now live, whether they have a speech or voice disorder, their ethnic group, their social class, or their education. Syllables are counted as the units of sound (phones) used in a language. The branch of linguistics which studies these units of sound is phonetics. Phones which play the same role are grouped together into classes called phonemes; the study of these is phonemics, phonematics or phonology. Phones as components of articulation are usually described using the International Phonetic Alphabet (IPA).
PRONUNCIATION (Lat. pronuntiatio, from pronuntiare, to proclaim, announce, pronounce) is the action of pronouncing, the manner of uttering an articulate vocal sound (see Phonetics and Voice). The original sense of the Latin, a public declaration, is preserved in the Spanish pronunciamiento, a manifesto or proclamation, especially as issued by a party of insurrection or revolution.
http://www.thefullwiki.org/Pronunciation
4.25
Feb. 2, 2011 Windshields that shed water so effectively that they don't need wipers. Ship hulls so slippery that they glide through the water more efficiently than ordinary hulls. These are some of the potential applications for graphene, one of the hottest new materials in the field of nanotechnology, raised by the research of James Dickerson, assistant professor of physics at Vanderbilt. Dickerson and his colleagues have figured out how to create a freestanding film of graphene oxide and alter its surface roughness so that it either causes water to bead up and run off or causes it to spread out in a thin layer. "Graphene films are transparent and, because they are made of carbon, they are very inexpensive to make," Dickerson said. "The technique that we use can be rapidly scaled up to produce it in commercial quantities." His approach is documented in an article published online by the journal ACS Nano on Nov. 26.
Graphene is made up of sheets of carbon atoms arranged in rings, something like molecular chicken wire. Not only is this one of the thinnest materials possible, but it is 10 times stronger than steel and conducts electricity better at room temperature than any other known material. Graphene's exotic properties have attracted widespread scientific interest, but Dickerson is one of the first to investigate how it interacts with water. Many scientists studying graphene make it using a dry method, called "mechanical cleavage," that involves rubbing or scraping graphite against a hard surface. The technique produces sheets that are both extremely thin and extremely fragile. Dickerson's method can produce sheets equally as thin but considerably stronger than those made by other techniques. It is already used commercially to produce a variety of different coatings and ceramics. Known as electrophoretic deposition, this "wet" technique uses an electric field within a liquid medium to create nanoparticle films that can be transferred to another surface.
Dickerson and his colleagues found that they could change the manner in which the graphene oxide particles assemble into a film by varying the pH of the liquid medium and the electric voltage used in the process. One pair of settings lays down the particles in a "rug" arrangement that creates a nearly atomically smooth surface. A different pair of settings causes the particles to clump into tiny "bricks," forming a bumpy and uneven surface. The researchers determined that the rug surface causes water to spread out in a thin layer, while the brick surface causes water to bead up and run off. Dickerson is pursuing an approach that could create film that enhances these water-associated properties, making them even more effective at either spreading out water or causing it to bead up and run off. There is considerable academic and commercial interest in the development of coatings with these enhanced properties, called super-hydrophobic and super-hydrophilic. Potential applications range from self-cleaning glasses and clothes to antifogging surfaces to corrosion protection and snow-load protection on buildings. However, effective, low-cost and durable coatings have yet to make it out of the laboratory. Dickerson's idea is to apply his basic procedure to "fluorographene," a fluorinated version of graphene that is a two-dimensional version of Teflon, recently produced by Kostya S. Novoselov and Andre K. Geim at the University of Manchester, who received the 2010 Nobel Prize for the discovery of graphene.
Normal fluorographene under tension should be considerably more effective in repelling water than graphene oxide. So there is a good chance a "brick" version and a "rug" version would have extreme water-associated effects, Dickerson figures. Graduate students Saad Hasan, John Rigueur, Robert Harl and Alex Krejci, postdoctoral research scientist Isabel Gonzalo-Juan and Associate Professor of Chemical and Biomolecular Engineering Bridget R. Rogers contributed to the research, which was funded by a Vanderbilt Discovery grant and by the National Science Foundation.
- Saad A. Hasan, John L. Rigueur, Robert R. Harl, Alex J. Krejci, Isabel Gonzalo-Juan, Bridget R. Rogers, James H. Dickerson. Transferable Graphene Oxide Films with Tunable Microstructures. ACS Nano, 2010; 4 (12): 7367. DOI: 10.1021/nn102152x
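A common way to quantify the "beads up versus spreads out" behaviour described in this article is the static water contact angle. The rough cutoffs in the sketch below are the conventional ones used in surface science (above about 90° counts as hydrophobic, above roughly 150° as superhydrophobic, and near 0-10° as superhydrophilic); they are not taken from the paper itself and are included only as a hedged illustration.

def classify_wetting(contact_angle_deg):
    """Rough wettability class from a static water contact angle (degrees)."""
    if contact_angle_deg >= 150:
        return "superhydrophobic (water beads up and rolls off)"
    if contact_angle_deg > 90:
        return "hydrophobic (water beads up)"
    if contact_angle_deg > 10:
        return "hydrophilic (water spreads)"
    return "superhydrophilic (water spreads into a thin film)"

for angle in (5, 70, 110, 160):
    print(angle, "->", classify_wetting(angle))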
http://www.sciencedaily.com/releases/2011/02/110201155628.htm
4.28125
Although we tend to think of the Earth as an amazingly hospitable planet, at several times in the past it seems to have done its best to kill us all—or at least all of our ancestors. Several of the Earth's mass extinctions occurred around the time of elevated volcanic activity, but the timing has been notoriously difficult to work out; the fossil beds that track the extinction rarely preserve the evidence of volcanic activity and vice-versa. A study that will appear in today's issue of Science provides a new window into the end-Triassic mass extinction, the event that ushered in the start of the era of the dinosaurs. The study provides a precise timing of events of the extinction through a combination of new dating work and a link to the Earth's orbital cycles preserved in rocks near Newark, New Jersey (because when you think end-Triassic, you think New Jersey, right?). The timing of events shows that the extinction occurred at the very onset of the volcanic activity that signaled the breakup of the supercontinent Pangea, but that life began to recover even as the later eruptions were taking place. Volcanic activity takes place all the time, and while it can be devastating for the local environment (and you can use a very large definition of "local" for supervolcanoes), this isn't enough to set off a global extinction. For that, you need what are termed "flood basalt eruptions." These events are just what the name implies: molten rock comes flooding out of a rift and covers thousands of square kilometers in rock, often at depths of hundreds of meters. Then, before the Earth recovers, you do it all over again. The largest of these eruptions, which formed the Siberian Traps, has had the total volume of rock that erupted estimated at above a million cubic kilometers. Events like these tend to kill stuff, and not just locally, where everything ends up buried under tons of rock. The eruptions send a lot of sulfur compounds into the atmosphere, which form a haze that reflects sunlight back to space. The result can create a sharp, temporary cooling in climate (think the Year Without a Summer on steroids). Once that clears out, the massive amounts of carbon dioxide that were also released kick in, swinging the temperatures up to new highs. The oceans don't get off lightly, either, as the same carbon dioxide causes a rapid ocean acidification, which makes it extremely difficult for any shelled creatures to survive. Given that combination, it's no real surprise that the Siberian Traps eruptions set off the biggest mass extinction we know about, the end-Permian, which is commonly known as the Great Dying. Other large scale eruptions have also been associated with mass extinctions, including the one that killed the dinosaurs—that one produced India's Deccan Traps. As noted above, getting the timing of everything down well enough to assign definitive blame is not easy; the role of the Deccan Traps eruptions in killing off the dinosaurs was only settled by finding something else to blame. But we now have a very good picture that ties eruptions nicely to the end-Triassic extinction, which wiped out some of the dinosaurs' competitors, allowing them to assume ascendancy. Getting the timing right The eruption that may have triggered the end-Triassic extinction was caused by the breakup of the supercontinent Pangea, and took place along the rift that would eventually form the Atlantic ocean. 
It left behind the Central Atlantic Magmatic Province (CAMP), which is now spread across four continents, from France and Spain through to West Africa on one side of the Atlantic, and along the US' East Coast and down into Brazil on the other. The CAMP was built from a series of eruptions that left behind giant layers of basaltic rock; residents of New York City can see one by looking across the Hudson at the Palisades of New Jersey. Certainly, the eruptions were big enough to bring about global devastation. But did they? It has been hard to tell. Basalts don't typically include the sorts of minerals that allow us to do highly accurate dating through radioactive decay. And the basalts themselves contain no fossils, making it difficult to figure out how the timing of various events fit together. The key to piecing things together came from an area in New Jersey called the Newark basin. Deposits in the basin contain a number of layers that turned out to be incredibly informative. To begin with, these included several distinctive layers of basalts caused by individual eruptions that contributed to the CAMP. These were interspersed with sedimentary deposits, some of which contained signs of life that allowed the process of extinction and recovery to be tracked. The sediments, however, also contained periodic changes that appear to line up with the Earth's orbital cycles (the same ones that drive the modern glacial cycles). Scientists hypothesized that these came about as local currents changed, altering the deposit of sediments, but there was no way to confirm that. The new work involved drilling in to eight sites around the CAMP, a number of which yielded zircons associated with individual eruptions. These crystals, which are rarely associated with magmas, can be used for highly accurate uranium-lead dating. That gave them the timing of the individual eruptions, which could then be traced back to the Newark basin deposits. These confirmed that the dates predicted by orbital cycles lined up with those from the uranium-lead dates. With all this information in hand, it was possible to put together a chronology of both the extinction and the eruptions. (The authors refer to having a combination of geochronology and astrochronology.) The data now suggests that it didn't take repeated heating-cooling cycles to start the extinction; the very first flood basalt eruption apparent in the CAMP seems to have started eliminating species. Three other major eruptions occurred over a 600,000 year period, but the first did the most damage. In fact, there are some indications that species were starting to recover even as the final eruptions were taking place. Oddly, the CAMP may eventually play a role in reversing problems similar to the ones its formation caused. The rapid increase in CO2 we're currently creating is raising fears of a rapid ocean acidification that could lead to widespread extinctions as well. The rocks of the CAMP will react chemically with carbon dioxide and are being considered as a potential site of carbon sequestration.
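The article doesn't spell out the dating math, but the principle behind the zircon uranium-lead ages it describes is compact enough to sketch. A minimal illustration using the standard 238U to 206Pb decay relation; the decay constant is the accepted value, while the measured ratio below is invented purely so the age lands near the end-Triassic boundary:
```python
import math

# Accepted decay constant for 238U -> 206Pb, per year.
LAMBDA_238 = 1.55125e-10

def u_pb_age(pb206_u238_ratio):
    """Age in years from a radiogenic 206Pb/238U ratio,
    using D/P = exp(lambda * t) - 1  =>  t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238

# Hypothetical ratio; real zircon work also uses the 235U/207Pb system
# and careful corrections for common lead and crystal zoning.
ratio = 0.03172
print(f"U-Pb age: {u_pb_age(ratio) / 1e6:.1f} million years")  # ~201 Myr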
http://arstechnica.com/science/2013/03/extinction-that-paved-way-for-dinosaurs-definitively-linked-to-volcanism/?comments=1
4.0625
How to Calculate Heat Emission from a Blackbody Using the Stefan-Boltzmann Constant
You can use the Stefan-Boltzmann constant to measure the amount of heat that is emitted by a blackbody. Physicists have determined that a blackbody is an object that absorbs 100 percent of the radiant energy striking it, and if it's in equilibrium with its surroundings, it emits all the radiant energy as well. In terms of reflection and absorption of radiation, most objects fall somewhere between mirrors, which reflect almost all light, and blackbodies, which absorb all light. The middle-of-the-road objects absorb some of the light striking them and emit it back into their surroundings. Shiny objects are shiny because they reflect most of the light, which means they don't have to emit as much heat radiantly into the room as other objects. Dark objects appear dark because they don't reflect much light, which means they have to emit more as radiant heat (usually lower down in the spectrum, where the radiation is infrared and can't be seen). So how much heat does a blackbody emit when it's at a certain temperature? The amount of heat radiated is proportional to the time you allow — twice as long, twice as much heat radiated, for example. So you can write the heat relation, where Q is heat and t is time, as follows: Q ∝ t. And as you may expect, the amount of heat radiated is proportional to the total area doing the radiating. So you can also write the relation as follows, where A is the area doing the radiating: Q ∝ At. Temperature, T, has to be in the equation somewhere — the hotter an object, the more heat radiated. Experimentally, physicists found that the amount of heat radiated is proportional to T to the fourth power, T^4. So now you have the following relation: Q ∝ AtT^4. To show the exact relationship between heat and the other variables, you need to include a constant, which physicists measured experimentally. To find the heat emitted by a blackbody, you use the Stefan-Boltzmann constant, σ, which goes in the equation like this: Q = σAtT^4, where σ = 5.67 x 10^-8 J s^-1 m^-2 K^-4. Note, however, that this constant works only for blackbodies that are perfect emitters. Most objects aren't perfect emitters, so you have to multiply by another constant most of the time — one that depends on the substance you're working with. The constant is called emissivity, e, which is a pure number between 0 (for a perfect reflector) and 1 (for a perfect absorber). The Stefan-Boltzmann law of radiation therefore says: Q = eσAtT^4, where T is the temperature in kelvins. A person's emissivity is about 0.98. At a body temperature of 37 degrees Celsius, how much heat does a person radiate each second? First, you have to factor in how much area does the radiating. If you know that the surface area of the human body is A = 1.7 m^2, you can find the total heat radiated by a person by plugging the numbers into the Stefan-Boltzmann law of radiation equation, making sure you convert the temperature to kelvins (37°C = 310 K): Q = (0.98)(5.67 x 10^-8)(1.7)(310)^4 t. Then dividing both sides by t, you get Q/t, which works out to about 870 joules per second, or roughly 870 watts. That may seem high, because skin temperature isn't the same as internal body temperature, but it's in the ballpark.
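As a quick check on the arithmetic, here is a minimal Python sketch of the same calculation. The emissivity, area, and temperature are the values quoted above; the function and variable names are mine:
```python
# Radiated power from the Stefan-Boltzmann law: P = Q/t = e * sigma * A * T^4
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, J s^-1 m^-2 K^-4

def radiated_power(emissivity, area_m2, temp_c):
    temp_k = temp_c + 273.15           # convert Celsius to kelvins
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# Values from the example above: e = 0.98, A = 1.7 m^2, T = 37 C
print(f"{radiated_power(0.98, 1.7, 37.0):.0f} W")  # roughly 870 W
```
Keep in mind this is the gross emission; the net heat loss is smaller, because the person also absorbs radiation from the cooler surroundings according to the same T^4 law.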
http://www.dummies.com/how-to/content/how-to-calculate-heat-emission-from-a-blackbody-us.html
4.34375
Rationale: Children must learn to decode many different correspondences in order to learn how to read. In this lesson, the children will learn the o = /o/ correspondence. The children will be able to recognize the letter o and associated with the phoneme /o/ in written and spoken words. Elkonin Boxes for each student Elkonin boxes for the teacher (h, o, t, r,d,m,p,l,I,b,f,g,s,p,l) A set of letters for each child and teacher Chart paper with the tongue twister on it – Oliver, the octopus, hopped In the Big Top Educational Insights Phonics Reader Worksheet containing pictures of words with the o = /o/ phoneme. Primary writing paper and pencil for each student 1. Remind the students that the letter o = /o/ only when it is by itself. If there is another vowel, it does not say this sound. " Today we are going to learn about the letter o. (Write the letter on the board) We are going to learn what one of the sounds this letter makes is. I hear this sound a lot when I go to the doctor. My doctor will tell me, “Open up and say Ahhh”. Has your doctor ever said this to you? Well this is the same sound the letter o makes. It says /o/. Let’s all say that sound together. Good, now this time, let’s pretend like we are going to hold a stick on our tongue when we say this. Remember, only use your hand, don’t really touch your tongue. Let’s all try this. /o/. 2. Let’s try and see if we can find this sound in our tongue twister. Oliver, the octopus, hopped to Oz. Now you all try saying it with me. Oliver, the octopus, hopped to Oz. Great! Now this time when we say it, I really want to hear our doctor sound. Let’s try and break of the /o/ sound in the words as we say them. I will show you how and then we will all try together. O – liver, the o – ctopus, h – o – pped to O – z. Now you try. O – liver, the o – ctopus, h – o – pped to O – z. 3. Now, let’s use our letter boxes (Elkonin Boxes) just like we did yesterday with the short I = /i/ sound. When we do this, we are going to review some of the words we already know using different vowels and we are also going to learn some new words with our doctor sound. Remember, we use the letter o to represent this sound. (Pass out the boxes and the letters needed.) OK. Please turn all of your letters over to the lower case side. Before we all try some, watch me to make sure we remember how to do this. Remember, each box stands for one sound. Sometimes our sounds use more then one letter, so it is important letter we listen for sounds in our words. I am going to try a word with 3 sounds in it. (Put out three letter boxes) The word I am going to try is hot. /h/ /o/ /t/. I hear 3 sounds in this word, so in the first box, I am going to put the first sound. /h/. I will put the letter h here. Next, I hear our doctor sound, so I will put my o in the middle. The last sound I hear is a t. I will put that letter here. Hot! Now, let’s let you try some. As I call out a word, I want you to put the letters in your boxes. I will come around and help you if you need some help. (Call out words like, rod, mop, hot, lid, bob, fog, spot, slob, frog, flip) (Tell the students how many sounds there are in each word before you say the word. This way they will know how many boxes to have ready.) 4. Now I am going to write some of the words we just did on the board. I want you to read them aloud to me as a class. If you hear the doctor sound in the word, I want you to show me your tongue depressor move with your hand. Let me show you how and then we will all do this together. 
(Write rock on the board.) This word says, rock. See how our o is in the middle. This says /o/. It starts with /r/ and ends with /k/. Put it all together and this says rock. (Say it slowly and hold your hand up to your mouth.) Now you try. (Write the words from the letterboxes on the board) Great Job! 5. Let’s try writing a message on our paper. We are going to write a message about our pet frog. (point to the frog in the classroom. If you don’t have a pet frog, tell the students that they will be writing about a frog they might see in the pond) Let’s try and use some words that have the /o/ sound in them in our message. (Allow the students to use their inventive spellings to write their message) 6. (Pass out copies of In the Big Top to each pair of students.) This book is about a family who is in the circus. There are lots of people in this circus and they all have a lot of stuff. They are trying to figure out how to get all of their stuff into a little hot rod. How do they get everything and everyone to fit? (Allow them to read with a buddy as you walk around and scaffold when needed.) 7. Pass out the worksheet. This worksheet will have pictures of things with the /o/ phoneme in them. Some of the objects will not have this phoneme in them. Ask the children to circle the pictures that have the doctor sound in them with a pencil. Then ask them to write the name of the object underneath it on the line using their inventive spellings. This worksheet will be the students’ assessment. (Whitney Adams, Hop Scotch) (Kara Oglesby, Olly, Olly, Oxenfree) In the Big Top. Phonics Readers Short Vowels. Educational Murray, B. A and T. Lesniak. (1999). The letterbox Lesson: A Hands on approach to teaching decoding. The Reading Teacher, 52, 644-650 Click here to return to Guidelines.
http://www.auburn.edu/academic/education/reading_genie/guides/hensleybr.html
4.28125
Uncle Tom's Cabin, written and published by Harriet Beecher Stowe in 1852, was the most popular 19th-century novel and, after the Bible, the second best-selling book of that century. Over 300,000 copies were sold in the United States in its first year alone. The book's impact on the American public on the issue of slavery was so powerful that when President Abraham Lincoln met Harriet Beecher Stowe at the start of the American Civil War he stated, "So this is the little lady who made this big war." This anti-slavery novel was controversial as soon as it appeared. Stowe used Uncle Tom's Cabin to publicize the horrors of slavery, bringing them to the attention of thousands who heretofore had not been particularly sympathetic to the abolitionist cause. Its portrayal of slavery immediately increased the tensions between Southern slaveholders and non-slaveholding Northerners and, as Lincoln's comment suggested, helped bring the nation to civil war. Despite Stowe's desire to portray slavery as a powerful blight upon the nation, she also did much to expand anti-black sentiment through her presentation of stereotypical black characters in the novel. Some of these stereotypes include the dark-skinned mammy, "pickaninny" black children, and Uncle Tom, the obedient and long-suffering servant to his white master. Although these stereotypes exist within Uncle Tom's Cabin, and existed before the book was published, most observers believe the good it did in advancing the abolitionist movement, leading to the downfall of the institution of slavery, outweighed the impact of the stereotypes. Uncle Tom's Cabin focuses on the struggles of a slave, Tom, who has been sold numerous times and has to endure physical brutality from slave drivers and his masters. One of Stowe's central themes is that Tom, despite his suffering, remained steadfast in his Christian beliefs. He also inspired fellow slaves with his Christianity, bettering the life of his friend and fellow slave, Cassy. Tom, through his preaching of the word of God, convinces her to escape. His refusal to betray Cassy to his master leads to his own brutal death. Stowe humanizes slaves by making Christianity a central focus of their lives. She also establishes a commonality with devoutly Christian whites of that era. Moreover, her novel implies that Christianity condemns the immorality of slavery, further isolating those who support the institution. Uncle Tom's Cabin is roughly based on the life of a former Maryland slave, Josiah Henson, who struggled against slavery and who, unlike the novel's main character, eventually escaped to Canada. By relating the institution of slavery through the eyes of the slave and by focusing on the heroic struggle of one woman to gain freedom for herself and her child, Harriet Beecher Stowe's message did much more than hundreds of abolitionist presentations to persuade Americans to oppose slavery. The book, read by hundreds of thousands of Americans, and the plays it inspired, viewed by millions more, proved a powerful weapon in the campaign to end human bondage in the United States. Mason I. Lowance, Ellen E. Westbrook and R.C. De Prospo, The Stowe Debate: Rhetorical Strategies in Uncle Tom's Cabin (Amherst: University of Massachusetts Press, 1994); "Uncle Tom's Cabin and American Culture: A Multi-Media Archive," edited by Stephen Railton, http://www.uncletomscabin.org/
http://www.blackpast.org/?q=aah/uncle-toms-cabin-1852
4.09375
Establishment and management of protected areas, together with conservation, sustainable use and restoration initiatives in the adjacent land and seascape, are central to Article 8 ("In-situ Conservation") of the Convention on Biological Diversity. The Convention on Biological Diversity defines protected areas as: "a geographically defined area which is designated or regulated and managed to achieve specific conservation objectives." The World Conservation Union (IUCN) defines protected areas as: "A clearly defined geographical space, recognised, dedicated and managed, through legal or other effective means, to achieve the long-term conservation of nature with associated ecosystem services and cultural values." "In-situ conservation" is defined by the Convention on Biological Diversity as the conservation of ecosystems and natural habitats and the maintenance and recovery of viable populations of species in their natural surroundings and, in the case of domesticated or cultivated species, in the surroundings where they have developed their distinctive properties. Protected areas are a vital contribution to the conservation of the world's natural and cultural resources. Their values range from the protection of natural habitats and associated flora and fauna to the maintenance of environmental stability of surrounding regions. Protected areas can provide opportunities for rural development and rational use of marginal lands, generating income and creating jobs, for research and monitoring, for conservation education, and for recreation and tourism. As a result, all but a few countries have developed systems of protected areas. Articles 8(a) and 8(b) state that a system of protected areas forms a central element of any national strategy to conserve biological diversity. The word "system" in Article 8(a) implies that the protected areas of a country or region may be designated and designed to form a network, in which the various components may conserve different portions of biological diversity, often using a variety of approaches to management. In addition, Article 8(c) calls for the regulation and management of protected areas, while Article 8(d) aims to: "Promote the protection of ecosystems, natural habitats and the maintenance of viable populations of species in natural surroundings." Protected areas are a central part of the Convention in that the Parties themselves have consistently identified the development and maintenance of their national protected area systems as the central element of their strategy to implement the Convention. Experience shows that a well designed and managed system of protected areas can form the pinnacle of a nation's efforts to protect biological diversity. Such a system complements other measures taken to conserve biological diversity outside protected areas. Drawing on global experience, IUCN has developed a system of six management categories for protected areas. The most comprehensive dataset on protected areas worldwide is managed by the UNEP-World Conservation Monitoring Centre in partnership with the IUCN World Commission on Protected Areas (WCPA) and the World Database on Protected Areas Consortium. IUCN was instrumental in the preparation of the early UN Lists of protected areas, and the UN List is now prepared jointly by the IUCN-WCPA and the UNEP-World Conservation Monitoring Centre (WCMC). Based on the 2004 statistics, there are 104,791 protected areas listed globally in the World Database on Protected Areas.
The total area has also increased continuously, from less than 3 million km2 in 1970 to more than 20 million km2 in 2004. However, ecoregional and habitat representation remains uneven, and coastal and marine ecosystems are particularly under-represented. Existing systems of protected areas are not representative of all the categories of biodiversity important for its conservation and sustainable use, as set out in Annex I to the CBD.
http://www.cbd.int/protected-old/intro.shtml
4.09375
New Earth-like planets: How did astronomers find them? NASA's Kepler spacecraft has spotted a pair of rocky Earth-sized planets orbiting a distant star. How do you find a new planet? Since 2009, NASA's Kepler spacecraft has been sitting in space, pointing its telescope at a patch of the sky near the constellations Cygnus and Lyra. Its field of view, a region of the Milky Way galaxy about the size of two open hands raised to the cosmos, contains roughly 160,000 stars. Scientists on the Kepler team are interested not in these stars themselves, but in the planets that may orbit them. "The goal of the Kepler mission is to find planets like Earth in the habitable zones of their parent stars," said Guillermo Torres, a member of the Kepler team based at the Harvard-Smithsonian Center for Astrophysics. They are looking for Earth twins, because these are the likeliest candidates for worlds that could host extraterrestrial life. To find these alien Earths, the Kepler team uses a technique called the "transit method." They scour the data collected by the Kepler telescope looking for slight drops in the intensity of light coming from any of the stars in its line of sight. About 90 percent of the time, these dips in brightness signify that a planet "has passed in front of its star, essentially eclipsing the light," Torres told Life's Little Mysteries. Planets the size of Earth passing in front of a star typically cause it to dim by only one-hundredth of a percent — akin to the drop in brightness of a car's headlight when a fly crosses in front of it, the scientists say. To detect these faint and faraway eclipses, the Kepler telescope must be extremely sensitive and it must be stationed in space, away from the glare and turbulence of Earth's atmosphere. Using the transit method, Kepler has detected 2,326 "candidate planets" so far, Torres said. Those are dips detected in starlight that are probably caused by passing planets, but for which alternative explanations haven't yet been ruled out. "The signals are an indication that something is crossing in front of the star and then you have to confirm it's a planet, not something else," he said. "Roughly 90 percent of the signals that Kepler detects are true planets. The other 10 percent of the cases are false positives. We're not happy with leaving the probability at 90 percent — we've set a higher bar — so, even though a priori we know a signal is 90-percent sure [to be a planet], we do more work." To confirm that a candidate is a true exoplanet — a planet outside our solar system — the Kepler scientists use the world's largest ground-based telescopes to study the star in question, looking for alternative explanations for the transit signal. "One example is an eclipsing binary in the background of the star. There could be two stars behind the star [of interest] that are orbiting each other and eclipsing each other, but because they're in the background they're much fainter. So their light is diluted by the brighter star," he said. With today's (Dec. 20) announcement of five new confirmed exoplanets orbiting a star called Kepler-20, located 950 light-years away, including two that are Earth-size, the number of planets confirmed by Kepler has moved up to 33.
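The one-hundredth-of-a-percent figure quoted above follows directly from transit geometry: the fractional dip in brightness is roughly the ratio of the planet's disc area to the star's disc area. A small sketch of that arithmetic, using standard Earth and Sun radii; this is an illustration of the principle, not the Kepler detection pipeline:
```python
# Transit depth: fraction of starlight blocked ~= (R_planet / R_star)^2
R_EARTH_KM = 6_371.0
R_SUN_KM = 695_700.0

def transit_depth(r_planet_km, r_star_km):
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"Earth-Sun analogue: {depth:.2e} ({depth * 100:.4f}% dip)")
# ~8.4e-05, i.e. a bit less than one-hundredth of a percent
```
A Jupiter-sized planet, by contrast, blocks about 1 percent of the light, which is why the first transit detections were of giant planets.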
http://www.csmonitor.com/Science/2011/1221/New-Earth-like-planets-How-did-astronomers-find-them
4.09375
A Pilot Geographical Information Systems (GIS) Assessment of Rocky Mountain Bighorn Sheep Habitat in and Around Great Basin National Park, Nevada
Bighorn sheep are getting help from high technology, including GIS, as wildlife biologists try to prevent a symbol of the West from vanishing from Great Basin National Park in Nevada. Today, fewer than a dozen Rocky Mountain bighorn remain in the rugged Snake Range in and around the park, where hundreds of the majestic creatures once roamed. Early on, wildlife biologists tried to bolster the herd by adding bighorn sheep from other areas. Those efforts failed, and the herd continued to dwindle even after the creation of the national park in 1986. In 2001, wildlife biologists used GIS technology to see if the loss of sheep habitat played a role in the herd's woes. Using a GIS software program called ArcView, developed by ESRI, researchers evaluated whether the herd had enough room to survive and grow. The resulting GIS map showed the sheep still had plenty of habitat in general but barely enough room to bear and raise their young. Bighorn ewes and their newborn need areas with plenty of grass and water during the spring lambing season. They also need open areas where they can spot mountain lions and other predators and escape pursuit over rocky slopes. The GIS map shows these to be in short supply for bighorn lambs. One GIS map layer identified the southerly, steep slopes where the snows melt early and provide grasses for forage. Another map layer showed the availability of water, and a third identified types of vegetation that indicate open areas and protection from predators. From this information, wildlife biologists and fire managers are further using GIS to plan remedies, such as prescribed burns and thinning of forests, to restore suitable lambing areas in hopes that the herd will one day thrive again in its historic range.
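The overlay analysis described here, combining slope and aspect, water availability, and vegetation layers to flag suitable lambing habitat, amounts to intersecting boolean rasters. A toy sketch with made-up grids, not the park's actual data or the ArcView workflow:
```python
import numpy as np

# Hypothetical 5x5 raster layers; one cell = one patch of terrain,
# True = the cell satisfies that habitat criterion.
rng = np.random.default_rng(0)
south_facing_steep = rng.random((5, 5)) > 0.5    # early snowmelt, spring forage
near_water = rng.random((5, 5)) > 0.6            # water sources for ewes and lambs
open_escape_terrain = rng.random((5, 5)) > 0.4   # visibility plus rocky escape routes

# Suitable lambing habitat = cells meeting all three criteria at once.
lambing_habitat = south_facing_steep & near_water & open_escape_terrain

print(f"{lambing_habitat.sum()} of {lambing_habitat.size} cells flagged as lambing habitat")
```
Real GIS work adds projection handling, distance buffers around water, and continuous suitability scores rather than hard true/false cutoffs, but the core logic is the same layer intersection.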
http://www.nps.gov/gis/mapbook/tech/23.html
4.03125
(Phys.org)—British researchers at King's College in London have succeeded in creating quantum dots by feeding earthworms soil laced with certain metals and then collecting the material that accumulates in the worms' tissue. They describe their research in their paper published in the journal Nature Nanotechnology. Quantum dots are nano-sized semiconducting materials with characteristics defined by their crystal shape. They are useful because of the unique way they emit or absorb light, similar in many respects to the fluorescence seen in some molecules. Thus far, their creation has proved to be useful in making LEDs, photovoltaic materials and very small lasers. In this new research, the team set out to determine if common earthworms could be used to create cadmium telluride quantum dots. The thinking was that because earthworms are known for their detoxifying abilities – they do so by shuttling toxins into a special layer of their gut – they might be able to cause certain metals to combine as they are processed, creating nano-sized materials that qualify as quantum dots. In this case, they fed several earthworms soil with sodium tellurite and cadmium chloride mixed into it, for 11 days. Afterwards, they examined the material the worms had deposited in their gut tissue and found that, in detoxifying the metals, the worms had indeed created cadmium telluride quantum dots. The creation of such quantum dots as part of a biological process leads to particles that are water soluble – that means that they might be put to use in biological settings. As one example, the researchers placed the worm-created quantum dots in a Petri dish along with cultured cancer cells obtained from mice. The cancer cells immediately absorbed the dots, as was evidenced by shining UV light on them and witnessing their familiar green glow. In doing the same with other types of cells, the researchers found that it took some added manipulation to get them to soak up the dots, but in the end discovered it was possible. The team notes that their research provides two new pieces of useful information: one – it's possible to make quantum dots using worms, and possibly other organisms, and two – it's possible to make quantum dots that might prove useful in living tissue as part of a system of diagnostic tools. More information: Biosynthesis of luminescent quantum dots in an earthworm, Nature Nanotechnology 8, 57–60 (2013) doi:10.1038/nnano.2012.232 The synthesis of designer solid-state materials by living organisms is an emerging field in bio-nanotechnology. Key examples include the use of engineered viruses as templates for cobalt oxide (Co3O4) particles [1], superparamagnetic cobalt–platinum alloy nanowires [2] and gold–cobalt oxide nanowires [3] for photovoltaic and battery-related applications. Here, we show that the earthworm's metal detoxification pathway can be exploited to produce luminescent, water-soluble semiconductor cadmium telluride (CdTe) quantum dots that emit in the green region of the visible spectrum when excited in the ultraviolet region. Standard wild-type Lumbricus rubellus earthworms were exposed to soil spiked with CdCl2 and Na2TeO3 salts for 11 days. Luminescent quantum dots were isolated from chloragogenous tissues surrounding the gut of the worm, and were successfully used in live-cell imaging. The addition of polyethylene glycol on the surface of the quantum dots allowed for non-targeted, fluid-phase uptake by macrophage cells.
http://phys.org/news/2012-12-earthworms-quantum-dots.html
4.65625
- Grades: 6–8, 9–12 Plymouth Colony, the first permanent Puritan settlement in America, was established in December 1620 on the western shore of Cape Cod Bay by the English Separatist Puritans known as the Pilgrims. They were few in number and without wealth or social standing. Although their small and weak colony lacked a royal charter, it maintained its separate status until 1691. The Pilgrims secured the right to establish an American settlement from the London Company. The landfall (Nov. 19, 1620) of their ship, the Mayflower, at Cape Cod put the settlers far beyond that company's jurisdiction, provoking mutinous talk. To keep order, the Pilgrim leaders established a governing authority through the Mayflower Compact (Nov. 21, 1620). The 41 signers formed a "Civil Body Politic" and pledged to obey its laws. Patents granted by the Council for New England in 1621 and 1630 gave legal status to the Pilgrims' enterprise. To finance their journey and settlement the Pilgrims had organized a joint-stock venture. Capital was provided by a group of London businessmen who expected — erroneously — to profit from the colony. During the first winter more than half of the settlers died as a result of poor nutrition and inadequate housing, but the colony survived due in part to the able leadership of John Carver, William Bradford, William Brewster, Myles Standish, and Edward Winslow. Squanto, a local Indian, taught the Pilgrims how to plant corn and where to fish and trap beaver. Without good harbors or extensive tracts of fertile land, however, Plymouth became a colony of subsistence farming on small private holdings once the original communal labor system was ended in 1623. In 1627 eight Pilgrim leaders assumed the settlement's obligations to the investors in exchange for a 6-year monopoly of the fur trade and offshore fishing. Plymouth's government was initially vested in a body of freemen who met in an annual General Court to elect the governor and assistants, enact laws, and levy taxes. By 1639, however, expansion of the colony necessitated replacing the yearly assembly of freemen with a representative body of deputies elected annually by the seven towns. The governor and his assistants, still elected annually by the freemen, had no veto. At first, ownership of property was not required for voting, but freemanship was restricted to adult Protestant males of good character. Quakers were denied the ballot in 1659; church membership was required for freemen in 1668 and, a year later, the ownership of a small amount of property as well. Plymouth was made part of the Dominion of New England in 1686. When the Dominion was overthrown (1689), Plymouth reestablished its government, but in 1691 it was joined to the much more populous and prosperous colony of Massachusetts Bay to form the royal province of Massachusetts. At the time Plymouth Colony had between 7,000 and 7,500 inhabitants. Bibliography: Adams, J. T., The Founding of New England (1921; repr. 1963); Bradford, William, Of Plymouth Plantation, 1620–1647, ed. by S. E. Morison (1952); Deetz, James and Patricia Scott, The Times of Their Lives: Life, Love and Death in Plymouth Colony (2000); Demos, John, A Little Commonwealth — Family Life in Plymouth Colony (1988); Langdon, G. D., Jr., Pilgrim Colony (1966); Morison, S. E., The Story of the Old Colony of New Plymouth (1956); Smith, Bradford, Bradford of Plymouth (1951); Stratton, E. A., Plymouth Colony: Its History and People (1987).
http://www.scholastic.com/teachers/article/plymouth-colony
4.03125
Marine Magnetism Research Highlights
The magnetism of the ocean basins and its interpretation in terms of seafloor spreading and the reversal history of Earth's magnetic field provides one of the cornerstones of the theory of plate tectonics. Plate tectonic theory contends that the earth's crust is composed of a number of plates that move over the surface of the planet. The oceanic plates are created at midocean ridges and are subsequently consumed at the major oceanic trenches. Continental crust appears to be "pushed" around by these oceanic plates. Magnetism plays a key role in understanding the timing of these plate motions. As ocean crust is created at a midocean ridge, the hot lavas cool and the magnetic minerals within the lavas permanently "freeze" in the direction and intensity of the earth's magnetic field at that time. As the plate slowly moves this crust away from the midocean ridge crest, this magnetic record is preserved. We also know that the earth's magnetic field has reversed in polarity in a random fashion but with an average periodicity of about 400,000 years. Our last major reversal was 780,000 years ago, when Earth's magnetic North Pole was at Earth's geographic South Pole. The Geomagnetic Polarity Time-Scale (GPTS) has been constructed for the last 150 million years based on the magnetic anomalies measured from the oceanic plates. Magnetic studies here at Woods Hole can be subdivided either by geographic area or by technique.
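Because the recorded anomalies tie directly to plate motion, a quick worked example of the arithmetic may help: if the edge of the most recent normal-polarity stripe (the 780,000-year-old reversal mentioned above) is mapped at some distance from the ridge axis, the half-spreading rate is simply distance over age. The 20 km figure below is invented for illustration:
```python
# Half-spreading rate from a dated magnetic anomaly boundary.
anomaly_age_yr = 780_000        # age of the last major reversal, from the text
distance_from_ridge_km = 20.0   # hypothetical mapped distance to that boundary

half_rate_mm_per_yr = distance_from_ridge_km * 1e6 / anomaly_age_yr  # km -> mm
print(f"Half-spreading rate: {half_rate_mm_per_yr:.1f} mm/yr")       # ~25.6 mm/yr
```
Doubling the half-rate gives the full spreading rate between the two plates, which is how the GPTS is used to map crustal age across an ocean basin.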
http://deeptow.whoi.edu/research.html
4.21875
Mars Magmas Once Contained a Lot of Water, Researchers Report 24 Jan 2001 (Source: Massachusetts Institute of Technology) Massachusetts Institute of Technology Deborah Halber, MIT News Office (617) 258-9276, [email protected] CAMBRIDGE, Mass. - Evidence from a Martian volcanic rock indicates that Mars magmas contained significant amounts of water before eruption on the planet's surface, researchers from the Massachusetts Institute of Technology, the University of Tennessee and other institutions report in the Jan. 25 issue of Nature. Scientists say that channels on Mars's surface may have been carved by flowing water and an ancient ocean may have existed there, but little is known about the source of the water. One possible source is volcanic degassing, in which water vapor is produced by magma spewing from volcanos, but the Martian rocks that have reached Earth as meteorites have notoriously low water content. This study shows that before the molten rock that crystallized to form Martian meteorites was erupted on the surface of the planet, it contained as much as 2 percent dissolved water. When magma reaches the planet's surface, the solubility of water in the molten liquid decreases and the water forms vapor bubbles and escapes as gas. The process is similar to the release of gas bubbles that occurs when you open a can of soda. Although this doesn't explain how water got into Mars in the first place, it does show that water on the red planet once cycled through the deep interior as well as existed on the surface, as similar processes have cycled water through the Earth's interior throughout geologic history. A VISITOR FROM MARS Timothy L. Grove, professor of Earth, Atmospheric and Planetary Sciences at MIT, and University of Tennessee geologist Harry Y. McSween Jr. analyzed the Mars meteorite Shergotty to provide an estimate of the water that was present in Mars magmas prior to their eruption on the surface. Shergotty, a meteorite weighing around 5 kilograms was discovered in India in 1865. It is one of a handful of proven Mars meteorites that landed on Earth. It is relatively young - around 175 million years old - and may have originated in the volcanic Tharsis region of the red planet. Its measured water content is only around 130-350 parts per million. But by exploring the amount of water that would be necessary for its pyroxenes - its earliest crystallizing minerals - to form, the researchers have determined that at one time, Shergotty magma contained around 2 percent water. They also have detected the presence of elements that indicate the growth of the pyroxenes at high water contents. This has important implications for the origin of the water that was present on the surface of the planet during the past. This new information points to erupting volcanos as a possible mechanism for getting water to Mars's surface. SQUEEZING HYDROGEN INTO ROCKS In the interior of Mars, hot magma is generated at great depth. It then ascends into the shallower, colder outer portions of the Martian interior, where it encounters cooler rock that contains hydrogen-bearing minerals. These minerals decompose when heated by the magma and the hydrogen is released and dissolves in the magma. The magma continues its ascent to the surface of the planet. When it reaches very shallow, near-surface conditions in the crust, the magma erupts and its water is released in the form of vapor. The magma holds the water-creating hydrogen as the rock circulates underneath the crust. 
It undergoes changes as it moves from areas of enormous heat and pressure to cooler areas nearer the surface. When it finally erupts through a volcano, the magma releases its water in the form of vapor. Grove recreates Mars and moon rocks in his laboratory for these studies. By subjecting synthetic rocks to conditions of high temperature and pressure, he can tell how much water was contained in magma at the time that its crystals were formed. "What my experiment can do is estimate how much water was involved in the process that led to the formation of Mars meteorites. The only way you can reproduce the unique chemical composition of these minerals is to have water present," he said. Other authors on the Nature paper include McSween's graduate student, Rachel C. F. Lentz; Lee R. Riciputi of the chemical and analytical sciences division of Oak Ridge National Laboratory; Jeffrey G. Ryan, a geologist at the University of South Florida; and Jesse C. Dann and Astrid H. Holzheid of MIT's Department of Earth, Atmospheric and Planetary Sciences. This work was partly supported by NASA.
http://solarsystem.nasa.gov/news/display.cfm?News_ID=633
4.40625
Solar energy has more potential applications than any other kind of renewable energy. As air and water pollution continue to be a problem from coal-burning, gas, and nuclear power plants, the use of photovoltaic panels could have the biggest impact on abating pollution problems globally. Yet most of us don't know how PV panels are manufactured. Making conventional photovoltaic, or PV, panels has become a straightforward process, and the costs of production continue to decline. The basis for solar panels is silicon, similar to the material that circuit boards are made of. The crystalline silicon solar panels that are often seen on rooftops, roadsides, and in clustered arrays start as flat discs cut from a larger sheet. The discs are then polished and substances called dopants are added, which serve to alter electrical charges in the panel. Metal conductors are soldered to each disc and the panel is formed into a grid-like structure by aligning the conductor wires. A photovoltaic cell is formed and the assembly is covered by a layer of glass that supports the structure. Thermally conductive cement, which attaches the solar cell to a substrate in the back, is used to prevent overheating. Newer types of panels use more advanced manufacturing processes. One breakthrough in PV panel manufacturing is amorphous silicon solar panels, which are much thinner than the standard variety. Their thickness can be measured in micrometers. Vapor deposition of silicon creates a multi-layer solar cell in a continuous manufacturing process. Individual layers absorb specific parts of the light spectrum that comes from the sun. For some amorphous solar panels, one row of solar cells can be in the shade and the rest of the panel will still collect sunlight and generate electricity. Their composition is much less delicate than that of crystalline silicon, so amorphous panels are less likely to break when being handled. The viability of solar energy has generated competition in building more efficient, less expensive PV panels. Different designs have been developed, even for panels that do not use silicon. Conductive plastics have been developed into lenses and mirrors that rely on the laws of physics to focus sunlight onto small photovoltaic components. Solar-sensitive dyes and inks have also been formulated that can act as photovoltaic elements, and even more cost-effective are printing-press systems that can churn out photovoltaic materials very quickly. While solar power is limited only by the availability of sunlight, PV panels are becoming thinner and cheaper. It is possible for anyone to install them and advanced designs can even take the place of shingles and tiles. Solar materials can be used to decorate the facades of buildings, and allow for self-sufficient power generation for homes and businesses. With the flexibility and low cost of PV systems, ease of production, and government incentives becoming more commonplace for using solar energy, there is almost no excuse not to take advantage of the sun's mighty power.
http://sunetric.com/blog/post/facts-on-pv-panel-manufacturing/
4.125
Teacher resources and professional development across the curriculum Teacher professional development and classroom resources across the curriculum Reader response stresses the importance of the reader's role in interpreting texts. Rejecting the idea that there is a single, fixed meaning inherent in every literary work, this theory holds that the individual creates his or her own meaning through a "transaction" with the text based on personal associations. Because all readers bring their own emotions, concerns, life experiences, and knowledge to their reading, each interpretation is subjective and unique. Many trace the beginning of reader-response theory to scholar Louise Rosenblatt's influential 1938 work Literature As Exploration. Rosenblatt's ideas were a reaction to the formalist theories of the New Critics, who promoted "close readings" of literature, a practice which advocated rigid scholarly detachment in the study of texts and rejected all forms of personal interpretation by the reader. According to Rosenblatt, the New Critics treated the text as "an autonomous entity that could be objectively analyzed" using clear-cut technical criteria. Rosenblatt believed instead that "the reading of any work of literature is, of necessity, an individual and unique occurrence involving the mind and emotions of some particular reader and a particular text at a particular time under particular circumstances." Impact on teaching literature Over the last several decades, reader-response techniques have become firmly established in American classrooms. Language arts teachers at all levels now widely accept central tenets of the theory, particularly the notion that learning is a constructive and dynamic process in which students extract meaning from texts through experiencing, hypothesizing, exploring, and synthesizing. Most importantly, teaching reader response encourages students to be aware of what they bring to texts as readers; it helps them to recognize the specificity of their own cultural backgrounds and to work to understand the cultural background of others. Using reader response in the classroom can have a profound impact on how students view texts and how they see their role as readers. Rather than relying on a teacher or critic to give them a single, standard interpretation of a text, students learn to construct their own meaning by connecting the textual material to issues in their lives and describing what they experience as they read. Because there is no one "right" answer or "correct" interpretation, the diverse responses of individual readers are key to discovering the variety of possible meanings a poem, story, essay, or other text can evoke. Students in reader-response classrooms become active learners. Because their personal responses are valued, they begin to see themselves as having both the authority and the responsibility to make judgments about what they read. (This process is evident in the video programs, when students are asked to choose a line of poetry and explain why it is important to them.) The responses of fellow students also play a pivotal role: Through interaction with their peers, students move beyond their initial individual reaction to take into account a multiplicity of ideas and interpretations, thus broadening their perspective. 
Incorporating reader response in the classroom As increasing numbers of elementary, middle, and secondary school language arts teachers have come to accept reader-response theory over the last 25 years, the instructional techniques that support it have become more common in classrooms: Literature circles, journal writing, and peer writing groups all grew out of the reader-response movement. These teaching strategies value student-initiated analysis over teacher-led instruction, promote open-ended discussion, and encourage students to explore their own thinking and trust their own responses. Benefits and challenges of using a reader-response approach Research has shown that students in reader-response-based classrooms read more and make richer personal connections with texts than students using more traditional methods. They tend to be more tolerant of multiple interpretations, and because they learn techniques that help them recognize the ways in which their own arguments are formed, they are better equipped to examine the arguments of others. In short, reader response helps students to become better critical readers. While these techniques encourage a broad range of textual interpretations and reactions, students must learn, however, that not every response is equally valid or appropriate. The meaning of a text is not an entirely subjective matter, of course, and it is crucial that responses be grounded in the text itself and in the context in which the text is read. One way of guarding against students "running wild" is to make sure that there's a community restraint on interpretation. That is, if the teacher structures reader-response exercises carefully, each individual student is challenged by the discussion to go beyond his or her first response. Even though an individual reader's reactions are based on his or her own "schema" (the expectations that arise from personal experiences), he or she will realize in class discussion that not everyone shares that same perspective. © Annenberg Foundation 2013. All rights reserved. Legal Policy.
http://www.learner.org/workshops/hslit/session1/index.html
4.03125
This section will develop over time and will include a wide range of related topics such as: Emotion regulation, sleep, exercise and mood, nutrition, behavior change, social adjustment, esteem, and externalization/narrative therapy theory. - EMOTION REGULATION Emotion regulation refers to the ability to respond to emotions in a healthy manner. Frequently, this involves using coping skills to manage the intensity of a distressing emotion. In order to regulate, or effectively manage emotions, a person needs skills to decrease unpleasant emotions (e.g., frustration, anxiety) and increase pleasant emotions (e.g., happiness, joy). Emotion regulation can pertain to an immediate moment when an emotion arises and needs to be managed. For example, a person might need to reign in their frustration from a stressful morning commute in order to calmly conduct a meeting when they arrive at work. It can also pertain to a more general state of our ongoing life. For instance, a person might decide that they are distressed much of the time and thus they need to plan for more relaxing activities in order to help them feel more at ease on a regular basis. The ability to identify, accept, and experience a range of emotions begins to develop in early childhood. Emotion regulation skills are an important component of healthy living for children and adults of all ages. Throughout the lifespan, we have different circumstances that demand the ability to regulate or emotions and our behaviors. Adults need to monitor their anxiety about mortgages, parenting practices, juggling schedules, etc. Children need to monitor their anxiety about going to a new place or learning to tie their shoes. Although the tasks may look very different in terms of what the individual does to take care of their needs in a given situation, adults and children alike need to learn to assert their needs skillfully even when they are upset. This is one of the tasks of regulation that starts at birth and continues our entire life. Emotion regulation is a complex topic and is a theme in a significant body of research in human development and psychology more broadly. Much can be said about this topic and how it relates to the CrabbieMasters Program. Attend our workshops or training series and look for our video clips to learn more. For interested readers, we suggest finding good scholarly resources on emotion regulation. You will see that much of the content in our program fits squarely with the goals of helping children understand and manage their emotions so that they can be well-adjusted, get along with others, and have a sense of self-mastery. Many people in the United States do not get enough sleep on a regular basis. Even small amounts of sleep deprivation can lead to changes in mood, immune system response, and cognitive sharpness. In 1999, researchers conducted a review of 143 studies on sleep deprivation and concluded that "sleep deprivation strongly impairs human functioning." Researchers use the term "partial sleep deprivation" for a disrupted night of sleep, or sleeping less than an average amount needed in a 24-hour period (this is the most common type of sleep deprivation most of us face day-to-day). Partial sleep deprivation was shown to have the strongest effect on functioning when compared to short-term or long-term sleep deprivation in some studies. After reviewing over a hundred studies, researchers concluded that the most pronounced effect of sleep shortage was mood problems. 
So, this tells us we really need to attend to how much sleep we are getting if we want to keep The Crabbies in their place. Too-Tired is the Worst Crabbie in the Universe! Other problems that arise from not getting enough sleep include motor coordination difficulties and cognitive performance declines. Sleep deficits are associated with attention deficits in children, lower performance on standardized measures of academic achievement, poor emotion regulation, behavior problems, and increased risk for psychological disorders (for further details and additional references see Sadeh, Gruber, & Raviv (2002). Sleep, neurobehavioral functioning, and behavior problems in school-age children. Child Development, 73, 405-417.). Other studies have found that sleep deprivation of as little as one hour a night for only 3 nights is sufficient to drop academic performance nearly a grade level. Sleep is an important factor in memory, concentration, motivation, and thinking ability. We underscore the importance of good sleep for children and adults alike. - EXERCISE AND MOOD Exercise and physical activity through play are a great way to keep The Crabbies away. A little bit of exercise or physical play can improve a child's mood in the moment. Developing a life-long pattern of physical activity can promote good mental health throughout adolescence and adulthood. Did you know that the most effective treatment for mild to moderate depression is EXERCISE? People who exercise for at least 20 minutes a day (20 min of vigorous exercise or 40 min of moderate intensity exercise) experience fewer depression and anxiety symptoms. Exercise promotes healthy brain chemical balance, which leads to a sense of well-being shortly after the activity and sustained benefits over time when exercise is consistent. Exercise helps reset the body's stress response system, which is very important in today's world where we experience many constant stressors. The connection between physical activity and mental well-being in adults has been well-documented. Newer research shows that even short bouts of exercise lead to psychological benefits for children. One study demonstrated that just 15 minutes of physical activity increased positive moods and decreased anxiety in elementary school children. Here is a great local news article that discusses how some schools are improving academic performance by getting kids moving: http://www.startribune.com/local/south/148459015.html In 2010, the Center for Disease Control issued a paper that summarizes the research on physical activity and academic behaviors. http://www.cdc.gov/healthyyouth/health_and_academics/pdf/pa-pe_paper.pdf The findings generally show that increasing physical activity throughout the school day improves academic performance and classroom behaviors. Eating healthy is a key to CrabbieMaster success. Good nutrition is a foundation for positive, stable moods and steady energy levels. Taking care of your body through nutrition is a great way to keep many of the Crabbies at bay. Poor nutrition is a biological factor that leads to emotional vulnerability. One of the most important things to watch is the sugar content of foods. Foods high in sugar cause a chemical rush and then a chemical crash, putting the body on a roller coaster. Because our energy levels and moods are linked to the chemicals in our bodies and brains, our emotional well-being is linked to the foods we eat. Here is a scientific explanation of why it is important to limit sugar intake.
The body's response to high sugar foods is to produce insulin, a hormone for regulating blood sugar. Insulin production and digestion of the high sugar food set off a chemical reaction that leads to an increase of the neurotransmitter serotonin in the brain. High levels of serotonin are associated with feeling happy and satisfied. This feeling, when brought on by foods, is often called a "sugar high." A person feels pretty good until the "sugar crash" hits. The "sugar crash" is a result of the blood sugar dropping to below a healthy baseline. Blood sugar drops below a healthy level when we eat high levels of sugar because of over-production of insulin. You can basically think of the body's response to high doses of sugar as a panic response. The body must take quick and decisive action to regulate the sugar in our blood stream so that we do not have a dangerous chemical reaction in our cells. In a bit of a panic state, the body over-reacts. When our blood sugar is too low, we feel lethargic and are prone to moodiness. Children have less developed brains and are thus less able to use their sense of reasoning to help them regulate their mood and behavior. An adult may be capable of recognizing that they are having a sugar crash and that they need to hold it together until they can get a healthy snack to feel better. A child is at a biological disadvantage in being able to manage the mood effects of being on a sugar rollercoaster! It is also important for children to eat frequently in order to keep their blood sugar levels stable. By monitoring and limiting the sugar in your family's diet and by eating healthy foods, you can avoid these ups and downs. Some people tell us that after making changes in their eating habits, they notice a tremendous difference in how they feel and how well the day goes for their children. We believe good nutrition can help you beat ALL of The Crabbies! - BEHAVIOR CHANGE (expectations, consistency, contingency) - SOCIAL ADJUSTMENT (parent-child interactions, balance of power in parent-child relationships, peer relationships) - ESTEEM (confidence, autonomy, self-mastery) - EXTERNALIZATION & NARRATIVE THERAPY THEORY
http://www.crabbiemasters.com/research/
4.03125
The Fall of Rome (150CE-475CE) From the middle of the second century CE, The Roman Empire faced increasing Germanic tribe infiltration along the Danubian and Rhine borders, and internal political chaos. Without efficient imperial succession, Romans in from the third century set up generals as emperors, who were quickly deposed by rival claimants. Facilitating further territorial losses to Barbarian tribes, this continued until Diocletian (r. 284-305). He and Constantine (324-337) administratively reorganized the empire, engineering an absolute monarchy. Cultivating a secluded imperial tenor, Constantine the Great patronized Christianity, particularly in his new city Constantinople, founded on the ancient site of Byzantium. Christianization, in the Hellenized and Mediterranean cities and among certain Barbarian newcomers, proceeded with imperial support, and became the state religion under Theodosius (r. 379-95). Germanic tribal invasions also proceeded, as did battles with the Sassanids in the East. From 375 Gothic invasions, spurred by Hunnic marauding, began en masse, particularly in Danubian, Balkan areas. Entanglement with imperial armies resulted in Roman defeats, and increased migration into Roman heartlands as far as Iberia. The Empire, as military and bureaucracy, underwent a certain Germanization. From the death of Theodosius, the Eastern Empire followed its own course, evolving into the Hellenized Byzantine state by the seventh century, as repeated sackings of Latin Rome (410, 455), contraction of food supplies to the West, and deposition of the last Western Emperor (Romulus Augustulus) by the Ostrogoth Odovacar (476), ended any hope of recovering Pax-Romana in the Mediterranean basin. Gaul was controlled by a shifting patchwork of tribes. But though the Empire itself no longer existed, through the Christian Church, through the always idealized vision of glorious Rome, and through the political structures that evolved out of Rome's carcass, vestiges of the Empire played vital and identifiable roles in the formation of the early Medieval European world.
http://www.sparknotes.com/history/european/rome4/summary.html
4
A space rendezvous between two spacecraft, often between a spacecraft and a space station, is an orbital maneuver in which the two arrive in the same orbit, match orbital velocities, and are brought together (an approach or taxiing maneuver); it may or may not include docking. - A visit to the International Space Station (manned) by: - A visit to the Hubble Space Telescope (unmanned) for servicing by the Space Shuttle (manned), and possibly in the future by the Hubble Robotic Vehicle (HRV) to be developed (unmanned) - A Moon landing crew returning from the Moon in the ascent stage of the Apollo Lunar Module (LM) to the Apollo Command/Service Module (CSM) orbiting the Moon (Project Apollo) (both manned) - The STS-49 crew attached a rocket motor to the Intelsat VI (F-3) communications satellite to allow it to perform an orbital maneuver Alternatively, the two are already together, and simply undock and re-dock in a different way: - Soyuz spacecraft moving from one docking point to another on the ISS - in the Apollo program, an hour or so after trans-lunar injection of the stack consisting of the third stage of the Saturn V rocket / the LM inside the LM adapter / the CSM (in order from bottom to top at launch, also the order from back to front with respect to the current motion), with the CSM manned and the LM at this stage unmanned: - the CSM separated, while the four upper panels of the LM adapter were disposed of - the CSM turned 180 degrees (from engine backward, toward the LM, to forward) - the CSM docked with the LM while the LM was still connected to the third stage - the CSM/LM combination then separated from the third stage Another kind of "rendezvous" took place in 1969, when the Apollo 12 mission involved a manned landing on the Moon within walking distance of the unmanned Surveyor 3, which had made a soft landing in 1967. Parts of the Surveyor were brought back. Later analysis appeared to show that bacteria had survived their stay on the Moon. On August 12, 1962, Vostok 3 and Vostok 4 were placed into adjacent orbits and passed within several kilometers of each other, but did not have the orbital maneuvering capability to perform a space rendezvous. This was also the case on June 16, 1963, when Vostok 5 and Vostok 6 were launched into adjacent orbits. An example of an undesired rendezvous in space is an uncontrolled one with space debris.
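The orbit-matching part of a rendezvous can be made concrete with a little orbital mechanics. The sketch below is mine, not from the article: it estimates the two burns of an idealized coplanar Hohmann transfer between circular orbits. The altitudes and the function name are illustrative assumptions, and a real rendezvous adds phasing and proximity operations on top of these burns.

```python
# Minimal sketch (not from the article): delta-v for a coplanar Hohmann
# transfer, the simplest idealization of the orbit-matching part of a
# rendezvous. The altitudes used below are hypothetical examples.
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def hohmann_delta_v(alt1_m, alt2_m):
    """Return (dv1, dv2) in m/s for a transfer between two circular orbits."""
    r1, r2 = R_EARTH + alt1_m, R_EARTH + alt2_m
    a_transfer = (r1 + r2) / 2.0
    v1 = math.sqrt(MU_EARTH / r1)   # circular speed in the initial orbit
    v2 = math.sqrt(MU_EARTH / r2)   # circular speed in the target orbit
    v_dep = math.sqrt(MU_EARTH * (2.0 / r1 - 1.0 / a_transfer))  # transfer orbit at r1
    v_arr = math.sqrt(MU_EARTH * (2.0 / r2 - 1.0 / a_transfer))  # transfer orbit at r2
    return abs(v_dep - v1), abs(v2 - v_arr)

# Example: a chaser at 300 km climbing to a 400 km target orbit.
dv1, dv2 = hohmann_delta_v(300e3, 400e3)
print(f"burn 1: {dv1:.1f} m/s, burn 2: {dv2:.1f} m/s, total: {dv1 + dv2:.1f} m/s")
```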
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Docking_maneuver
4.21875
Listening Passage Preview The student follows along silently as an accomplished reader reads a passage aloud. Then the student reads the passage aloud, receiving corrective feedback as needed. - Reading book - The teacher, parent, adult tutor, or peer tutor working with the student should be trained in advance to use the listening passage preview approach. Steps in Implementing This Intervention: Step 1: Sit with the student in a quiet location without too many distractions. Position the book selected for the reading session so that both you and the student can easily follow the text. (Or get two copies of the book so that you each have your own copy.) Step 2: Say to the student, "Now we are going to read together. Each time, I will read first, while you follow along silently in the book. Then you read the same part out loud." Step 3: Read aloud from the book for about 2 minutes while the student reads silently. If you are working with a younger or less-skilled reader, you may want to track your progress across the page with your index finger to help the student to keep up with you. Step 4: Stop reading and say to the student, "Now it is your turn to read. If you come to a word that you do not know, I will help you with it." Have the student read aloud. If the student commits a reading error or hesitates for longer than 3-5 seconds, tell the student the correct word and have the student continue reading. Step 5: Repeat steps 3 and 4 until you have finished the selected passage or story. - Rose, T.L., & Sherry, L. (1984). Relative effects of two previewing procedures on LD adolescents' oral reading performance. Learning Disabilities Quarterly, 7, 39-44. - Van Bon, W.H.J., Boksebeld, L.M., Font Freide, T.A.M., & Van den Hurk, J.M. (1991). A comparison of three methods of reading-while-listening. Journal of Learning Disabilities, 24, 471-476. Ask Occasional Comprehension Questions. You can promote reading comprehension by pausing periodically to ask the student comprehension questions about the story (e.g., who, what, when, where, how) and to encourage the student to react to what you both have read (e.g., "Who is your favorite character so far? Why?"). Preview a Text Multiple Times as a Rehearsal Technique. In certain situations, you may wish to practice a particular text selection repeatedly with the student, using the listening passage preview approach. For example, if the student is placed in a reading book that is quite difficult for him or her to read independently, you might rehearse the next assigned story with the student several times so that he or she can read the story more fluently during reading group.
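A simple fluency measure pairs naturally with the timed reading in Step 4 if you want to track progress across sessions. The helper below is my own illustration and is not part of the published intervention; the function name and the example numbers are invented.

```python
# Illustrative helper (not part of the intervention script): words correct
# per minute (WCPM) from a timed oral reading. Example values are made up.
def words_correct_per_minute(words_read, errors, seconds):
    """WCPM = (words read - errors) / minutes of reading."""
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    return (words_read - errors) / (seconds / 60.0)

# Example: a student reads 184 words in 2 minutes with 6 errors.
print(round(words_correct_per_minute(184, 6, 120), 1))  # -> 89.0
```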
http://www.interventioncentral.org/academic-interventions/reading-fluency/listening-passage-preview
4
The scientific community essentially agrees on the phenomenon of global change (IPCC, 2001). The main cause of climate change is the anthropogenic increase in greenhouse gas concentrations in the earth's atmosphere. Carbon dioxide (CO2) is the principal greenhouse gas. Its concentration in the atmosphere is the result of a cycle between different carbon pools: CO2 is the product of the oxidation of carbon from these pools. The carbon cycle at the earth level is presented in the following diagram. Diagram 1: A simplified diagram indicating carbon pools and CO2 fluxes between the earth and the atmosphere. Source: Edinburgh Centre for Carbon Management (http://www.eccm.uk.com/climate.htm). CO2 concentration in the atmosphere has increased by 31% since the beginning of the industrial era, from 280 to 360 ppm (IPCC, 2001). Anthropogenic emissions of CO2 originate primarily from the burning of fossil fuels and deforestation in tropical regions. Some of these emissions (on the order of 6 GtC/year) are reabsorbed by the terrestrial and oceanic ecosystems. The net atmospheric increase (on the order of 3 GtC/year) is small compared to the size of the carbon pools. However, this flow, which began more than a century ago with the Industrial Revolution, continues to grow, and is sufficient to explain global warming and the resulting imbalance in the climate system. Carbon pool: a reservoir of carbon; a system which has the capacity to accumulate or release carbon. Forests are important carbon pools which continuously exchange CO2 with the atmosphere, due to both natural processes and human action. Understanding forests' participation in the greenhouse effect requires a better understanding of the carbon cycle at the forest level. Organic matter contains carbon that can be oxidized and returned to the atmosphere in the form of CO2. Carbon is found in several pools in the forest: · the vegetation: living plant biomass consisting of wood and non-wood materials. Although the exposed part of the plant is the most visible, the below-ground biomass (the root system) must also be considered. The amount of carbon in the biomass varies between 35 and 65 percent of the dry weight (50 percent is often taken as a default value). · dead wood and litter: dead plant biomass, made up of plant debris. Litter in particular is an important source of nutrients for plant growth. · soil [1] organic matter, the humus. Humus originates from litter decomposition. Organic soil carbon represents an extremely important pool. At the global level, 19 percent of the carbon in the earth's biosphere is stored in plants, and 81 percent in the soil. In all forests, tropical, temperate and boreal together, approximately 31 percent of the carbon is stored in the biomass and 69 percent in the soil. In tropical forests, approximately 50 percent of the carbon is stored in the biomass and 50 percent in the soil (IPCC, 2000). · Wood products derived from harvested timber are also significant carbon pools. Their longevity depends upon their use: lifetimes may range from less than one year for fuelwood, to several decades or centuries for lumber. The oxidation of carbon found in organic matter and the subsequent emissions of CO2 result from the following processes: · respiration of living biomass, · decomposition of organic matter by other living organisms (also called heterotrophic respiration), · combustion (fires). The process of photosynthesis [2] explains why forests function as CO2 sinks, removing CO2 from the atmosphere.
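The biomass/soil shares quoted above lend themselves to a quick back-of-the-envelope split of a forest carbon stock. The sketch below is my own illustration: the 150 tC/ha stock and the function name are invented example values, while the percentage shares come from the text.

```python
# Split a hypothetical forest carbon stock into biomass and soil pools using
# the shares quoted in the text (all forests: ~31% biomass / 69% soil;
# tropical forests: ~50% / 50%). The 150 tC/ha stock is an invented example.
SHARES = {
    "all_forests": {"biomass": 0.31, "soil": 0.69},
    "tropical":    {"biomass": 0.50, "soil": 0.50},
}

def split_stock(total_tC_per_ha, forest_type):
    return {pool: round(total_tC_per_ha * share, 1)
            for pool, share in SHARES[forest_type].items()}

print(split_stock(150.0, "all_forests"))  # {'biomass': 46.5, 'soil': 103.5}
print(split_stock(150.0, "tropical"))     # {'biomass': 75.0, 'soil': 75.0}
```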
Atmospheric CO2 is fixed in the plant's chlorophyll parts and the carbon is integrated into complex organic molecules which are then used by the whole plant. Diagram 2: The carbon cycle in the forest. The participation of forests in climate change is thus three-fold: • they are carbon pools • they become sources of CO2 when they burn, or, in general, when they are disturbed by natural or human action • they are CO2 sinks when they grow biomass or extend their area. The earth's biosphere constitutes a carbon sink that absorbs approximately 2.3 GtC annually. This represents nearly 30 percent of all fossil fuel emissions (totaling from 6.3 to 6.5 GtC/year) and is comparable to the CO2 emissions resulting from deforestation (1.6 to 2 GtC/year). "Current scientific evidence suggests that managed and even old growth forests (of the temperate and boreal zone) sequester carbon at rates of up to 6 t/ha. These results question the paradigm that old growth forests are in equilibrium with a net carbon balance. On the other hand infrequent disturbances (fires, pest outbreaks, storms) are triggering a sporadic, but massive return of carbon to the atmosphere" (Valentini et al., 2000). A soil specialist has emphasized that "there is a potential for reversing some of these processes and sequestering carbon in soils in terrestrial ecosystems. The magnitude of the potential is estimated to be up to 50 to 75 percent of the historic carbon loss. Theoretically, the annual increase in atmospheric CO2 can be nullified by restoration of 2 billion ha of degraded lands, which would increase their average carbon content by 1.5 t/ha in soil and vegetation." (Lal, 2000) The carbon cycle (photosynthesis, plant respiration and the degradation of organic matter) in a given forest is influenced by climatic conditions and atmospheric concentrations of CO2. The distinction between natural and human factors influencing plant growth is thus sometimes very difficult to make. The increase of CO2 in the atmosphere has a "fertilizing effect" on photosynthesis and thus plant growth. There are varying estimates of this effect: +33 percent, +25 percent, and +60 percent for trees, +14 percent for pastures and crops (IPCC, 2001). This explains present regional tendencies of enhanced forest growth and causes an increase in carbon absorption by plants. This also influences the potential size of the forests' carbon pool. There are still questions regarding the long-term future of the biospheric carbon pool. Several bio-climatic models indicate that the ecosystems' absorption capacity is approaching its upper limit and should diminish in the future, possibly even reversing direction within 50 to 150 years, with forests becoming a net source of CO2. Indeed, global warming could cause an increase in heterotrophic respiration and the decomposition of organic matter, and a simultaneous decrease of the sink effectiveness, thereby transforming the forestry ecosystems into a net source of CO2 (Scholes, 1999). In 2000, Nature published the results of a simulation made by the Hadley Center. It analyzed the possible effects of global warming and of the increase of atmospheric CO2 concentration on plant life and the oceans, and the subsequent emissions by these pools during the course of the 21st century. They tested three hypotheses: · A 5.5°C (4°C globally) increase of the average ground temperature. The model predicted the decline of a large part of the Amazonian forest, due to the increase in drought.
The decomposition of the soil's organic matter would accelerate and the result would be an emission of 60 GtC by the earth's ecosystem. · An increase in CO2 concentration to 700 ppm, with no rise of the global temperature: the earth's biosphere would globally absorb 750 GtC. · A combination of the increase of CO2 emissions and the temperature rise, with dramatic results: the CO2 concentration in the atmosphere reaches 980 ppm, the average increase in ground temperature reaches 8°C (5.5°C globally), and the earth's biosphere emits 170 GtC (Cox et al., 2000). Cox et al., 2000. Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model. Nature, 408. The Hadley Center's simulation result is somewhat questionable, since it depends upon an uncertain direct link between an increase in earth temperature and respiration. The capacity of the vegetation to adapt to an increase in temperature is also largely unknown. An article written by 18 climate specialists published in Science (2000) gives a different opinion: "recent results from long-term soil warming in a boreal forest contradict the idea that the projected rise in temperature is likely to lead to forests that are now carbon sinks becoming carbon sources in a foreseeable future". This article postulates that the strength of the sink should increase in the future (by 10 to 20 percent) due to CO2 fertilization, and then decline, followed by long-term saturation, due to the respiration increase caused by the rise in average temperatures (Falkowski P., Scholes R.J. et al., 2000). These forecasts refer to ecosystems that are not used for production, and are not managed or reforested. Carbon sinks and measures for reducing industrial emissions: complementary or opposing actions? The inclusion of carbon sinks in the Climate Convention negotiation process is often opposed as an attempt to avoid more stringent emission reduction measures in the energy sector. However, it would be mistaken not to use the forestry potential simultaneously, since climate change is not a linear phenomenon, there undoubtedly exist critical threshold levels beyond which the climate system would change unpredictably, and the timing of reduction measures counts (Pederson, 2000). The Edinburgh Centre for Carbon Management (ECCM) created two simulations involving CO2 concentrations in the atmosphere. The first compared a pessimistic forestry situation (constant deforestation, an inversion of the Amazon "sink" to a source, and no large-scale reforestation) with an improved forestry situation (reduced deforestation and significant reforestation programs). In both cases, the atmospheric concentration exceeded 500 ppm, which the ECCM considers a critical threshold level for climate change, with a difference of ten years: about 2050 in the first variation, and 2060 in the second. This proved that forestry measures alone will not solve the problem of climate change. The other scenario involved a large reduction of greenhouse gas emissions in the energy sector, with the same variations in the forestry sector. With a pessimistic forestry situation, the critical threshold level was reached in 2070. With the improved forestry situation, however, the threshold was never reached. Instead, the curve of CO2 concentration in the atmosphere started to decrease slowly from 2050 until 2100.
The ECCM concluded that the only way to fight climate change was to combine vigorous fossil fuel emission reductions with a voluntary program for improving forestry management, forest conservation and reforestation. Several actions can be taken in the forestry sector in order to mitigate climate change. Planting new forests, rehabilitating degraded forests and enriching existing forests contribute to mitigating climate change as these actions increase the rate and quantity of carbon sequestration in biomass. This potential has certain physical limitations such as plant growth and available area. Agro-forestry and the planting of multiple-use trees (fruit trees, rubber wood, etc.) also contribute to this objective. Tree planting projects are doubly interesting from the point of view of CO2 sequestration, inasmuch as carbon storage in durable products such as boards, plywood, or furniture complements the permanent stock in standing trees. Even if the life span of products is limited, an average life span of several dozen years is still significant, since it allows one to "gain time" while waiting for cleaner technologies in the energy and transportation sector to develop, and it can also help avoid concentration peaks of CO2 in the planet's atmosphere. If a part of the annual harvest replenishes and increases the pool of wood products, the forestry sector's storage capacity can increase considerably without occupying more space in the landscape. The carbon reservoir in the forest biomass and soils is very large, highlighting the importance of conserving natural forest and eliminating agricultural practices which contribute to the deterioration of these reservoirs. One aspect of the debate about carbon sinks is whether conservation activities should be accounted for or not. These activities aim to protect a forest area threatened by human-induced deforestation, particularly from farming. Climate specialists consider this conservation option to be the "best strategy for sink maintenance" (Valentini et al., 2000) to the extent that it contributes more effectively to carbon storage and preserves the biodiversity associated with old-growth forests. Numerous forestry activities emit greenhouse gases; these emissions can be curtailed by applying appropriate techniques. · Forest harvesting can cause serious damage to the soil and the forest stand when carried out inappropriately. Reduced impact logging in the context of forest management and harvest plans involves using a set of techniques, such as pre-planning skidding trails, optimizing landings, directional felling, and employing appropriate skidders, which reduce damage to soils, harvested trees and the remaining stand; such damage would heighten mortality and release carbon unnecessarily. · Timber processing also generates a considerable quantity of waste wood, which could either be reduced, or used as a raw material for production or as fuel. Improving the forest industry's efficiency helps limit the amount of wood waste created by the production process. This could be achieved by increasing product yield, reducing residues, or adding production lines which utilize them as parquet, moulding, etc. Using wood wastes in combined heat and power generation (simultaneously generating heat for kiln-drying of wood, energy for running the machines, and electric power for outside use) would reduce emissions and valorize these residues, which can substitute for fossil fuels [3].
Moreover, charcoal production is also a process of widely varying efficiency, depending on the method and techniques used, which could be improved. Using lumber instead of materials requiring large amounts of energy during production helps fight the greenhouse effect, e.g. in replacing concrete or steel constructions by wood as frames, beams, etc. Using 1 m3 of lumber in buildings sequesters 1 ton of CO2 for an average period of 20 years, and reduces net emissions by 0.3 t of CO2 if concrete is replaced, and 1.2 t of CO2 if steel is substituted (see the worked example at the end of this passage). Producing wood for energy purposes mitigates climate change by combining sink action with emissions reduction. Substituting fossil fuels such as coal, natural gas, or oil with fuelwood for domestic use, electricity production, or industrial use, e.g. in iron smelters, reduces CO2 emissions because wood is renewable. The expected sequestration of carbon through the growth of trees after sustainable harvest compensates for the CO2 emitted by combustion. However, this assumes that fuelwood production does not cause irreversible deforestation, i.e. that wood stocks are managed in a sustainable manner. Good management may even increase the productivity of forests and hence their sequestration capacity both in above-ground and below-ground biomass. Different actions related to fuelwood can be taken: · Increasing fuelwood supply by creating new plantations or enhancing productivity of existing forests through forest management. The contribution to climate change mitigation depends on the size and permanence of the carbon pool, and on the fuelwood increment. · Increasing the energy efficiency of fuelwood use and derived products. Charcoal will often replace fuelwood in households. Improving and adapting stoves is necessary in order to raise energy efficiency and avoid the over-exploitation of certain species which have low wood density and burn rapidly. Charcoal contains two to five times more energy than wood by weight. Its use may also improve the distribution of fuelwood resources by reducing transportation costs from distant forest areas [4]. · Increasing the efficiency of charcoal production. In Africa, productivity ratios can be as low as 10 to 15 percent, which corresponds to energy ratios of 20 to 40 percent. There are techniques which can obtain conversion ratios of 25 to 30 percent, or energy ratios of 65 to 80 percent (Girard and Bertrand, 2000). These techniques are particularly important for Africa, where urbanization has caused households to rapidly shift from wood to charcoal [5]. The following table summarizes forestry activities that mitigate the greenhouse effect. Table 1: Forestry activities that mitigate the greenhouse effect. Creation and management of carbon sinks and pools (biomass and soil organic matter in forests): introduction of trees on non-forest or degraded forest lands; conservation of threatened forests; combat against pests and fires; products with long life spans. Reduction of greenhouse gas emissions by sources (emissions resulting from forestry activities or products): reduced impact logging; substitution (avoided emissions); fossil-fuel substitution by fuelwood. In addition to helping protect the environment, forestry activities that mitigate climate change can provide global, regional and local benefits, as long as they are adapted to the local context. · They can offer potential income to rural populations in forest areas. Industrial plantations can generate employment in nursery operations, harvesting, and tending operations.
Community plantation projects may involve direct payments to villagers by an investment fund. · Timber plantation projects, particularly if undertaken in combination with efforts to increase forest industry efficiency, raise competitiveness by adding value to production and processing. They also help supply construction materials adapted to both urban and rural populations. In countries with large wood industries, such as Nigeria, Ivory Coast, Ghana, Cameroon, this could reduce the pressure on their natural forests. · Reduced-impact logging techniques contribute to maintaining sustainable timber production by curbing the forest degradation caused by destructive harvesting. · Multiple-use plantations can contribute to the combat against desertification and erosion in vulnerable areas. Tunisia and several Sahelian countries believe that they can also produce carbon sequestration, provide income and supply fuelwood to rural populations. · Conserving forests is a means of adapting to climate change. It helps provide protection against surface erosion, regulates water flows and limits landslides and rock falls. Forests at the coastline provide protection against wind and water erosion as well as water and sand intrusion. · Improving the management of natural forest ecosystems as a source of fuelwood or charcoal contributes to energy supply at a moderate cost, reducing the country's dependence on fossil fuel imports. Biomass energy development permits decentralized electricity production in areas inadequately served by the national electricity grids. This can be of particular interest to dry areas, especially in the Sahel. Notes: [1] The soil also contains mineral carbon from geological processes. [2] Sugar synthesis from atmospheric CO2 and water in the plants' chlorophyll parts. [3] Waste-wood is the ultimate by-product of timber conversion. Using it for energy results in a net saving of fossil fuels and, therefore, a reduction of CO2 emissions. [4] The degradation of forest resources in Sahelian countries is primarily linked to improper selection of harvesting sites: forest stands close to cities are over-exploited, while more remote sites are underused. Alarming predictions of the 1970s concerning a fuelwood crisis have been partially refuted and the resource turned out to be more abundant and resilient than predicted, with possible exceptions, e.g. Mauritania. [5] The transition to gas or oil is still impeded by low incomes, but the changeover is nonetheless inevitable.
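The lumber-substitution figures quoted above (1 m3 of lumber stores about 1 t of CO2 and avoids roughly 0.3 t against concrete or 1.2 t against steel) translate directly into a small worked example. The sketch below is my own illustration; the 500 m3 project size and the function name are invented, and only the per-m3 factors come from the text.

```python
# Worked example of the substitution figures quoted in the text: carbon
# stored in the lumber plus emissions avoided by displacing concrete or
# steel. The 500 m3 volume is a hypothetical project, not a source figure.
FACTORS_T_CO2_PER_M3 = {"stored": 1.0, "avoided_vs_concrete": 0.3, "avoided_vs_steel": 1.2}

def lumber_benefit(volume_m3, replaces):
    stored = volume_m3 * FACTORS_T_CO2_PER_M3["stored"]
    avoided = volume_m3 * FACTORS_T_CO2_PER_M3["avoided_vs_" + replaces]
    return {"stored_t_CO2": stored, "avoided_t_CO2": avoided, "total_t_CO2": stored + avoided}

print(lumber_benefit(500, "steel"))     # {'stored_t_CO2': 500.0, 'avoided_t_CO2': 600.0, 'total_t_CO2': 1100.0}
print(lumber_benefit(500, "concrete"))  # {'stored_t_CO2': 500.0, 'avoided_t_CO2': 150.0, 'total_t_CO2': 650.0}
```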
http://www.fao.org/docrep/005/ac836e/AC836E03.htm
4.3125
English Worksheets for Second Grade Kids Some simple printable English worksheets for Grade 2 kids. Through these worksheets, kids will learn both common and proper nouns, along with various new words to increase their vocabulary. The main objective of these worksheets is to introduce kids to as many new words as possible to help them improve their writing skills and comprehension ability. These skills will help kids better understand the English language in higher grades. The format of these worksheets is simple, easy and engaging.
http://www.turtlediary.com/grade-2-worksheets/esl-efl-worksheets.html
4.0625
Absorption is the process whereby toxicants gain entrance to the body. Ingested and inhaled materials, however, are considered outside the body until they cross the cellular barriers of the gastrointestinal tract or the respiratory system. To exert an effect on internal organs a toxicant must be absorbed, although local toxicity, such as irritation, may occur. Absorption varies greatly with specific chemicals and with the route of exposure. For skin, oral or respiratory exposure, the exposure dose (or "outside" dose) is usually only a fraction of the absorbed dose (that is, the internal dose). For substances injected or implanted directly into the body, exposure dose is the same as the absorbed or internal dose. Several factors affect the likelihood that a foreign chemical, or xenobiotic, will be absorbed. The most important are: - route of exposure; - concentration of the substance at the site of contact; and - chemical and physical properties of the substance. The relative roles of concentration and properties of the substance vary with the route of exposure. In some cases, a high percentage of a substance may not be absorbed from one route whereas a low amount may be absorbed via another route. For example, very little DDT powder will penetrate the skin whereas a high percentage will be absorbed when it is swallowed. Due to such route-specific differences in absorption, xenobiotics are often ranked for hazard in accordance with the route of exposure. A substance may be categorized as relatively non-toxic by one route and highly toxic via another route. The primary routes of exposure by which xenobiotics can gain entry into the body are: Other routes of exposure, used primarily for specific medical purposes, are: For a xenobiotic to enter the body (as well as move within, and leave the body) it must pass across cell membranes. Cell membranes are formidable barriers and major body defenses that prevent foreign invaders or substances from gaining entry into body tissues. Normally, cells in solid tissues (for example, skin or mucous membranes of the lung or intestine) are so tightly compacted that substances cannot pass between them. Entry, therefore, requires that the xenobiotic have some capability to penetrate cell membranes. Also, the substance must cross several membranes in order to go from one area of the body to another. In essence, for a substance to move through one cell requires that it first move across the cell membrane into the cell, pass across the cell, and then cross the cell membrane again in order to leave the cell. This is true whether the cells are in the skin, the lining of a blood vessel, or an internal organ (for example, the liver). In many cases, in order for a substance to reach its site of toxic action, it must pass through several membrane barriers. A foreign chemical will pass through several membranes before it comes into contact with, and can damage, the nucleus of a liver cell. Cell membranes (often referred to as "plasma membranes") surround all body cells and are basically similar in structure. They consist of two layers of phospholipid molecules arranged like a sandwich (referred to as a "phospholipid bilayer"). Each phospholipid molecule consists of a phosphate head and a lipid tail. The phosphate head is polar; that is, it is hydrophilic (attracted to water). In contrast, the lipid tail is lipophilic (attracted to lipid-soluble substances).
The two phospholipid layers are oriented on opposing sides of the membrane so that they are approximate mirror images of each other. The polar heads face outward and the lipid tails inward in the membrane sandwich. The cell membrane is tightly packed with these phospholipid molecules, interspersed with various proteins and cholesterol molecules. Some proteins span the entire membrane, providing for the formation of aqueous channels or pores. Some toxicants move across a membrane barrier with relative ease while others find it difficult or impossible. Those that can cross the membrane do so by one of two general methods: either passive transfer or facilitated transport. Passive transfer consists of simple diffusion (or osmotic filtration) and is "passive" in that there is no requirement for cellular energy or assistance. Some toxicants cannot simply diffuse across the membrane. They require assistance that is facilitated by specialized transport mechanisms. The primary types of specialized transport mechanisms are: - facilitated diffusion; - active transport; and - endocytosis (phagocytosis and pinocytosis). Passive transfer is the most common way that xenobiotics cross cell membranes. Two factors determine the rate of passive transfer: - differences in concentrations of the substance on opposite sides of the membrane (the substance moves from a region of high concentration to one having a lower concentration; diffusion will continue until the concentration is equal on both sides of the membrane); and - the ability of the substance to move either through the small pores in the membrane or through the lipophilic interior of the membrane. Properties of the chemical substance that affect its ability for passive transfer are: - lipid solubility; - molecular size; and - degree of ionization (that is, the electrical charge of an atom). Substances with high lipid solubility readily diffuse through the phospholipid membrane. Small water-soluble molecules can pass across a membrane through the aqueous pores, along with normal intracellular water flow. Large water-soluble molecules usually cannot make it through the small pores, although some may diffuse through the lipid portion of the membrane, but at a slow rate. In general, highly ionized chemicals have low lipid solubility and pass with difficulty through the lipid membrane. Most aqueous pores are about 4 ångström (Å) in size and allow chemicals of molecular weight 100-200 to pass through. Exceptions are the membranes of capillaries and kidney glomeruli, which have relatively large pores (about 40 Å) that allow molecules up to a molecular weight of about 50,000 (molecules slightly smaller than albumin, which has a molecular weight of 60,000) to pass through. Facilitated diffusion is similar to simple diffusion in that it does not require energy and follows a concentration gradient. The difference is that it is a carrier-mediated transport mechanism. The results are similar to passive transport but faster and capable of moving larger molecules that have difficulty diffusing through the membrane without a carrier. Examples are the transport of sugar and amino acids into red blood cells (RBCs), and into the central nervous system (CNS). Some substances are unable to move by diffusion, unable to dissolve in the lipid layer, and are too large to pass through the aqueous channels.
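The two rate factors just listed (the concentration gradient and how readily the substance moves through pores or the lipid interior) are commonly summarized by Fick's first law of diffusion. The sketch below is my own illustration, not part of the tutorial; the permeability coefficients and concentrations are invented.

```python
# Minimal illustration of Fick's first law for passive diffusion across a
# membrane: flux is proportional to the concentration difference and to a
# permeability coefficient that lumps together lipid solubility, molecular
# size, and ionization. All numeric values are invented.
def diffusion_flux(permeability_cm_per_s, conc_outside, conc_inside):
    """Flux (amount per cm^2 per second) across the membrane, outside -> inside."""
    return permeability_cm_per_s * (conc_outside - conc_inside)

# A lipid-soluble toxicant (higher permeability) versus a highly ionized one
# (lower permeability) facing the same concentration gradient.
print(diffusion_flux(1e-4, 10.0, 0.0))  # 0.001   crosses readily
print(diffusion_flux(1e-8, 10.0, 0.0))  # 1e-07   crosses with difficulty
```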
For some of these substances, active transport processes exist in which movement through the membrane may be against the concentration gradient: they move from low to higher concentrations. Cellular energy from adenosine triphosphate (ATP) is required in order to accomplish this. The transported substance can move from one side of the membrane to the other side by this energy process. Active transport is important in the transport of xenobiotics into the liver, kidney, and central nervous system and for maintenance of electrolyte and nutrient balance. Many large molecules and particles cannot enter cells via passive or active mechanisms. However, some may still enter by a process known as endocytosis. In endocytosis, the cell surrounds the substance with a section of its cell membrane. This engulfed substance and section of membrane then separate from the membrane and move into the interior of the cell. The two main forms of endocytosis are phagocytosis and pinocytosis. In phagocytosis (cell eating), large particles suspended in the extracellular fluid are engulfed and either transported into cells or are destroyed within the cell. This is a very important process for lung phagocytes and certain liver and spleen cells. Pinocytosis (cell drinking) is a similar process but involves the engulfing of liquids or very small particles that are in suspension within the extracellular fluid. The gastrointestinal tract (GI tract, the major portion of the alimentary canal) can be viewed as a tube going through the body. Its contents are considered exterior to the body until absorbed. Salivary glands, the liver, and the pancreas are considered accessory glands of the GI tract as they have ducts entering the GI tract and secrete enzymes and other substances. For foreign substances to enter the body, they must pass through the gastrointestinal mucosa, crossing several membranes before entering the blood stream. Substances must be absorbed from the gastrointestinal tract in order to exert a systemic toxic effect, although local gastrointestinal damage may occur. Absorption can occur at any place along the entire gastrointestinal tract. However, the degree of absorption is strongly site-dependent. Three main factors affect absorption within the various sites of the gastrointestinal tract: - type of cells at the specific site; - period of time that the substance remains at the site; and - pH of stomach or intestinal contents at the site. Under normal conditions, xenobiotics are poorly absorbed within the mouth and esophagus, due mainly to the very short time that a substance resides within these portions of the gastrointestinal tract. There are some notable exceptions. For example, nicotine readily penetrates the mouth mucosa. Also, nitroglycerin is placed under the tongue (sublingual) for immediate absorption and treatment of heart conditions. The sublingual mucosa under the tongue and in some other areas of the mouth is thin and highly vascularized so that some substances will be rapidly absorbed. The stomach, having high acidity (pH 1-3), is a significant site for absorption of weak organic acids, which exist in a diffusible, nonionized and lipid-soluble form. In contrast, weak bases will be highly ionized and therefore are absorbed poorly. Chemically, the acidic stomach may break down some substances. For this reason those substances must be administered in gelatin capsules or coated tablets that can pass through the acidic stomach into the intestine before they dissolve and release their contents.
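The pH dependence of ionization described for the stomach (and, below, for the intestine) is usually quantified with the Henderson-Hasselbalch relationship. The sketch below is my own illustration, not part of the tutorial; the pKa of 3.5 stands for a generic weak acid, not a specific compound.

```python
# Fraction of a weak acid in its nonionized (diffusible, lipid-soluble) form
# at a given pH, from the Henderson-Hasselbalch relationship. The pKa used
# in the example is an arbitrary value for a generic weak acid.
def nonionized_fraction_weak_acid(pH, pKa):
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for site, pH in [("stomach (pH 2)", 2.0), ("small intestine (pH 6.5)", 6.5)]:
    frac = nonionized_fraction_weak_acid(pH, pKa=3.5)
    print(f"{site}: {frac:.1%} nonionized")
# stomach (pH 2): 96.9% nonionized; small intestine (pH 6.5): 0.1% nonionized
```

The same relationship, with the exponent reversed, shows why weak bases are largely ionized, and therefore poorly absorbed, in the acidic stomach.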
Another determinant that affects the amount of a substance that will be absorbed in the stomach is the presence of food. Food ingested at the same time as the xenobiotic may result in a considerable difference in absorption of the xenobiotic. For example, the LD50 for Dimethline (a respiratory stimulant) in rats is 30 mg/kg (or 30 parts per million) when ingested along with food, but only 12 mg/kg when it is administered to fasting rats. The greatest absorption of chemicals, as with nutrients, takes place in the intestine, particularly in the small intestine. The intestine has a large surface area consisting of outward projections of the thin (one-cell thick) mucosa into the lumen of the intestine (the villi). This large surface area facilitates diffusion of substances across the cell membranes of the intestinal mucosa. Since the intestinal pH is near neutral (pH 5-8), both weak bases and weak acids are nonionized and are usually readily absorbed by passive diffusion. Lipid-soluble, small molecules effectively enter the body from the intestine by passive diffusion. In addition to passive diffusion, facilitated and active transport mechanisms exist to move certain substances across the intestinal cells into the body, including such essential nutrients as glucose, amino acids and calcium. Also, strong acids, strong bases, large molecules, and metals (and some important toxins) are transported by these mechanisms. For example, lead, thallium, and paraquat (herbicide) are toxicants that are transported across the intestinal wall by active transport systems. The high degree of absorption of ingested xenobiotics is also due to the slow movement of substances through the intestinal tract. This slow passage increases the length of time that a compound is available for absorption at the intestinal membrane barrier. Intestinal microflora and gastrointestinal enzymes can affect the toxicity of ingested substances. Some ingested substances may be only poorly absorbed but they may be biotransformed within the gastrointestinal tract. In some cases, their biotransformed products may be absorbed and be more toxic than the ingested substance. An important example is the formation of carcinogenic nitrosamines from non-carcinogenic amines by intestinal flora. Very little absorption takes place in the colon and rectum. As a general rule, if a xenobiotic has not been absorbed after passing through the stomach or small intestine, very little further absorption will occur. However, there are some exceptions, as some medicines may be administered as rectal suppositories with significant absorption. An example is Anusol (a hydrocortisone preparation) used for treatment of local inflammation, which is partially absorbed (about 25%). Many environmental and occupational agents as well as some pharmaceuticals are inhaled and enter the respiratory tract. Absorption can occur at any place within the upper respiratory tract. However, the amount of a particular xenobiotic that can be absorbed at a specific location is highly dependent upon its physical form and solubility. There are three basic regions to the respiratory tract: - nasopharyngeal region; - tracheobronchial region; and - pulmonary region. By far the most important site for absorption is the pulmonary region, consisting of the very small airways (bronchioles) and the alveolar sacs of the lung. The alveolar region has a very large surface area (about 50 times that of the skin).
In addition, the alveoli consist of only a single layer of cells with very thin membranes that separate the inhaled air from the blood stream. Oxygen, carbon dioxide and other gases pass readily through this membrane. In contrast to absorption via the gastrointestinal tract or through the skin, gases and particles that are water-soluble (and thus blood-soluble) will be absorbed more efficiently from the lung alveoli. Water-soluble gases and liquid aerosols can pass through the alveolar cell membrane by simple passive diffusion. In addition to solubility, the ability to be absorbed is highly dependent on the physical form of the agent (that is, whether the agent is a gas/vapor or a particle). The physical form determines penetration into the deep lung. A gas or vapor can be inhaled deep into the lung and if it has high solubility in the blood, it is almost completely absorbed in one respiration. Absorption through the alveolar membrane is by passive diffusion, following the concentration gradient. As the agent dissolves in the circulating blood, it is taken away so that the amount that is absorbed and enters the body may be quite large. The only way to increase the amount absorbed is to increase the rate and depth of breathing. This is known as ventilation-limitation. For blood-soluble gases, equilibrium between the concentration of the agent in the inhaled air and that in the blood is difficult to achieve. Inhaled gases or vapors that have poor solubility in the blood have quite limited capacity for absorption. The reason for this is that the blood can become quickly saturated. Once saturated, blood will not be able to accept the gas, and it will remain in the inhaled air and then be exhaled. The only way to increase absorption would be to increase the rate of blood supply to the lung. This is known as flow-limitation. Equilibrium between blood and the air is reached more quickly for relatively insoluble gases than for soluble gases. The absorption of airborne particles is usually quite different from that of gases or vapors. The absorption of solid particles, regardless of solubility, is dependent upon particle size. Large particles (>5 µm) are generally deposited in the nasopharyngeal region (head airways region) with little absorption. Particles 2-5 µm can penetrate into the tracheobronchial region. Very small particles (<1 µm) are able to penetrate deep into the alveolar sacs where they can deposit and be absorbed. Minimal absorption takes place in the nasopharyngeal region due to the cell thickness of the mucosa and the rapid movement of gases and particles through the region. Within the tracheobronchial region, relatively soluble gases can quickly enter the blood stream. Most deposited particles are moved back up to the mouth where they are swallowed. Absorption in the alveoli is quite efficient compared to other areas of the respiratory tract. Relatively soluble material (gases or particles) is quickly absorbed into systemic circulation. Pulmonary macrophages exist on the surface of the alveoli. They are not fixed and not a part of the alveolar wall. They can engulf particles just as they engulf and kill microorganisms. Some non-soluble particles are scavenged by these alveolar macrophages and cleared into the lymphatic system. The nature of toxicity of inhaled materials depends on whether the material is absorbed or remains within the alveoli and small bronchioles.
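The particle-size rules of thumb above can be written down directly as a small lookup. The snippet below is my own codification, not from the tutorial; sizes between 1 and 2 µm are not assigned a region in the text, so the code leaves them as intermediate.

```python
# Map an aerodynamic particle diameter (micrometres) to the region of the
# respiratory tract where it is most likely to deposit, per the text:
# >5 um nasopharyngeal, 2-5 um tracheobronchial, <1 um alveolar.
def deposition_region(diameter_um):
    if diameter_um > 5:
        return "nasopharyngeal region (little absorption)"
    if diameter_um >= 2:
        return "tracheobronchial region"
    if diameter_um < 1:
        return "alveolar region (deposition and possible absorption)"
    return "intermediate (1-2 um: not assigned a single region in the text)"

for d in (10, 3, 0.5):
    print(d, "um ->", deposition_region(d))
```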
If the agent is absorbed and is also lipid soluble, it can rapidly distribute throughout the body, passing through the cell membranes of various organs or into fat depots. The time to reach equilibrium is even greater for the lipid-soluble substances. Chloroform and ether are examples of lipid-soluble substances with high blood solubility. Non-absorbed foreign material can also cause severe toxic reactions within the respiratory system. This may take the form of chronic bronchitis, alveolar breakdown (emphysema), fibrotic lung disease, and even lung cancer. In some cases, the toxic particles can kill the alveolar macrophages, which results in a lowering of the body's respiratory defense mechanism. In contrast to the thin membranes of the respiratory alveoli and the gastrointestinal villi, the skin is a complex, multilayer tissue. For this reason, it is relatively impermeable to most ions as well as aqueous solutions. It represents, therefore, a barrier to most xenobiotics. Some notable toxicants, however, can gain entry into the body following skin contamination. For example, certain commonly used organophosphate pesticides have poisoned agricultural workers following dermal exposure. The neurological warfare agent Sarin readily passes through the skin and can produce quick death in exposed persons. Several industrial solvents can cause systemic toxicity by penetration through the skin. For example, carbon tetrachloride penetrates the skin and causes liver injury. Hexane can pass through the skin and cause nerve damage. The skin consists of three main layers of cells: - epidermis; - dermis; and - subcutaneous tissue. The epidermis (and particularly the stratum corneum) is the only layer that is important in regulating penetration of a skin contaminant. It consists of an outer layer of cells, packed with keratin, known as the stratum corneum layer. The stratum corneum is devoid of blood vessels. The cell walls of the keratinized cells are apparently double in thickness due to the presence of the keratin, which is chemically resistant and an impenetrable material. The blood vessels are usually about 100 µm from the skin surface. To enter a blood vessel, an agent must pass through several layers of cells that are generally resistant to penetration by chemicals. The thickness of the stratum corneum varies greatly with regions of the body. The stratum corneum of the palms and soles is very thick (400-600 µm) whereas that of the arms, back, legs, and abdomen is much thinner (8-15 µm). The stratum corneum of the axillary (underarm) and inguinal (groin) regions is the thinnest, with the scrotum especially thin. As expected, the efficiency of penetration of toxicants is inversely related to the thickness of the epidermis. Any process that removes or damages the stratum corneum can enhance penetration of a xenobiotic. Abrasion, scratching, or cuts to the skin will make it more penetrable. Some acids, alkalis, and corrosives can injure the stratum corneum and increase penetration of themselves or other agents. The most prevalent skin conditions that enhance dermal absorption are skin burns and dermatitis. Toxicants move across the stratum corneum by passive diffusion. There are no known active transport mechanisms functioning within the epidermis.
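Because transport across the stratum corneum is passive diffusion, dermal uptake is often approximated as a steady-state product of a permeability coefficient, the concentration on the skin, the exposed area and the exposure time. The sketch below is my own simplification, not taken from the tutorial; the permeability coefficient, concentration, area and duration are all invented illustration values.

```python
# Rough steady-state estimate of a passively absorbed dermal dose:
# dose ~ Kp (cm/hr) x concentration (mg/cm3) x area (cm2) x time (hr).
# Every number below is an invented illustration value.
def dermal_dose_mg(kp_cm_per_hr, conc_mg_per_cm3, area_cm2, hours):
    return kp_cm_per_hr * conc_mg_per_cm3 * area_cm2 * hours

# Example: a solvent at 0.8 mg/cm3 covering both hands (~800 cm2) for 1 hour,
# with a hypothetical permeability coefficient of 0.01 cm/hr.
print(round(dermal_dose_mg(0.01, 0.8, 800, 1.0), 2))  # 6.4 mg
```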
Polar and nonpolar toxicants diffuse through the stratum corneum by different mechanisms. Polar compounds (which are water-soluble) appear to diffuse through the outer surface of the hydrated keratinized layer. Nonpolar compounds (which are lipid-soluble) dissolve in and diffuse through the lipid material between the keratin filaments. Water plays an important role in dermal absorption. Normally, the stratum corneum is partially hydrated (~7% by weight). Penetration of polar substances is about 10 times as effective as when the skin is completely dry. Additional hydration can increase penetration by 3-5 times which further increases the ability of a polar compound to penetrate the epidermis. A solvent sometimes used to promote skin penetration of drugs is dimethyl sulfoxide (DMSO). It facilitates penetration of chemicals by an unknown mechanism. Removal of the lipid material creates holes in the epidermis. This results in a reversible change in protein structure due to substitution of water molecules. Considerable species differences exist in skin penetration and can influence the selection of species used for safety testing. Penetration of chemicals through the skin of the monkey, pig, and guinea pig is often similar to that of humans. The skin of the rat and rabbit is generally more permeable whereas the skin of the cat is generally less permeable. For practical reasons and to assure adequate safety, the rat and rabbit are normally used for dermal toxicity safety tests. In addition to the stratum corneum, small amounts of chemicals may be absorbed through the sweat glands, sebaceous glands, and hair follicles. Since these structures represent, however, only a very small percentage of the total surface area, they are not ordinarily important in dermal absorption. Once a substance penetrates through the stratum corneum, it enters lower layers of the epidermis, the dermis, and subcutaneous tissue. These layers are far less resistant to further diffusion. They contain a porous, nonselective aqueous diffusion medium, that can be penetrated by simple diffusion. Most toxicants that have passed through the stratum corneum can now readily move on through the remainder of the skin and enter the circulatory system via the large numbers of venous and lymphatic capillaries in the dermis. Other Routes of Exposure In addition to the common routes of environmental, occupational, and medical exposure (oral, respiratory, and dermal), other routes of exposure may be used for medical purposes. Many pharmaceuticals are administered by parenteral routes. That is, by injection into the body usually via syringe and hollow needle. Intradermal injections are made directly into the skin, just under the stratum corneum. Tissue reactions are minimal and absorption is usually slow. If the injection is beneath the skin, the route is referred to as a subcutaneous injection. Since the subcutaneous tissue is quite vascular, absorption into the systemic circulation is generally rapid. Tissue sensitivity is also high and thus irritating substances may induce pain and an inflammatory reaction. Many pharmaceuticals, especially antibiotics and vaccines are administered directly into muscle tissue (the intramuscular route). It is an easy procedure and the muscle tissue is less likely to become inflamed compared to subcutaneous tissue. Absorption from muscle is about the same as from subcutaneous tissue. 
Substances may be injected directly into large blood vessels when they are irritating or when an immediate action is desired, such as anesthesia. These are known as intravenous or intraarterial routes depending on whether the vessel is a vein or artery. Parenteral injections may also be made directly into body cavities, rarely in humans but frequently in laboratory animal studies. Injection into the abdominal cavity is known as intraperitoneal injection. If it is injected directly into the chest cavity, it is referred to as an intrapleural injection. Since the pleura and peritoneum have minimal blood vessels, irritation is usually minimal and absorption is relatively slow. Implantation is another route of exposure of increasing concern. A large number of pharmaceuticals and medical devices are now implanted in various areas of the body. Implants may be used to allow slow, time-release of a substance (e.g., hormones). In other cases, no absorption is desired. For example, for implanted medical devices and materials (e.g., artificial lens, tendons and joints, and cosmetic reconstruction). Some materials enter the body via skin penetration as the result of accidents or violence (for example, weapons). The absorption in these cases is highly dependent on the nature of the substance. Metallic objects (such as bullets) may be poorly absorbed whereas more soluble materials that are thrust through the skin and into the body from accidents may be absorbed rapidly into the circulation. Novel methods of introducing substances into specific areas of the body are often used in medicine. For example, conjunctival instillations (eye drops) are used for treatment of ocular conditions where high concentrations are needed on the outer surface of the eye, not possible by other routes. Therapy for certain conditions require that a substance be deposited in body openings where high concentrations and slow release may be needed while keeping systemic absorption to a minimum. For these substances, the pharmaceutical agent is suspended in a poorly absorbed material such as beeswax with the material known as a suppository. The usual locations for use of suppositories are the rectum and vagina. National Library of Medicine Toxicology Tutor II, Toxicogenetics, Adsorption Disclaimer: This article is taken wholly from, or contains information that was originally published by, the National Library of Medicine. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the National Library of Medicine should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
http://www.eoearth.org/article/Absorption_of_toxicants
4.09375
Middle Level-Jr. High School Visual literacy strategies and classroom-ready lessons to build student content mastery and skills in US History for all learners. A Library of Congress Teaching with Primary Sources Project. It's 1492. What would Columbus tweet (WWCT)? What would the Arawak tweet? It's now 1516. Genocide is in full effect. What would Bartolomé de Las Casas tweet? Learn about new educational programs at the David M. Rubenstein National Center for White House History at Decatur House and explore new resources for teachers and students. Participants will learn two effective strategies focusing on one essential question: Did the South have the right to secede? Strategies that are used separately are combined to facilitate a group activity. Specific interactive instructional strategies and activities utilized at the beginning of a curricular unit to reinforce content utilizing social studies vocabulary. Pre-service and current elementary school teachers in a graduate social studies course utilized digital storytelling to document and accurately portray the first-person narratives of international students from multiple countries. The popular series of books (and first movie) will be examined for ways that they can be used to teach and engage students in social studies concepts, strategies, and topics. Project-based learning unites with Social Studies in this intriguing web-based inquiry as students communicate and compete with players from around the world in the global Landmark Games. Develop student global citizenship by linking the landscapes, peoples and cultures of two iconic mountains of the Pacific. Gain exciting, cross-curricular activities focused on Mt. Fuji and Mt. Rainier. Examine historical sites in Turkey to teach timelines in this lesson-to-go. Students will be introduced to major civilizations and come to understand the importance of location.
http://www.ncss.org/middleleveljrhighschool?page=3
4.125
Field of view Your field of view is that part of the observable world that you are able to see at any given moment. Different animals have different fields of view, depending on the placement of the eyes. Humans have a 180-degree forward-facing field of view, while some birds have a complete 360-degree field of view. In addition, the vertical range of the field of view may vary. The range of visual abilities is not uniform across a field of view, and varies from animal to animal. For example, binocular vision, which is important for depth perception, only covers 140 degrees of the field of vision in humans; the remaining peripheral 40 degrees have no binocular vision (because of the lack of overlap in the images from either eye for those parts of the field of view). The afore-mentioned birds would have a scant 10 or 20 degrees of binocular vision. Similarly, color vision and the ability to perceive motion and shape vary across the field of view; in humans the former is concentrated in the center of the visual field, while the latter tends to be much stronger in the periphery. This is due to the much higher concentration of color-sensitive cone cells in the macula, the central region of the retina, as compared to the higher concentration of motion-sensitive rod cells in the periphery. Since cone cells require considerably brighter light sources to be activated, the result of this distribution is that peripheral vision is relatively much stronger at night. Different neurological difficulties cause characteristic forms of visual disturbances.
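The human figures above can be sanity-checked with one line of arithmetic: the monocular-only margin is the total field minus the binocular overlap, split between the two sides. The snippet below is my own illustration; the function name and the bird example values are assumptions chosen from within the ranges quoted.

```python
# Degrees of field seen by only one eye, per side, given a total field of
# view and the binocular overlap. Example values follow the figures quoted.
def monocular_margin_per_side(total_fov_deg, binocular_deg):
    return (total_fov_deg - binocular_deg) / 2.0

print(monocular_margin_per_side(180, 140))  # human: 20.0 degrees per side
print(monocular_margin_per_side(360, 20))   # a 360-degree bird with ~20 degrees of binocular overlap: 170.0 per side
```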
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Field_of_view