Climate researchers working for NASA have determined the International Maritime Organization’s (IMO) 2020 sulfur fuel cap has succeeded in cutting pollution from ocean shipping. But it isn’t all good news.
A team of scientists led by Tianle Yuan determined lower sulfur content in heavy maritime fuels reduced and sometimes eliminated entirely a cloud phenomenon known as ship tracks.
Ship tracks are low-level clouds that can form in the sky and follow a ship sailing from one place to another. These so-called anomalous clouds perform a surprising function: Like other clouds, they reflect the sun’s energy away from the Earth, thereby helping to cool the planet.
Less sulfur in fuel means better air quality, particularly in coastal areas near shipping lanes and ports. But fewer ship tracks mean less reflected sunlight, which could contribute to global warming. By how much is unknown, according to Yuan, a professor at the University of Maryland, Baltimore County and a NASA scientist.
“In terms of climate change and the Earth’s temperature, we will have some temporary warming effect,” he told Professional Mariner. “The trouble is we don’t exactly know how much of a warming effect.”
Yuan and nine other scientists published an article on this phenomenon in July in the journal Science.
Ship tracks form when water vapors in the air coalesce around tiny particles of pollution emitted by ship exhaust, Yuan explained. These long, linear cloud formations reflect more light, thus making them appear brighter than clouds seeded by non-pollution sources such as sea salt.
Scientists first discovered ship tracks in the 1960s when reviewing data from early satellites orbiting the Earth. A couple decades later, researchers figured out that ship tracks are useful for studying clouds and their broader effects on the Earth’s climate.
Studying these relatively small cloud formations over the entire Earth is no easy task. Yuan and his team developed an advanced algorithm to review satellite ship track data across the world between 2003 and 2020. When they reviewed results from 2020, the team noticed a surprising decline in ship track formations in every major shipping lane. They initially theorized the decline stemmed from a drop in global shipping during the Covid-19 pandemic. But that proved incorrect, as global maritime traffic barely changed during that period.
Later, they learned about the IMO regulation capping sulfur content in maritime fuel at 0.5 percent that took effect in 2020. The rule cut sulfur levels by 86 percent, changing the composition of exhaust leaving the stacks and sharply reducing the volume of sulfur particles that help create ship tracks.
“We didn’t even know the regulation that took effect in 2020 because it was totally unrelated to us and our research,” Yuan said.
“So, in addition to the climate science piece, this itself is interesting because it shows the regulation that changed in 2020 must be working because otherwise, we wouldn’t see that global change in ship tracks all of a sudden,” he added. “And it extends to 2021, too.”
The algorithm identified other interesting changes in trans-Pacific shipping over the years, including reduced ship track activity during the Great Recession starting in 2008 and later in the mid-2010s as China reduced imports of certain commodities.
Perhaps more significantly, the researchers found that earlier regulatory changes around sulfur emissions, such as the establishment of IMO Emission Control Areas on the U.S. West Coast, did not have a similar effect on ship tracks. Ocean carriers, the researchers determined, responded by changing their routing to avoid coastal zones where they are required to burn more expensive ultra-low-sulfur fuels.
An IMO spokeswoman said the agency welcomed the research confirming the success of the 2020 IMO sulfur requirements.
The broader effects of the 2020 fuel regulations are still unknown and likely will be for some time. But much of the world falls outside the emission control areas, meaning reduced sulfur output will lower pollution levels and improve air quality for tens of millions of people living near shipping lanes and ports around the world.
The longer-term question about how a decline in ship tracks will impact global climate and temperatures, if at all, is something that will take far more study to determine.
“Ship tracks are great natural laboratories for studying the interaction between aerosols and low clouds, and how that impacts the amount of radiation Earth receives and reflects back to space,” Yuan said in a statement released by NASA. “That is a key uncertainty we face in terms of what drives climate right now.”
VO2 max Basics
One of the most basic physiological indicators of your fitness and running performance is your VO2 max. Here are some VO2 max basics. VO2 max or maximal oxygen uptake is a measure of how much oxygen your body can process to produce energy. It is measured in milliliters of oxygen per kilogram of body weight per minute – ml/kg/min. When you increase your running speed your body demands more and more energy to keep you on pace. To produce all of that energy your body uses up a lot of oxygen. Eventually you reach the level at which your body maxes out its ability to deliver and extract oxygen. At that point your oxygen consumption has reached its peak and remains mostly steady. At that level of exercise you have reached your VO2 max or maximal oxygen uptake. Any increases in your running pace or exercise intensity, past your VO2 max, must be fueled by anaerobic (without oxygen) energy sources.
The concept of VO2 max got its start back in the early 1920s thanks to the efforts of two physiologists – A.V. Hill and Hartley Lupton. Hill and Lupton were the first to suggest that all energy came from either aerobic (with oxygen) or anaerobic (without oxygen) energy sources. Their ideas gave rise to the current popular theories of aerobic and anaerobic energy production for runners.
Components of VO2 max
There are two principal physical components that determine your VO2 max:
- A Big, Efficient Pump – First and foremost your VO2 max depends upon the ability of your cardiovascular system to deliver oxygen rich blood to your working muscles. High stroke volume (amount of blood moved through your heart with each beat), large, elastic veins and arteries capable of carrying the blood flow and a high maximal heart rate all contribute to an elevated VO2 max.
- Good Chemistry – Once the oxygenated blood reaches your muscles, they must be able to extract and use the oxygen to produce energy. Aerobic energy production takes place in structures called mitochondria in your muscle cells. A muscle that is more densely packed with mitochondria will be able to extract and use more oxygen. In addition to more mitochondria there are a number of muscle enzymes that help extract and use the oxygen. Both mitochondrial density and the availability of appropriate enzymes are increased through proper training.
VO2 max and Running Performance
In the past it was believed that VO2 max was the primary determining factor in running performance. Scientists and coaches in those days thought that a higher VO2 max translated directly to higher levels of running performance. Their thinking was logical – if you can process greater volumes of oxygen you should perform better as a runner. But, when statistics from elite runners were analyzed it appeared that it wasn’t that simple. When comparing the measured VO2 max of top level athletes to their running performances it was discovered that there was little correlation between VO2 max and performance.
Today we know that VO2 max is only one contributing factor in running performance. Other factors include:
- Running Economy – How efficient you are at running has a huge influence on your running performance. If two athletes with identical VO2 max levels were to compete against each other, the runner with the better running economy would be victorious, because they are able to run faster while using less energy.
- Muscle Elasticity – This is a measure of how much energy your muscles will return. Your muscles are like big springs. During the stance or foot strike phase of your running stride your muscles eccentrically contract and store a lot of energy. A highly elastic muscle will store and return a high percentage of that energy during the push-off phase of your stride. A more elastic muscle returns more energy and gives you more power in your stride. You run easier and faster while using less energy.
Improving VO2 max
In the past, increasing training volume was the workout of choice for improving VO2 max. While that type of training does increase VO2 max, the most recent studies agree that high-intensity interval training does the best job of elevating VO2 max. Genetics plays a role in your VO2 max, but you can improve it by up to 60% through training. Most increases in VO2 max take place in the first 8 to 12 weeks of training.
Estimating Your VO2 max Pace
Running at your VO2 max pace is a very efficient workout for improving your speed and speed endurance. The most accurate way to determine your VO2 max pace is through a laboratory test. While a lab test is the most accurate method, it is also expensive and not a realistic option for most runners. You can estimate your VO2 max pace with a 6-minute time trial on a track.
6 minute time trial
To perform this test go to your local running track and warm up thoroughly. Then run for 6 minutes at the fastest pace you can maintain for the entire test. Try to cover as much ground as possible in those 6 minutes. You can now determine your vVO2 max (velocity at VO2 max or VO2 max pace) using the following formula:
Distance covered in meters / 360 (the number of seconds in 6 minutes) = meters covered per second
For example, if you covered 1600 meters in 6 minutes then your vVO2 max would be 1600/360 = 4.44 meters per second.
To convert your meters per second pace to a 400 meter pace divide 400 by meters/second. In this example the equation would be 400/4.44 = 90 seconds. Your VO2 max pace would then be 90 seconds per 400 meters.
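If you would rather not do the division by hand, the calculation is easy to script. Here is a minimal sketch in Python (the function name and structure are our own illustration, not part of any standard tool):

```python
def vvo2max_pace(distance_m, test_seconds=360):
    """Estimate vVO2 max from a 6-minute (360-second) time trial.

    distance_m: meters covered in the trial.
    Returns (speed in meters per second, seconds per 400 m lap).
    """
    speed = distance_m / test_seconds  # meters covered per second
    lap_seconds = 400 / speed          # time to cover one 400 m lap
    return speed, lap_seconds

speed, lap = vvo2max_pace(1600)
print(f"{speed:.2f} m/s, {lap:.0f} seconds per 400 m")  # 4.44 m/s, 90 seconds per 400 m
```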
To make this test more accurate, do three of these tests and average the pace of the three. Don’t do the three tests on consecutive days. Allow a couple of days of recovery between each test.
You can also do a rough estimate of your vVO2 max by using your 3K pace or about 10 seconds per mile faster than your 5K race pace.
VO2 max is a complex subject. These VO2 max basics are just scratching the surface. We will be publishing more VO2 max basics in the future.
Physics & Chemistry
The physical and chemical conditions of the Scheldt estuary are studied intensively in Flanders and the Netherlands. These conditions are closely related to external influences such as climate change and pollution, as well as to the morpho- and hydrodynamics and ecological functioning of the estuary.
Water quality is determined by substance flows (exchange of substances via various processes) that occur due to the interaction between the water column and the bottom, the land, the air, and the organisms within. Parameters used to describe the physical-chemical condition can vary strongly in time and space. Therefore, monitoring the physical-chemical system includes sampling the surface water, the water column, and the bottom at different locations in the estuary. In addition, continuous measurements at stations are combined with measurements from periodic boat trips. Standard physical parameters such as temperature and salinity are measured, but also oxygen concentration, light climate, nutrients, and concentrations of substances that indicate pollution.
The Scheldt basin has had poor water quality for decades due to the intense exploitation of the Scheldt and its estuary. This caused a strong decline in plankton, benthos and fish stock diversity. In recent years, various efforts have been made to combat this decline. For example, more wastewater is purified and new mudflats and saltmarshes are created, which supply oxygen and filter excess nitrogen out of the water. These efforts have led to a slight improvement of the water quality and restoration of nature.
Moving around the world was just as common in the time of your ancestors as it is today. Between 1836 and 1914 millions of Europeans migrated to the United States in search of jobs and a better life. Trace the whereabouts of your emigrant ancestors in our migration records and passenger lists.
The United States was founded by people who set out to discover new lands, new opportunities and a better way of life. Between 1836 and 1914 millions of Europeans migrated to the United States, despite the harrowing fact that one in seven travelers died during the transatlantic voyage.
Our ancestors brought with them family, friends, and a wealth of culture that contributed to the melting pot that is American society today.
The history of immigration in the U.S. is divided into four major periods of mass migration. During the colonial period in the 17th century, about 400,000 people from England emigrated to the burgeoning New World. More than half of all European immigrants to Colonial America during the 1600s and 1700s arrived as indentured servants. The mid-19th century brought more immigrants from Northern Europe. In the early 20th century most new immigrants came from Southern and Eastern Europe. Then, in the middle of the 20th century, most immigrants came from Latin America and Asia.
By the year 1910, 13.5 million immigrants lived in the United States. Congress passed the Emergency Quota Act in 1921, followed by the Immigration Act of 1924, which was designed to restrict immigration from Southern and Eastern Europe.
Although immigrants have never been required to apply for citizenship, any foreign-born resident may apply for U.S. citizenship privileges and responsibilities. The process for U.S. citizenship often took many years and the application to become "naturalized" was an early step in the process.
Naturalization papers are important resources that provide lots of information about an immigrant.
This category contains passenger lists, naturalization records, and lists of immigrants from Russia, Italy, Ireland and Germany.
Passenger lists are an invaluable source for those with migratory ancestors.
Our Transatlantic Migration Index contains the names of over 40,000 individuals who travelled from North America to Great Britain and Ireland between 1858 and 1870. Ports in Great Britain and Ireland were required to record these details due to concerns over American support for an uprising by the Fenians in Ireland.
Irish emigrants continued to pass through Irish and British ports well into the late 19th and 20th centuries. The Passenger Lists 1890 – 1960 cover departures from many British and Irish ports to destinations worldwide. Included in these records are the passenger lists for the RMS Titanic and the thousands of immigrants who disembarked at Ellis Island after 1892.
From 1606 people emigrated from England to countries such as the United States, India, Canada, Australia, South Africa, and New Zealand. Emigration increased after 1815 when it became a means of poor relief. Emigration also increased during gold rushes in Australia, New Zealand, South Africa, and the United States. Emigration from England peaked in the 1880s. Records were not required for free emigrants to the United States until 1776; Canada before 1865; or Australia, New Zealand, and South Africa until the 20th century.
About 1855, passports were a standard document issued only to British nationals. They were in the form of a single-sheet paper document. The Aliens Act 1905 marked the beginnings of immigration control in Britain. It was aimed at preventing paupers and criminals from entering the country, and it introduced immigration controls and registration.
Following the British Nationality and Status Aliens Act 1914, passports came to include a photograph and physical description of the holder. Our Passenger Lists, available in partnership with The National Archives, hold the details of over 24 million passengers leaving the UK on long-haul voyages between 1890 and 1960. Use our Passenger Lists in combination with the Register of Passport Applications, and our other migration records, to learn more about the movements of your 19th and 20th century ancestors.
Permanent European settlement in Australia started in 1788 with the establishment of the British Crown colony of New South Wales. From around 1815, the colony began to expand rapidly as free settlers arrived from Britain and Ireland. Transportation of convicts was stopped in 1840, although it continued to Van Diemen’s Land and Moreton Bay (which later became Queensland) for several years more. Perth in Western Australia didn’t prosper and requested convicts. South Australia was the only Australian colony settled solely by free settlers.
With over 310,000 records covering 1788 to the late 1800s across Australia and New Zealand, our travel and migration records are a great genealogy tool for learning more about your ancestry.
The Convict Arrivals in New South Wales is built from government indent records and holds the details of 97,797 convicts who arrived in New South Wales between 1788 and 1842. The records include Mary Bryant, a Cornish convict sent to Australia, who became one of the first successful escapees from the Australian penal colony, with James Boswell helping her case in England. Thomas Muir, the Scottish Political Reformist, was transported to Australia for 14 years for attempting to change the political system in Britain, and was involved in political reform in the US, France and Ireland.
The Queensland Early Pioneers Index 1824-1859 is an invaluable resource for family genealogists researching the pre-separation period. The index contains 156,760 references to approximately 50,000 names, taken from 75 sources located in Brisbane. It has been compiled from primary sources and contains references to those who were living in what is now Queensland prior to separation from New South Wales at the end of 1859. It draws on wide-ranging sources, including convict, administration, immigration, law, land, newspaper, hospital and personal records.
The Florida Keys coral reefs stopped growing or significantly slowed their growth at least 3,000 years ago and have been balanced between persistence and erosion ever since, according to a new study by the U.S. Geological Survey (USGS). The study, published in the journal Global Change Biology, also points to coral bleaching and disease outbreaks as signs that changing conditions may have recently tipped the 200-mile-long coral reef tract into a state of erosion.
USGS marine scientists based in St. Petersburg, Fla., analyzed 46 coral reef cores collected throughout the Keys, reconstructing the reefs’ growth from 8,000 years ago, when layers of living coral began building up on top of older bedrock, to the present day. Throughout that time, long-lasting climate cycles and associated changes in ocean temperatures have been the most-important factors controlling the growth of Florida’s reefs. A shift toward cooler water temperatures effectively ended the reefs’ development long before the visible declines in coral health and coral cover of the past few decades, the scientists reported.
“If you were to test a sample from the top layer of a typical Florida reef, you would most likely find that it’s between 3,000 and 6,000 years old,” said Lauren Toth, a USGS research oceanographer and the study’s lead author. “Florida’s reefs still had living corals after that time, they just weren’t building much new reef structure.”
The researchers examined reef core samples collected between 1976 and 2017 from Biscayne National Park in Miami-Dade County to Dry Tortugas National Park 70 miles from Key West. The scientists used radiocarbon dating—a standard technique for finding the ages of corals and other materials—and measured the amount the reef grew between the dates they had identified to create the first comprehensive reconstruction of coral reef growth along the entire Florida reef tract.
Modern coral reefs started growing off the Florida peninsula more than 8,000 years ago. Their most rapid growth rate, almost 10 feet every thousand years, peaked about 7,000 years ago, when water temperatures were ideal for coral growth, the USGS researchers found. About 6,000 years ago, the reefs’ growth rate slowed to about 3 feet per thousand years.
Corals can be killed by water that is hotter or colder than the narrow band of temperatures in which most species grow best, between about 65 and 85 degrees Fahrenheit. Most reefs are found in the tropics, but the Florida Keys reef tract is unique because it lies in subtropical waters.
About 5,000 years ago, a natural cooling cycle made the seas off Florida prone to winter cold snaps. In the colder conditions of the past few thousand years, the reefs became “geologically senescent,” meaning that reef growth was negligible and just a veneer of living coral remained, the study found. More than one-third of the reefs have not grown at all in the past 3,000 years, and the rest have not kept up with rising sea level.
Other factors, such as influxes of estuarine water from shallow Florida Bay, also stressed the Keys reefs, the researchers found. But they were likely not as important as the corals’ repeated exposure to cold water in winter.
Even after the Keys reefs stopped growing upward, they supported diverse communities of marine life, protected the island chain from storm waves and erosion, and provided other ecosystem functions for thousands of years. But in the last few decades, warmer water, coral diseases, bleaching and other stresses caused the reefs to begin eroding, the researchers said.
“For 3,000 years, Florida’s reefs have been balanced at a delicate tipping point. Although reefs were no longer growing, there was enough living coral to prevent them from eroding,” said Toth. “But with the dramatic declines in living coral in Florida and around the world in recent decades, we may now be on the verge of losing reef structures that took thousands of years to build.”
Play Therapy Explained
Children’s primary language is play. Although sometimes used with adults, play therapy is an approach primarily used to help children ages 3 to 12 explore their lives and freely express thoughts and emotions through play. Therapeutic play normally takes place in our playroom, where the child is encouraged to express themselves freely, allowing the therapist to observe the child’s choices, decisions, and play style. The goal is to help children learn to express themselves in healthier ways, become more respectful and empathetic, and discover new and more positive ways to solve problems. There are two approaches to play therapy:
Non-directive vs Directive Play Therapy
Non-directive play therapy is based on the principle that children can resolve their own issues given the right conditions and the freedom to play with limited instruction and supervision.
Directive play therapy uses more input from the therapist to help speed up results. Play therapists use both approaches, depending on the child.
When do you recommend Play Therapy?
Therapeutic play helps children with social or emotional deficits learn to communicate better, change their behavior, develop problem-solving skills, and relate to others in positive ways. It is appropriate for children undergoing or witnessing stressful events in their lives, such as a serious illness or hospitalization, domestic violence, abuse, trauma, a family crisis, or an upsetting change in their environment. Play therapy can help children with academic and social problems, learning disabilities, behavioral disorders, anxiety, depression, grief, or anger, as well as those with attention deficit disorders or who are on the autism spectrum.
What is the process like?
The parent or caregiver plays an important role in play therapy for children. The process begins with an initial phone contact in which the parent can provide information to the clinician about the parent’s concerns. In a separate interview with the child, the therapist can make an assessment and begin to formulate a treatment plan. In the playroom, the child is encouraged to play with specific toys that encourage self-expression and facilitate learning positive behaviors. Arts and crafts, music, sand tray, dancing, storytelling, and other tools may also be incorporated into play therapy. Play therapy usually occurs in weekly sessions for an average of 20 sessions lasting 30 to 45 minutes each.
Why choose a Play Therapist?
Play therapists must undergo additional training and supervision specific to the art of play therapy. The play therapy room must have certain components available and the therapist can remain available as additional support with schools, pediatricians, and other members of the treatment team.
When a very massive star self-destructs as a supernova, the resulting fireball is so luminous that the event can be seen across billions of light-years' distance — if you look carefully enough.
That's the take-home message from observers led by Jeff Cooke (University of California, Irvine), who've tracked down a pair of supernovae that exploded 11.0 and 11.4 billion years ago. These "lookback times" smash the previous record of about 9 billion years and correspond to when the universe was only about 20% of its present age. A third ancient supernova identified in their study, published in the July 9th edition of Nature, dates to about 8 billion years.
So how'd they do it? Cooke's team utilized "stacking," a technique well known to backyard astrophotographers.
First, the observers identified very distant galaxies in survey images taken by the Canada-France-Hawaii Telescope atop Mauna Kea. Next they combined multiple images of the same star fields (to make the galaxies more evident) and then compared those "stacks" to ones made at different times.
Any change in a galaxy's brightness, particularly in a spot offset from its center, suggested that a supernova had occurred. Follow-up spectroscopy with the bigger Keck telescopes not only confirmed that some candidates were indeed supernovae but also allowed Cooke's team to calculate their redshifts (z = 2.357, 2.013, and 0.808) and, in turn, their age.
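The core idea behind stacking and difference detection can be illustrated with a short, self-contained sketch using NumPy. This toy example uses synthetic noise images rather than real survey data, and the team's actual algorithm is far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def stack(frames):
    # Median-combining frames suppresses random noise,
    # making faint, persistent sources easier to see.
    return np.median(frames, axis=0)

# Two observing epochs of the same 64x64 star field, 10 noisy frames each.
epoch1 = rng.normal(100.0, 5.0, size=(10, 64, 64))
epoch2 = rng.normal(100.0, 5.0, size=(10, 64, 64))

# Simulate a supernova brightening one pixel in the second epoch.
epoch2[:, 40, 22] += 25.0

# Compare the two stacks: a localized brightness change is a candidate.
diff = stack(epoch2) - stack(epoch1)
threshold = 5 * diff.std()
print(np.argwhere(diff > threshold))  # expected: [[40 22]]
```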
These titanic blasts are designated Type IIn supernovae, which result from very massive stars (50 to 100 Suns) that spew much of their mass into space before they explode. Once one of these unstable cores collapses and creates a supernova, the expanding shock wave slams into the previously ejected matter and creates a blaze of ultraviolet-rich light that takes months to fade. In fact, Cooke's team estimates that more than 90% of all supernovae detectable at such great distances must be of this type.
Astronomers are excited by this newfound ability to see such distant blasts. As Cooke points out in a Keck Observatory press release, studying the deaths of these early stars is essential to understanding the evolution of the early universe.
Moreover, the stacking technique should lead to the discovery of even more distant supernovae — possibly even a few of the very first stars to blow themselves apart.
And I'm guessing that these ultra-distant blasts will contribute much to the debate over the enigmatic concept of dark energy.
Astronomers are quick to point out that a potential Type IIn supernova is lurking, menacingly, in our interstellar backyard. The star Eta Carinae, just 7,500 to 8,000 light-years away, has already belched lots of its outer layers to space.
And the star itself is unstable. An outburst in 1843 made it second in nighttime brightness only to Sirius. It dimmed to 8th magnitude and remained there for decades, only to double in brightness during 1998-99.
Needless to say, observers are keeping a close eye on this one!
Let's figure out what SWIFT is in general. The system of money transfers between banks has existed for as long as the banks themselves. The client gave an order to his bank and the money from his account was sent to the recipient's bank account in another bank. This worked fine while the bulk of operations was carried out inside the country. As international trade developed, the number of transfers between banks in different countries increased. Say you have a factory in Italy and you buy raw materials in Germany; you need to transfer money for them from an Italian bank, where you have an account, to a German bank, where the seller has an account. This is already somewhat more complicated: different languages, different banking systems, different formats of interbank messages. The exchange of information about payments between banks took place by telegraph or fax in free form, that is, they simply wrote "so much money, to such and such an account", and it was all sorted out manually.
This was how things worked before SWIFT was created, and even earlier it was done by telegraph or ordinary paper mail. Of course, the banks sorted it out and the money was transferred, but clearing up all the details could take time, during which either you were left without raw materials for production or your supplier was waiting for his money. To simplify the exchange of interbank information, primarily for transfers abroad, the SWIFT system was created in the seventies. Its name stands for the Society for Worldwide Interbank Financial Telecommunication. The idea was to agree on common standards for the exchange of information between banks from different countries and to create a reliable encrypted communication channel.
SWIFT was founded by 248 large banks from 19 countries. At first it was used only in Europe, and in the late seventies American banks joined the system. Today, SWIFT unites 11,000 financial organizations from 200 countries of the world. The network is now used not only for transferring money abroad, but also for conducting operations within a country. SWIFT is far from the only system for exchanging interbank information, but it is the most common: approximately 80 percent of all transactions pass through it. How does SWIFT work?
Let's look at an example: you have an account in some bank and you want to transfer money from it to another person who has an account in another bank, in another country. Your bank sends a message through the system that the money has been sent. Without waiting for the physical receipt of the money, the other bank credits it to the recipient's account, because it fully trusts the message received through SWIFT and can process it quickly, since it is compiled according to a unified standard. The ease of processing allows, on the one hand, the transfer to be sped up and, on the other, the commission to be reduced, because there is no need to manually sort through the mail (What is this? For which account? Who sent it?). To be clear: SWIFT is a system for transferring payment information between banks. It does not move the money itself, only the information; a simplified sketch of such a standardized message is shown below. Theoretically, nothing prevents banks from crediting a payment to the recipient's account based on a message received in some other way. Russia is one of the most active users of SWIFT: Russian banks are in second place in terms of the number of transactions and fifteenth in terms of total volume.
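To make the idea of a unified message standard concrete, here is a simplified sketch in Python. The field names and sample values are illustrative only; the real SWIFT message formats (such as MT103 for customer transfers) are far more detailed:

```python
from dataclasses import dataclass

@dataclass
class PaymentMessage:
    # Simplified, illustrative fields; not the real MT103 layout.
    sender_bic: str           # code identifying the ordering bank
    receiver_bic: str         # code identifying the beneficiary bank
    currency_amount: str      # e.g. "EUR1000,00"
    ordering_customer: str    # who is paying
    beneficiary_account: str  # where the money should be credited

msg = PaymentMessage(
    sender_bic="ITALBANKXXX",  # made-up codes, for illustration only
    receiver_bic="GERMBANKXXX",
    currency_amount="EUR1000,00",
    ordering_customer="Italian Factory S.p.A.",
    beneficiary_account="DE00 0000 0000 0000 0000 00",
)
print(msg)
```

Because every sending bank fills in the same fields in the same way, the receiving bank's software can parse and credit a payment automatically, which is exactly why standardized messaging makes transfers faster and cheaper to process.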
An administrative division of a realm under the control of a central government. (Es 1:16; 2:3, 18) The Bible mentions jurisdictional districts in connection with Israel, Babylon, and Medo-Persia. (1Ki 20:14-19; Es 1:1-3; Da 3:1, 3, 30) The Hebrew and Aramaic word for “jurisdictional district” (medhi·nahʹ) comes from the root verb din, meaning “judge.”
Daniel the prophet was made ruler over all the jurisdictional district of Babylon, perhaps the principal one that included the city of Babylon. (Da 2:48) His three Hebrew companions, Shadrach, Meshach, and Abednego, were also appointed to serve in administrative capacities in this district. (Da 2:49; 3:12) Elam appears to have been another Babylonian jurisdictional district. (Da 8:2) Possibly because of having lived in the jurisdictional district of Babylon, the repatriated Jewish exiles are called “sons of the jurisdictional district.” (Ezr 2:1; Ne 7:6) Or, this designation may allude to their being inhabitants of the Medo-Persian jurisdictional district of Judah.—Ne 1:3.
At least during the reign of Ahasuerus (Xerxes I) the Medo-Persian Empire consisted of 127 jurisdictional districts, from India to Ethiopia. Jews were scattered throughout this vast realm. (Es 1:1; 3:8; 4:3; 8:17; 9:2, 30) The land of Judah, with its own governor and lesser administrative heads, was itself one of the 127 jurisdictional districts. (Ne 1:3; 11:3) Seemingly, however, Judah was part of a still larger political division administered by a higher governmental official. Apparently this official directed any serious complaints concerning the districts under his jurisdiction to the king and then waited for royal authorization to act. Also, lesser officials could request that the activities of a particular jurisdictional district be investigated. (Ezr 4:8-23; 5:3-17) When authorized by the king, jurisdictional districts could receive money from the royal treasury, and the royal decrees were sent by means of couriers to the various parts of the empire. (Ezr 6:6-12; Es 1:22; 3:12-15; 8:10-14) Therefore, all the inhabitants of the jurisdictional districts were familiar with the laws and decrees of the central government.—Compare Es 4:11.
The system of jurisdictional districts existing in nations of antiquity often made the lot of the subject peoples more difficult. This fact is acknowledged by the wise writer of Ecclesiastes (5:8).—See PROVINCE.
Coral Gardens Reef in Belize remains a refuge for Acropora spp. coral despite widespread devastation in other areas of the western North Atlantic/Caribbean, according to a study published September 30, 2020 in the open-access journal PLOS ONE by Lisa Greer from Washington and Lee University, Virginia, USA, and colleagues.
Once a key coral species providing the architectural framework for sprawling coral reef structures across the tropical western North Atlantic/Caribbean region, Acropora spp. coral populations have dramatically declined since the 1950s, and are now increasingly rare. Understanding the resilience and longevity of the remaining Acropora reefs in this area is critical to conservation efforts.
In order to test whether one of the largest populations of extant Acropora cervicornis in the western Caribbean was recently established (post-1980s) or a longer-lived, stable population, the authors collected 232 samples of premodern and recently dead A. cervicornis coral skeleton material across 3 sites at Coral Gardens Reef, Belize, using a subset of these samples for radiometric as well as high-precision uranium-thorium dating. Sample sites were chosen using a new genetic-aging technique to identify key sites and minimize damage to the living coral.
The data revealed coral samples ranging in age from 1910 (based on carbon dating) or 1915 (based on thorium dating) to at least November 2019; Greer and colleagues were also able to determine that Coral Gardens Reef has been home to consistent and sustained living A. cervicornis coral throughout the 1980s and up to at least 2019. While the authors cannot exclude the possibility of short gaps in the presence of A. cervicornis prior to 1980, the radiometric ages and continuous coral growth patterns found at the sample sites strongly suggests that Acropora coral has been growing and thriving at Coral Gardens for over 100 years.
Though the results from this study are not sufficient to determine exactly why this site seems to be a refuge for Acropora coral — Coral Gardens has been subject to the same increases in temperature and tropical storm/hurricane activity as reefs in the region with devastated coral populations, and the genetic diversity of Acropora is not unusually high — the findings here may be key to efforts to grow, preserve, conserve, and re-seed Caribbean reefs, as is identifying similar coral refuge sites in the area.
The authors add: “Now that we have identified an exceptional refuge for Caribbean Staghorn corals, we hope to better determine, in collaboration with reef scientists with many additional interests and expertise, the sources for success at Coral Gardens.”
This article explains how the spectrum of energy can be recognized, captured, programmed, and distributed through digital media.
Science has proven that the world is made up of light particles that take on physical form once condensed. Since ancient times, human beings have believed that all matter is made up of what we call atoms, which were thought to be the fundamentally indivisible particles and building blocks of everything.
An atom contains a central core called the nucleus, made of particles called protons and neutrons. The nucleus is surrounded by mostly empty space, except for very tiny particles called electrons that orbit it. Protons, neutrons, and electrons are collectively known as subatomic particles.
[Image: An atom's structure. Reference: https://www.britannica.com/science/atom]
In 1897, the physicist J.J. Thomson discovered the electron in a cathode ray experiment. Many atomic models were proposed in the years that followed, and new particles such as protons and neutrons were discovered. It later became known that protons and neutrons can themselves be divided into even smaller particles known as quarks. In an atom's structure, the protons and neutrons are far larger than quarks and electrons, and there is mostly empty space. For example, if an atom were the size of a large city, each proton and neutron would be the size of a human; each quark and electron would be smaller than a tiny freckle.
Each time a new particle was discovered, it led us to deeper questions. What are the most fundamental building blocks of all matter? Are there pieces that make up everything, from flowers to people to enormous galaxies, that cannot be broken down into anything smaller? Experts know of nothing smaller than quarks and electrons, but they cannot be sure these are the simplest building blocks of matter. The history of such discoveries gives us reason to believe there is always something incredible waiting to be discovered.
On 14 March 2013, the discovery of the Higgs boson at the Large Hadron Collider was officially confirmed at the European Organization for Nuclear Research (CERN), Geneva. The Higgs boson, popularly known as the "God particle", is an elementary particle associated with the Higgs field, a quantum field responsible for the masses of particles. The existence of this field answers the very question of why particles have mass.
[Image: The Higgs boson. Reference: https://www.quora.com/What-is-the-Higgs-boson-and-can-it-travel-with-velocity-greater-than-that-of-light]
Let’s now take look on how Nobel Prize Winner Sir Chandrasekhara Venkata Raman (C.V. Raman) and his known spectroscopic technique. By understanding this further, we can see Sri Pranaji’s ability on developing digital energy as a result from years of discoveries.
The Raman Effect was first discovered by physicist C.V. Raman (* 1888, † 1970) in 1928. The Raman Effect is based on inelastic light scattering at the chemical bonds of a sample. Due to vibrations in the chemical bonds, this interaction causes a specific energy shift in parts of the backscattered light which results in a unique Raman spectrum.
[Image: Scattering of light by molecules and a transparent medium.]
Similarly, in the spiritual context, an aura or energy field carries a distinctive atmosphere and certain qualities that emanate from it. Each aura field has its own signature, and it is from here that energies are copied and reproduced into any form desired, or what Sri Pranaji calls Digital Energy.
Siddhars and matters of God as Light
Here, spiritual scientists are known as Siddhas. The Siddhas have stated that God is in the form of light, has neither name nor gender, and is a pure source of energy. This primordial source of energy has its own quest to know its true potential, hence duality happens: one source splits into two, and so on. In simple form, this is explained as Paraatma and Jeevaatma (Brahman and Maya in Vedic scriptures).
This light, called 'Jothi', is from the 'Paraatma', whose true essence is that the energy that is created stays in the things created. The Jothi has various vibrations and spectrums according to the nature of matter.
Through this knowledge of Jothi, Sri Pranaji designed a method that can capture and transmit almost anything, in any form, energetically via a digital medium. This technology may sound impossible even to some leading Gurus of this time.
But for Sri Pranaji, in order to believe this theory, you would need to experience its truth.
A Simple Test for You
To help you understand the concept by feeling, experiencing, and believing, look at the image below. Sri Pranaji has activated it with energy that will shift your mind into a calm, peaceful, and passive mode.
Experience it for yourself. Look at the image for 30 seconds and close your eyes.
As you view the image as instructed, an energy signature commences and surges from its cosmic source into the energy field of the human being, in other words, the aura. This signature alters one's state and, through condensed blocks, the energy spectrum then manifests in the body's systems, promoting healing and well-being.
Based on both science and the knowledge of the Siddhas, we can now learn to connect the two, understand the essence of spiritual power, and see how matters that seemed 'impossible' become possible, how mere forms of matter become physical manifestation. In this context, everything is possible for individuals once they know who they truly are; as they realize the power within, they unlock hidden potentials as well as healing.
The Siddhas believe the whole purpose of God's creation is to experience the process, and human beings are referred to as a higher intelligence through this evolution of understanding their own nature. Religion, a particular system of faith and worship, creates nuances of Gods with many names, as well as categorizations of conduct, patterns of belief, and faith.
Here's a question to challenge yourself: why would (the so-called) God 'waste His time' creating some 'game', letting you go through certain tests and then deciding on rewards or punishments?
“You are not a Sinner, You are a Learner” – Sri Pranaji.
Thoughts to Ponder – on the Human Body and Its Capabilities
Biologists state that a virus has an intelligence of its own: it has the ability to change its DNA and RNA, to transform itself to resist medicines, and to continue to evolve. By contrast, we human beings, who embody much higher intelligence, should be responsive and possess a robust body system against any harm from viral attacks.
In today's world, unfortunately, the results are quite the opposite. We have been told (programmed to believe) that our immune system does not heal on its own and that we need external medications to support the healing. Over the years, our immune systems have been successfully weakened, whether by the conditions of the 'modern era', forms of stress, food, and lifestyles, or by the production of various kinds of medicines.
If we as human beings have the absolute power to self-heal, why have we become dependent on outer resources?
We urge you to think this through and draw your own conclusion.
Shakti Enlightenment Programme (SEP).
For those who are interested in learning further, the Pranashakty Organization offers a 5-year program on the practice and art of energy manipulation, known as the Shakti Enlightenment Programme (SEP). SEP allows individuals to find their own true divine potential, awaken the unlimited power of the soul, and understand the sole purpose of living.
SEP students will also be gradually inducted into the ancient science of connecting to the energetic realm and will ultimately be equipped with the ability to experience for themselves every form of energetic practice known to man.
As the SEP is empirical in nature, i.e., not theoretical, the only scientific apparatus deployed throughout the program is the senses inherent in the living human. SEP students will be trained under the direct guidance of Sri Pranaji to sense, to visually perceive, to conjure, and to manipulate energies of various forms.
Upon successful completion, candidates will therefore be endowed with the most portable energy test kit for life – themselves! SEP graduates will thus be able to verify for themselves the quality of energy and its functionality. This is an important aspect to distinguish the highest truth. Successful candidates are also taught to conjure the energy to heal all dis-eases and manifest their deepest desires.
For further information about the program and testimonials, click here: https://www.pranashakty.org/sep/
Definition: Drag swimmers use a cyclic motion where they push water back in a power stroke, and return their limb forward in the return or recovery stroke. When they push water directly backwards, this moves their body forward, but as they return their limbs to the starting position, they push water forward, which will thus pull them back to some degree, and so opposes the direction that the body is heading. This opposing force is called drag. The return-stroke drag causes drag swimmers to employ different strategies than lift swimmers. Reducing drag on the return stroke is essential for optimizing efficiency.
Definition: Because of the difference in refractive index between air and water (or corneal tissue), a curved cornea is an image-forming lens in its own right. Its focal length is determined by the radius of curvature of the cornea. Many corneal eyes (e.g. in land vertebrates) also have lenses, but the lens is flattened and weakened compared with an aquatic lens; most of the refractive power is provided by the cornea. Because water cancels most of the cornea's refractive power, corneal eyes cannot focus in aquatic habitats.
Mixtures are combinations of two or more substances that, when in contact, do not undergo a chemical reaction.
When two substances form a mixture, depending on the properties of each substance, the conditions under which the mixture is formed, and other factors, the result may be a homogeneous mixture (one whose components cannot be distinguished with the naked eye) or a heterogeneous one (whose components can be distinguished with the naked eye or by means of a measuring instrument). It is difficult to classify mixtures, as there are many types with relatively similar characteristics, but broadly speaking they are usually divided into:
- Solutions. They are homogeneous mixtures. They are uniform; that is, any volume taken from a solution will have the same composition of its components. They are composed of a majority substance, called the solvent, and one that is present in a lesser quantity, called the solute. When the volumes of the different components of a solution are added, the final volume is not equal to this sum, since there are interactions between these components.
- Colloids. They are mixtures with properties intermediate between homogeneous and heterogeneous. They are made up of two phases: a dispersed one that is in a smaller proportion and a continuous one that is in a greater proportion.
- Suspensions. They are heterogeneous mixtures. They are generally formed by solid particles suspended in some liquid.
Mixtures can be formed between substances that are in any state of aggregation, and the state of aggregation of the mixture is almost always the same as that of the substance that is in the greater amount.
Regarding the combinations of a substance in a liquid state and another in a gaseous state, the mixtures can appear having the gas in greater or lesser quantity:
- Gas mixture (liquid in gas). The gas can appear as a solvent when the liquid acts as a solute, that is, when the gas appears in greater proportion. This happens on rare occasions, for example in the water vapor in the air, which is considered a solution where the liquid particles dissolve in the gas. Although there are environments that are especially endowed with this vapor, the humidity in the air is something that is always present to a greater or lesser extent, so the combination can be seen at any time and place. On the other hand, some aerosols are colloidal mixtures of liquids in gases.
- Liquid mixture (gas in liquid). On other occasions, the gas is the substance that occupies a smaller proportion of the mixture, leaving the largest place for the liquid. This is the case of the dissolution of carbon dioxide in water, used to prepare drinks. There are also colloid-type mixtures of a gas in a liquid, such as shaving cream.
Examples of mixtures of liquids and gases
The following list brings together some examples of mixtures in which there is a liquid and a gaseous component.
- fizzy drinks.
- Dew, liquid particles in a gaseous medium.
- A bottle of beer.
- The foam of a shampoo.
- A cloud.
- A whipped egg white (liquid with air incorporated in its structure)
- Sparkling drinks.
- The oxygen dissolved in seawater.
- The water vapor present in the air.
Other specific blends
- gas mixtures
- Gas mixtures with solids
- Mixtures of solids with liquids
Isn’t the Internet great? You can send and receive emails, shop online with your credit card, exchange files, or log in and manage remote systems.
It would be not-so-great if all the confidential information in those cases were to be exposed to prying eyes, hackers, or cyber criminals.
Featuring: the Invention of the Secure Sockets Layer (SSL) Protocol
SSL – Secure Sockets Layer – was invented to protect sensitive data in transmission. SSL is a security protocol designed to provide maximum security, while remaining simple enough for everyday use.
SSL, or the new generation version: TLS (Transport Layer Security), is responsible for keeping data private and ensuring it is transmitted between — and only between — the correct two end-points. SSL prevents the possibility that hackers positioned between the two end-points might siphon off or divert the data elsewhere.
What is an SSL Certificate?
An SSL Certificate is a small computer file that digitally combines a cryptographic key with an organization’s details. On a web server, for example, it allows secure connections to a web browser. Depending on the type of SSL Certificate being used by the organization, different levels of checks will be made by the Certificate Authority (CA) issuing the certificate. The CA itself holds a Root Certificate.
An SSL Certificate awarded to an organization is derived from the Root Certificate. The same Root Certificate must be present on the end user’s computer in order for the issued SSL Certificate to be trusted. Browser and operating system vendors work with Certificate Authorities, so the Root Certificate is embedded in their software.
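You can see this embedded trust store for yourself with Python's standard ssl module. The short sketch below lists the root certificates your own system already trusts; the exact count and contents depend on your operating system and Python build, and on some platforms the list may come back empty even though verification still works:

```python
import ssl

# Build a context the way an HTTPS client would; this loads the
# root certificates already trusted by the system.
context = ssl.create_default_context()

roots = context.get_ca_certs()  # one dict per loaded root certificate
print(len(roots), "trusted root certificates loaded")
if roots:
    print(roots[0]["subject"])  # the Certificate Authority's identity
```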
End User and Organizational Points of View
For end users, SSL could hardly be simpler. Secure web addresses start with “https://” instead of just “http://”.
Users see a padlock symbol in their browser. And that’s about it.
By comparison, for organizations running email servers, ecommerce sites or hosting system administration resources, it’s a little more involved.
To authenticate themselves to users and customers, and prove to users they are working with the right entity, organizations need to acquire an SSL Certificate.
The Goal: Trusted Interactions Online
If the local Root Certificate and the remote-issued SSL Certificate are not correctly matched, the browser displays messages to the user concerning untrusted errors. If they are matched, the user can proceed with confidence.
The two parties (the local user’s browser and the remote web server) first exchange a symmetric encryption key. “Symmetric” means the same key is used to encrypt information that is transmitted and to decrypt it on arrival at the other end. The “forward secrecy” built into the system ensures the short-term symmetric key cannot be deduced from the long-term asymmetric key, for further protection against hacking.
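A hands-on way to watch this happen is with Python's standard library. The sketch below opens a TLS connection, which triggers the full handshake (certificate verification plus key exchange), and prints what was negotiated. The host name is a placeholder, and the example assumes outbound network access:

```python
import socket
import ssl

HOST = "example.com"  # placeholder; substitute any HTTPS host

context = ssl.create_default_context()  # verifies certificates against trusted roots

with socket.create_connection((HOST, 443)) as sock:
    # wrap_socket performs the TLS handshake described above.
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.cipher())                  # the negotiated cipher suite
        print(tls.getpeercert()["subject"])  # who the certificate identifies
```

If the server's certificate does not chain back to a trusted root, `wrap_socket` raises an `ssl.SSLCertVerificationError`, which is the programmatic equivalent of the browser's untrusted-certificate warning.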
Types of SSL Certificates
Three types of SSL Certificates exist.
1. Extended Validation (EV) SSL Certificates
These are issued only after the Certificate Authority has verified the exclusive right of the organization to use the domain name concerned and also a number of additional aspects:
- The legal, physical, and operational existence of the organization
- Consistency between the organization’s identity and official records
- Proper authorization by the organization of the issuance of the EV SSL Certificate
2. Organization Validation (OV) SSL Certificates
These include checking the right of the organization to use the domain name, and some, but not all, of the rest of the verification done in the case of the EV SSL Certificate above. End users can see additional information on the organization.
3. Domain Validation (DV) SSL Certificates
Finally, these limit verification to checking the right of the organization to use the domain name concerned. Consequently, end users will only see information about the encryption, not about the organization.
In Conclusion: Advantages of SSL Certification
SSL certification can be doubly advantageous for an organization.
First of all, it can ensure the confidentiality of the information being transmitted. Secondly, it proves to others that they can trust both the security and the identity of the organization. Also, just to make sure everything is under control, the Certificate Authority itself must also be audited annually to ensure it is fit to issue SSL Certificates.
St. Martin of Tours is known for his kindness and generosity. The story goes that he cut his cloak in half to share with a beggar, saving the man from the cold. Every year on 11 November, many people mark St. Martin's Day to commemorate his burial on this day in the year 397 AD.
In Central Europe, it has long been customary to eat a goose to celebrate St. Martin, although no one knows the precise origins of this tradition any longer. One reason could be that Eastern churches mark this day as the start of a pre-Christmas period of fasting, analogous to Lent. Therefore, all the food that was forbidden during the fast needed to be eaten beforehand. It was also traditional in many places to celebrate and feast on the eve of 11 November with food and drink.
Yet another reason could be that farmers often paid their lords or landowners with geese, so they always had geese on their farms. This made them a natural choice for a feast.
Another plausible theory lies in the legends about St. Martin. One legend goes that the townsfolk of Tours wanted to ordain him a bishop. He was a modest man and hid in a goose pen when they came looking for him. But the loud cackling of the geese gave him away and the people found him, after which he was ordained bishop. | <urn:uuid:e7a26465-7f36-478f-9563-7975d853e01e> | CC-MAIN-2023-06 | https://www.interismo.com/info/magazine/st-martins-day | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00640.warc.gz | en | 0.986715 | 270 | 3.78125 | 4 |
December 13, 2010
Early Prion Detection May Be Possible
Researchers have developed a method for detecting prions that may lead to a practical test for diagnosing the fatal brain conditions caused by these infectious agents.
Prion diseases, also known as transmissible spongiform encephalopathies, include mad cow disease in cattle, scrapie in sheep and Creutzfeldt-Jakob disease in humans. These diseases are characterized by sponge-like holes in brain tissue. They are notoriously difficult to diagnose, untreatable and ultimately fatal.
Prions are actually misfolded forms of proteins naturally found in the body. Prions can convert normally folded prion protein molecules into an infectious form when they come in contact with each other. These misshapen prion proteins clump together and accumulate in brain tissue.
Minuscule amounts of infectious prions found outside the brain can be detected by current diagnostic tests, but these methods lack the speed and convenience needed for routine use. Other, quicker approaches aren't sensitive enough to detect low levels of infection. A research team led by Dr. Byron Caughey of NIH's National Institute of Allergy and Infectious Diseases (NIAID) has been working to develop a better method for quickly detecting small amounts of prions.
The researchers described their new method—called real-time quaking-induced conversion, or RT-QuIC—in the December 2, 2010, online edition of PLoS Pathogens. RT-QuIC is about 50-200 times faster and much less expensive than animal bioassays that detect similarly small amounts of disease-causing prions.
RT-QuIC takes advantage of the ability of tiny amounts of prions to seed the misfolding of normal prion proteins in a test tube. The method involves testing a range of dilutions to see at what point the sample loses its seeding activity.
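As a rough illustration of that endpoint-dilution logic (a hedged sketch, not the authors’ actual analysis code), consider this fragment with invented results:

```python
# Sketch: ten-fold dilutions of a sample are tested for seeding activity;
# the endpoint is the greatest dilution that still gives a positive result.
dilutions = [10 ** -d for d in range(1, 9)]  # 10^-1 .. 10^-8
positive = [True, True, True, True, True, False, False, False]  # invented data

endpoint = min(d for d, pos in zip(dilutions, positive) if pos)
print(f"Seeding activity detected down to a {endpoint:.0e} dilution")
```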
The scientists tested tissue samples from infected deer and sheep and were able to distinguish infected animals from normal ones in 2 days or less. They were also able to detect prions in nasal washes from infected hamsters.
"Although relatively rare in humans and other animals, prion diseases are devastating to those infected and can have huge economic impacts," says NIAID Director Dr. Anthony S. Fauci. "Scientists have promising concepts for developing therapies for people infected with prion diseases, but treatments are helpful only if it is known who needs them. This detection model could eventually bridge that gap."
Along with optimizing their test in the laboratory, the researchers are teaming up with other laboratories to extend the applications of RT-QuIC. Similar approaches might also aid the diagnoses of additional neurodegenerative diseases, such as Alzheimer's, Huntington’s and Parkinson's. | <urn:uuid:5cfc6358-ab96-47e2-93dc-a9301d579501> | CC-MAIN-2023-06 | https://www.nih.gov/news-events/nih-research-matters/early-prion-detection-may-be-possible | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00640.warc.gz | en | 0.953432 | 568 | 3.90625 | 4 |
Students’ age range: 08-10
Main subject: Language arts and literature
Topic: Identify the correct use of the conjunctions and, but, or, because in sentences
Description:
1. Students will be shown a box reminiscent of a lucky dip marked ‘word dip’.
2. Students will be informed that they will assist the teacher in selecting words from the box.
3. Selected students will be asked to place their hand in the box and pull out a word.
4. Students will be allowed to stick chosen words on the chalkboard.
5. Students will be asked to read the display of words placed on the chalkboard.
6. Students will then be asked, “Can anyone give ‘one word’ that we can use to classify all of these words?”
7. Select a few students to answer and give appropriate feedback.
8. Students will be informed that today we will be looking at ‘Conjunctions’.
9. At this point, the classroom would be rearranged to facilitate the ‘fishbowl’ seating.
10. Students will be informed that they would use the ‘fishbowl’ strategy to discuss all their prior knowledge relating to conjunctions.
11. Through peer-facilitated questions, students would discuss suitable definitions for the term ‘conjunctions’, as well as ways in which the aforementioned conjunctions displayed on the board could be used.
12. Through discussion, students would formulate sentences with which the highlighted conjunctions could be used. Special attention would be paid to the effect the conjunctions have on the sentences.
Draw a flowchart in Excel
If you want to use different shapes and a complex topology, it is better to create your own flowchart from scratch.
Managers, system analysts, programmers, and engineers adopted flowcharts as a means of communication for describing:
- Document workflows
- Data flows
- System operation flows
It is no wonder that the building blocks and rules of flowcharts were standardized by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) more than 50 years ago. The current standard defines the drawing direction as top to bottom and left to right, and assigns specific symbols to different types of entities, actions, etc. For example:
- A rectangle with rounded corners marks starting and terminating states
- A box with straight corners represents process stages
- A parallelogram illustrates data input/output
- A diamond marks conditional branching
- An arrow shows the process flow
You can find all these and other useful visual elements in the Shapes dropdown list on the Insert tab in Excel.
1. On the Insert tab, in the Illustrations group, select Shapes:
2. On the Shapes list, in the Flowchart group, choose the item that you prefer:
3. To add text in the selected shape, just double-click in it and enter the text.
4. To connect shapes, do the following:
- On the Insert tab, in the Illustrations group, click on the Shapes list and then select one of the connectors in the Lines group:
- Select the beginning point on a border of the first shape and the ending point on a border of the second shape.
See also this tip in French: Comment créer un organigramme des opérations dans Excel. | <urn:uuid:5f0d5308-0edd-4615-93fb-4bdaabe8cbcb> | CC-MAIN-2023-06 | https://www.officetooltips.com/excel_2016/tips/draw_a_flowchart_in_excel.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00640.warc.gz | en | 0.81614 | 378 | 3.90625 | 4 |
In discussing the history of public relations, the word propaganda must also be noted. The word has changed in meaning since its inception during the seventeenth century. Propaganda today can mean something negative, and it is not easy to define. However, according to the Institute for Propaganda Analysis, propaganda is an opinion offered by one or more persons that is designed to influence others' actions or opinions while referring to goals or ends that have already been determined.
This function of influencing others toward a predetermined end should not in itself be thought of as something negative. In fact, as H. Frazier Moore, a journalism professor at the University of Georgia, has pointed out: “In its broadest sense, propaganda is honest and forthright communication intended to advance a cause through enlightenment, persuasion, or a dedicated sense of mission. It is currently employed by religious, charitable, political, and social service institutions to influence the thoughts and actions of others for their best interests. In this sense, propaganda is legitimate persuasion.”
However, as Frederick E. Lumley observed fifty years ago, many totalitarian governments throughout the world have used propaganda to further their own devious and harmful regimes. These dictatorships have twisted facts and presented false and inflammatory information: "Propaganda of every kind awakens passion by confusing the issues; it makes the insignificant seem weighty; it makes the important seem trifling; it keeps the channels of communication full of exciting stuff; it keeps people battling in a fog."
The major distinction between advertising as we know it today and propaganda is that the general public knows the advertiser is attempting to persuade; the propagandist is more subtle. The advertiser tries to motivate the observer toward a certain course of action. In contrast, propaganda, as defined in the negative sense, contains a hidden or concealed goal or motivation. Most observers are not aware of the motivation, which is why propaganda can be bad. A perceptive and valid observation of public relations, and of how it represents propaganda in the very best sense, was offered by Professor Moore: “Public relations is sometimes referred to as propaganda. Since they are deliberately designed to influence public opinion, public relations programs may be considered as propaganda in the best sense of the word.”
Most public relations programs are honest and straight-forward efforts to influence public opinion. As the word propaganda is commonly understood today, public relations is not propaganda; it is not a subversive activity that suppresses relevant facts, publishes false and misleading information, distorts the truth, and attempts to manipulate public opinion.
See the following articles for more information:
- The Roots of the Profession of Public Relations
- The Future of Public Relations
- Public Relations Serves Many Functions
- What is Public Relations | <urn:uuid:5726c84f-67c6-423f-91e9-cd94c3cd15e0> | CC-MAIN-2023-06 | https://www.prcrossing.com/article/900049054/Propaganda-of-Public-Relations-in-the-Historical-Times/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00640.warc.gz | en | 0.95745 | 555 | 3.6875 | 4 |
About Communicable Diseases
A communicable disease, such as a cold, is a disease that spreads from person to person. When a person becomes sick with a communicable disease, it means a germ has invaded their body. The spread often happens via airborne viruses or bacteria, but it can also occur through blood or other bodily fluids. Germs are tiny organisms (living things) that may cause disease. They are so small and sneaky that they creep into our bodies without being noticed.
This is an interesting question!
Mosses belong to a group of plants known as the BRYOPHYTES. Bryophytes have no roots, but they do have thin (one cell thick!) root-like structures which serve for attachment and water absorption. These are known as RHIZOIDS.
Some kinds of mosses, or bryophytes, attach to rocks by these rhizoids. Most mosses have very little resistance to drying out, and because most mosses are confined to areas which are damp and sheltered, only certain kinds of rocks are suitable for them to live on.
In Wales, for example, the majority of the rocks are acidic in nature, and this is reflected in the mosses found there; different species may occur where the rocks are more base-rich (carboniferous limestone, for example). Once a rock offers the natural conditions the moss needs to grow (water, and acidic or basic nutrients), the moss will attach to it by means of the rhizoids.
In psychology, relaxation is the emotional state of low tension, in which there is an absence of arousal from sources such as anger, anxiety, or fear. Relaxation has been described as a form of mild ecstasy originating in the frontal lobe of the brain, in which the posterior cortex sends signals to the frontal cortex that have a mild sedative effect. Relaxation can be achieved through meditation, autogenics, and progressive muscle relaxation. Relaxation helps improve coping with stress. Stress is a leading cause of mental and physical problems; therefore, feeling relaxed is beneficial for a person's health. When we are stressed, the sympathetic nervous system is activated, putting us in fight-or-flight mode; over time, this can have negative effects on the body.
Herbert Benson, a professor at the medical school at Harvard University, discovered the relaxation response, a mechanism of the body that counters the fight-or-flight response. The relaxation response reduces the body’s metabolism, heart and breathing rate, blood pressure, and muscle tension, and calms brain activity. It increases the immune response, helps attention and decision making, and changes gene activities in ways that are the opposite of those associated with stress. The relaxation response is achieved through meditation. Benson's meditation technique involves these four steps:
- A quiet environment to help you focus
- A mental device to help keep your attention constant (a sound or word said repeatedly)
- A positive attitude to avoid getting upset over failed attempts
- A comfortable position
Autogenics was invented by Dr. Johannes Schultz in the 1920s. Autogenics works by relaxing your muscles deeply; as you do so, your mind follows and relaxes as well. There are six parts to autogenics training:
- Heaviness in parts of the body (arms and legs feel heavy)
- Warmth in parts of the body (arms and legs feel warm)
- Heartbeat (heart is calm)
- Breathing (breathing is calm)
- Warmth in the abdominal area
- Forehead is cool
Progressive Muscle Relaxation
Progressive muscle relaxation helps you relax your muscles by tensing certain parts of your body (such as your neck) and then releasing the tension, so that you can feel your muscles relax. This technique is helpful for people with anxiety, who are often tense throughout the day.
Jan 11, 2023
Water Management in Desert Agriculture
Following more than two decades of drought in the western U.S., many watersheds are experiencing critical water shortages. In the Colorado River basin, the two largest reservoirs, Lakes Powell and Mead, are now at less than 20% capacity. In contrast, these reservoirs were nearly full in 2000. With agriculture responsible for 70-80% of the Colorado River water diversions, a great deal of scrutiny is being applied to crop production systems in this region.
Crop production systems in the desert Southwest are experiencing increasing pressure to document and improve irrigation efficiency. There are several ways to define irrigation efficiency. The primary definitions are oriented either toward water conveyance and delivery systems (engineering efficiency) or toward agronomic, economic, and environmental efficiencies.
It comes as no surprise that as an agronomist and soil scientist I tend to focus on the agronomic efficiency of irrigation for the benefit of the crop being grown. The crop plants are the centerpiece of the production process and the soil-plant system itself represents the primary objective of irrigation management.
Agronomic Irrigation Efficiency
Agronomic (crop and soil) considerations are centered on our ability to provide irrigation water for the sustainable production of a crop in the field. The three primary demands for crop water management include: 1) providing water for seed germination and stand establishment, 2) providing irrigation water to match crop consumptive water use and avoid crop water stress, and 3) providing sufficient irrigation water to leach soluble salts from the root zone so that the soils can support crop production in a sustainable manner (Figure 1).
Agronomic efficiency at the field level focuses on the crop water demand (CWD), which combines crop consumptive use (crop evapotranspiration, ETc, the combination of evaporation and transpiration from the crop) and the leaching requirement (LR), which depends on the crop and the salinity of the irrigation water:

ETc + LR = CWD (Equation 1)

Agronomic efficiency can be estimated by considering the difference between the crop water demand (CWD) and the volume of irrigation water applied (IWA).
The leaching requirement (LR) can be estimated by use of the following calculation:

LR = ECw / ((5 × ECe) − ECw) (Equation 2)

where:
ECw = salinity of the irrigation water, electrical conductivity (dS/m)
ECe = critical plant salinity tolerance, electrical conductivity (dS/m)

This is a good method of LR calculation that has been utilized extensively and successfully in Arizona and the desert Southwest for many years. We can easily determine the salinity of our irrigation waters (ECw), and we can find the critical plant salinity tolerance level from readily available tabulations of salinity tolerance for many crops (Ayers and Westcot, 1989). Additional direct references are from Dr. E.V. Maas’ lab at the University of California (Maas, 1984; Maas, 1986; Maas and Grattan, 1999; Maas and Grieve, 1994; and Maas and Hoffman, 1977).
To deal with crop water management at the field level agronomically, considering the crop and soil factors, there are some fundamentals that we can refer to for assistance.
Dr. Jeremy Weiss, program manager for the University of Arizona AZMET system, has recently developed two valuable tools that provide actual crop evapotranspiration (ETc) estimates from several AZMET sites in the lower Colorado River Valley and for several key vegetable crops, including lettuce (iceberg and romaine), broccoli, cauliflower, cabbage, and spinach.
The first tool provides accumulations of ETc values over the previous week and over a range of dates from the planting date selected by the user. In both models, the Kc values presented in FAO-56 are used for the appropriate stages of each crop. The reference evapotranspiration (ETo) measurements are taken directly from each AZMET site listed. Thus, by this method, ETc estimates are made as ETc = Kc × ETo (Equation 3).
To access this first crop-water estimate tool please refer to the following link:
The second tool provides accumulations of ETc values over the crop production cycle, from planting (or the wet date) to the date of harvest.
This second tool gives us the opportunity to estimate agronomic efficiency of irrigation management. With this tool we can review total crop water use estimates (ETc) and include the leaching requirements (LR) to compare with the irrigation water applied (IWA) as described in Equation 1.
For example, using lettuce (either iceberg or romaine) with a 10 October 2022 planting date through 9 January 2023, we can see from this model the reference evapotranspiration, listed as “water use” (ETo), and the cumulative crop evapotranspiration (ETc, or total crop-water use) of lettuce since the selected planting date.
Since planting on 10 October 2022 through 9 January 2023, total crop water use (evapotranspiration, ETc) has been 11.27 ~ 11.3 inches in the Yuma Valley. We can also determine that crop water use (ETc) last week (3-9 January) was 0.52 inches in the Yuma Valley.
Similar estimates are provided by this model for several AZMET weather stations in the Yuma production area: Roll, Yuma North Gila, Yuma South, and the Yuma Valley. The Yuma Valley station is at the University of Arizona Yuma Agricultural Center and the Yuma South site is near Somerton, Arizona. A map of AZMET sites can be found in the following link:
When we include the leaching requirement for the crop, an overall estimate of field level irrigation efficiency can be made. We can estimate the leaching requirement as follows assuming an electrical conductivity for Colorado River water of 1.1 dS/m and the lettuce crop salinity tolerance of 1.3 dS/m.
Leaching Requirement (LR) = ECw / ((5 × ECe) − ECw)
LR = 1.1 / ((5 × 1.3) − 1.1) = 1.1 / 5.4 = 0.20, i.e. a 20% leaching requirement
Leaching depth = ETc × LR = 11.3 × 0.2 = 2.26 ~ 2.3 inches
Thus, total crop water demand in this case = 11.3 + 2.3 = 13.6 ~ 14 inches total
Contrasting this estimate of total crop-water use with the amount of irrigation water that was applied to the field can provide an estimate of the agronomic efficiency for a given field of lettuce. We can do the same thing using this tool for other primary leafy green vegetable crops.
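As a hedged sketch of that bookkeeping, the worked lettuce example above can be expressed in a few lines of Python; the irrigation-water figure is an assumed value for illustration only.

```python
# Sketch of Equations 1 and 2: leaching requirement, crop water demand,
# and a simple agronomic-efficiency estimate. Inputs are illustrative.

def leaching_requirement(ec_w: float, ec_e: float) -> float:
    """LR = ECw / ((5 * ECe) - ECw), with both salinities in dS/m."""
    return ec_w / ((5.0 * ec_e) - ec_w)

etc_inches = 11.3  # seasonal lettuce ETc from the AZMET tool (inches)
lr = leaching_requirement(ec_w=1.1, ec_e=1.3)  # Colorado River water, lettuce
cwd = etc_inches * (1.0 + lr)  # Equation 1: CWD = ETc + leaching depth

iwa = 16.0  # irrigation water applied (inches) -- assumed for illustration
print(f"LR = {lr:.2f}, CWD = {cwd:.1f} inches, efficiency = {cwd / iwa:.0%}")
```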
We are working in times of decreasing water availability and increasing scrutiny in agriculture. It serves us well to measure and understand the relationship between crop demand (consumptive use plus leaching requirements) versus the irrigation water that is applied to a given field.
Allen, R.G., Pereira, L.S., Raes, D., & Smith, M. (1998). Crop evapotranspiration: Guidelines for computing crop water requirements. FAO Irrigation and Drainage Paper 56. FAO, Rome.
Ayers, R.S. and D.W. Westcot. 1989 (reprinted 1994). Water quality for agriculture. FAO Irrigation and Drainage Paper 29 Rev. 1. ISBN 92-5-102263-1. Food and Agriculture Organization of the United Nations Rome, 1985 © FAO.
Erie, L.J., O.A French, D.A. Bucks, and K. Harris. 1981. Consumptive Use of Water by Major Crops in the Southwestern United States. United States Department of Agriculture, Conservation Research Report No. 29.
Khokhar, T. 2017. World Bank.
Maas, E.V. 1984. Crop tolerance to salinity. California Agriculture, October 1984. https://calag.ucanr.edu/archive/?type=pdf&article=ca.v038n10p20
Maas, E.V. 1986. Salt tolerance of plants. Appl. Agric. Res., 1, 12-36.
Maas, E.V. and S.R. Grattan. 1999. Crop yields as affected by salinity. In: R.W. Skaggs and J. van Schilfgaarde (eds.), Agricultural Drainage, Agronomy Monograph No. 38.
Maas, E. V., and Grieve, C. M. 1994. “Salt tolerance of plants at different growth stages,” in Proc., Int. Conf. Current Developments in Salinity and Drought Tolerance of Plants. January 7–11, 1990, Tando Jam, Pakistan, 181–197.
Maas, E.V., and Hoffman, G.J. 1977. Crop salt tolerance: current assessment. J. Irrig. and Drain. Div., ASCE 103(IR2): 115–134.
How does a cell move? ‘Pull the plug’ on the electrical charge on the inner side of its membrane
Scientists at Johns Hopkins Medicine say that a key to cellular movement is to regulate the electrical charge on the interior side of the cell membrane, potentially paving the way for understanding cancer, immune cell and other types of cell motion.
Their experiments in immune cells and amoeba show that an abundance of negative charges lining the interior surface of the membrane can activate pathways of lipids, enzymes and other proteins responsible for nudging a cell in a certain direction.
The findings, described in the October issue of Nature Cell Biology, advance biologists’ understanding of cell movement and potentially can help explain biological processes associated with movement, such as how cancer cells move and spread beyond the original site of a tumor and how immune cells migrate to areas of infection or wound healing.
“Our cells are moving within our body more than we imagine,” says Peter Devreotes, Ph.D., the Isaac Morris and Lucille Elizabeth Hay Professor and Distinguished Service Professor in the Department of Cell Biology at the Johns Hopkins University School of Medicine. “Cells move to perform many functions, including when they engulf nutrients or when they divide.”
Many of the molecules involved in cell movement become activated in the leading edge of the cell, or where it forms a kind of foot, or protrusion, that orients the cell in a particular direction.
Tatsat Banerjee, a graduate student in the Cell Biology and Chemical and Biomolecular Engineering departments at Johns Hopkins and the lead author of the study, began to notice that the negatively charged lipid molecules lining the inner layer of cell membranes were not uniformly distributed, contrary to what scientists previously thought. He noticed that this set of molecules consistently leaves the regions where a cell makes a protrusion. Banerjee had a hunch that a general biophysical property, such as electrical charge, rather than a specific molecule, could be stimulating and organizing the activities of enzymes and other proteins related to cell movement.
Everyone knows that computers run on ones and zeros. This is because CPUs are made up of billions of transistors, which are basically just on-off switches.
Any code you write needs to be processed by a computer and therefore has to be converted to binary instructions to work.
It isn’t just the execution of code that uses binary, it is also used for the storage of everything in memory and any files that you write to disk. Everything is stored as ones and zeros.
Before we get into exactly how binary works, we need to look at our existing number system.
Current number system
All our numbers are represented with the digits 0 to 9. So, that is 10 characters in total that we use to represent every number from 0 to infinity.
You probably remember, in school, numbers being represented by number columns.
We can only put up to 9 in each column and once we need to go above that we have to move on to the next column.
This is what we call the base 10 number system. Each column is a power of 10.
Everyone knows the base 10 number system, and it is probably the one everyone uses, as we have 10 fingers.
Binary Number System
What if, instead of using 10 characters to represent all the numbers, we only used 2?

This would then be a base 2 number system. If those 2 characters are 0 and 1, then that is what we call binary.

If we go back to our 4 columns: where each column in the base 10 number system represents a power of 10, in the base 2 number system each column represents a power of 2.
With these 4 columns, the maximum number we can store in binary would be 1s in each column, which works out as 8 + 4 + 2 + 1 = 15. If you include 0 as well that is 16 numbers in total, 0 to 15.
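If you want to check this yourself, Python (for example) can parse and format base-2 numbers directly:

```python
# Verifying the 4-column example above.
print(int("1111", 2))  # 15, i.e. 8 + 4 + 2 + 1
print(bin(15))         # '0b1111'
```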
Binary numbers can also be used to represent letters. In the basic ASCII character set, the first 7 bits are used to represent 128 different characters.
A = 0100 0001
B = 0100 0010
C = 0100 0011
D = 0100 0100
E = 0100 0101
The 8th bit is then added to allow for all the other special characters, giving a total of 256 characters.
Å = 1100 0101
Æ = 1100 0110
Ö = 1101 0110
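You can confirm these bit patterns yourself; in Python, for instance:

```python
# Printing the 8-bit patterns of a few characters from the tables above.
for ch in "ABCÅÆÖ":
    print(ch, format(ord(ch), "08b"))
```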
Other Number Bases
It is not just binary numbers that are used in programming; we often encode data into other number bases.
Emails are encoded using base64. Unlike binary, which uses 0 and 1, base64 uses the capital letters A to Z, the lower-case letters a to z, the numbers 0 to 9, and the symbols +, / and =.
ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz 0123456789 +/=
Base32 is typically used a lot as well. The benefit of base32 is that it only uses the upper-case letters A to Z and the numbers 2 to 7. As there aren't any special characters, it is great for things like filenames or for use in URLs.
And of course, we can’t forget base 16 which uses numbers 0 to 9 and letters A to F.
0 = 0
1 = 1
2 = 2
3 = 3
4 = 4
5 = 5
6 = 6
7 = 7
8 = 8
9 = 9
A = 10
B = 11
C = 12
D = 13
E = 14
F = 15
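To see these encodings side by side, here is a small sketch using Python's standard base64 module (the input bytes are arbitrary):

```python
# Encoding the same three bytes in the bases discussed above.
import base64

data = b"Hi!"
print(base64.b64encode(data))  # b'SGkh'
print(base64.b32encode(data))  # b'JBUSC==='
print(base64.b16encode(data))  # b'486921'
```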
Base 16 is also known as hexadecimal and is most commonly used to represent colours in CSS.
Colours in hexadecimal are written as 3 sets of 2 characters.
Each set represents Red, Green, and Blue, respectively.
As we have 2 characters in a base-16 number, the first character on the right represents 16 to the power 0, which equals 1, and the second character represents 16 to the power 1, which equals 16.
So, the highest number we can represent with two characters in hexadecimal is FF which would be 15 x 16 + 15 x 1 = 255.
Therefore, #FF0000 would be red, #00FF00 would be green and #0000FF would be blue.
This gives us a total of 256 x 256 x 256 = 16,777,216 colours that can be represented with hexadecimal.
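A short sketch of how those pairs of hexadecimal digits become red, green, and blue values (the colour is an arbitrary example):

```python
# Splitting a CSS hex colour into its decimal RGB components.
colour = "#FF8800"
r = int(colour[1:3], 16)
g = int(colour[3:5], 16)
b = int(colour[5:7], 16)
print(r, g, b)  # 255 136 0
```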
We know computers need to store everything as 0 and 1. So, how do computer sizes relate to binary numbers?
Each binary digit, 0 or 1, is stored as 1 bit in the computer.
There are 8 bits to a byte, 1,000 bytes to a kilobyte, and 1,000 kilobytes to a megabyte.
If we look at the datatypes of variables we use in programming, an unsigned byte can store a number from 0 to 255. A byte is 8 bits, so it is an 8-digit binary number:

0000 0000 to 1111 1111
If we look at these as number columns, we have the left-most 1 representing 2^7.
If we add these up, it is no surprise we get 255 as the total.
2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 = 255
Now, what about negative numbers?
A signed byte can only store numbers between -128 and 127. The reason for this is that we need to use one of the bits to store the sign of the number.

We use the left-most bit for this purpose. If the left-most bit is a 1 the number is negative, and if it is a 0 the number is positive.

0000 0000 = 0
1000 0000 = -128
Signed numbers use what is known as the two’s complement representation.
Numbers 0 to 127 are represented in the same way we did above. However, for negative numbers, we count up from -128.
1000 0000 = -128
1000 0001 = -127
1111 1111 = -1
There are still 256 numbers that can be represented with this 8-bit binary number.
- -128 to -1 = 128 numbers,
- 1 to 127 = 127 numbers
- and 0
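One way to see these two's-complement patterns for yourself is to mask a signed value down to its low 8 bits; in Python, for example:

```python
# Viewing a signed byte's two's-complement bit pattern.
for value in (-128, -127, -1, 0, 127):
    print(f"{value:>4} -> {value & 0xFF:08b}")
```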
The more digits a binary number has, the larger the number that can be stored. So, an int typically takes up 32 bits, which means it is a 32-digit binary number:

0000 0000 0000 0000 0000 0000 0000 0000
If it is a signed int, we can store numbers up to just over 2 billion, and if it is an unsigned int, we can store just over 4 billion.
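Those limits follow directly from the column arithmetic above:

```python
# 32-bit integer ranges.
print(2**31 - 1)  # 2147483647 -> "just over 2 billion" (signed max)
print(2**32 - 1)  # 4294967295 -> "just over 4 billion" (unsigned max)
```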
As part of the Blue Circular Economy project, researchers at Scotland’s Environmental Research Institute (ERI) are seeking to better understand and highlight the interactions between birds and various forms of debris, including discarded marine plastics. Though marine debris has affected at least 36% of all seabird species through entanglement, we still do not fully understand which birds are affected, or where.
Since the launch of the Birds and Debris website in July 2019, there has been a great response with over 250 submissions of anthropogenic debris incorporated into birds’ nests, or entangling individual birds. These submissions have included over 50 different species, from all around the world, and will help researchers to determine which species are most reported and where, and which types of debris are most commonly involved.
However, work on the project continues and citizen scientists are encouraged to continue making submissions. By enlisting the help of as many people as possible, researchers hope to better understand the scale of the impact plastic pollution has on birds. Dr. Nina O’Hanlon, ornithologist at the University of Highlands and Islands’ North Highland College, of which the ERI is a part, explains what sort of material is required:
“Photographs are particularly welcome, of nests and entangled birds, as these can be used to glean further information on the type and amount of debris incorporated into nests, and how birds are entangled. By recording details on the type of debris we can hopefully find out more about the source of debris ending up in bird nests, or entangling birds, and therefore in the local environment.”
As Dr O’Hanlon says, the project will enable researchers to answer not just current questions, but also those which will arise in the future: “Not only can this help target actions to reduce debris in our oceans, and on land, but it can also be used to see how effective these actions are. For example, will improved port waste management facilities and extended producer liability for fishing gear help reduce fishery related debris in the ocean?”
If any of you birdwatchers or nature enthusiasts have had to restrict your excursions this year due to public health measures, not to worry – you may already have a wealth of useful material to hand: “Submissions can also be from any year. So far, the oldest image submitted is of Northern Gannet nests containing fishing rope on Sule Stack, Scotland from 1967!”
You can help with these investigations by uploading information for any bird species, anywhere in the world at https://www.birdsanddebris.com/. Be sure to check out the About the Project section for guidelines on how to gather images or other data in a way that is both responsible and safe for you and any wildlife you might encounter.
You can also see more evidence of the impact of marine debris on birds in our new video, and be sure to follow us on Twitter, Facebook, and Instagram to keep up with further news about the Blue Circular Economy project. | <urn:uuid:6ff3e057-be91-419d-aba9-53d388482055> | CC-MAIN-2023-06 | https://bluecirculareconomy.eu/update-on-birds-and-debris-we-still-need-your-help/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00720.warc.gz | en | 0.950901 | 619 | 3.734375 | 4 |
V-I Characteristics of p-n Junction Diode
The Volt-Ampere or V-I characteristics of a p-n junction diode is basically the curve between voltage across the junction and the circuit current.
Usually, voltage is plotted along the x-axis and current along the y-axis.
Fig.1 shows the circuit arrangement for determining the V-I characteristics of a p-n junction diode.
The characteristics can be explained under three conditions, namely zero external voltage, forward bias, and reverse bias.
(i) Zero External Voltage:
When the external voltage is zero, i.e., the circuit is open at K, the potential barrier at the junction does not permit current flow. Therefore, the circuit current is zero, as indicated by point O in fig.2.
(ii) Forward Bias:
With forward bias applied to the p-n junction, i.e., the p-type connected to the positive terminal and the n-type connected to the negative terminal, the potential barrier is reduced.
At some forward voltage (0.7 V for Si and 0.3 V for Ge), the potential barrier is altogether eliminated and current starts flowing in the circuit.
From now onwards, the current increases with the increase in forward voltage. Thus a rising curve OB is obtained with forward bias as shown in fig.2.
From the forward characteristics, it is seen that at first (i.e., region OA), the current increases very slowly and the curve is non-linear. This is because the external applied voltage is used to overcome the potential barrier.
However, once the external applied voltage exceeds the potential barrier voltage, the p-n junction behaves like an ordinary conductor. Therefore, current rises very sharply with increase in voltage (region AB). The curve is almost linear.
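The forward-bias behaviour described here is commonly modelled by the Shockley diode equation, I = Is(e^(V/(n·VT)) − 1). Below is a brief sketch with illustrative parameter values (the saturation current and ideality factor are assumptions, not measured values):

```python
# Sketch: ideal-diode (Shockley) model of the forward V-I curve.
# Parameter values are assumed for illustration only.
import math

I_S = 1e-12    # reverse saturation current (A)
N = 1.0        # ideality factor
V_T = 0.02585  # thermal voltage at ~300 K (V)

def diode_current(v: float) -> float:
    """Current (A) through an ideal p-n junction at bias voltage v (V)."""
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

for v in (0.0, 0.3, 0.5, 0.7):
    print(f"V = {v:.1f} V -> I = {diode_current(v):.3e} A")
```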
(iii) Reverse Bias:
With reverse bias applied to the p-n junction, i.e., the p-type connected to the negative terminal and the n-type connected to the positive terminal, the potential barrier at the junction is increased.
Therefore, the junction resistance becomes very high and practically no current flows through the circuit.
However, in practice, a very small current (of the order of μA) flows in the circuit with reverse bias as shown in fig.3.
In n-type and p-type semiconductors, only a very small number of minority charge carriers are present. Hence, even a small reverse voltage applied to the diode pushes all the minority carriers towards the junction.
Thus, further increase in the external voltage does not increase the electric current.
This electric current is called the reverse saturation current. In other words, the reverse saturation current is the maximum level the reverse current reaches, beyond which further increases in voltage do not increase the current.
To the minority carriers (free electrons in the p-type region and holes in the n-type region), the applied reverse bias appears as forward bias. Therefore, a small current flows in the reverse direction.
The reverse saturation current depends on the temperature. If temperature increases the generation of minority charge carriers increases. Hence, the reverse current increases with the increase in temperature.
However, the reverse saturation current is independent of the external reverse voltage. Hence, the reverse saturation current remains constant with the increase in voltage.
However, if the voltage applied on the diode is increased continuously, the kinetic energy of the minority carriers may become high enough to knock electrons out of the semiconductor atoms. At this stage, breakdown of the junction occurs, characterized by a sudden rise in reverse current and a sudden fall in the resistance of the barrier region. This may destroy the junction permanently.
In germanium diodes, a small increase in temperature generates a large number of minority charge carriers. The number of minority charge carriers generated in germanium diodes is greater than in silicon diodes. Hence, the reverse saturation current in germanium diodes is greater than in silicon diodes.
In May of 1940, the German army invaded France and the Low Countries. Using tactics that relied on the initiative of officers leading mechanized units comprising tanks and motorized infantry supported by aircraft, the Germans quickly outmaneuvered French and British forces deployed to fight a static conflict like the First World War on the Western Front. By early June, Britain had initiated a desperate effort to evacuate its forces from the European continent, and by the end of the month the French government had surrendered. This campaign has long been cited as an example of the importance of learning the correct lessons from a particular conflict. Williamson Murray has described this view of 1940 as follows:
“Given the decisive result of the 1940 campaign, it is certainly tempting to conclude that the German army drew the right lessons from the First World War while their adversaries drew the wrong ones.” Such a judgment, however, glosses over prolonged and contentious debates that took place in all of the armed forces that participated in the First World War.
War on Land
The static character of the war on the First World War on the Western Front surprised European military planners. Previous conflicts had demonstrated the impact of modern firepower on the battlefield, but most professional soldiers did not believe that it would preclude successful offensive operations. The late summer and autumn of 1914, however, saw the failure of offensives by both sides, and the war degenerated into a prolonged stalemate. Over the next three years armies attempted to restore mobility to the battlefield by coordinating the forward movement of infantry with artillery barrages of varying duration and intensity. It was only in 1918, however, that they developed methods capable of breaking the deadlock, in the process offering glimpses of future land warfare. In March, the German army launched Operation Michael, an offensive that opened with a brief but intense artillery bombardment featuring shrapnel, high explosives, and poison gas, closely followed by the advance of small units of elite infantry known as storm troops. These units broke through British lines, bypassing strongpoints and creating havoc in rear areas. Despite making rapid initial gains, the offensive lost momentum. Subsequent German offensives using similar tactics produced diminishing returns as the most highly trained infantry became casualties.
In the summer of 1918, the Allies took the offensive against the depleted German army, employing tanks to spearhead the advance of the infantry. The Battle of Amiens, launched on 8 August, saw British and Commonwealth infantry supported by tanks and aircraft advance up to eight miles in a single day. The tanks of 1918, however, were quite susceptible to both mechanical breakdown and enemy anti-tank weapons. As a result, they were not available in sufficient numbers to produce a similar result before the armistice in November. In addition, poor weather and German opposition limited the contribution of British aircraft to operations on the ground. The Allies defeated the German army on the Western Front primarily through a series of “step-by-step” operations, in which infantry advanced under the cover of rapid, extremely heavy artillery bombardments, seizing limited objectives and consolidating their gains while waiting for the artillery to arrive so that the process could be repeated. This method incurred significant casualties, and did not produce spectacular breakthroughs. But it inflicted heavy losses on the enemy, weakening the German army and forcing it to concede territory.
There were significant variations in the tactics developed by different armies on the Western Front. By the end of the war, however, they all included poison gas in their artillery bombardments as a means of incapacitating defenders at the outset of offensive operations. The 1920s saw growing public revulsion toward the use of gas, and consequent attempts by the League of Nations to prohibit its use. These efforts, however, failed to stop the development of chemical weapons or their employment against non-Europeans. In the 1920s, the Spanish used gas to help put down the Rif Rebellion in Morocco, while in the 1930s, Italy used gas in its conquest of Ethiopia. Nor did international agreements stop states from developing plans to use chemical weapons against enemy civilians during the Second World War. In 1944, for example, the Anglo-American Combined Chiefs of Staff developed a plan for a massive bombardment of German cities using phosgene and mustard gas. Given the widespread recognition among military planners of the utility of poison gas, it is perhaps surprising that it was never actually used in military operations in Europe from 1939-1945. While Japan used chemical and even biological weapons during its invasion of China, it did not employ them against European or American forces. What seems to have restrained political leaders and military commanders from employing these weapons during the Second World War was a recognition of their potency. The awareness that the enemy could retaliate in kind to the use of chemical weapons, potentially targeting civilians or prisoners of war, acted as a powerful deterrent.
This widespread recognition of the military value of poison gas, however, was the exception to the rule. The utility of other new weapons was subject to considerable discussion, with different armies drawing very different conclusions. Reflection on the lessons of the First World War began in the British army well before the conflict ended. The final year of the war saw debate between advocates of new technology such as tanks and proponents of more “traditional” tactics featuring infantry supported by artillery. Allied victories in the fall of 1918 appeared to vindicate the traditionalists, but British society had suffered unprecedented losses in the conflict, and military and political leaders recognized that it would not tolerate casualties on a similar scale in the next war. Moreover, British officers worried about the collapse of morale in a conscript army, as had occurred in the French army in 1917. In the 1920s, prolific British advocates of mechanized warfare such as J.F.C. Fuller (1878-1966) and Basil Liddell Hart (1895-1970) proposed a less costly route to victory that emphasized shock rather than attrition. As Liddell Hart wrote in 1926:
The rapidly developing tank offered a means to achieve this shock effect. By 1923, the Vickers medium tank was capable of speeds of up to twenty miles per hour. Thus, under Sir George Milne (1866-1948), Chief of the Imperial General Staff from 1926-33, the British army experimented with the concentration of armor and motorized units into formations in order to maximize their firepower and speed. Ultimately, however, a combination of factors deterred the British from embracing the concept of massed armor. The principal purpose of the army in the interwar years was policing Britain’s increasingly restive colonies, a task for which tank formations had limited utility. In addition, under the economic constraints of the 1930s, it became increasingly difficult to justify an expensive but unproven capability.
The experience of the First World War also discouraged British officers from giving tanks or motorized infantry an autonomous role on the battlefield. On the Western Front, British units had repeatedly suffered heavy casualties when they attempted to exploit their initial gains. British successes in 1918 had come largely as a result of careful coordination of units to achieve clearly defined objectives. Creating independent mechanized formations capable of advancing far more quickly than the rest of the army undermined the commander’s ability to coordinate the forces under his control. Thus, while the British recognized the potential of mechanization, they ultimately concluded that tanks could most appropriately be used in coordination with other arms such as infantry and artillery. As David French has observed, while this approach reflected the historical experiences of the British in the First World War, it “promised to negate many of the advantages of mechanization.”
Although France was also on the winning side of the war, its experience differed from that of Britain in two important respects. First, the northeastern region of France, which produced three-quarters of the country’s iron ore, was invaded and occupied by the German army for more than four years. Secondly, the French army had come much closer to collapse than its British counterpart, with widespread mutinies occurring after the disastrous Nivelle offensive in the spring of 1917. These experiences underlined for French military leaders the importance of defending the country’s frontier with Germany, and the necessity of limiting the exposure of French soldiers to the devastating effects of enemy firepower.
French officers did not simply resign themselves to a defensive strategy in the next war. Nor were they oblivious to the potential implications of new technology. Like the British, the French army had employed tanks in 1918 and after the war French officers advocated the use of massed armor for offensive purposes. A 1919 study conducted by the army’s general headquarters advocated the independent use of tanks, while an army manual published in 1920 decreed that tanks were most appropriately used on the offensive and “in mass”. In addition, while the army generally recognized the need for fortifications along the frontier, many senior officers including Marshals Ferdinand Foch (1851-1929) and Joseph Joffre (1852-1931) argued that fortified positions could facilitate offensive actions, limiting manpower required for the defensive and serving as launching points for offensives into Germany.
As the 1920s progressed, French governments shortened conscripts’ period of service in the army. After 1928, soldiers served for only a year. Since before the First World War, French officers had been skeptical of the ability of inexperienced conscripts to execute offensive tactics under fire and the mutinies of 1917 had reinforced this belief. Thus, the shortened term of conscription dampened officers’ enthusiasm for the offensive and lent weight to arguments in favor of defensive strategies and tactics. The concept that gained favor was the “methodical battle”, which emphasized “tightly controlled operations, in which artillery would dominate both battlefield and forward movement.” This doctrine stifled the initiative of junior officers and rendered the offensive use of maneuver nearly impossible. As became evident in 1940, this doctrine also made it more difficult for French officers even to grasp the very different tactics employed by their enemy.
The armies that suffered defeat in the First World War were even more amenable to new but unproven concepts like massed armor. Russia exited the war in late 1917, seeking an armistice immediately following the Bolshevik Revolution. Until 1922, however, the country was engulfed in civil war between Bolshevik forces and a loose coalition of opposed factions, backed by contingents from more than half a dozen foreign powers. When the fighting finally subsided, the victorious Red Army therefore drew lessons from a longer and more diverse period of conflict than its western counterparts. The most obvious lesson, and one recognized by all of the continental belligerents, was the importance of social and economic mobilization in order to increase the state’s ability to fight a prolonged war. Operations in Eastern Europe, however, were not characterized by the same high force-to-space ratios as those on the Western Front. Mobile operations therefore occurred throughout the First World War and the Red Army’s first victories of the civil war had seen independent actions by cavalry utilizing maneuver. Such victories were consistent with Russia’s military history, which included centuries of mobile operations on the steppe. Russia did not develop tanks in the First World War, but by the mid-1920s senior officers were recognizing the potential of mechanization to enable mobile operations. British experimentation with tanks in 1927-28 piqued Soviet interest. By the early 1930s, the works of Fuller and Liddell Hart had been translated into Russian and Red Army Field Service Regulations called for the use of tanks in independent groups to penetrate the enemy’s defensive system.
Soviet officers did not simply mimic British ideas. Theorists such as Viktor Triandafillov (1894-1931) and Mikhail Tukhachevsky (1893-1937) recognized that armor had the potential to achieve a shock effect through the use of maneuver, but they did not believe that this alone would be sufficient to achieve victory. Based on Russia’s experience in the First World War, as well as Marxist-Leninist ideology, they argued that the next war would require the total mobilization of society and all of the state’s resources. In such a conflict, victory could only be attained through the destruction of the enemy army, an objective too large to achieve in a single battle. In contrast to Fuller and Liddell Hart, Tukhachevsky argued that battle was an extended process rather than a single decisive act.
This would require successive, combined-arms operations featuring the coordination of infantry, artillery, armor and airpower. Developed by Tukhachevsky in the early 1930s, the concept of “deep battle” envisioned the use of infantry, artillery, and armor to attack enemy defensive positions while aircraft targeted strong points, interdicted enemy reserves, and even dropped paratroops in the enemy’s rear areas. Subsequent waves of armor and mechanized infantry would exploit initial gains, pushing into the enemy’s defensive system, and ultimately destroying it. Unfortunately for the Red Army, Joseph Stalin’s (1878-1953) purges claimed Tukhachevsky and many other members of the officer corps in 1937. As a result, the army entered the Second World War with a doctrine that was neither fully developed nor well understood. Its performance from 1939-42 reflected this. Of all armies in the interwar period, however, it was the Red Army that was most successful in combining the lessons of the First World War with the rapidly evolving capabilities of new technologies to develop a realistic approach to military operations in the next war.
At first glance, the lessons learned by German officers appear similar to those drawn by their Soviet counterparts. The German army’s response to defeat on the Western Front, however, resulted from a very different mixture of political constraints, strategic calculations, and military culture. Its assessment of the lessons of the First World War took place in the shadow of the Treaty of Versailles, which reduced dramatically Germany’s ability to use military power to achieve its grand strategic objectives. The treaty limited the size of the army to just 100,000 long-service volunteers, including 4,000 officers. This prevented the development of a large cadre of trained reservists who could be called to serve in the event of another war. The German General Staff and its intellectual training ground, the Kriegsakademie, were abolished. Moreover, the armed forces were prohibited from acquiring tanks, aircraft, anti-aircraft guns, and heavy artillery. To constrain further Germany’s ability to fight a major war, the treaty imposed strict limitations on the development and production of the country’s major arms manufacturers.
The task of rebuilding the army in the context of these restrictions fell to Hans von Seeckt (1866-1936), its commander from 1920-26. In an effort to maximize the effectiveness of his small force, Seeckt incorporated the members of the disbanded General Staff into the new officer corps and initiated a comprehensive program to distill the lessons of the First World War, establishing no fewer than fifty-seven committees to study the issue. Examining German battlefield successes, these studies emphasized the importance of offensive tactics that relied on maneuver and the use of initiative by junior leaders. Seeckt concluded that a highly trained professional force could use speed and mobility to defeat larger conscript armies before they were able to mobilize fully for war. Such a conclusion was convenient, given the limitations imposed by the Treaty of Versailles and the fact that Germany faced potential adversaries on both frontiers. It also reflected the longstanding preference of German officers for a professional army over a conscript force. Germany had not developed tanks in large numbers during the war, but British maneuvers as well as joint exercises with the Red Army in the late 1920s demonstrated their potential as a means of enabling mobile operations. By 1929, the German army was training to employ tank formations independent of slower-moving infantry.
Not all German officers agreed that a small professional force would be sufficient to win the next war. In 1935, Erich Ludendorff (1865-1937), First Quartermaster-General of the army and de facto commander of the German war effort from 1916-18, published Der Totale Krieg, which argued that victory would require the mobilization of all of German society and its resources. Seeckt’s successors had similar views. Ludwig Beck (1880-1944), who served as Chief of the reconstituted General Staff from 1933-38, favored the independent use of tanks. He placed greater emphasis, however, on combined arms operations and believed that a mass conscript army was essential to victory. Adolf Hitler’s (1889-1945) accession to power in 1933 had a decisive influence on the development of German strategy and tactics in two ways. A proponent of mechanization and technology more generally, Hitler supported advocates of radical concepts of armored warfare like Heinz Guderian (1888-1954). In addition, the pace of German expansion under Hitler outpaced the army’s preparations for a prolonged war requiring national mobilization. The German army thus went to war in 1939 using innovative mobile tactics led by independent tank formations supported by aircraft. These tactics produced rapid victories in 1939-40, but they did not result from a widespread consensus within the army regarding the lessons of the previous war. Rather than advocating victory through “blitzkrieg”, most German officers believed that the next major war would be a prolonged affair requiring the complete “militarization” of society. When Hitler took Germany to war before this could be achieved, the army improvised by employing tactics that reflected its experiences in 1917-18, as well as its institutional preference for a mobile offensive executed by a professional army. These tactics proved successful in the opening campaigns of the Second World War. By late 1941, however, the German army was engaged in a titanic struggle on the Eastern Front, for which it had not adequately prepared. In this context, the army’s inability to integrate fully the strategic lessons of the previous war became manifest.
War at Sea↑
Like their counterparts on land, naval officers were surprised by the character of the First World War at sea. The late 1800s had seen the evolution of weapons such as submarines and torpedoes, which posed a threat to the large battleships that had traditionally formed the core of European naval fleets. By the end of the century, however, improvements to armor as well as the development of countermeasures such as torpedo nets and searchlights had reduced concerns about the vulnerability of the battleship. At the same time, the writings of the American naval historian and theorist Alfred Thayer Mahan (1840-1914) emphasized its centrality to maintaining command of the sea. The Battle of Tsushima (1905) during the Russo-Japanese War reinforced Mahan’s views about the importance of decisive naval battles. Thus, navies raced to build heavily armed and armored Dreadnought-class ships in the decade leading up to 1914.
Contrary to expectations, the decisive naval encounter never occurred. The Royal Navy’s Grand Fleet engaged the German High Seas Fleet at the Battle of Jutland in 1916, but the result was indecisive. Although British losses in the encounter exceeded German losses, the Germans were reluctant to risk another fleet engagement against the more powerful Royal Navy. Germany’s most effective naval weapon was in fact the submarine, which sank nearly 12 million tons of Allied shipping during the war. While most of this total consisted of unarmed merchant ships, the scale of losses caused serious concern among British leaders, who feared in early 1917 that the ongoing German submarine campaign would starve Britain out of the war. Rationing, product substitution, and the adoption of defensive naval tactics such as convoying combined to reduce the U-boat threat. But the war demonstrated clearly the ability of the submarine to disrupt the supply lines of states dependent on seaborne trade like Britain. The First World War also saw dramatic growth in the role of aircraft in support of naval operations. Britain’s Royal Naval Air Service (RNAS) expanded from fewer than 1,000 personnel and 100 aircraft to 60,000 personnel and nearly 3,000 aircraft, which provided reconnaissance and artillery spotting for British naval vessels. By the end of the war, the Royal Navy had built twelve aircraft carriers and the Royal Naval Air Service was planning a torpedo-bomber raid against the High Seas Fleet.
The belligerents involved in the First World War recognized the exorbitant cost of modern naval vessels. As a result, they were able to agree on limits to naval construction that held for most of the interwar period. The Treaty of Versailles drastically reduced the strength of the German navy and prohibited it from possessing large battleships or aircraft carriers. The Treaty of Washington, signed in 1922 and renewed in 1930, limited the tonnage of battleships and aircraft carriers of the world’s other major fleets, specifying a ratio of 5:5:3:1.75:1.75 for the British, US, Japanese, French, and Italian navies respectively. It was only in the mid-1930s that Japan and Italy abandoned it. This, along with Germany’s renunciation of the Treaty of Versailles, finally forced the other powers to follow suit. While they could agree on the desirability of preventing another naval arms race, navies were less certain regarding the extent to which war at sea had changed. In retrospect, the impact of the submarine and the aircraft on naval warfare might seem as obvious as that of the tank on land. Nonetheless, navies proved reluctant to abandon what has been termed the “battleship paradigm”, continuing to conduct exercises involving set-piece engagements between opposing fleets.
Based on its experiences in the First World War, the German navy clearly understood the offensive potential of the submarine. While the Treaty of Versailles forbade it to possess aircraft or submarines, by the early 1920s the Germans were developing prototypes abroad in conjunction with other states. But German enthusiasm for submarines should not be overestimated. The German navy went to war in 1939 with an inventory of only fifty-seven U-boats, far fewer than the 300 that Karl Doenitz (1891-1980), their leading proponent, deemed necessary for a large-scale offensive against enemy merchant shipping. Germany’s slow development of submarines during the 1930s stemmed in part from skepticism about their impact in future conflicts. Given the evolution of underwater detection methods such as the British ASDIC, many officers doubted that they would have the same impact as in 1917. More generally, many senior leaders continued to view the battleship as the primary instrument of naval warfare. Erich Raeder (1876-1960), commander of the German navy from 1928-43, dismissed the submarine as a weapon of the weak, funneling limited economic resources into the construction of surface vessels like battleships and cruisers. Even Hitler himself preferred large battleships over less glamorous but potentially more effective weapons like the U-boat. Thus, German construction of submarines in large numbers would only begin once the opening years of the Second World War had demonstrated their value once again.
The Royal Navy saw significant debate in the interwar period regarding the role of the battleship in future conflicts. Some officers argued that the inconclusive engagement at Jutland resulted from the excessive caution of the British commander, Admiral John Jellicoe (1859-1935). Had the Grand Fleet defeated its German adversary decisively, they maintained, the war might have been shortened. Nevertheless, most British naval officers recognized that even a decisive naval victory over the High Seas Fleet would not have neutralized the threat posed to merchant shipping by submarines. From the conclusion of the First World War, Britain therefore advocated an international agreement to abolish the submarine. While this effort proved unsuccessful, the Royal Navy also initiated a deception campaign, exaggerating the capabilities of ASDIC, its underwater detection technology, in order to convince adversaries and allies alike that further investment in submarines was not a worthwhile use of resources. This campaign had some success in slowing the German development of submarines, not least because it reinforced existing preferences for surface vessels.
The British also recognized the impact of aircraft on naval warfare, drawing up ambitious plans for twelve modern carriers. As was the case with the British army, however, the Royal Navy’s application of the lessons of the First World War was hampered by limited means and expanding responsibilities. The establishment of the Royal Air Force in 1918 resulted in the diversion of aircraft and talented aviators to other roles in the interwar period. In addition, the emergence of Japan as a revisionist power in the Pacific compelled Britain to invest in the development of a major naval base at Singapore and to commit resources to defending its Asian colonies. More generally, the Royal Navy continued to adhere to the battleship paradigm. Although financial constraints curtailed naval construction for most of the interwar period, when Britain began rearming in earnest during the late 1930s, the first priority of the Royal Navy remained battleships and slightly smaller cruisers. Thus, rather than twelve modern aircraft carriers, Britain entered the Second World War with only four “first-line carriers and three obsolescent ones.” Moreover, beyond its efforts to create a mystique around ASDIC, the Royal Navy did not develop adequate countermeasures to the submarine, entering the Second World War with a dearth of escort vessels, a focus on offensive tactics rather than proven defensive methods such as convoying, and little experience with the use of aircraft for escort duties.
The US and Japanese navies also remained committed to the battleship. The Japanese navy played only a minor role in the First World War, and its decisive victory at Tsushima in 1905 therefore continued to loom large in its conception of naval warfare in the interwar period. The Japanese spent approximately three times as much as the Royal Navy modernizing their existing battleships, and in the 1930s constructed the largest battleships in the world, the Musashi and the Yamato. Despite the efforts of American airpower advocate Billy Mitchell (1879-1936) to demonstrate the vulnerability of naval vessels to aircraft, the US Navy also continued to conceive of the battleship as the centerpiece of its fleet, spending more than five times as much as the British on modernization of existing ships. It was geography and shifting strategic realities, rather than the lessons imparted during the First World War, that led the Japanese and Americans to develop aircraft carriers during the 1930s. As the likelihood of a conflict between the United States and Japan increased, both sides recognized that they would have to project naval power across the vast expanses of the Pacific. Carriers would be essential to protect their fleets as they moved beyond the range of their own land-based aircraft. Thus, while both navies made extensive use of carriers during the Second World War, they both initially conceived of them in a supporting role, protecting the fleet so that it could engage in the type of decisive battle envisioned by naval theorists since the nineteenth century.
War in the Air↑
In the decade after Orville Wright’s (1871-1948) first successful flight in 1903, the capabilities of winged aircraft developed quickly. The Italian army first used airplanes for military purposes in 1911 when it seized Libya from the Ottoman Empire. The First World War saw aircraft perform a variety of roles, including reconnaissance, support of ground and naval forces, and even bombing of civilian targets. After conducting Zeppelin raids against Britain from early 1915, the Germans began using winged aircraft to bomb British cities in 1917. By 1918, the British were retaliating in kind. While these rudimentary bombing raids had a negligible impact on the outcome of the war, they captured the imaginations of officers who saw a potentially decisive role for aircraft in future conflicts. The best known was Giulio Douhet (1869-1930), a staff officer in the Italian army. Interested in the military use of aircraft since well before the war, Douhet advocated bombing raids against enemy production centers as early as 1915. In 1921, he published Il Dominio dell’aria [The Command of the Air], which argued that air power had the potential to avert bloody stalemates like the one that had prevailed in Europe from 1914-18. Writing prior to the invention of radar, Douhet argued that a surprise attack by a fleet of bombers could inflict significant damage on population centers, devastating civilian morale and compelling the enemy to surrender. The only means of avoiding such an attack was to launch a preemptive strike against the air force of the adversary. Douhet therefore advocated the development of an air force consisting predominantly of bombers, capable of bringing future wars to a rapid and decisive conclusion.
Douhet was not alone in his thinking. Hugh Trenchard (1873-1956), the first Chief of Staff of the Royal Air Force upon its establishment in 1918, developed similar ideas independently, advocating the acquisition of bombers for offensive purposes rather than diverting resources to develop air defense capabilities. Trenchard and especially Douhet influenced officers elsewhere. In the United States, Billy Mitchell argued that air power had rendered land and naval forces obsolete, calling for strikes against urban centers behind enemy lines. In France, officers such as Pierre Fauré argued that airpower was capable of achieving a “quick and cheap victory” in future wars. In Germany as well, officers recognized the benefit of targeting enemy production centers.
It is worth noting, however, that the First World War offered very limited historical evidence to support the lessons derived by the leading advocates of air power. Bombing raids against civilian targets had certainly affected morale in Germany and Britain, but in neither country did they come close to inducing the type of collapse envisioned by Douhet. Indeed, Douhet argued explicitly that given the rapid development of air warfare, history offered few relevant lessons for the future. Not surprisingly, therefore, air power advocates faced multiple criticisms based on analysis of the actual role of aircraft in the First World War. The Italian naval officer Giuseppe Fioravanzo (1891-1975) challenged Douhet, arguing that command of the air could be contested. Air forces were therefore foolish to concentrate on developing bombers that would be vulnerable to attack by smaller fighter aircraft. In France, General Albert Niessel (1866-1955) emphasized that ground forces would continue to play a decisive role in future conflicts. As late as 1942, when the capabilities of aircraft had grown significantly since 1918, British Admiral John Tovey (1885-1971) challenged Trenchard’s assertion that bombing enemy cities would be sufficient to win a war. Based on the events of the last major conflict, Tovey maintained that aircraft would be put to better use protecting the sea lines of communication on which Britain depended for survival.
As was the case with land and naval warfare, the lessons derived by military organizations regarding air warfare were shaped by geographic, institutional, and economic factors. Those organizations that engaged in rigorous post-war analysis tended to draw conclusions that emphasized the importance of cooperation between air and ground forces. In Germany, for example, Hans von Seeckt’s comprehensive assessment of the lessons of the First World War included examinations of the contributions of air power, conducted by multiple teams of officers. These studies revealed the importance of achieving air superiority before initiating other operations, the relative inaccuracy of bombing, and the crucial role of aircraft in supporting the German offensives of 1918. As discussed previously, Soviet theorists also emphasized coordination of air and ground forces, rather than the potential of strategic bombing.
These conclusions, however, were influenced by geographic considerations. Connected by land to potential adversaries, the Soviet Union and Germany were inclined to view aircraft primarily as a means of supporting ground forces, which had always played an essential role in the defense of their territories. In contrast, Britain and the United States were protected by water from imminent invasion and could therefore afford to gamble on Douhet and Trenchard’s unproven assertions about the decisiveness of airpower. Economic and institutional imperatives reinforced the Anglo-American infatuation with strategic bombing. From its establishment in 1918, the Royal Air Force faced hostility from both the Royal Navy and the British army, which saw little need for an independent air force. This hostility prevailed throughout the interwar period, as the services competed for shares of a shrinking defense budget. Defining strategic bombing as its central role allowed the RAF to argue that it was capable of achieving decisive results in a future war, independently of the other services. While the United States Air Force was not established until 1947, strategic bombing played a similar role for American air power advocates, helping them make the case for an independent air force in the 1920s and 1930s. This is not to suggest that these organizations disregarded entirely the lessons of the war regarding the value and potential of air power. Both Britain and the United States recognized the importance of air superiority, as well as air support to land and naval forces. The Germans also saw potential in strategic bombing. The relative value of these roles, however, was subject to debate, the outcome of which was influenced significantly by the particular circumstances of the states, organizations, and individuals involved.
The military lessons of the First World War were never obvious. A century after the end of the conflict, historians continue to debate how and why the Allies won it. Even if it was possible for military professionals involved in the war to identify weapons, tactics, or operational methods that produced success, the rapid pace of technological change during the interwar period made it very difficult to determine the extent to which these advantages would prevail over time. Through careful analysis of the events of the war, military officers were able to discern valuable lessons that proved effective during the Second World War. This was particularly true in the German army. Nevertheless, this essay demonstrates that it is overly simplistic to praise certain military organizations for drawing the “right” lessons from the war, while criticizing others for drawing the “wrong” ones. While rigorous analysis of the events of the First World War produced a variety of tactical and operational insights, the relative importance of these insights was subject to debate in all of the military organizations that participated in the conflict. Moreover, the lessons that individual military organizations chose to emphasize reflected their very different geographic situations, ideological perspectives, economic realities, and institutional interests. For example, in the final months of the war both German and British officers discerned the possibility of using armor, motor vehicles, and aircraft to restore mobility to the battlefield. That the British did not apply this lesson to the same extent as the Germans resulted less from indifference or lack of imagination than from Britain’s different geographic, strategic, and economic realities. Overall, the military lessons derived from the war depended on the perspective of those who searched for them.
Nikolas Gardner, UAE National Defense College
- Murray, Williamson: Armored Warfare: The British, French and German Experiences, in: Allan R. Millett and Williamson Murray (eds.): Military Innovation in the Interwar Period, Cambridge 1998, p. 7. The opinions expressed in this article are those of the author and do not reflect the views of the National Defense College, or the United Arab Emirates government.
- Travers, Tim: The Evolution of British Strategy and Tactics on the Western Front in 1918: GHQ, Manpower and Technology, in: The Journal of Military History 54/2 (April 1990), pp. 191-92.
- Boff, Jonathan: Combined Arms during the Hundred Days Campaign, August-November 1918, in: War in History 17/4 (October 2010), pp. 459-478.
- Harris, Paul and Marble, Sanders: The ‘Step-by-Step’ Approach: British Military Thought and Operational Method on the Western Front, 1915–1917, in: War in History 15/1 (January 2008), pp. 17-42.
- Schmidt, Ulf: Preparing for Poison Warfare: The Ethics and Politics of Britain’s Chemical Weapons Program, 1915-1945, in: Bretislav Friedrich et al. (eds.): One Hundred Years of Chemical Warfare: Research, Development, Consequences, New York 2017, p. 99.
- Travers, The Evolution of British Strategy and Tactics 1990, pp. 183-85.
- Liddell Hart quoted in Heuser, Beatrice: The Evolution of Strategy: Thinking War from Antiquity to the Present, Cambridge 2010, p. 185.
- Murray, Armored Warfare 1998, p. 26.
- French, David: Raising Churchill’s Army: The British Army and the War Against Germany, 1919-1945, Oxford 2000, p. 43.
- Kier, Elizabeth: Imagining War: French and British Military Doctrine between the Wars, Princeton 1997, p. 43.
- Ibid., pp. 41-44.
- Murray, Armored Warfare 1998, p. 32.
- Strachan, Hew: European Armies and the Conduct of War, New York 2001, p. 158.
- Lee, Wayne: Waging War: Conflict, Culture and Innovation in World History, Oxford 2015, p. 420.
- Gat, Azar: A History of Military Thought from the Enlightenment to the Cold War, Oxford 2001, p. 635.
- Kipp, Jacob: Two Views of Warsaw: The Russian Civil War and Soviet Operational Art, 1920-1932, in: B.J.C. McKercher and M.A. Hennessy (eds.): The Operational Art: Developments in the Theories of War, Westport 1996, p. 53.
- Liedtke, Greg: Enduring the Whirlwind: The German Army and the Russo-German War, 1941-1943, Solihull 2016, p. 46.
- Murray, Armored Warfare 1998, pp. 36-37.
- Geyer, Michael: German Strategy in the Age of Machine Warfare, 1914-1945, in: Paret, Peter (ed.): Makers of Modern Strategy: From Machiavelli to the Nuclear Age, Princeton 1986, p. 559.
- Strachan, European Armies 2001, p. 162.
- Deist, Wilhelm: The Road to Ideological War, in: Williamson Murray, MacGregor Knox and Alvin Bernstein (eds.): The Making of Strategy: Rulers, States, and War, Cambridge 1994, p. 359.
- Black, Jeremy: Warfare in the Western World, Bloomington 2002, pp. 125-30.
- Till, Geoffrey: Adopting the Aircraft Carrier: The British, American and Japanese Case Studies, in: Millett and Murray, Military Innovation in the Interwar Period 1998, pp. 194-195.
- The treaty placed tonnage limitations on the battleships and aircraft carriers possessed by the signatories. Britain and the United States were each restricted to 525,000 tons, while Japan was allowed 315,000 tons. France and Italy were each permitted 175,000 tons. The agreement is often expressed as the ratio 5 (Britain):5 (United States):3 (Japan):1.75 (France):1.75 (Italy).
- Ibid., p. 220.
- Maiolo, Joe: Deception and Intelligence Failure: Anglo-German Preparations for U-Boat Warfare in the 1930s, in: Thomas G. Mahnken and Joseph A. Maiolo (eds.): Strategic Studies: A Reader, London 2014, p. 184.
- Heuser, The Evolution of Strategy 2010, p. 251.
- Maiolo, Deception and Intelligence Failure 2014, p. 195.
- Till, Adopting the Aircraft Carrier 1998, p. 207.
- Ibid., p. 198.
- Maiolo, Deception and Intelligence Failure 2014, p. 184.
- Till, Adopting the Aircraft Carrier 1998, p. 222.
- Gardner, Nikolas: Military Thought from Machiavelli to Liddell Hart, in: Matthew Hughes and William Philpott (eds.): Palgrave Advances in Modern Military History, London 2006, p. 76.
- Heuser, The Evolution of Strategy 2010, pp. 307-308, 315.
- Muller, Richard: The Airpower Historian and the Education of Strategists, in: Richard Bailey, James Forsyth and Mark Yeisley (eds.): Strategy: Context and Adaptation from Archidamus to Airpower, Annapolis 2016, p. 115.
- Heuser, The Evolution of Strategy 2010, pp. 310-311.
- Murray, Williamson: Strategic Bombing: The British, American, and German Experiences, in: Millett and Murray, Military Innovation 1998, p. 115.
- Muller, Richard: Close Air Support: The German, British, and American Experiences, in ibid., pp. 163-73.
- Bernstein, Alvin H. / Knox, MacGregor / Murray, Williamson (eds.): The making of strategy. Rulers, states, and war, Cambridge 1994: Cambridge University Press.
- Black, Jeremy: Warfare in the Western world, 1882-1975, Bloomington 2002: Indiana University Press.
- Dülffer, Jost: Weimar, Hitler und die Marine. Reichspolitik und Flottenbau 1920-1939, Düsseldorf 1973: Droste.
- Förster, Stig: An der Schwelle zum totalen Krieg. Die militärische Debatte über den Krieg der Zukunft, 1919-1939, Paderborn 2002: Schöningh.
- French, David: Raising Churchill's army. The British army and the war against Germany, 1919-1945, Oxford 2000: Oxford University Press.
- Gat, Azar: A history of military thought. From the Enlightenment to the Cold War, Oxford 2001: Oxford University Press.
- Kier, Elizabeth: Imagining war. French and British military doctrine between the wars, Princeton 1997: Princeton University Press.
- Millett, Allan R. / Murray, Williamson (eds.): Military innovation in the interwar period, Cambridge 2007: Cambridge University Press.
- Paret, Peter (ed.): Makers of modern strategy. From Machiavelli to the nuclear age, Princeton 1986: Princeton University Press.
- Posen, Barry R.: The sources of military doctrine. France, Britain, and Germany between the World Wars, Ithaca 1984: Cornell University Press.
- Strachan, Hew: European armies and the conduct of war, London; New York 1991: Routledge.
- Travers, Timothy: How the war was won. Command and technology in the British army on the Western Front, 1917-1918, Barnsley 2005: Pen & Sword Military Classics.
- Wettstein, Adrian: Zwischen Trauma und Erstarrung. Die französische Doktrin der Zwischenkriegszeit, in: Jaun, Rudolf / Olsansky, Michael M. / Picaud-Monnerat, Sandrine / Wettstein, Adrian (eds.): An der Front und hinter der Front. Der Erste Weltkrieg und seine Gefechtsfelder = Au front et à l’arrière. La Première Guerre mondiale et ses champs de bataille, Baden 2016: hier + jetzt, pp. 202-222.
Perovskites are a class of synthetic materials that have a crystalline structure similar to that of the naturally occurring mineral calcium titanate. They have been the subject of many studies because they exhibit exciting and unique properties that can be tuned according to their composition. One of their potential applications is as catalysts for the synthesis of ammonia. In other words, specific perovskites can be placed inside a reaction chamber with nitrogen and hydrogen to promote the reaction of these gases to form ammonia.
Ammonia is a useful substance that can be employed in the production of fertilizers and synthetic chemicals, and even as a carrier for hydrogen in clean-energy applications, which may be key to eco-friendly technologies. However, there are various challenges associated with the synthesis of both ammonia and the perovskites themselves.
The synthesis rate for ammonia is generally limited by the high energy required to dissociate nitrogen molecules. Researchers have had some success using precious-metal catalysts such as ruthenium. Recently, perovskites with some of their oxygen atoms replaced by hydrogen and nitrogen ions have been developed as efficient catalysts for ammonia synthesis. However, the traditional synthesis of perovskites with such substitutions usually has to be carried out at high temperatures (over 800 degrees Celsius) and over long periods of time (weeks).
To address these issues, in a recent study carried out at Tokyo Tech, a group of researchers led by Prof. Masaaki Kitano devised a novel method for the low-temperature synthesis of one such oxygen-substituted perovskite, with the chemical formula BaCeO3-xNyHz, and tested its performance as a catalyst to produce ammonia. To achieve this, they made an innovative alteration to the perovskite synthesis process. Using barium carbonate and cerium dioxide as precursors requires a very high temperature to make them combine into the base perovskite, BaCeO3, because barium carbonate is very stable. In addition, it is necessary to substitute the oxygen atoms with nitrogen and hydrogen ions. In contrast, the team found that the compound barium amide reacts easily with cerium dioxide under ammonia gas flow to directly form BaCeO3-xNyHz at low temperatures and in less time. “This is the first demonstration of a bottom-up synthesis of such a material, referred to as perovskite-type oxynitride-hydride,” explains Prof. Kitano.
The researchers first analyzed the structure of the perovskite obtained through the proposed process and then tested its catalytic properties for the low-temperature synthesis of ammonia under various conditions. Not only did the proposed material outperform most of the state-of-the-art competitors when combined with ruthenium, but it also vastly surpassed all of them when combined with cheaper metals such as cobalt and iron. This represents tremendous advantages in terms of both performance and associated cost.
Finally, the researchers attempted to elucidate the mechanisms behind the improved synthesis rate for ammonia. Overall, the insight provided in this study serves as a protocol for the synthesis of other types of materials with nitrogen and hydrogen ion substitutions and for the intelligent design of catalysts. “Our results will pave the way in new catalyst design strategies for low-temperature ammonia synthesis,” concludes Prof. Kitano. These findings will hopefully make the synthesis of useful materials cleaner and more energy efficient.
Parents, is your child’s teacher sending home high-frequency words for them to memorize? I have a tip that will help your child learn to automatically recognize those high-frequency words (sometimes called “sight words”). Hint: It’s not flashcards!
Next time your child is struggling to turn high-frequency words into sight words (words that he or she can recognize immediately when reading), try this. According to the body of reading research known as the science of reading, this practice method is a more effective and efficient way to build word automaticity.
Here’s how it works…
- Say the word. For example, “can”
- Count the sounds you hear in the word: “I hear 3 sounds: /c/, /a/, /n/”
- Write the letter or letters that represent each sound: “We know that c can spell the /c/ sound. a spells the /a/ sound. n spells the /n/ sound.” Write each letter as you say it.
- Read the word.
But what about tricky words? Most tricky words are actually mostly decodable using common sound spellings. If they aren’t fully decodable, just follow the same procedure, but take time to point out and discuss the tricky part. This tricky part may be a truly irregular spelling, or it may just be a spelling pattern your child has not learned yet. Either way, you can point it out and tell your child that they will have to memorize that part, but they can sound out the rest of the word.
- Say the word: “said”
- Count the sounds you hear in the word: “I hear 3 sounds: /s/, /e/, /d/”
- Write the letter or letters that represent each sound: “We know that s spells the /s/ sound. d spells the /d/ sound. The tricky part of this word is that the ai spells the /e/ in this word. ai is an irregular way to spell the /e/ sound. Usually, we would use the e to spell /e/. So we just need to memorize that tricky part.“
- Read the word.
It’s fine to just write the words on a piece of scrap paper, or a whiteboard. I like to use grids to visually break up the word with one sound per box. This method is called phoneme grapheme mapping.
This method works much better than drilling with flashcards or any other activity that encourages students to memorize the shape of the word.
Recent science of reading research on teaching sight words supports incorporating phonics into our sight word instruction. Again, many of the high-frequency words we want children to recognize by sight actually follow regular phonics patterns. (And those that don’t are often mostly decodable.)
We can help students permanently store words in their brains as “sight words” when we use phonics methods in our sight word / high-frequency word / heart word routine.
If you want additional practice to help your child with high-frequency words, you might want to try these practice sheets that I created. Click here to learn more and see a preview. Each practice page in this resource includes phoneme grapheme mapping, and other activities that help students to attend to the sequence of letters, which are two phonics activities we can use to promote word automaticity.
Looking for the printable I used in the picture above? It’s part of this collection of phonics teaching slides. You can also get the page for free by clicking on the image below. 🙂
“Hey, just one minute, eh?”
A new study has found an innate sense of justice even in children as young as three years old.
In experiments conducted by researchers from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and the University of Manchester, UK, children aged three and five strongly disapproved of cheating, showed concern for others, and displayed an in-built sense of restorative justice. In sets of experiments the children were seen to identify with the “victims of injustice” and returned snatched goods to their rightful owners. When deliberately placed obstacles prevented that, they still did not allow the “free riders” to keep the goods they had grabbed from others.
Another important finding from the experiments was the children’s response to the needs of others as if those needs were their own. According to Keith Jensen of the University of Manchester, the ability to put oneself in the shoes of others, or empathize, is a central component of the sense of justice. He sees this sense of justice, focused on the harm suffered by victims, as unique to humans and as providing a basis for sociality and for punishment.
Whereas the punishment of freeloaders encourages cooperation in human society, chimpanzees were seen to be indifferent to foul play as long as they were not harmed themselves.
In experiments conducted at the Max Planck Institute with puppets, the children were observed to give the same responses to offenders, taking back the item snatched by the greedy puppet and returning it to the victimized one. And when a glass screen prevented them from doing that, they nevertheless did not allow the bad-mannered puppet to hold on to the booty. The children were also seen to share with a puppet that had helped others.
- 1. “Three-year-olds help victims of injustice”, Max-Planck-Gesellschaft, 18 June 2015
Who says math has to be boring? With the Math Signs: Advanced lesson plan, your students will have a blast learning about all kinds of mathematical concepts. They’ll start by learning to identify different math signs, and then apply them in numerical statements. For example, they’ll need to determine if equations involving addition and subtraction are true or false. By the end of the lesson, your students will have a much better understanding of how to use math signs in the real world. And they’ll have had a lot of fun too!
Lidar surveys are a great way to map terrain and land features, including vegetation, topography, and land cover, and they are ideal for assessing the extent and severity of land erosion. The technology can map the terrain, identify changes in topography, and locate areas where erosion is occurring so that steps can be taken to prevent further damage. Lidar surveys are becoming more common as they prove to be an efficient and reliable way to assess land conditions.
Erosion is a serious problem for farmers. They can lose valuable topsoil, which is the key to a healthy crop. Lidar can help identify areas at risk of erosion. By mapping the terrain, farmers can take steps to protect their fields.
Erosion is a natural process that can be monitored with lidar. By tracking the erosion of an area, scientists and land managers can better understand the process and its impact on the landscape.
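To make this concrete, here is a minimal sketch of how two lidar-derived elevation models from repeat surveys might be differenced to quantify erosion. The filenames and the 10 cm noise threshold are illustrative assumptions, not values from any particular survey:

```python
# Sketch: quantify erosion by differencing two co-registered lidar DEMs.
# "dem_2019.tif" and "dem_2023.tif" are hypothetical filenames; nodata
# cells are assumed to have been masked out beforehand.
import rasterio

with rasterio.open("dem_2019.tif") as src:
    dem_before = src.read(1).astype(float)
    cell_area = abs(src.transform.a * src.transform.e)  # m^2 per grid cell

with rasterio.open("dem_2023.tif") as src:
    dem_after = src.read(1).astype(float)

change = dem_after - dem_before      # negative values indicate elevation loss
eroding = change < -0.10             # ignore changes below a ~10 cm noise floor

print(f"Eroding area: {eroding.sum() * cell_area:,.0f} m^2")
print(f"Volume lost:  {-change[eroding].sum() * cell_area:,.0f} m^3")
```

A map of the `eroding` mask highlights exactly where material is being lost between surveys, which is the kind of product land managers can act on.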
River Channel Topography
The topography of a river channel can be complex. Lidar can be used to create a three-dimensional image of the channel, providing information on its depth and shape.
River Flow Modelling
Lidar can be used to model river flows. In addition to measuring the flow rate of water, lidar can also be used to measure the depth and width of a river. This information can then be used to create a model of the river’s flow.
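As a rough illustration of how channel geometry from a lidar survey can feed a flow model, the sketch below applies Manning's equation, Q = (1/n) A R^(2/3) S^(1/2), to a simplified rectangular channel. The width and depth would come from the lidar data; the roughness coefficient and slope here are placeholder values:

```python
# Sketch: estimate discharge from lidar-derived channel geometry using
# Manning's equation. The roughness n and slope are illustrative values;
# real studies would calibrate them against field measurements.

def manning_discharge(width_m: float, depth_m: float,
                      slope: float, n: float = 0.035) -> float:
    area = width_m * depth_m                    # cross-sectional area A [m^2]
    wetted_perimeter = width_m + 2 * depth_m    # bed plus both banks [m]
    hydraulic_radius = area / wetted_perimeter  # R [m]
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Example: a 12 m wide, 1.5 m deep channel with a 0.1% slope
print(f"Estimated discharge: {manning_discharge(12.0, 1.5, 0.001):.1f} m^3/s")
```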
Scientific Name: Hymenaea courbaril
Botanical Family: FABACEAE CAESALPINACEAE
Common Name(s): Guapinol
This is a large tree that can reach up to 40 meters in height, with a large, rounded crown covered with bright, compound leaves. The paired leaflets form a shape that resembles the footprint of a deer, which tells us that we are in the presence of our tree of the month for December, El Guapinol.
The fruit is a legume with a very hard, woody shell; inside, it contains seeds coated with a flour-like pulp, which is edible and sweet. The pulp of the guapinol fruit has a high nutritional value and can be used to flavor dishes or to prepare an “atol” or custard.
The Aztecs applied the name “cuahupinoli”, which means tree that produces pinole, because the seeds are wrapped in a powder with a starchy and pleasant taste similar to that of the roasted and ground corn known as pinole.
Its wood is fine, very hard and of high durability, widely used for the manufacture of furniture and crafts. In natural medicine it is used as an antiparasitic and for stomach disorders.
Toxic shock syndrome
What is it?
Toxic shock syndrome is a rare, life-threatening complication of bacterial infection that has been most often associated with the use of superabsorbent tampons and occasionally with the use of contraceptive sponges.
Often toxic shock syndrome results from toxins produced by Staphylococcus aureus (staph) bacteria, but the condition may also be caused by toxins produced by group A streptococcus (strep) bacteria.
While the syndrome often occurs in menstruating women, it can also affect men, children and postmenopausal women. Other risk factors for toxic shock syndrome include skin wounds and surgery.
Signs and symptoms of toxic shock syndrome develop suddenly, and the disease can be fatal. You can take steps to reduce your risk of toxic shock syndrome.
The signs and symptoms of toxic shock syndrome may include:
- A sudden high fever
- Low blood pressure (hypotension)
- Vomiting or diarrhoea
- A rash resembling a sunburn, particularly on your palms and soles — which, after a week or so, generally leads to peeling of the skin on your hands and feet
- Muscle aches
- Redness of your eyes, mouth and throat
Researchers don't know exactly how tampons may cause toxic shock syndrome. Some believe that when superabsorbent tampons are left in place for a long time, the tampons become a breeding ground for bacteria. Others have suggested that the superabsorbent fibers in the tampons can scratch the surface of the vagina, making it possible for bacteria or their toxins to enter the bloodstream.
It's not just young, menstruating women who can develop toxic shock syndrome. About half the current cases occur in nonmenstruating people, including older women, men and children. Toxic shock syndrome has occurred in women who had been wearing a diaphragm or a contraceptive sponge. It's possible for anyone to develop toxic shock syndrome in the course of a staph or strep infection. The syndrome may occur in association with skin wounds or surgery.
There's no one specific test for toxic shock syndrome. You may need to provide blood and urine samples to test for the presence of a staph or strep infection. Samples from your vagina, cervix and throat may be taken for laboratory analysis by using cotton swabs.
How the forest ecosystem shapes its abiotic conditions and thereby itself
In a 2020 publication, Pierre Liancourt and Jiri Dolezal (Czech Academy of Sciences Trebon and University of Tübingen) address the topic of “abiotic facilitation” at the community level (i.e., the biocoenosis). In ecology, facilitation refers either to a positive biotic or abiotic interaction between different species that is beneficial for at least one of the species, or to a benefit that a species experiences due to improved environmental conditions (nutrients or microclimate).
The vast majority of studies on this topic focus on positive interactions between species, i.e., where both species benefit from each other. This kind of “+/+” interaction usually generates noticeable patterns, such as biomass accumulation, higher plant density or higher species diversity. These noticeable patterns lead research on ecological facilitation to focus primarily on environments where such enhancement is most likely and most apparent – for example, areas with sparse plant cover, such as high-montane or arctic areas, or early successional stages. This focus, however, makes it easy to overlook the large-scale potential of ecological facilitation and its general importance in nature.
Ecological facilitation through the improvement of abiotic factors plays a significant role in shaping entire biotic communities. An example of this can be found in forests, where the biotic community generally has positive effects on the biophysical properties of its habitat. Largely, these effects occur due to the plant cover and its structure, which makes the abiotic environmental factors, such as microclimate and soil properties, more favorable for some species, i.e., closer to their ecological optimum. Such vegetation-related effects include, for example, reduced light incidence and soil drying rate, increased relative humidity, buffered temperature fluctuations and wind protection. Vegetation properties such as biomass, plant density, species richness and functional diversity are likely to play a decisive role. The interacting elements of the biotic community thus create certain conditions, patterns and structures, which in turn influence the community (a feedback effect).
Abiotic support is most noticeable where vegetation is sparse. Inconspicuous and therefore often overlooked, on the other hand, are the facilitation effects in vegetation-rich communities where abiotic conditions are improved over a large area. The extent of such inconspicuous facilitation effects within a closed vegetation becomes most visible when compared to a corresponding, cleared, vegetation-free area. The effects of community-level nurturing effects can be significant: they can partially mitigate the effects of climate change on plant communities or, in the case of abrupt vegetation “collapse”, they can amplify them.
Further and more intensive research on ecological support that goes beyond positive 1:1 interactions (species A favors species B) and striking patterns could greatly improve our understanding of the role of ecological support at large scales. An important step is to understand how ecological support can mitigate the effects of climate change on biodiversity, how it influences species distribution, species community composition, species coexistence and ecosystem function, and interdependencies in species communities. This is because the consequences and context dependence of the interplay between community-wide effects, pairwise and indirect interactions in ecosystems are still largely unknown.
The review article by Liancourt and Dolezal makes clear how easily widespread but inconspicuous ecological effects can be overlooked, although they play an essential role in ecosystems and in their resilience, for example under climate change. Among other things, this affects our understanding and treatment of forests, whose multidimensionality is still poorly understood and often underestimated. The tree species composition, age and dimensional structure, canopy architecture, tree density, herbaceous layer, soil, (management) history, and overall species inventory of a forest or stand create a unique physiological environment that in turn affects all of the above. The forest as an ecosystem should be understood as a dynamic network of feedback loops and self-regulation. Any intervention in this system has effects; direct, apparent effects that we can see and easily track, but also indirect effects hidden from our eyes and knowledge, the magnitude of which cannot (yet) be readily appreciated. The study therefore also carries an appeal: keep interventions in ecosystems as minimal as possible, and do not equate the obvious with the whole of reality.
orbitize! is a program used in the scientific community to fit the orbits of directly imaged exoplanets. Sounds great, but what does that mean?
Most of the exoplanets we image directly are located far from their host star, which means they have very long orbital periods. Since we cannot constantly observe these objects, we often get “astrometry” – x and y (or r and θ, if you are working in polar coordinates) positions relative to the star at each time of observation, along with their uncertainties. An example of an astrometry table is shown in Figure 1.
Positions are useful to us because we can use them to estimate the orbits of planets in 3D – are the orbits eccentric or more circular? What is the semi-major axis of the orbit? Is the orbit tilted as seen from Earth? Since the direct imaging method is relatively new, the community needed open-source code that could help us answer these questions and better understand the orbits of these exoplanets. We want to learn more about the orbits of exoplanets because they can tell us a lot about how planetary systems form and how they will evolve over time.
Sarah Blunt, a graduate student at Caltech, led an effort to write orbit-fitting code in Python that is open source and very straightforward to use, called orbitize! (yes, with an exclamation point!). It has since become a standard in the exoplanet direct imaging community. This code allows you to use planet positions to estimate the planet’s orbital parameters, using two main algorithms: Orbits For The Impatient (OFTI) and Markov-Chain Monte Carlo (MCMC). All six orbital parameters are shown in Figure 2.
In today’s post, we’ll learn how to download and use orbitize! with the MCMC sampler to fit the possible orbits of the exoplanet HD 984 B (discovered by Meshkat et al. 2015), as shown in Figure 3. (Note: This tutorial assumes that you have Python 3 installed on your computer, along with the Python NumPy and Matplotlib packages – if you don’t have those installed, please click the links to learn how to install them.)
We’ll need to format our astrometry correctly so that orbitize! can read it. orbitize! takes a CSV file (it can be written in Google Sheets or Excel) where the columns, respectively, should be as follows:
- "epoch" – This is the time of your astrometry observation, in modified Julian date (MJD). You can use an online converter to convert regular dates to MJD.
- "object" – This refers to the object for which your astrometry is measured. Here, since we are using the planet’s astrometry, we always put the value "1" in this column.
- "sep" – This is the separation of the object from the host star, in milliarcseconds (mas).
- "sep_err" – This is the uncertainty on your object’s separation, in mas.
- "pa" – This is your planet’s position angle, in degrees.
- "pa_err" – This is the uncertainty on your planet’s position angle, in degrees.
Once the spreadsheet is formatted, you must download it as a CSV file.
Figure 4 shows an example of how this data is formatted, for the HD 984 B. For the convenience of this tutorial, I’ve uploaded this CSV file to GitHub, so you can access it from there by clicking on this link.
Figure 4. Astrometry of HD 984 B that will be used in orbitize!
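Before moving on, it is worth checking that orbitize! can actually parse your table. Here is a minimal sketch, assuming the file is saved locally as HD984B.csv (a placeholder name for the table shown in Figure 4):

```python
# Quick sanity check that orbitize! can read the formatted CSV.
# "HD984B.csv" is a placeholder path for wherever you saved the table.
from orbitize import read_input

data_table = read_input.read_file("HD984B.csv")
print(data_table)  # an astropy Table built from the epoch, object, sep, and pa columns
```

If the columns were named or ordered incorrectly, this call will complain, which is much easier to debug now than in the middle of a fit.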
Now that you have the file ready to go, we’ll move on to the Jupyter notebook for the rest of the tutorial.
To access the tutorial on how to download and run orbitize!, please click here!
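If you would like a taste of what the notebook does before opening it, the sketch below follows the general pattern of the orbitize! quickstart. The system mass and parallax are rough placeholder values for HD 984 (check the literature before using them for real science), and the MCMC settings are kept small so it runs quickly:

```python
# Minimal MCMC fit in the style of the orbitize! quickstart.
# The stellar mass and parallax below are placeholder values for HD 984,
# not vetted measurements; swap in literature values for actual research.
from orbitize import driver

my_driver = driver.Driver(
    "HD984B.csv",  # the astrometry table we formatted above
    "MCMC",        # sampler: choose from 'MCMC' or 'OFTI'
    1,             # number of secondary bodies (just HD 984 B)
    1.2,           # total system mass [solar masses] (placeholder)
    21.8,          # system parallax [mas] (placeholder)
    mass_err=0.1,
    plx_err=0.5,
    mcmc_kwargs={"num_temps": 5, "num_walkers": 50, "num_threads": 1},
)

orbits = my_driver.sampler.run_sampler(10000)   # draw posterior orbit samples
fig = my_driver.sampler.results.plot_orbits()   # plot a subset of accepted orbits
fig.savefig("hd984b_orbits.png")
```

The full tutorial walks through choosing sensible sampler settings and interpreting the resulting posteriors, so treat this only as a preview of the workflow.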
The orbitize! documentation page contains a description of how to use the OFTI algorithm and how to tailor the fit more specifically to your needs. The great thing about open-source code is that it allows scientists and developers to contribute using a standardized way of doing scientific research, which is free and publicly available. Hopefully, as we develop our scientific methods and algorithms, we can start using and writing more open-source code that is accessible to the public and the community!
Edited by: Jana Stoer
Featured image credit: the orbitize! collaboration
About Clarissa Do O
I am a second-year physics graduate student at the University of California, San Diego. I study the orbital dynamics of exoplanets and also work on exoplanet instruments. My current work upgrades the adaptive optics of the Gemini Planet Imager 2.0, an instrument intended to directly image and characterize exoplanets.
Math in Spanish class?!? But we’re just learning how to say time in Spanish? Why would we need to do math? 🤯 This was me when we started learning the second half of the hour in my high school Spanish class.
We learned how to tell time in high school all at once. It was overwhelming! I didn’t want to repeat this experience. So I broke down how to say time in Spanish into three lessons to keep it simple. This also gives more repetition of time for each part of the clock, so kids will learn it better and be able to speak it better. The activities are reading and listening, which are the best types of activities for learning a new language!
This is the third lesson so make sure to complete the first time lesson and the second time lesson before starting this. If you haven’t started teaching your kids Spanish yet, check out my Start Here page for the best way to begin!
This lesson teaches the second half of the clock (minutes 31 – 59), and the activities include listening practice, reading practice, and games.
Download the FREE printables and instructions here:
This Is What I Did to Teach My Kids How to Say Time in Spanish
Before starting this lesson, there are a few requirements: knowing how to tell time in English AND knowing the numbers 1 – 29 in Spanish. Please, please, please do NOT do this lesson if your kids can’t tell time in English. Trying to teach kids to tell time in English and Spanish will be confusing and frustrating. Especially with this lesson. 🤪
And please do NOT do this lesson if kids don’t know the numbers 1 – 29. Trying to learn the numbers in Spanish at the same time as trying to do subtraction with those numbers will be too much. You can complete my lesson for numbers first if kids don’t know the numbers yet.
Total Physical Response (TPR)
Before we started this lesson, Aidan played the Kahoot from the second lesson to review the hours and first half of the clock (minutes 1 – 30). I gave Aidan the vocabulary list and very briefly explained to him how the second half of the hour works in Spanish.
Basically, when you hear a number and menos (like Son las ocho menos…) then that means the hour is the previous number on the clock.
For example: Son las ocho menos… = 7.
Then the number after menos is the minutes, which you subtract from 60 to get the time.
For example: Son las ocho menos diez = 7:50 (60 – 10 = 50).
This was Aidan’s face: 😦 If your kids are confused too after a brief explanation, don’t worry about it. Just jump into the activities, go slowly, and they’ll pick it up.
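For the grown-ups who like seeing the rule spelled out, here is the menos arithmetic as a tiny Python sketch (completely optional, and the word list only covers a handful of the times practiced in this lesson):

```python
# The "menos" rule: for minutes 31-59, Spanish names the NEXT hour
# minus (60 - minutes). So 7:50 becomes "Son las ocho menos diez."
# Only a few number words are included here, just enough for examples.
NUMBERS = {1: "una", 2: "dos", 5: "cinco", 8: "ocho", 10: "diez",
           15: "quince", 20: "veinte", 25: "veinticinco"}

def menos_time(hour: int, minutes: int) -> str:
    next_hour = hour % 12 + 1       # 7:50 uses "ocho"; 12:50 uses "una"
    minutes_left = 60 - minutes     # e.g. 60 - 50 = 10 -> "diez"
    verb = "Es la" if next_hour == 1 else "Son las"
    return f"{verb} {NUMBERS[next_hour]} menos {NUMBERS[minutes_left]}"

print(menos_time(7, 50))   # Son las ocho menos diez
print(menos_time(12, 50))  # Es la una menos diez
```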
And don’t be like my students and think you can cut corners and say it like a digital clock. 😆 Every.single.year I have at least one student ask why we can’t just say time like what you see on a digital clock. That’s not a thing in Spanish.
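If it helps to see the rule written out mechanically, here is a tiny sketch (a purely illustrative example of my own; the function name and digits-only output are not from any curriculum):

```python
def menos_time(hour: int, minute: int) -> str:
    """Phrase a time with minutes 31-59 using the menos rule:
    name the NEXT hour, then subtract the minutes from 60."""
    next_hour = hour % 12 + 1   # 7:50 -> "las 8 ..." (ocho)
    remaining = 60 - minute     # 50   -> "... menos 10" (diez)
    return f"las {next_hour} menos {remaining}"

print(menos_time(7, 50))  # las 8 menos 10, i.e., "Son las ocho menos diez"
```

You'd still say the digits as Spanish number words, of course; the sketch just shows the two arithmetic steps.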
After the brief explanation, I used the script to randomly say the times for this lesson while Aidan pointed to the flashcards. I made sure to pause for a second after saying menos so he could figure out the hour before trying to figure out the minutes. That seemed to help. After going through Column A, I had him listen to the video. He was able to find the times on the flashcards pretty quickly by the end of it.
There are two reading activities: the first is matching a clock with the time in a sentence and the second is reading the time and then writing it with numbers.
For another reading activity, I created a Kahoot (a quiz game) that only has the second half of the hour to practice what is in this lesson. If you don’t have a Kahoot account, you’ll need to make one if you want to play. It’s free to make an account and play.
Then search for “la hora – los minutos 31 – 59 fosternm” to play my Kahoot. There are lots of quizzes for time on Kahoot, which will be good to play as a review after completing this lesson.
Before we played Las carreras (the racing game for this lesson), we played Memoria (Memory). For more listening practice, I said the times in Spanish for the number cards and the time in English for the word cards. If you’re not sure about the times, you can use the vocabulary list as a cheat sheet. 😊
Besides Kahoot (which I always think of as a reading activity), this lesson has Las carreras – one of my favorite games to play with students. Aidan always likes it too. Las carreras means the races and normally my students play in small teams against each other.
But, I discovered this game still works even with only one kid. We use a timer and decide how much time Aidan has to complete a problem for that round. This time, he started with 1 minute which was way too much time. So for the second problem, he gave himself 30 seconds. Still too much time.
For the third problem, he gave himself 15 seconds. That was perfect. It gave him enough time to complete each problem, but there was still some urgency to finish because the clock was running out of time fast. If you’re playing with one kid (or your kids want to play individually), I highly suggest choosing a time together that will give them a chance to win but not be too easy.
Using these easy listening and reading activities will make learning how to say time in Spanish fun instead of stressful! Start with using the flashcards and complete the TPR activity. Then play some of the flashcard games for fun practice so you and your kids can tell time in Spanish!
Did you do these activities or know anyone who wants to start teaching their kids to tell time in Spanish? Please share with the buttons on the left!
If you took Spanish in high school, did your brain explode like mine when you were learning to tell time? Tell me about it in the comments below!
P.S. Are you looking for a quick and fun way to help your kids start learning Spanish? If so, check out my free Spanish for Kids Starter Guide! You can immediately use any of the 9 simple tips to introduce your kids to Spanish. Know what the best part is? You don’t have to know Spanish to use it! | <urn:uuid:6b05eba9-efa6-4b96-8920-5fc94afa6bac> | CC-MAIN-2023-06 | https://spanishschoolforkids.com/5-activities-how-to-say-time-in-spanish/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00720.warc.gz | en | 0.956582 | 1,389 | 3.53125 | 4 |
DNS records are used to point your domain to other servers. To connect your domain to a web server, you can use A and CNAME records. For email, you must create an MX record. Please consult your hosting provider to obtain the relevant information for your services.
DNS records are mapping files or systems that tell a DNS server which IP address a particular domain is associated with. They also tell DNS servers how to handle requests that are sent to each domain name. For example, when you type www.exampledomain.com into your browser and press Enter, the DNS will translate it to the exact IP address where the domain is hosted.
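As a concrete illustration of that translation step, here is a minimal Python sketch using only the standard library (the hostname is a placeholder; substitute a domain that actually exists):

```python
import socket

# Ask the operating system's DNS resolver to translate a hostname
# into the IPv4 address it is associated with (an A-record lookup).
ip_address = socket.gethostbyname("www.exampledomain.com")
print(ip_address)
```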
DNS Syntax Types Explained
Different strings of letters are used to dictate the actions a DNS server takes. These letters are called DNS syntax. Below is a list of the various DNS syntax types, with a short explanation of the usage and meaning of each:
The “A” record stands for Address, and it is the most basic type of DNS syntax. It points a domain to an actual IPv4 address. Regular A records map hostnames to 32-bit IPv4 addresses, while the “AAAA” record (also known as an IPv6 address record) points a hostname to a 128-bit IPv6 address.
The “CNAME” record stands for Canonical Name and its role is to make one domain an alias of another domain. CNAME is often used to associate new subdomains with an existing domain's A record.
The “MX” or Mail Exchange record is primarily a list of the mail exchange servers to be used for the domain.
The “PTR” record stands for Pointer Record. This DNS syntax maps an IPv4 address back to a hostname, which is why it is often called a reverse DNS record.
The “NS” record stands for Name Server and it indicates which Name Server is authoritative for the domain.
An “SOA” record stands for Start of Authority. It is one of the most important DNS records because it stores essential information about the zone, such as the serial number that tracks the domain's last update and timers that control how often secondary servers check for changes.
An “SRV” record stands for Service. It is used to define the hostname and port number of servers providing a specific service (over TCP or UDP) for the domain.
A “TXT” record stands for Text. This DNS syntax lets administrators insert any text they would like into the DNS record. It is often used to publish facts or information about the domain, such as ownership-verification strings and email policies like SPF.
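If you want to look up several of these record types for a domain yourself, one option is the third-party dnspython library. This is only a sketch; it assumes the package is installed and uses example.com as a placeholder domain:

```python
# pip install dnspython
import dns.resolver

domain = "example.com"  # substitute your own domain

for record_type in ("A", "AAAA", "CNAME", "MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve(domain, record_type)
    except dns.resolver.NoAnswer:
        continue  # the zone has no records of this type
    except dns.resolver.NXDOMAIN:
        break     # the domain itself does not exist
    for rdata in answers:
        print(f"{record_type:5} {rdata.to_text()}")
```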
A cup of water for a cup of wind?
One of today's biggest challenges tied to the climate crisis is the growing global demand for energy, which makes it more crucial than ever to find sustainable alternatives to fossil fuels. In recent years, new green energy inventions and solutions have been found in wind, hydro and solar power. But with these weather-dependent technologies, what do you do when the wind stops blowing, or if the sun rarely shines? Is it, in fact, possible for one country to run on energy from another?
Since renewable energy sources like wind, hydro, geothermal and solar power depend on the weather, their output is naturally variable. That variability introduces a lot of challenges for those who must maintain the constant balance between energy supply and demand required for a stable electric power grid. Luckily, there are new solutions that can help overcome this problem and make sure that changes in the weather don't stop us from building a more sustainable future.
One of these solutions is a European super grid – a wide electrical network that crosses national borders. Last year, the UK and Norway took an important step towards the development of such a network by becoming able to share renewable energy for the first time via the world’s longest subsea electricity interconnector, ‘North Sea Link’.
“The UK has a strong energy bond with Norway that goes back decades. North Sea Link is strengthening that bond and enabling both nations to benefit from the flexibility and energy security that interconnectors provide”.
Sharing Norwegian hydropower
The North Sea Link became operational in October 2021 and is a 1,400 MW high-voltage direct current submarine power cable between Norway and the United Kingdom. At 720 km, it is the longest subsea interconnector in the world. By sharing Norwegian hydropower, also known as waterpower, with the UK, the North Sea Link is expected to reduce the burning of fossil fuels and avoid 23 million tons of carbon emissions in the UK by 2030.
But why is hydropower the way forward? The clean, renewable hydropower produces no air pollutants and shows the least greenhouse gas emission of all power generation technologies. Norway is actually somewhat of an expert in hydropower – with 90% of all power generation coming from the simple local resource of water. From a global perspective, hydropower accounts for around one-sixth of the total electricity supply.
But what does it mean for the North Sea Link if water levels drop in the Norwegian hydro reservoirs during dry weather? Well, the shortfall would simply be complemented by wind power from the UK. And the other way around: When UK wind power generation is high, and electricity demands are low, the North Sea Link will allow renewable power to be exported to Norway, preserving water in the Norwegian reservoirs. Pretty cool, right?
Combatting geothermal energy waste
Right now, an even longer interconnector is proposed between the UK and Iceland. It’s called ‘Icelink’ – a suitable name for an interconnector connecting another part of the world to Iceland’s huge supply of renewable hydro and geothermal energy.
In Iceland, more than 99% of total electricity consumption is supplied through either hydropower plants or geothermal resources. 90% of Icelandic households are heated with geothermal water, and clean, affordable, hot water is brought directly to their homes from boreholes via pipelines.
Former Vice President of the World Bank, Sri Mulyani Indrawati, has previously stated that she admires the ability of Icelanders to expand the exploitation of geothermal underground: “Geothermal is an alternative energy source that has great potential in Indonesia. We can learn a lot from the experience of the crisis experienced by the country of Iceland.”
"Geothermal is an alternative energy source that has great potential in Indonesia. We can learn a lot from the experience of the crisis experienced by the country of Iceland.”
Iceland is, in fact, the world's largest producer of electricity per capita. Energy experts at Businessgreen.com have even stated that Iceland may end up being central to the whole of Europe's energy supply. Unfortunately, extremely large amounts of this energy are currently wasted, but the more we learn, and the better we become at utilizing the natural resources available, also across borders and through partnerships, the less is wasted.
Danish wind in a Finnish invention
But Iceland and Norway are not the only Nordic countries that would be contributing to the European power grid. For example, there is a plan to build large offshore wind farms, connect them with underwater cables and direct the energy to participating countries all over Europe. To build this, there is increasing interest in Danish wind power technology.
Did you know that today's modern wind turbine was actually inspired by a Finnish invention? In 1922, the Finnish engineer Sigurd Johannes Savonius invented the Savonius wind turbine. It was the first vertical turbine, consisting of several aerofoils on a rotating shaft or framework. These kinds of turbines are still used today, not to produce power, but as cooling systems for large vehicles.
Share a little sunshine
With some countries rich in wind, some rich in wild waters and others rich in sunshine, sharing renewable energy across borders holds great potential to even out the unpredictability of weather, making renewable power supplies more secure and sustainable.
”A super grid is absolutely essential if Europe is to make widespread use of clean power supplies and significantly cut its emissions of atmosphere-warming carbon dioxide,” said Doug Parr, Chief Scientist at Greenpeace UK, to The New York Times.
And it does not have to stop with Europe. Right now, there is also an ambitious proposal to plug North Africa into the power grid – by linking to a giant solar scheme in Morocco.
So, let's unite in our common mission to protect our planet by powering the world with affordable, green, clean energy. There are plenty of amazing solutions, ideas and innovations out there. Let's share those, along with a little sunshine, across our borders.
A little more info:
- Weatherwatch: when the wind drops – keeping renewable energy supplies steady by The Guardian
- National Grid powers up world’s longest subsea interconnector between the UK and Norway by National Grid
- National Grid in talks over plan for energy island in North Sea by The Guardian
- Indonesian Minister of Finance highlights great role of geothermal for Indonesia by Think Geoenergy
- Iceland's Sustainable Energy Story: A Model for the World? by the United Nations
- 13 Inventions We Have to Thank Finland for by The Culture Trip
- An Energy Supergrid for Europe Faces Big Obstacles by The New York Times | <urn:uuid:c4040cb5-c739-44be-af09-bf41966fcd63> | CC-MAIN-2023-06 | https://thenordics.com/index.php/trace/cup-water-cup-wind | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00720.warc.gz | en | 0.937533 | 1,436 | 3.53125 | 4 |
Cheers rang through the streets on July 16, 1861, as Gen. Irvin McDowell's army, 35,000 strong, marched out to begin the long-awaited campaign to capture Richmond and end the war. It was an army of green recruits, few of whom had the faintest idea of the magnitude of the task facing them. But their swaggering gait showed that none doubted the outcome. As excitement spread, many citizens and congressmen with wine and picnic baskets followed the army into the field to watch what all expected would be a colorful show. These troops were mostly 90-day volunteers summoned by President Abraham Lincoln after the startling news of Fort Sumter burst over the nation in April 1861. Called from shops and farms, they had little knowledge of what war would mean.
On a warm July day in 1861, two armies of a divided nation clashed for the first time on the fields overlooking Bull Run. Their ranks were filled with enthusiastic young volunteers in colorful new uniforms, gathered together from every part of the country. Confident that their foes would run at the first shot, the raw recruits were thankful that they would not miss the only battle of what surely would be a short war. But any thought of colorful pageantry was suddenly lost in the smoke, din, dirt, and death of battle. Soldiers on both sides were stunned by the violence and destruction they encountered. At day's end nearly 900 young men lay lifeless on the fields of Matthews Hill, Henry Hill, and Chinn Ridge. Ten hours of heavy fighting swept away any notion the war's outcome would be decided quickly (see: Battle of First Manassas: A Major Turning Point?).
First Manassas: The Campaign
The first day's march covered only five miles, as many straggled to pick blackberries or fill canteens. The lumbering columns were headed for the vital railroad junction at Manassas. Here the Orange and Alexandria Railroad met the Manassas Gap Railroad, which led west to the Shenandoah Valley. If McDowell could seize this junction, he would stand astride the best overland approach to the Confederate capital.

On July 18, McDowell's army reached Centreville. Five miles ahead a small meandering stream named Bull Run crossed the route of the Union advance, and there, guarding the fords from Union Mills to the Stone Bridge, waited 22,000 Southern troops under the command of General Pierre G. T. Beauregard. McDowell first attempted to move toward the Confederate right flank, but his troops were checked at Blackburn's Ford near the center of Beauregard's defensive line. He then spent the next two days scouting the Southern left flank. In the meantime, Beauregard asked the Confederate government at Richmond for help. General Joseph E. Johnston, stationed in the Shenandoah Valley with 10,000 troops, was ordered to support Beauregard if possible. Johnston gave an opposing Union force the slip and, employing the Manassas Gap Railroad, started his brigades toward Manassas Junction. Most of Johnston's troops arrived at the junction on July 20 and 21, some marching from the trains directly into battle.
First Manassas: Morning, July 21, 1861
On the morning of July 21, McDowell sent his attack columns in a long march north toward Sudley Springs Ford. This route took the Federals around the Confederate left. To distract the Southerners, McDowell ordered a diversionary attack where the Warrenton Turnpike crossed Bull Run at the Stone Bridge. At 5:30 a.m. the deep-throated roar of a 30-pounder Parrott rifle shattered the morning calm and signaled the start of battle.

McDowell's new plan depended on speed and surprise, both difficult with inexperienced troops. Valuable time was lost as the men stumbled through the darkness along narrow roads. Confederate Col. Nathan Evans, commanding at the Stone Bridge, soon realized that the attack on his front was only a diversion. Leaving a small force to hold the bridge, Evans rushed the remainder of his command to Matthews Hill in time to check McDowell's lead unit. But Evans' force was too small to hold back the Federals for long. Soon brigades under Barnard Bee and Francis Bartow marched to Evans' assistance. But even with these reinforcements, the thin gray line collapsed, and the Southerners fled in disorder toward Henry Hill. Attempting to rally his men, Bee used Gen. Thomas J. Jackson's newly arrived brigade as an anchor. Pointing to Jackson, Bee shouted, "There stands Jackson like a stone wall! Rally behind the Virginians!" Generals Johnston and Beauregard then arrived on Henry Hill, where they assisted in rallying shattered brigades and redeploying fresh units that were marching to the point of danger.
First Manassas: Afternoon, July 21, 1861
About noon, the Federals stopped their advance to reorganize for a new attack. The lull lasted for about an hour, giving the Confederates enough time to re-form their lines. Then the fighting resumed, each side trying to force the other off Henry Hill. The battle continued until just after 4 p.m., when fresh Southern units crashed into the Union right flank on Chinn Ridge, causing McDowell's tired and discouraged soldiers to withdraw.

At first the withdrawal was orderly. Screened by the regulars, the three-month volunteers retired across Bull Run, where they found the road to Washington jammed with the carriages of congressmen and others who had driven out to Centreville to watch the fight. Panic now seized many of the soldiers, and the retreat became a rout. The Confederates, though bolstered by the arrival of President Jefferson Davis on the field just as the battle was ending, were too disorganized to follow up their success. Daybreak on July 22 found the defeated Union army back behind the bristling defenses of Washington.
NMDA Receptor (NMDAR)
The NMDA receptor is a glutamate receptor in the central nervous system that converts neurotransmitter signals into downstream effects in the neuron. For the NMDA receptor to be activated, both glutamate and glycine must bind to it, and the magnesium ion blocking the pore of the receptor has to be removed, which happens when the cell membrane depolarizes. This type of receptor plays a significant role in a variety of neurological processes, including synaptic plasticity, learning, and memory. The receptor is also a target of a variety of psychoactive drugs, including ketamine and PCP, which act by blocking its channel. When the receptor is activated, its channel opens, allowing ions to pass through the membrane. This flow of ions triggers downstream signaling in the neuron, contributing to changes in how people feel.
Exploring genetics to save endangered pocket mouse
Researchers are delving into the genomes of Pacific pocket mice to help inform conservationists about the best ways to recover the federally endangered species, which has only three populations left in the wild.
Despite its name, the Pacific pocket mouse (Perognathus longimembris pacificus) is not closely related to mice at all. A critically endangered subspecies of the little pocket mouse, it’s found in only three isolated locations within about 3 miles of the southern California coast.
“It’s called the Pacific pocket mouse because it looks like a mouse,” said Aryn Wilder, a senior researcher at San Diego Zoo Global who led a recent study on the rodent. “I think it’s actually more closely related to beavers in some phylogenies.”
Its range used to stretch from the Mexican border to Los Angeles, Wilder said, but around the 1930s, many of its populations began to disappear. “They were thought to be extinct for a while,” she said. “Then, they were rediscovered in the ’90s, and emergency listed under the Endangered Species Act.”
By then, only three populations remained: Two at Marine Corps Base Camp Pendleton and one about 36 miles away in Dana Point, a coastal city in Orange County, separated from the others by urban development.
The San Diego Zoo began a conservation breeding program for the species in 2012, taking individuals from each population as founders.
In a study published in Conservation Genetics, Wilder and her colleagues looked at microsatellite and mitochondrial DNA of the individuals bred in the zoo’s program to help them determine the best ways to reintroduce the animals to the wild. “The initial reason [for the conservation breeding program] was to have a source of individuals to use for reintroduction in parts of their range where they’re no longer found,” she said.
They found that the pocket mice from Dana Point — the smallest population and the one suffering the most from inbreeding — became less and less represented in the genetics of breeding program over time. Researchers believe the reason the Dana Point population’s genetics decreased has to do with genetic load — the buildup of harmful mutations in the genome.
“All populations have mutations in them,” Wilder said. “But normally natural selection does a good job of keeping harmful mutations at low frequency, so they’re rare in the population.” When populations get really small, though, natural selection doesn’t work so well, she said.
The Dana Point offspring did better when they were bred with other populations, however. As a result, she suggests managers consider boosting genetic diversity by moving Camp Pendleton pocket mice to the Dana Point population.
Wilder and her colleagues have now sequenced the genomes of almost every individual in the conservation breeding program. She said the research will shed light on the history of the three wild populations; how well the conservation breeding program captures genetic diversity, including harmful diversity; and how much the animals' genes have fluctuated in the breeding program over time.
That information can guide not just Pacific pocket mouse reintroduction efforts, she said. It could also help conserve other endangered species by showing what determines the success of “genetic rescue,” where managers move individuals between populations to increase genetic diversity.
“It’s been a really successful strategy for increasing population health, but it does potentially come with risks,” she said. “The Pacific pocket mouse can help us understand those tradeoffs.”
Clubroot is a serious soil-borne disease of cruciferous crops (canola and cabbage family) worldwide and was first identified in Europe in the thirteenth century. This disease is a major problem in cole crops (cruciferous vegetables) in some areas of British Columbia, Quebec, Ontario and the Atlantic provinces.
There have been two previous reports of clubroot in cole crops in Alberta. Thus, clubroot is not a new disease in Canada or Alberta. However, in 2003, clubroot was confirmed in several canola fields near Edmonton, Alberta, which was the first report on canola in western Canada.
Clubroot has continued to spread in central Alberta.
Clubroot was added as a declared pest to Alberta's Agricultural Pests Act (APA) in April 2007. The APA is the legislative authority for the enforcement of control measures for declared pests in Alberta. The Alberta Minister of Agriculture and Rural Development is responsible for this Act.
However, enforcement of pest control measures is the responsibility of the municipal authority, and Agricultural Fieldmen are responsible for enforcing pest control measures in their municipality. Pest inspectors have the power to enter land at a reasonable hour, without permission, to inspect for clubroot and collect samples. The owner or occupant of land has the responsibility for taking measures to prevent the establishment and spread of clubroot.
Clubroot can affect broccoli, Brussels sprouts, cabbage, cauliflower, Chinese cabbage, kale, kohlrabi, radish, rutabaga and turnip. Canola / rapeseed and mustard are also susceptible to this disease. Cruciferous weeds are susceptible as well. There are several weak, non-cruciferous hosts, but their contribution to disease development and carryover of the clubroot pathogen is not well understood.
This factsheet contains current information about clubroot in canola and describes options for Canadian canola growers to prevent this disease from being introduced and becoming well established in their fields. It covers the following topics:
- disease cycle
- clubroot symptoms
- prevention and management
The causal agent of clubroot is Plasmodiophora brassicae Woronin. In the past, this agent has been classified as a slime mould fungus (myxomycete), but more recently, it is regarded as a protist (an organism with plant, animal and fungal characteristics).
There are normally several different races or pathotypes present in established infestations. Plasmodiophora brassicae is an obligate parasite, which means the pathogen cannot grow and multiply without a living host. The life cycle of P. brassicae is shown in Figure 1.
Figure 1. Life cycle of Plasmodiophora brassicae, the pathogen that causes clubroot. (Source: Ohio State University)
Resting spores germinate in the spring, producing zoospores that swim very short distances in soil water to root hairs. These resting spores are extremely long lived, with a half-life of about 4 years, but they can survive in soil for up to 20 years. For example, Swedish research in clubroot-infested spring rapeseed fields found that 17 years were needed to reduce the infestation to non-detectable limits.
The longevity of the resting spores is a key factor contributing to the seriousness of the disease. Resting spore germination is stimulated by exudates from the roots of host plants.
After the initial infection through root hairs or wounds, the pathogen forms an amoeba-like cell. This unusual cell multiplies and then joins with others to form a plasmodium, which is a naked mass of protoplasm with many nuclei. The plasmodium eventually divides to form many secondary zoospores that are released into the soil.
These second-generation zoospores re-infect roots of the initial host or nearby plants and are able to invade the cortex (interior) of the root. Once in the cortex, the amoeba-like cells multiply or join with others to form a secondary plasmodium. As this plasmodium develops, plant hormones are altered, which causes the infected cortical cells to swell. Clusters of these enlarged cells form "clubs" or galls (see Figures 2, 3 and 4).
Figure 2. Very severe clubroot on canola.
Photo credit: Kelly Turkington
Figure 3. Severe clubroot galls or "clubs" on canola root.
Photo credit: Kelly Turkington
Figure 4. Moderately infected canola root.
Photo credit: Valerie Sowiak
Some amoeba-like cells are able to move up and down roots in vascular tissue. After the secondary plasmodia mature, they divide into many resting spores within the gall tissue. The galls are quickly decayed by soil microbes, leaving millions of resting spores in the soil.
Although there are no airborne spores released by this pathogen, the resting spores are capable of moving with infested soil transported by wind or water erosion and field machinery.
Warm soil (20-24°C), high soil moisture and acid soil (pH less than 6.5) are environmental factors that favour infection and severe disease development. Unfortunately, these conditions exist in a significant portion of the traditional canola growing areas of Alberta.
High soil moisture areas of the field typically have the most severe infestations. These wet areas are found in depressions, spots with higher clay content or with subsoil horizons that cause poor water infiltration (such as Gray Wooded or solonetzic soils).
Clubroot Symptoms on Canola and Mustard
Clubroot galls are a nutrient sink, so they tie up nutrients, and severely infected roots of canola cannot transport sufficient water and nutrients for aboveground plant parts. Symptoms will vary depending on the growth stage of the crop when infection occurs. Early infection at the seedling stage can result in wilting, stunting and yellowing of canola plants in the late rosette to early podding stage.
Such symptoms may be wrongly attributed to heat stress during periods with high temperatures or to other diseases such as blackleg or Fusarium wilt. In such cases, proper diagnosis includes digging up wilted plants to check for gall formation on roots.
Infection that occurs at later stages may not show plant wilting, stunting or yellowing. However, infected plants will ripen prematurely, and seeds will shrivel. Thus, yield and quality (oil content) are reduced.
Swedish researchers found that infestations approaching 100% of plants caused about 50% yield loss, while infestations of 10 to 20% led to 5 to 10% yield loss. This result is similar to sclerotinia stem rot infection in canola, where a general rule of thumb is that yield loss equals about half of the percentage of infected stems. This is a reasonable comparison, since both diseases restrict the flow of water and nutrients to developing seeds.
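Expressed as a quick calculation, the rule of thumb looks like this (only a sketch of the approximation above; actual losses depend on infection timing and severity):

```python
def rough_yield_loss(percent_plants_infected: float) -> float:
    """Rule of thumb: yield loss is roughly half the percentage
    of infected plants (or stems, for sclerotinia stem rot)."""
    return percent_plants_infected / 2.0

for infected in (10, 20, 100):
    print(f"{infected}% infected -> ~{rough_yield_loss(infected):.0f}% yield loss")
```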
Patches of prematurely ripening canola due to clubroot infection (Figure 5) could be confused with other diseases such as sclerotinia, blackleg and Fusarium wilt. In such cases, proper diagnosis should include digging up affected plants to check for gall formation on roots. Swathing is an excellent opportunity to spot clubroot infestations.
Figure 5. Patchy, premature ripening.
Photo credit: Stephen Strelkov
If the suspected plants are not sampled until after swathing, root galls may have decayed already, and the typical whitish galls will no longer be present (see Figure 6). Instead, decayed root galls have a brown peaty appearance rather than the healthy white colour of unaffected root tissue, which should be a signal to carefully dig up more roots for closer inspection.
Figure 6. Decayed clubroot galls and whitish stem appearance.
Photo credit: Kelly Turkington
Hybridization nodules on canola roots (see Figure 7), although rare, could be confused with clubroot galls, but they appear as small, round nodules located at root nodes. The interior texture of a clubroot gall is spongy or marbled while hybridization nodules are uniformly dense inside, like healthy roots. Furthermore, hybridization nodules will not decay rapidly and do not have a peaty appearance like clubroot galls do.
Figure 7. Hybridization nodules on canola root.
Photo credit: Alvin Eyolfson
Phenoxy damage to canola may also induce galls on stem bases and roots (Figure 8). As with hybridization nodules, the phenoxy-induced galls will not decay rapidly like clubroot galls. Thickened, corky or split stem bases and curving stems are additional phenoxy symptoms that often occur in canola.
Figure 8. Root galls on canola due to phenoxy injury.
For confirmation of suspected canola clubroot galls, send samples to one of the commercial seed testing labs in the province.
Prevention and Management of Clubroot in Canola and Mustard
Since clubroot infestations are still not widely distributed in Alberta, producers should take various precautionary measures to curb the spread of this disease outside the known infested areas.
Recommended preventative measures include the following:
- Use long rotations – grow canola not more frequently than once every four years (i.e. three years out of canola). Although this practice will not prevent the introduction of clubroot to clean fields, it will restrict this and other canola disease development within the field and probably avert a severe infestation. This rotation recommendation is similar to Europe where clubroot is more established, and, thus, more experience has been gained in management of this disease (see the Scottish clubroot factsheet by S. Oxley, 2007, noted in the References section at the end of this factsheet).
- Planting clubroot-resistant varieties on fields without the disease can be useful when clubroot is present nearby. This strategy relies on the genetic resistance to greatly reduce disease development/establishment compared to susceptible varieties if clubroot is inadvertently introduced to the field.
- Practice good sanitation to restrict the movement of possibly contaminated material (this approach will help reduce the spread of other diseases, weeds and insects too). The resting spores are most likely to spread via contaminated soil and infected canola plant parts. Thus, producers should follow the practice of cleaning soil and crop debris from field equipment before entering or leaving all fields. The equipment cleaning procedure involves knocking or scraping off soil lumps and sweeping off loose soil.
For risk-averse producers, the following additional cleaning steps may provide some extra benefit but involve considerably more work and expense:
- After removal of soil lumps, wash off equipment with a power washer.
- Finish by misting equipment with weak disinfectant (1-2% household bleach solution or EcoClear or HyperOx). The use of a disinfectant without first removing soil is not recommended because soil deactivates most disinfectants. The disinfectant must remain wet for 20 to 30 minutes on the equipment.
- Use direct seeding and other soil conservation practices to reduce erosion. Resting spores move readily in soil transported by wind or water erosion and overland flow.
- Scout canola fields regularly and carefully. Identify causes of wilting, stunting, yellowing and premature ripening – do not assume anything!
- Avoid the use of straw bales and manure from infested or suspicious areas. Clubroot spores are reported to survive through the digestive tracts of livestock.
- Avoid common untreated seed (including canola, cereals and pulses). Earth-tag on seed from infested fields could introduce resting spores to clean fields. Certain seed treatment fungicides may control spores on contaminated seed, but this observation needs further research to confirm.
Note: the risk of spreading clubroot by contaminated seed or straw is much lower than by the transportation of soil and plant debris on contaminated field equipment.
Managing clubroot after establishment in a canola field is difficult and long term.
There are clubroot-resistant Canadian canola varieties available now. However, clubroot resistance in European varieties has not been durable there, and since 2014, resistance in Canadian canola varieties has started to be overcome by several new pathotypes in central Alberta. Canola growers in high risk situations (confirmed clubroot in the field or area) should follow traditional canola rotation recommendations (one canola crop every four years) using clubroot resistant varieties. The one in four year rotation recommendation using resistant varieties is designed to slow down pathogen population shifts to strains not controlled by current resistant varieties and allow time for new resistance sources to be bred into canola.
Resistance breakdown is not a change in the plant, but rather in the clubroot pathogen where it adapts to the resistance in the varieties being grown. There are several prevalent clubroot race(s) or pathotypes in the Alberta infestations, with one pathotype dominating. However, characterization of single spore-derived isolates from the populations indicates there are additional pathotypes and mixtures of pathotypes.
Clubroot galls may still be found at low levels in fields seeded to resistant varieties! Under high pressure, small galls may develop on plants with clubroot resistance. Galls can also occur on volunteers from previous canola crops of susceptible varieties, mustard family weed species and in the small percentage of off-types in hybrid seed (usually non-hybrid parental lines where one parental line may be clubroot susceptible).
Currently, there are no registered fungicides for clubroot control or suppression in canola. Although there are fungicides registered for clubroot control in cole crops around the world, the relatively high cost and application method (transplant bed drench or broadcast incorporation) make them unsuitable for canola field production.
Liming acid soils to above pH 7.2 has shown poor or erratic results for clubroot control in cole crops in British Columbia and eastern Canada. Given the inconsistency and high cost, liming is not a reliable option for clubroot control in canola.
Calcium cyanamide, an old form of nitrogen fertilizer with fungicidal properties, has shown promise for reducing clubroot in cole crops, but high application rates, significant cost and limited availability make it a poor option for canola.
Volunteer canola and susceptible weeds (mustard family, dock and hoary cress) must be controlled in the rotation crops. There is some evidence that a few non-cruciferous crops such as orchardgrass and red clover may be weak hosts for clubroot disease, but the rotational effect of such crops on clubroot incidence and severity is likely of little practical significance.
In combination with the rotation strategy, sanitation and soil conservation measures should be practiced to keep contaminated soil and infected crop debris from being transported from infested fields. Whenever practical, infested fields should not be worked in when wet since more mud will stick to equipment and then be transported to clean fields.
There has been one report from Norway of lower clubroot severity under reduced tillage. Thus, reduced tillage or direct seeding may help to combat a clubroot infestation, and the fewer tillage operations will help to avoid the transport of contaminated soil. Similarly, all equipment traffic into infested fields should be minimized – for example, service and nurse trucks should remain on the road and field equipment be brought to them.
Clubroot disease is a serious concern in Alberta. Understanding the disease cycle, recognizing the symptoms and adopting good prevention and management practices can assist in its control.
Cao, T., Tewari, J.P., and Strelkov, S.E. 2007. Molecular detection of Plasmodiophora brassicae, causal agent of clubroot of crucifers, in plant and soil. Plant Dis. 91:80-87.
Ekeberg, E. and Riley, H.C.F. 1997. Tillage intensity effects on soil properties and crop yields in a long-term trial on morainic loam soil in southeast Norway. Soil & Tillage Res. 42: 277-293.
Friberg, H., Lagerlof, J., and Ramert, B. 2006. Usefulness of nonhost plants in managing Plasmodiophora brassicae. Plant Pathol. 55:690-695.
McDonald, M.R., Kornatowska, B. and McKeown, A.W. 2002. Management of clubroot of Asian Brassica crops grown on organic soils. Abstract S08-0-11, XXVIth International Horticulture Congress, Toronto.
Oxley, S. 2007. Clubroot disease of oilseed rape and other brassica crops. Technical Note 602. Scottish Agricultural College. Accessed on-line October 5, 2010. https://www.sruc.ac.uk/media/5cmlcpcj/tn602-clubroot.pdf
Strelkov, S.E., Tewari, J.P., and Smith-Degenhardt, E. 2006. Characterization of Plasmodiophora brassicae populations from Alberta, Canada. Can. J. Plant Pathol. 28:467-474.
Tewari, J.P., Strelkov, S. E., Orchard, D., Hartman, M., Lange, R., and Turkington, T.K. 2005. Identification of clubroot of crucifers on canola (Brassica napus) in Alberta. Can. J. Plant Pathol. 27:143-144.
Tremblay, N., Bélec, C., Lawrence, H., and Carisse, O. 1999. Clubroot of crucifers – control strategies. Agriculture and Agri-Food Canada. Saint-Jean-sur Richelieu, PQ. 3 pp.
Wallenhammar, A.C. 1996. Prevalence of Plasmodiophora brassicae in a spring oilseed rape growing area in central Sweden and factors influencing soil infestation levels. Plant Pathol. 45:710-719.
Wallenhammar, A.C., Johnsson, L., and Gerhardson, B. 1999. Clubroot resistance and yield loss in spring oilseed turnip rape and spring oilseed rape. Proceedings of 10th International Rapeseed Congress, Australia.
Understanding Electrocardiograms and Stress Tests
What is an electrocardiogram (ECG)?
An electrocardiogram (ECG or EKG) is one of the simplest and fastest procedures used to assess the heart. Electrodes (small, plastic patches) are placed at certain locations on the child's chest, arms and legs. When the electrodes are connected to the ECG machine by lead wires, the electrical activity of the child's heart is measured. The cardiologist uses this information to decide whether the child needs further tests and to choose the proper course of treatment for the child's heart problems.
Why is an ECG performed?
An electrocardiogram measures the electrical activity of the heart. By placing electrodes at specific locations on the body (chest, arms, and legs), our specialists can get a "picture," or tracing, of the electrical activity in the heart. Changes from the normal ECG tracing can point to one or more of several heart-related conditions. Learn more about the electrical system of the heart. Some medical conditions that can cause changes in the ECG pattern include, but are not limited to, the following:
- Conditions in which the heart is enlarged. These conditions can be caused by various factors, such as congenital (present at birth) heart defects, valve disorders, high blood pressure or congestive heart failure.
- Ischemia. A decreased blood flow to the heart muscle due to clogged or partially-clogged arteries.
- Conduction disorders. A dysfunction in the heart’s electrical conduction system that can make the heart beat too fast, too slow or at an uneven rate.
- Electrolyte disturbances. An imbalance in the level of electrolytes, or chemicals, in the blood, such as potassium, magnesium or calcium.
- Pericarditis. An inflammation or infection of the sack that surrounds the heart.
- Valve disease. Malfunction of one or more of the heart valves that may interfere with blood flow within the heart.
- Chest trauma. Blunt trauma to the chest, such as that suffered in a car accident.
An ECG may also be performed for other reasons, including, but not limited to, the following:
- During a physical examination to obtain a baseline tracing of the heart’s function. (This baseline tracing may be used later as a comparison with future ECGs, to see if any changes have occurred.)
- As part of a workup prior to a procedure, such as surgery, to make sure no heart condition exists that might cause complications during or after the procedure.
- To check the function of an implanted pacemaker.
- To check the effectiveness of certain heart medications.
- To check the heart’s status after a heart-related procedure, such as a cardiac catheterization, heart surgery or electrophysiological studies.
What is the procedure for an ECG?
An ECG can be performed almost anywhere, as the equipment is very compact and portable. At CHOC we provide ECGs in our Heart Center, as well as the inpatient unit of the hospital and the Julia and George Argyros Emergency Department at CHOC. Our specialists even take ECG equipment to local schools to test athletes. The equipment used includes the ECG machine, skin electrodes and lead wires that attach the electrodes to the ECG machine. An ECG normally takes approximately five to 10 minutes, including attaching and detaching electrodes. Getting an ECG typically includes the following steps:
- The child lies flat on a table or bed for the procedure.
- The ECG technician uncovers the child's chest, exposing only the necessary skin.
- Electrodes (small, plastic patches) are attached to the child’s chest and one electrode is attached to each arm and leg.
- The lead wires are attached to the skin electrodes.
- Once the leads are attached, the technician may key in identifying information such as the child’s name and age into the machine’s computer.
- The ECG is started. The child must lie still and not talk during the procedure, so as not to interfere with the tracing. Caregivers can usually be present in the room and involved in reassuring and encouraging their child during the procedure. At this point, it will take only a few minutes (or less) for the tracing to be completed.
- Once the tracing is completed, the technician will disconnect the leads and remove the skin electrodes.
Depending on the results of the ECG, additional tests or procedures may be scheduled to gather further diagnostic information.
How is the exercise ECG test performed?
At CHOC, stress tests are performed on the third floor of the hospital in our Heart Center. The equipment used includes an ECG machine, electrodes (small, plastic patches that stick on the skin), and lead wires which attach to the skin electrodes. A blood pressure cuff is also used to monitor your child’s blood pressure response during exercise. A treadmill or stationary bicycle is used for exercise.
Each child has an initial, or “baseline,” ECG and blood pressure readings done prior to exercising. He or she will walk on the treadmill or pedal the bicycle during the exercise portion of the procedure. The incline of the treadmill will be gradually increased, or the resistance of the bicycle will be gradually increased, in order to increase the intensity level of exercise. The ECG and blood pressure will be monitored during the exercise portion of the test. Symptoms are also carefully monitored during exercise. The child will be asked to exercise only to the best of his or her ability. Following exercise, ECG and blood pressure readings are monitored for a short time, perhaps another 10 to 15 minutes or so.
The appointment will take approximately one hour, including check-in, preparation and the test itself. Afterward, a hospital stay is not necessary unless your child's doctor determines that the child's condition requires further observation or hospital admission.
The child may feel a little tired or sore for a few hours after the procedure, particularly if he or she is not used to exercising. Otherwise, the child should feel normal within a few hours after the procedure, if not sooner.
Depending on the results of the exercise ECG, additional tests or procedures may be scheduled to gather further diagnostic information. | <urn:uuid:86fcdf75-5d5d-4b06-80a3-b9c07c2a1f33> | CC-MAIN-2023-06 | https://www.choc.org/heart/cardiac-catheterization-program/electrophysiology-program/electrocardiograms/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00720.warc.gz | en | 0.926858 | 1,322 | 3.8125 | 4 |
Evolutionary changes are sometimes driven by relationships to other organisms. Researchers discovered that the Luna Moth evolved long spinning tails to defend against bats in a 60-million-year-old nocturnal “evolutionary arms race.”
I really think Luna Moths are one of the most spectacular organisms on the planet. They are really incredible in terms of their color — they’re light green — and they’re really big and have tails. One of the things that I find really amazing about them is that they’re very common in North America, or at least in eastern North America, and they come to our lights pretty easily at night. So they’re active at night and they’ll oftentimes come to your porch light and you’ll get to see them.
What’s truly interesting about them is that they have these very long tails that, until recently, we didn’t know anything about them and why they have them. But after work with some of my collaborators we decided to test why they might have these tails and discovered that they’re used as acoustic deflectors against predatory bats. So bats use echolocation to hunt their prey and many of their prey are insects and moths. So these tails that these Luna Moths have actually spin behind their bodies in a manner that looks somewhat like a propeller. And the propeller appears to look like a smaller moth behind the larger moth and the bat that’s hunting will get lured toward this tail or the “smaller moth” and then bite the tail off and the Luna Moth can then escape.
One thing we know is that there’s a lot of other species that have tails in the world. So what we’re trying to do now is use the Museum collection and understand how these shapes and forms of different tails have arisen and why these moths have them.
Associate Curator, McGuire Center for Lepidoptera and Biodiversity
Florida Museum of Natural History
Luna Moth (Actias luna)
From Alachua Co., Florida, 2016 | <urn:uuid:b0a261a1-ce48-4bc9-8a0a-81372447f16b> | CC-MAIN-2023-06 | https://www.floridamuseum.ufl.edu/100years/luna-moth/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00720.warc.gz | en | 0.965231 | 461 | 4.0625 | 4 |
The reasons for tooth decay are the same regardless of age. Tooth decay comes about when you have accumulated plaque containing harmful bacteria. These organisms feed on the sugar you consume in your diet, causing cavities. The bacteria metabolize sugar and turn it into acids that attack your tooth structure.
Foods that are damaging to your teeth can lead to plaque and tooth decay. It is therefore vital to learn which foods you should avoid or limit to keep your teeth as healthy and vibrant as possible.
Most people are aware that diet and exercise play a significant role in keeping them healthy and fit. However, they do not realize the importance of a disease-free mouth for a healthy body. Poor oral health can adversely impact your quality of life by affecting your physical, mental, and social well-being. Oral infections and missing teeth can influence how you eat, speak and socialize.
Certain foods can contribute to oral issues that can affect your overall well-being.
Sugary drinks like soda, cold drinks, preserved juices, and sports and energy drinks can be harmful if you consume them frequently or over long periods.
Chocolates and candies can also be detrimental to your teeth if consumed too often. These delicious treats stick to the teeth and remain in the mouth for a long time unless you brush your teeth immediately after eating them. As a result, the sugars in your mouth are fermented by the bacteria in dental plaque into acids that dissolve tooth enamel.
Alcohol consumption can also be dangerous to your dental health. Alcoholic drinks damage the soft tissues in the mouth and reduce saliva production. Saliva helps wash away food particles and safeguards the teeth against acids.
Acidic foods like lemons, tomatoes, and raw mangoes can contribute to tooth decay, especially when eaten on their own without any other foods. You can still enjoy such foods if you thoroughly rinse your mouth after eating them. This helps reduce the impact of the acid and avoid dental problems.
White bread and potato chips are high in starch, which, when stuck in your teeth, can also fuel acid production by the bacteria in dental plaque. To remove the starch particles, rinse your mouth thoroughly with water if you can't brush right away.
Excessive tea and coffee consumption can also stain and discolor the teeth. Avoid drinking them in excess to keep your pearly whites at their best.
If you have any questions about your dental health, visit us at Hallmark Dental Group, 335 East St George Blvd #201, Saint George, UT 84770, or call us at 435-310-4812 and schedule an appointment. | <urn:uuid:6b5466fb-476f-459d-90e1-cb9e0fe0320c> | CC-MAIN-2023-06 | https://www.hallmarkdentalgroup.com/blog/foods-you-should-avoid-to-keep-your-teeth-healthy/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00720.warc.gz | en | 0.947464 | 532 | 3.59375 | 4 |
Dengue fever is a viral infection that is spread through the bite of particular breeds of mosquito. It is widespread across the Pacific Island countries, Asia, Central and South America, and Africa.
On this page, you can find the following information:
- What causes dengue fever?
- What are the symptoms of dengue fever?
- How is dengue fever diagnosed?
- How is dengue fever treated?
- How is dengue fever prevented?
- Which are the dengue-affected countries?
Key points about dengue fever
- The common symptoms of dengue fever are fever, severe headache, pain behind your eyes, pain in your joints and muscles (hence being also known as ‘breakbone fever’) and a rash.
- Get immediate medical attention if you have the above symptoms or are unwell after travel to dengue-affected areas.
- Most cases of dengue are not life threatening. In rare cases, dengue fever can worsen to a severe form called dengue haemorrhagic fever or dengue shock, which can cause death.
- If you make a number of visits back to a dengue-affected country over the years, you are at risk of picking up dengue fever repeatedly.
- For most travellers, there is no vaccination to protect against the disease, so preventing mosquito bites is the best form of protection.
- If you are travelling to dengue-affected countries use insect repellent, wear protective clothing and stay in places where there are mosquito screens on windows and doors.
See your doctor if you are unwell after travel to a dengue-affected area, or call Healthline (free in New Zealand) on 0800 611 116 for advice.
What causes dengue fever?

Dengue fever is caused by the dengue virus, which is spread through the bite of particular breeds of mosquito (Aedes mosquitoes). These mosquitoes are prevalent in Pacific Island countries, Asia, Central and South America, and Africa, and are commonly found in cities and urban areas. This mosquito is not present in New Zealand. Although the most common times for mosquito bites are early morning and late afternoon, dengue-carrying mosquitoes bite all through the day.
Mosquitoes become infected with dengue after biting sick humans who have dengue virus in their blood. If an infected mosquito later bites another human, it can pass on the dengue virus. There are 4 types of dengue viruses known to cause disease in humans. A person infected with one type of dengue will only become immune to that type. They will not be immune to other types of dengue and could be at higher risk of severe disease if they later contract another type.
Dengue fever cannot be spread directly from person to person.
Symptoms of dengue fever tend to develop within 4–7 days of being bitten by the infective mosquito.
The common symptoms of dengue are:
- sudden fever
- severe headache
- pain behind your eyes
- feeling very tired
- muscle and joint pain (ankles, knees, elbows)
- rash on your arms and legs, severe itching, your skin peeling
- nausea (feeling sick) or vomiting (being sick).
Dengue fever is sometimes described as being similar to a severe flu-like illness, and may also be referred to as ‘breakbone fever’ because of the severe muscle and joint pain. Generally, younger children and those with their first dengue infection have a milder illness than older children and adults.
Get checked if you are unwell after travel to dengue-affected areas such as the Pacific Island countries, Asia, Central and South America and Africa, or call Healthline (free in New Zealand) on 0800 611 116 for advice.
Severe dengue or dengue haemorrhagic fever
In rare cases, dengue fever can worsen to severe dengue, which can result in shock, severe bleeding, organ failure and even death. You are at greater risk of this if you have had dengue fever before and are infected with a different strain of the virus. This is important if you make a number of visits back to a dengue-affected country over the years, as you are at risk of picking up dengue fever repeatedly. Initially, severe dengue has the same symptoms as dengue fever, but after a few days, your condition worsens rapidly.
Seek immediate medical attention if any of the following warning signs appear: severe abdominal pain, persistent vomiting, bleeding gums or nosebleeds, blood in your vomit or stool, rapid or difficult breathing, or extreme tiredness and restlessness.
See a doctor immediately if you think you may have dengue fever. Early diagnosis can help to reduce the risk of complications. Your doctor will ask about your symptoms and any recent travel, and will do a physical examination. Blood tests are required to diagnose dengue fever.
Most people infected with dengue will only have a mild illness and may not even be aware they have been infected. There is no specific medical treatment for dengue.
Your doctor may advise you to:
- have bed rest
- drink plenty of fluids
- take medicines such as paracetamol to reduce fever and ease pain. Do not take aspirin or non-steroidal anti-inflammatory agents such as ibuprofen, naproxen or diclofenac because they can increase the risk of bleeding.
The illness usually lasts up to 10 days but recovery may take some time; you may feel tired and depressed for weeks.
For people who show warning signs of severe dengue, hospital admission is required. Treatment may include drips (intravenous fluids and replacement of lost electrolytes).
For most travellers, there is no suitable vaccine to prevent dengue. The best way to avoid infection in areas where there are dengue-carrying mosquito populations is to protect yourself against being bitten and to reduce potential mosquito breeding sites. All people in dengue-affected areas, whether you have been previously infected or not, should take precautions to prevent being bitten.
A new vaccine for dengue has been approved in some countries where dengue fever is common. It is only useful for people who have previously been infected with dengue fever, as it can actually increase the risk of severe dengue for people who have never had the disease before. This vaccine is not routinely (or widely) available in New Zealand. Please see your local travel medicine specialist for further advice.
Protect against mosquito bites indoors
- Use screens on doors and windows.
- Use insect sprays.
- Use mosquito coils.
- Use a mosquito net over your bed at night. New bed nets often have insecticide already on the net but, if not, you can spray the net with insecticide.
- Turn on air conditioning if you have it and close all windows and doors – this is very effective at keeping mosquitoes out of the room.
The dengue-carrying mosquito can be around during the day so keep covered day and night.
Protect against mosquito bites outdoors
- Wear an insect repellent cream or spray containing no more than 50% diethyltoluamide (DEET) or 30% DEET for children. Higher concentrations are no more effective and can be harmful. Products containing 20–25% picaridin (also known as icaridin) or 30% lemon eucalyptus oil (also known as PMD) can also be used. Read more about insect repellents and how to use them safely.
- When using sunscreen, apply repellent over the sunscreen.
- Wear light-coloured protective clothing such as long-sleeved shirts, long pants and hats.
- Wear clothing treated with an insecticide such as permethrin. Clothing can be bought pre-treated, or you can buy permethrin and treat your own clothes. Permethrin-treated clothing can be washed several times and still provide protection against insects. Regular insect repellent applied to clothes can also provide temporary protection, but must be reapplied at regular intervals.
- Wear shoes rather than sandals.
- Use zip-up screens on tents.
- Avoid areas where mosquitoes are most active.
The mosquito that transmits dengue is commonly found in urban areas, so avoiding rural travel will not protect you against dengue fever.
Reduce mosquito breeding sites
Dengue-carrying mosquitoes generally breed in stagnant water found in containers (eg, discarded tyres, uncovered barrels, buckets) rather than in rivers, swamps, open drains, creeks or mangroves. The disease is particularly common in urban areas where standing water is near to homes and provides an ideal breeding ground for the carrier mosquitoes.
To eliminate breeding sites:
- empty any containers that hold water in and around the place you are staying
- cover all water tanks, cisterns, barrels and rubbish containers
- remove or empty water in old tyres, tin cans, bottles and trays
- check and clean out clogged gutters and flat roofs where water may have settled
- change water regularly in pet water dishes, birdbaths and plant trays
- trim weeds and tall grasses, as adult mosquitoes seek these for shade.
Fight the bite, day and night
Dr Laupepa Va'a from the Ministry of Health talks about how people travelling overseas can avoid being bitten by mosquitoes that might carry diseases such as dengue fever and Zika virus.
(Ministry of Health, NZ, 2018)
Areas often affected with dengue fever include:
- North Queensland, Australia
- Pacific Island nations
- Asia (including Cambodia and India)
- Central and South America
- Sub-Saharan Africa.
Dengue outbreaks can occur anywhere in tropical countries, so if you are planning on travelling, check the risk level before you go.
- Dengue Ministry of Health, NZ (Fijian, Samoan, Tongan)
- Dengue fever (Samoan, Fijian, Tongan, Cook Island Maori) Auckland Regional Public Health Service, NZ
- Avoiding bug bites while travelling Ministry of Health, NZ
- Travel advice Safe Travel, NZ
- Insect repellent DermNet, NZ
- Insecticides and the skin DermNet, NZ
- Dengue fever, zika and chikungunya Auckland Regional Public Health Service, NZ
- Dengue fever DermNet NZ
Dr Li-Wern Yim is a travel doctor with a background in general practice. She studied medicine at the University of Otago, and has a postgraduate diploma in travel medicine (Otago). She also studied tropical medicine in Uganda and Tanzania, and holds a diploma from the London School of Hygiene & Tropical Medicine. She currently works in clinical travel medicine in Auckland.
Seawater contains a variety of salts, and when seawater evaporates, these solids are left behind. The most abundant salt in seawater is sodium chloride (NaCl) which will be referred to in this article simply as salt (technically it is called halite).
Layers of salt occur naturally in the geologic record, comprising an abundant source of salt for human consumption worldwide. Today, some salt deposits are land-derived, as when salty water seeps from the rocks of the Grand Canyon, evaporates, and leaves a salty residue. Others are related to enclosed coastal lagoons, which fill up with seawater during a storm, but whose waters are trapped and evaporate between storms. Thus, salt deposits are classed as evaporites.
If a basin of seawater 100 feet thick were to evaporate, only about 2 feet of salt would be left behind. Can seawater evaporation account for all "evaporites"? If so, multiplied millions of years would be necessary for their build up, for some salt beds are extremely thick and wide. The salt deposits often occur in layers covering thousands of square miles with salt hundreds of feet thick.
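That two-foot figure is easy to sanity-check. Below is a minimal back-of-envelope sketch in Python; the ~3.5% salinity and the densities of seawater and halite are standard reference values assumed here, not figures from the article:

```python
# Rough check of the "100 feet of seawater -> about 2 feet of salt" figure.
# Assumed reference values (not from the article): ~3.5% dissolved salts by
# mass, seawater density ~1.025 g/cm^3, halite density ~2.16 g/cm^3.
water_depth_ft = 100.0
salinity = 0.035          # mass fraction of dissolved salts in seawater
rho_seawater = 1.025      # g/cm^3
rho_halite = 2.16         # g/cm^3 (solid rock salt)

# Mass of salt under a unit area, expressed as an equivalent thickness of
# solid halite; the density units cancel, so the answer stays in feet.
salt_thickness_ft = water_depth_ft * rho_seawater * salinity / rho_halite
print(f"{salt_thickness_ft:.1f} ft of salt")  # -> 1.7 ft, i.e. roughly 2 ft
```

The result (about 1.7 feet) is consistent with the "about 2 feet" quoted above.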
Old earth uniformitarian thinking postulates an enclosed basin or coastal lagoon which repeatedly floods and evaporates over long periods of time, allowing thick deposits of salt to build up. The mind boggles at huge basins undergoing identical cycles of flooding and evaporation uncountable times, all the while remaining in the same location for millions and millions of years. By contrast, modern lagoons fill in, migrate, erode—there is no long-term stability for coastal features.
The regionally extensive salt beds in the geologic record are quite different from evaporites forming today. Seawater contains many chemical and mineral impurities as well as both single-celled and multi-celled plants and animals and any exposed dry lagoon will be an active life zone. Thus, modern evaporites are quite impure. But the major salt deposits in the geologic record are absolutely pure salt! Salt mines simply crush it and put it on the store shelf. Surely these large, pure salt beds are not evaporated seawater. Some other process must have formed them.
As with many features in geology, catastrophic views are replacing the old, impotent uniformitarian ones. Many have observed that the large salt accumulations occur in basins formed by major tectonic downwarping, often associated with ancient volcanic eruptions. The evidence does not fit with the idea of a trapped lagoon. Where are the fossils? Where are the impurities?
Many now think the salt was extruded in superheated, supersaturated salt brines from deep in the earth along faults. Once encountering the cold ocean waters, the hot brines could no longer sustain the high concentrations of salt, which rapidly precipitated out of solution, free of impurities and marine organisms.
The great Flood of Noah's day provides the proper context. During the Flood, great volumes of magma, water, metals, and chemicals were extruded onto the surface from the depths of the earth, as the "fountains of the great deep" (Genesis 7:11) spewed forth hot volcanic materials. Today we find them (especially salt) interbedded with Flood sediments, just as the "Back to Genesis" model predicts.
* Dr. Morris is President of the Institute for Creation Research.
Cite this article: Morris, J. 2002. Does Salt Come from Evaporated Sea Water? Acts & Facts. 31 (11).
You should spend around 40 minutes on this task.
It is better for children if the whole family is involved in their upbringing rather than just their parents. Give your opinion on this.
Give reasons for your answer and include any relevant examples from your own knowledge or experience. Write at least 250 words.
It is often argued that involving the whole family in bringing up a child is better than placing this responsibility solely on the shoulders of the parents. Although this appears to be beneficial, it does have some demerits and practical obstacles.
When children grow up in a joint family, with their grandparents, uncles, aunts and cousins, they get plenty of social interaction, which can help to mould an agreeable character. Such children are more likely to learn qualities such as respect for elders, compassion for fellow beings and acceptance of other people's attitudes compared to children who are supervised only by their parents. For example, a child who lives in a large family faces no difficulty in adjusting to another child with different traits and can readily share his or her belongings. However, this is not easily possible for a child from a nuclear family.
Conversely, children who live with many family members around do not get as much care and privacy as those from a smaller family. To be clearer, in a nuclear family, parents can offer their children the best of everything, whether a product or a service, and children have their own private spaces to an extent. These cannot be offered easily in a larger family, because children need to share their belongings and services with other family members. There can also be occasional unhealthy competition and bullying among the children. Finally, it is rarely practical for extended families to stay together in today's busy world, which makes joint families fewer in number.
To conclude, though bringing up children in a large family has its merits, there are some practical obstacles and difficulties.
Word count: 284
According to the World Health Organization, cataracts are responsible for 51% of cases of blindness worldwide - although this blindness is preventable with treatment. In fact, research shows that in industrialized countries about 50% of individuals over the age of 70 have had a cataract in at least one eye. This is partially because cataracts are a natural part of the aging process of the eye, so as people in general live longer, the incidence of cataracts continue to increase.
What are Cataracts?
Cataracts occur when the natural lens in the eye begins to cloud, causing blurred vision that progressively gets worse. In addition to age, cataracts can be caused or accelerated by a number of factors including physical trauma or injury to the eye, poor nutrition, smoking, diabetes, certain medications (such as corticosteroids), long-term exposure to radiation and certain eye conditions such as uveitis. Cataracts can also be congenital (present at birth).
The eye’s lens is responsible for the passage of light into the eye and for focusing that light onto the retina; it gives the eye its ability to focus and see clearly. That’s why, when the lens is not working effectively, the eye loses its clear focus and objects appear blurred. In addition to increasingly blurred vision, symptoms of cataracts include:
“Washed Out” Vision or Double Vision:
People and objects appear hazy, blurred or “washed out” with less definition, depth and colour. Many describe this as being similar to looking out of a dirty window. This makes many activities of daily living a challenge including reading, watching television, driving or doing basic chores.
Increased Glare Sensitivity:
This can happen both from outdoor sunlight or light reflected off of shiny objects indoors. Glare sensitivity causes problems with driving, particularly at night and generally seeing our surroundings clearly and comfortably.
Faded Colours:
Colours often won’t appear as vibrant as they once did, and may take on a brown undertone. Colour distinction may become difficult as well.
Compromised Contrast and Depth Perception:
These eye skills are greatly affected by the damage to the lens.
Increased Need for Light:
Individuals with cataracts often find that they require more light than they used to in order to see clearly and perform basic activities.
Early stage cataracts may be able to be treated with glasses or lifestyle changes, such as using brighter lights, but if they are hindering the ability to function in daily life, it might mean it is time for cataract surgery.
Cataract surgery is one of the most common surgeries performed today and it involves removing the natural lens and replacing it with an artificial lens, called an implant or an intraocular lens. Typically the standard implants correct the patient’s distance vision but reading glasses are still needed. However as technology has gotten more sophisticated you can now get multifocal implants that can reduce or eliminate the need for glasses altogether. Usually the procedure is an outpatient procedure (you will go home the same day) and 95% of patients experience improved vision almost immediately.
While doctors still don’t know exactly how much each risk factor contributes to cataracts, there are a few ways you can keep your eyes healthy and reduce your risk:
- Refrain from smoking and high alcohol consumption
- Exercise and eat well, including lots of fruits and vegetables that contain antioxidants
- Protect your eyes from UV radiation like from sunlight
- Control diabetes and hypertension
Most importantly, see your eye doctor regularly for a comprehensive eye exam. If you are over 40 or at risk, make sure to schedule a yearly eye exam.
The preschool room is where children between 3 and 5 years old learn and develop. In this room there will always be at least one staff member for every 10 children. The preschool curriculum focuses on developing each child's increasing independence and on expanding their social skills and knowledge.
- The preschool classroom provides many stimulating learning areas that give the children the space to continuously build on their skills. Science projects encourage investigation, experimentation and answering questions. Technology activities allow the children to use simple tools, from crayons to microscopes and computers. Engineering skills allow the children to recognize problems and test solutions to them. Art activities encourage creativity and illustration. Finally, math skills are developed through numbers, patterns, shapes and more!
- Communication is fostered in the preschool class through open-ended questions, conversations, writing, word games, songs, and stories that pave the way for reading skills.
- Teachers facilitate relationships and provide the children with opportunities to work in large groups, small groups, pairs, and individually
- The preschool class participates in weekly Spanish and Music classes, which allow them to build their bi-lingual skills, and to build a repertoire of familiar songs and experiment with different kinds of movements.
- Community partnerships are established with field trips, visitors, and volunteers.
For observers near lakes, evening and nighttime choruses may include the haunting vocalizations of the Common Loon. These loons breed on lakes with clear water, abundant fish, and lots of small islands (which often serve as nesting sites). Common Loons are capable of a number of vocalizations including wails, yodels, and tremolos. Pairs will often sing duets comprised of all these vocalization types just after sunset and sometimes into the night. Birds are highly territorial in the early weeks of the breeding season during which territorial vocalizations (yodeling) and fighting between individuals may be observed. Pairs of loons can be observed foraging together by day prior to incubation. Both pair members assist in the creation of their nest sites constructed at the water’s edge and both members incubate eggs once laid. Chicks leave the nest with their parents within 24 hours of hatching and are initially completely dependent on their parents for food (such as crayfish and small fish). Young loon chicks may be carried on the back of their parent.
Safe Dates: May 15th to July 20th (applicable only to the S or H codes).
Breeding Evidence: If you hear Common Loon wails or tremolos within the safe dates, use code S and upgrade to S7 if heard at the same location 7 or more days later. For a silent bird on a potential breeding lake, a loon may be considered in appropriate breeding habitat (H) if within the safe dates. Nest sites may be obscured by vegetation and difficult to see. In such cases when a loon is observed visiting a probable nest site, you can use code N. Common Loons may be observed courting, displaying, or copulating on open water (code C). If territorial defense (which may consist of yodeling vocalization and physical altercations) is observed, use code T. Once hatched, chicks are always in the presence of their parents and can be highly visible (code FL). When a chick is discovered, behaviors such as feeding young (code FY) can often be observed.
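For observers who record sightings digitally, here is a minimal Python sketch of the coding convention just described. The code table follows the text above; the helper's decision rule is a toy illustration only, not an official atlas tool:

```python
# Illustrative lookup of the Common Loon breeding-evidence codes described
# above. The decision helper is a sketch, not an official atlas utility.
CODES = {
    "S":  "Wails or tremolos heard within the safe dates",
    "S7": "Heard at the same location 7 or more days later",
    "H":  "Silent bird in appropriate breeding habitat within safe dates",
    "N":  "Loon observed visiting a probable nest site",
    "C":  "Courtship, display, or copulation on open water",
    "T":  "Territorial defense (yodeling, physical altercations)",
    "FL": "Chicks observed in the presence of their parents",
    "FY": "Adult observed feeding young",
}

SAFE_START, SAFE_END = (5, 15), (7, 20)  # May 15th to July 20th

def within_safe_dates(month: int, day: int) -> bool:
    # Tuple comparison handles (month, day) ordering correctly.
    return SAFE_START <= (month, day) <= SAFE_END

def code_for_calling_loon(month: int, day: int, days_since_first: int) -> str:
    """Toy decision rule for a calling loon, per the conventions above."""
    if not within_safe_dates(month, day):
        return "observed (S/H codes do not apply outside safe dates)"
    return "S7" if days_since_first >= 7 else "S"

print(code_for_calling_loon(6, 2, 9))  # -> S7
print(CODES["S7"])
```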
Engineers have created flexible, lightweight materials with 4D printing that could lead to better soft robotics and tiny implantable biomedical devices. The materials' stiffness can be adjusted more than 100-fold, and they can be reshaped on demand when heated.
3D printing, also known as additive manufacturing, turns digital blueprints into physical objects by building them layer by layer. 4D printing is based on this technology, with one big difference: it uses special materials and sophisticated designs to print objects that change shape, with environmental conditions such as temperature acting as a trigger.
The engineers created a new class of “metamaterials” — materials engineered to have unusual and counterintuitive properties that are not found in nature. Previously, the shape and properties of metamaterials were irreversible once they were manufactured. But the engineers can tune their plastic-like materials with heat, so they stay rigid when struck or become soft as a sponge to absorb shock.
The stiffness can be adjusted more than 100-fold at temperatures between room temperature (73 °F) and 194 °F, allowing great control of shock absorption. The materials can be reshaped for a wide variety of purposes: they can be temporarily transformed into any deformed shape and then returned to their original shape on demand when heated.
Three PBL Shifts Can Add Rigor & Clarity
Reviewed by Rebecca Berger
As a middle school teacher who has used Project Based Learning (PBL) for the past few years in my classroom, I found this book to be incredibly helpful. However, if I had stumbled upon this book without ever having tried PBL, I might have been scared off by all of the “requirements” to pull off PBL successfully.
The book begins by defining PBL and giving a rationale for PBL in the classroom and then moves on to discussing how to nurture learners to be both competent and confident. Then, as the title suggests, McDowell proposes three major shifts in project design – clarity, challenge and culture – that will increase the probability that PBL will impact all learners in the classroom.
Each chapter includes questions for reflection and a piece on next steps to help you create or improve your own PBL units, as well as sample checklists, rubrics, calendars and protocols to help you visualize how to put theory into practice.
Here are my takeaways from each of the three design shifts.
Design Shift 1: Clarity
Clarity is the idea that students need to be absolutely clear on what they are learning, where they are in their learning, and what they need to do next. They also need to be able to monitor, evaluate and improve their own learning. Students should constantly be able to answer these four questions:
- Where am I going in my learning?
- Where am I now in my learning?
- What next steps am I going to take in my learning?
- How do I improve my learning and that of others?
I like these four questions because they have the potential to impact both students and teachers. That is, in order to ensure that students can answer each of these four questions effectively, teachers need to be absolutely clear on the learning intentions and success criteria. (Note: McDowell spends much time talking about surface, deep and transfer levels of learning that are critical to any PBL unit and are integral to establishing success criteria.)
Design Shift 2: Challenge
Teachers need to activate learning by “aligning teaching practices with the particular surface-, deep- and transfer-level knowledge of students.” McDowell poses these questions to the teacher as he/she plans learning activities.
Surface Level: What instructional approaches will support students in understanding foundational knowledge (e.g. vocabulary, facts) related to learning outcomes?
Deep Level: What instructional approaches support students in connecting and contrasting ideas?
Transfer Level: What instructional approaches support students in applying the learning outcomes to project expectations?
Though some educators might assume that in PBL classrooms the teacher is “the guide on the side” as students work on their own or in small groups to solve a problem, McDowell makes it clear that teachers have a critical role in providing timely direct instruction, which he terms “workshops,” in the PBL classroom.
Design Shift 3: Culture
Teachers need to actively create an environment in which students feel comfortable discussing their own performance and giving feedback to peers, have the language to talk about their own learning, and have autonomy to take responsibility for their learning. McDowell discusses specific strategies that individual teachers, schools and districts can use to create such a culture.
My summary here of Rigorous PBL by Design just scratches the surface of everything that this book has to offer. If you are a PBL-educator or aspiring PBL-educator, or a leader in a PBL-focused school, I highly recommend you take the time to read this dense but informative book.
Rebecca Berger has been teaching in the middle grades at an independent school for the past eleven years. She earned her teaching credential and her M.A. at Hebrew Union College Jewish Institute of Religion.
The Renaissance Man
Who Was Leonardo Da Vinci?
While Leonardo da Vinci is best known as an artist, his work as a scientist and an inventor makes him a true Renaissance man. He serves as a role model for applying the scientific method to every aspect of life, including art and music. Although he is best known for his dramatic and expressive artwork, Leonardo also conducted dozens of carefully thought-out experiments and created futuristic inventions that were groundbreaking for the time.
His keen eye and quick mind led him to make important scientific discoveries, yet he never published his ideas. He was a vegetarian who loved animals and despised war, yet he worked as a military engineer to invent advanced and deadly weapons. He was one of the greatest painters of the Italian Renaissance, yet he left only a handful of completed paintings.
Navigate this website to learn more about Leonardo's brilliant and imaginative mind, and the art, inventions, and discoveries that he made.
Da Vinci — The Artist
Leonardo sought a universal language in painting. Using perspective and his experiences with scientific observation, Leonardo tried to create faithful renditions of life. This call to objectivity became the standard for painters who followed in the 16th century.
Da Vinci — The Inventor
Throughout his life, Leonardo had brilliant and far-out ideas, ranging from the practical to the prophetic. Leonardo recognized that levers and gears, when applied properly, could accomplish astonishing tasks.
Da Vinci — The Scientist
Da Vinci bridged the gap between the shockingly unscientific medieval methods and our own trusty modern approach. The sheer range of topics that came under his inquiry is staggering: anatomy, zoology, botany, geology, optics, aerodynamics and hydrodynamics, among others.
When it comes to history, circa the 5th century BC, the Greeks considered the Celts (Keltoi) to be one of the four great ‘barbarian’ peoples – with their independent realms extending all the way from the Iberian peninsula to the frontiers of the upper Danube, encompassing large swathes of Western and Central Europe.
From the cultural perspective, these Celtic tribes and bands represented the antithesis of presumed Mediterranean ideals, with their distinctive approach to religion and warfare. But of course, beyond the misleading ‘barbarian’ tag, there was more to the historical scope of these ancient people, particularly the fierce Celtic warriors.
- High Chieftains, Nobles, and ‘Magistrates’
- The ‘Men of Art’ in Celtic Culture
- The Scope of Clientage in Celtic Society
- Low-Intensity Celtic Warfare and Mercenaries
- The ‘Solution’ of Wealth and Prestige
- Feasting and Raiding
- Druids and The Otherworld
- The Arms and Armor of the Celtic Warrior
- The Battle Deployment of Ancient Celts
- The Contrast between Rich Clothes and Ritual Nudity
- The Frenzied Charge and Cacophony of the Celtic Warriors
- The Lime-Washed Hair and Tattoos
High Chieftains, Nobles, and ‘Magistrates’
Like most tribal scopes of ancient times, the basic framework of the Celtic society was composed of extended families and clans who were based within their particular territories. These collective groups, often categorized as Celtic tribes, were ruled by kings or high chieftains, with power sometimes shared by dual authorities.
Over time, by circa 1st century BC, some of the Celts, especially in Gaul (modern France), were ruled by elected ‘magistrates’ (similar to Roman consuls) – although these figureheads only wielded nominal power. The real decision-making was left to the assembly of free men, while the military orders (like raiding and conquests) were still carried forth by an even smaller group of nobles, among whom the kings and chieftains were chosen.
This brings us to the basic hierarchy of the ancient Celts, where the nobles obviously formed the minority of elites. They were followed by the aforementioned free-men of the Celtic tribes, who often formed the warbands and retainers (Celtic armies) of their chiefs. But the majority of the common Celtic people were probably of ‘unfree’ origin, whom Julius Caesar likened to slaves.
Now from the practical perspective, this was an oversimplification. That is because the Celts were not really dependent on slaves for the functioning of their social and economic affairs, as opposed to their Mediterranean neighbors. So the unfree nature of many of these folks might have been similar to the limited rights of serfs from later medieval Europe.
However, the Celts (especially the elites) actually thrived on the trading of slaves – whom they rounded up in raids. And these captured men and women were often bartered in return for luxury goods from Rome and distant Greece.
The ‘Men of Art’ in Celtic Culture
Interestingly enough, in spite of their (often misleading) ‘barbarian’ tag, the Celtic society held the so-categorized ‘men of art’ in high regard. In fact, in ancient Ireland, the Druids were called forth as ‘men of art’ and accorded special privileges by the ruling class.
Similarly, bards, artisans, blacksmiths, and metalworkers were often heralded as men of art, given their contributions to the crafting of morale-boosting songs (in Celtic languages), ostentatious jewelry, and most importantly mass weapons – items that had high value in the Celtic society.
In fact, the categorization of ‘men of art’ was so important that the nobles often endowed themselves with similar titles. This was complemented by their patronizing of various types of craftsmen, who in turn were responsible for furnishing special apparel and accouterments for their chosen lords and leaders.
In essence, the flourishing and encouragement of art was an integral part of the Celtic society, with the status being used to both fuel and associate itself with the ‘men of art’. Archaeology rather reinforces such an art-based social system, as could be evidenced by the Celtic material culture found across parts of continental Europe.
The Scope of Clientage in Celtic Society
We fleetingly mentioned how the Celtic society could be basically divided into three groups – the rich nobles, the free-men retainers, and the majority of common folks (who enjoyed better standards than Mediterranean slaves). Intriguingly enough, the entire societal scope was structured in a way that allowed these three groups to be connected to each other – and the system was based on clientage.
Simply put, like the later feudal times, the scope of clients meant that the lower-ranking group pledged allegiance to their political superiors in return for security (like the common folks) and employment (like the free men) within the tribe.
On the other hand, the number of retainers (or clients) of a noble mirrored his standing within the tribe; with a higher number of followers obviously reflecting the elite’s greater prestige and power. It should be also noted that many nobles depended on the free men for support during times of war and confrontations.
Now while this interconnected system was based on practicality, it was strengthened by vows of loyalty that were not taken lightly – and thus had rigorous consequences for those who broke such established ties. Moreover, given the importance of familial ties in the Celtic tribes, the client system was sometimes reinforced with the exchange of hostages and fostering of children.
And in desperate situations, clientage even extended to entire tribes, as was the case during Caesar’s Gaul campaign (the Gallic Wars) when the Aedui called upon their allied clients for battle.
Low-Intensity Celtic Warfare and Mercenaries
One of the intrinsic characteristics of an ancient Celtic society was based on the mutual appreciation of physical security. This in turn endowed the nobles with the power of ‘providing’ security. And the scope of security was needed quite regularly since the Celts were often involved in ‘aggressive’ activities, ranging from cattle rustling, slave raiding, and trading to even clan-based vendettas and warfare.
In fact, these low-intensity conflicts rather prepared the young warriors for actual Celtic warfare, not only psychologically (since courage was not seen as a virtue but rather viewed as expected behavior), but also tactically, like honing his weapon-handling and most importantly demonstrating his martial reputation as a warrior.
One of the ways to gain such a reputation was to join the mercenary bands that operated in many geographical locations dotted around ancient Europe and the Mediterranean. A pertinent example would obviously entail the many Celtic warriors employed by the great Hannibal. Among the Carthaginian general’s Gaulish contingent, the Celtic cavalry was specially held in high regard due to their effectiveness in close-combat and elite status (often led by noblemen).
The Celts also proved their value as mercenaries in the armies of Syracuse and even the Diadochi (Successor) Kingdoms of Alexander, with one intriguing example relating how they operated as elite infantrymen in the military of the Ptolemies of Egypt (pictured above).
Many of these mercenary bands acted as pseudo-brotherhoods, with their army fraternity codes being distinct from the ‘ordinary’ soldiers of the numerous clans and tribes. Polybius noted how the Celtic mercenaries who arrived from the north to aid their Cisalpine Gaul brethren at the Battle of Telamon (against the Romans) were called the Gaesatae or simply ‘spearmen’. However, the term itself may have been derived from the Celtic word geissi, which roughly translated to bonds or sacred rules of conduct.
The ‘Solution’ of Wealth and Prestige
The hierarchy of the ancient Celtic society was partially inspired by the prestige of the leader or the chieftain. And this flair of prestige, in turn, was determined by the wealth he had acquired through numerous endeavors, ranging from raiding, and warring to even trading.
In essence, the war chiefs understood that the greater the wealth they acquired, the better their chance of retaining their clients and thus wielding power. One of the by-effects of this simple economic system was mentioned in the earlier entry, where selected groups of Celtic warriors became mercenaries, gathering riches and spoils from the distant lands of Greece, Egypt, and even Rome, and thereby enhancing their prestige in their native lands.
Another interesting example would pertain to the trading of slaves. While rounding up slaves was relatively easy for the Celtic warbands given the loose structure of many fringe villages and settled lands (when compared to their Mediterranean counterparts), these slaves were often not integrated into the Celtic society.
Instead, they were traded for luxury goods like wine and gold coins. Now while for a Mediterranean merchant the deal was seen as being ‘too easy’ – since slaves were often more profitable than mere fixed commodities, the trade was practical for a Celtic warlord. That is because the acquisition of wines (and luxury goods) and their distribution among his retainers would actually reinforce his standing within the tribe structure.
Feasting and Raiding
Much like their Germanic neighbors, the ancient Celts gave special significance to feasting. These social gatherings, patronized by the nobles, almost took a ritualistic route, with a variety of ceremonial features and hospitality codes.
At the same time, the participants themselves often became drunk and wild, and their furor was accompanied by bard songs and even parodies that praised or made sarcastic remarks about their lineage and courage.
But beyond drunkenness and revelry, such feasts also mirrored the social standing of the patrons and the guests, with seating arrangements reflecting their statuses within the community (much like the later Anglo-Saxons).
Furthermore, even the meat cuts reflected the stature and prominence of the guest, with the choicest pieces being given to the favorite warriors. This champion’s portion could even be disputed by other warriors, which led to arguments and even fighting among the guests.
Furthermore, the feasts also served the practical purpose of military planning because such social gatherings attracted many of the notable elites and influential retainers. So while drinking and feasting, any Celtic warrior could boast of his planned raid for plundering and gathering spoils – and he could ask other followers to join him.
The scope once again reverted to prestige; war chiefs with greater social standing had more clients to support them in a quest to gather even more riches – thus alluding to a cyclic economy based on warfare.
Druids and The Otherworld
So far, we had been talking about the social aspects of the ancient Celts. However, a big part of the Celtic culture was based on the spiritual and supernatural scope. As a matter of fact, Celtic warriors tended to associate supernatural properties with many natural parameters, including bogs, rivers, lakes, mountains, and even trees.
The spiritual scope and its characteristics also extended to certain animals and birds, like horses, wild boars, dogs, and ravens. To that end, many of the Celts considered the tangible realm of man to be co-existing with the Otherworld where the gods and dead resided.
At times the boundary between these two realms was judged to be ‘thinned’, and as such a few human sacrifices (like the Lindow Man) were possibly made to ‘send’ a messenger into this fantastical Otherworld.
The eminence of the Druids stemmed from their alleged capacity to ‘link’ and interpret the Otherworld. Their very name is derived from the cognate for oak trees; with the sacred grove of oak trees, known as drunemeton (in Galatia), being used for important rituals and ceremonies.
In that regard, while Druids were more popular in ancient Gaul and Britain, men with high social status who acted as the guardians of tribal traditions were fairly common in the Celtic world (even in distant Galatia in Asia Minor).
The Arms and Armor of the Celtic Warrior
All the freemen of the ancient Celtic society had the right (and sometimes duty) to bear arms, as opposed to the ‘unfree’ majority. The weapons they carried, though, were relatively uncomplicated with the spears and shields combination being the norm.
The nobility, however, tended to showcase their long swords as instruments of prestige (and slashing weapons), while also incorporating helmets and mail shirts as part of their battle panoply (although only worn by the higher status warriors). In contrast, ordinary warriors only carried their spears, and short shields, while eschewing any form of heavy armor.
Interestingly enough, other than the sword, the spear was also viewed as an esteemed (and practical) weapon of a warrior. The Greek author Strabo described how the ancient Celtic warrior often carried two types of spear – a bigger, heavier one for thrusting, and a smaller, flexible one for throwing and (sometimes) using in close combat.
As for defensive equipment, the Greek traveler Pausanias commented on how the Galatae (Galatians – Celtic people who migrated and settled in central Anatolia) carried their distinctive shields. Livy further attested that the Celtic shields were relatively long, with an oblong shape. But practicality once again suggests that heavy shields were probably only carried by the elite retinues.
As for missile weapons, archaeological evidence suggests that bows were in very low demand for Celtic warriors. On the other hand, there are plenty of sling-stones that have been found around the hill-forts of southern Britain, thereby alluding to how slings were probably more favored than bows as weapons by some Celtic groups. In any case, the very warrior ethos of most Celtic societies possibly played a part in looking down upon projectile-based weapons.
The Battle Deployment of Ancient Celts
With all the talk about weapons, we must also understand that warfare was an intrinsic part of Celtic society. So while popular notions and Hollywood dismiss them as ‘barbarians’ who preferred to mass up and chaotically charge their enemies, the historicity is far more complex.
In fact, some ancient writers, like Polybius himself, mentioned how the Celts were no mere ‘column of the mob’. Instead, they probably deployed themselves on the battlefield based on tribes and their internal affiliations.
And almost mirroring their societal scope, the formations (or battle lines) of the army were inspired by the hierarchy. For example, the chosen and noble Celtic warriors boasting their reputation and courage were positioned on the front lines, surrounded by groups of other soldiers (who had their morale boosted by these champions).
These ‘super-groups’ with tribal affiliations carried forth their own standards and banners, often replete with religious symbolism (like guardian deities). And on a practical level, these standards were also used for rallying the front-line Celtic warriors, with contingents vying for supremacy and prestige on the battlefield.
The Contrast between Rich Clothes and Ritual Nudity
Pausanias talked about the Galatians (Galatae) and how they preferred to wear embroidered tunics and breeches with rich colors, often accompanied by cloaks striped with various tints. Archaeological evidence from Celtic graves and tombs also supports such a notion, with wool and linen clothing fragments often showcasing vibrant hues.
The nobles complemented their fashionable styles with opulence, including the use of gold threads and silk. Furthermore, the wealthy Celts (both men and women) also had a penchant for wearing jewelry items, like bracelets, rings, necklaces, torcs, and even entire corselets made of gold.
On the other hand, Polybius had this to say about the fierce Celts, circa 2nd century BC –
The Romans…were terrified by the fine order of the Celtic host, and the dreadful din, for there were innumerable horn-blowers and trumpeters, and…the whole army were shouting their war-cries…Very terrifying too were the appearance and the gestures of the naked warriors in front, all in the prime of life and finely built men, and all in the leading companies richly adorned with gold torcs and armlets.
So in contrast to such ostentatious clothing, some Celtic warriors willingly plunged into battle naked. Now on closer inspection of the ancient accounts, one could discern that these ‘naked warriors’ mostly belonged to the mercenary groups (Gaesatae), which we earlier described as prestigious organizations.
Simply put, some of the warriors in such groups, bound by codes and rituals, dedicated themselves to martial pursuits dictated by symbolism. Viewing themselves as ardent followers of gods of war (like Camulos in Gaul), these adherents possibly felt protected by divine entities, and thus boisterously shunned body armor. However, the naked warrior did carry his shield because that particular item was considered an integral part of his warrior panoply.
The Frenzied Charge and Cacophony of the Celtic Warriors
For the ancient Celts, in a sense, a battle was seen as an opportunity to prove one’s ‘value’ in front of the tribe and gods. So while the tactics of warfare evolved throughout the centuries in ancient Europe, the psychological approach of the Celtic warriors to warfare largely remained unchanged.
And accompanying his psyche was the purposeful use of noise, ranging from battle cries, songs, chants, taunts, and insults to even specialized instruments like carnyx. This latter-mentioned object was usually a sort of a war horn that was shaped like an animal (often a boar), and its primary purpose was to terrify the enemy with ‘harsh sounds and tumults of war’ (as described by Diodorus Siculus).
Interestingly enough, the very word ‘slogan’ is derived from the late-Medieval term slogorne, which in turn originates from Gaelic sluagh-ghairm (sluagh meaning ‘army’; gairm pertaining to ‘cry’), the battle cry used by the Scottish and Irish Celts. The Celtic warbands were sometimes also accompanied by Druids and ‘banshee’ women who made their presence known by shouting and screeching curses directed at their foes.
Apart from psychologically afflicting the enemy, the ‘auditory accompaniment’ significantly drummed up the courage and furor of the Celtic warriors. By this time (in the beginning phase of the battle), the challenge was issued – when their champions emerged forth to duel with their opponents.
And once the single combats were performed, the Celts were driven into their battle-frenzy – and thus they charged at the enemy lines with fury. As Julius Caesar himself described one of the frenzied charges made by the Nervii at the Battle of the Sambre (in Gallic War Book II)-
…they suddenly dashed out in full force and charged our cavalry, easily driving them back and throwing them into confusion. They then ran down to the river with such incredible speed that it seemed to us as if they were at the edge of the wood, in the river, and on top of us almost all in the same moment. Then with the same speed they swarmed up the opposite hill towards our camp and attacked the men who were busy fortifying it.
The Lime-Washed Hair and Tattoos
Diodorus Siculus, along with other ancient authors, also mentions how the Celts used to artificially ‘whiten’ their hair with lime water. This practice probably alluded to a ritual where the warrior adopted the horse as his totem, and thus aspired to the blessings and protection of Epona, the horse goddess.
Interestingly enough, the lime-washing possibly even hardened the hair to some degree (though overuse caused the hair to fall out), which could have offered slight protection against the fluky slashes directed towards the head. Many Celtic warriors, especially from the British Isles, also tattooed their skin with blue dye derived from the woad plant – as made famous by the Woad Raiders from the strategy game Age of Empires II.
Note* – The article was updated on 14th August 2022.
Retinitis pigmentosa (RP) is a very rare inherited eye disease; about one in four thousand Americans is affected by it. The retina, the light-sensitive portion of the eye, degenerates progressively over time. The result of this degeneration is the loss of peripheral vision, loss of central vision, night blindness, and sometimes blindness.
Retinitis Pigmentosa Symptoms
The first symptoms of retinitis pigmentosa generally appear in childhood, and usually both eyes are involved. Sometimes RP doesn't appear until later in life, at age 30 or beyond.
The main symptom of RP in the beginning stages is night blindness. Tunnel vision may develop in the later stages of the disease, where peripheral vision is lost and only a small central portion of sight remains.
One study of patients suffering from RP revealed that, in patients 45 years and older, 52% had at least 20/40 central vision in one eye, 25% had 20/200 vision or below, and 0.5% were completely blind.
Causes of Retinitis Pigmentosa
Very little is known about the causes behind RP, beyond the fact that it is an inherited disease. Scientists believe that defects in certain genes cause RP, which helps explain why the disease affects patients so differently.
It is possible to develop RP even if your parents do not have the disease themselves, provided they carry the defective gene. Approximately one percent of the population are carriers of the recessive RP gene. Sometimes this recessive gene is passed on to the child, who will then develop retinitis pigmentosa.
RP affects the retina in the eye. The disease causes the light-sensitive cells that are located in the retina to die gradually. Most often, the cells that are used for night and peripheral vision, called rod cells, are affected. Sometimes the cells that are used to see color and for central vision, called cones, are also affected.
Diagnosis and Treatment
The main diagnostic tool employed is visual field testing. This test determines how much peripheral vision loss has occurred. Other diagnostic tools may be used to test night vision and color vision.
Few treatments exist for RP, and what is available helps the conditions associated with RP rather than the disease itself. For patients older than 25, there is a recently approved retinal prosthesis system. This system captures images via glasses and transmits the captured signal to a device implanted on the retina.
Most treatments center around helping the patient learn to deal with their vision loss. Psychological counseling, and occupational therapy, may be recommended. Technological instruments that help with low vision, such as illuminated magnifiers, can help patients with RP see as well as possible with their limited vision. Some doctors recommend vitamin A supplements as there is some evidence that vitamin A might help delay the progression of the disease.
For the future, scientists are hopeful that there will be additional treatments for RP, including new drug treatments and retinal implants.
This archive of photographs was produced by the British missionary Alice Seeley Harris (1870-1970) during her time in the Congo Free State at the turn of the nineteenth into the twentieth century. Alice took over 1000 photographs depicting Congolese life; however, it is her images of the atrocities perpetrated in pursuit of rubber that have become internationally famous. These images were used by antislavery campaigners in Britain to raise awareness of the colonial violence which was used to force Congolese people to labour when the country was the personal property of King Leopold II of Belgium during the period 1884-1908. Her photographs exposed the illusion that Leopold's colony was founded on humanity and would 'improve' the lives of Congolese people. In Congo, as in other African colonies, European education, religion, technology, and medicine were all used as justification for the spread of colonisation. They also helped to mask, or make more palatable, the economic interests that drove European empire building, including the theft of land, labour, and resources for profit. In contrast to Leopold's public statements about building a better future for Congolese people, Alice's images revealed the exploitation, domination, and brutality at the heart of the regime.
Alice's photographs should be understood in relation to both the history of the British empire and the racial thinking that underpinned it. Colonialism was based on ideas of European cultural superiority. Images of Africa produced by people from Europe often presented the continent’s rich and varied cultures as primitive, which produced new ways of seeing and valuing difference for European audiences. European colonisers used images and literature to depict African culture, religion, and society as unequal to their own – these kind of representations provided legitimacy for their claim that they had the right to rule others. The development of photography as a form of technology was in itself taken as a sign of the advancement of European peoples. Using this new technology to photograph the traditional ways of life in the colonies was a way of demonstrating European progress and modernity. By the late-nineteenth century, ethnographic photography became popular, it was a genre that represented colonial subjects as different; who could be categorised and ordered according to physical characteristics. These characteristics were then linked to ideas about intellectual capacity and morality, qualities many Europeans believed Africans lacked. For these reasons, despite Alice's antislavery activism, her photographs were not intended to represent Congolese people as equals. Instead they were designed to show people in Britain why they needed to intervene. This reinforced a sense of superiority for the British audience and confirmed their belief that the British empire was essentially a force for good. It is important to remember that although Britain was involved in antislavery in the Congo Free State, exploitative labour practices were common to all European empires and Britain was no exception.
Alice's photographs form part of an antislavery tradition in Britain that has spoken on behalf of enslaved people, rather than empowering them to speak for themselves. The images represent a European humanitarian mindset in which action must be taken on behalf of the passive victim, whose helpless situation can only be addressed through appealing to a higher power, in this instance imperial Britain. What's more, this approach overlooks the responsibility colonising powers had in creating the very conditions from which African people subsequently had to be saved. Thus, Alice's photographs raise difficult questions about who has the power to represent, who has the power to bring about change, and who is denied this capacity both historically and in the present.
Guide for users
The photographs have been digitised along with their original captions. The original captions have been used to title each image. This is part of the work of preservation, but the captions sometimes use language and concepts that are not in common parlance today – for example, 'half caste'. Although this language is offensive to modern audiences, it is important to understand how viewers would have understood the image during the period, including how racial language shaped the meaning of the photograph. Search terms do not replicate this language.
You can search the images using geographic location. The original spelling of the place names contained within the captions has been used for the titles of the images; however, some place names have changed their spelling over time, e.g. 'Loanda' and 'Luanda'. Tags use the modern spelling of the place name. Items are tagged with place names from the period as well as the modern place name, e.g. 'Leopoldville' and 'Kinshasa'. You can also search via country – the place where the image was produced, e.g. Angola.
The images have been tagged using generalised descriptions of the individuals who feature in them, e.g. 'African child' or 'European man'. These terms are inadequate because they do not allow for the specificity that should be attributed to individual subjectivity; they also remove people's right to self-definition. The captions for the images do not contain the detailed information about the sitters which would allow for a greater degree of clarity. Judging a person's race or ethnicity based on a photograph risks wrongly attributing or imposing meaning; however, these terms have been used in order to make the archive searchable.
Each image has a zoom function which allows the viewer to examine the photograph in detail. If you click on the image you can navigate with the zoom to look at an individual's stance, expression, and other details. Humanitarian photography has employed techniques which have tended to erase the individual and present a suffering mass. The zoom function has been included so that viewers can engage with the people represented as individuals.
A selection of the photographs were used in the Congo Atrocity Lantern Lecture. Part of the glass slide collection owned by Antislavery International and housed at the Bodleian Library, University of Oxford has been digitised by this project. You can search Related Items to view the lantern slides, or you can click through to the Congo Atrocity Lantern Lecture.
Both the original Alice Seeley Harris Archive and the Congo Atrocity Lantern Lecture represented African people through the colonial gaze. In replicating these archives we are very aware of the potential to reinstate that particular way of seeing difference. In order to make sure that this mode of representation is balanced by material which is self-representative, we have commissioned two projects: Decomposing the Colonial Gaze: Yole!Africa and You Should Know Me: Photography and the Congolese Diaspora. You can search Alternative Tags, or you can click through to these collections to find new material which has been inspired by and critically engages with the historic archive.
The project has also collaborated closely with the Antislavery Knowledge Network, which is based at the University of Liverpool, and seeks community-led strategies for creative and heritage-based interventions in sub-Saharan Africa.
Copyright and takedown policy
Copyrights to all resources are retained by Antislavery International, who have kindly made their collections available for educational and non-commercial use only. All efforts have been made to obtain copyright permission for materials featured on this site. If you are aware of instances where the rights holder(s) has not been given appropriate credit, please let us know. If you hold the rights to any item(s) included in this resource and object to its use, please contact us to request its removal from the website.
This archive would not have been possible without the generous access given to the project by Antislavery International. In particular we would like to thank Dr Aidan McQuade and Dr Anna Shepherd. The digitisation was completed by the Bodleian Library, University of Oxford. Archivist Lucy McCann gave invaluable help with locating the full archive. We would like to thank Nick Cistone and Linda Townsend for their assistance with this process. Mike Gardner at the University of Nottingham has lent his technical support throughout the project. Discussions about this project were greatly enhanced by conversations with Dr Mark Sealy (Director, Autograph ABP) and Dr Richard Benjamin (International Slavery Museum). Congolese artist Sammy Baloji offered unique insights into the relationship between past and present forms of representation. This project was supported by the Arts and Humanities Research Council. Further thanks go to the Antislavery Knowledge Network, based at the University of Liverpool.
By Patrick Hunt –
What is the relationship between ancient Egyptian kingship and animal husbandry, specifically the practice of owning, tending and herding animals like cattle? Ancient cattle pens have been found in Nilotic contexts going back at least eight thousand years into the Neolithic, possibly the earliest examples of bovid domestication in the world, with likely auroch ancestry (Bos primigenius africanus). Even older associations between humans and cattle can be found, possibly as far back as 12,000 years ago, where horn cores at Tushka in Egyptian Nubia were placed over human burials. The practice of keeping and breeding cattle is attested at least as far back as predynastic Egypt in sites like Merimda Beni Salama (4900-4300 BCE), and cattle are the most common domesticated animals in ancient Egypt, as Shaw and Nicholson summarize.
How does this ancient practice of cattle breeding relate to kingship? Some of our best sources on early Egyptian kingship are presented in Ancient Egyptian Kingship, co-edited by O'Connor and Silverman, with a series of seminal articles by noted Egyptologists. To glimpse just the briefest idea of how important and sacred animals – especially herd animals – were to Ancient Egypt, Germond's illustrated tome, An Egyptian Bestiary: Animals in Life and Religion in the Land of the Pharaohs, is a useful resource.
Many funerary representations emphasize the value of animal domestication in terms of herding in both this life and the afterlife, often as a symbol of wealth, status and the security of well-being in the continuity of the good life. For example, the Nebamun tomb painting in the British Museum from circa 1350 BCE, among many others, depicts animal husbandry. In particular, the inscription with the Nebamun wall painting has the herdsman saying, "Come on, get away…pass on in quiet and in order". Part of that order is imposed by proper animal husbandry under divine kingship. Other memorable depictions, among numerous others, include long-horned cattle in a Saqqara relief from the 6th Dynasty Mastaba of Kastjemni and cattle threshing wheat in the 18th Dynasty Tomb of Menna at Gurna. Sacred cows are also frequent in Egyptian iconography, as in the [Sacred] Cow of the West, the Patroness of the Theban Necropolis. A representative sacred cow can be seen, for example, in the 19th Dynasty Tomb of Amenemipet at Deir el-Medineh, complete with sacred menat collar, flail (flagellum), solar disc between horns and kohl-outlined eyes. Thus cattle herding, and power and wealth in these animals – not to mention herding of other animals like goats and sheep – are intermeshed in documented Nilotic culture. Several types of cattle horns can be seen in images, from forward-pointing as in Nebamun's tomb to lyriform long horns as at Saqqara or inward-curling as in the Narmer Palette (also known as the Great Hierakonpolis Palette), although this latter depiction may be stylized and not necessarily true to nature as the earliest image.
In Egypt since prehistory the king ruled over his subjects, who were "cattle of the god", as poetically described in the Instruction of Merikare (circa 2025-1700 BCE, in Middle Egyptian), where King Kheti III tells his son Merikare how to rule over his people, who are to be "equipped with knowledge, established with lands and endowed with cattle", and Merikare is specifically instructed to "provide for men, the cattle of God."
The imagery identifying the Egyptian king, and later pharaonic imagery, is replete with cattle and cows – not even fully mentioning Hathor, also in her role as the Lady of the West, and sacred cows in general, including the Seven Cows of Heaven and their Bull in the provisioning of bread and beer in the afterlife, as the Book of the Dead (Book of Coming Forth By Day) iterates in its cow spell. Ironically, while animal husbandry is about humans nurturing animals, one important divine inversion is that many pharaohs, like Amenhotep III, had themselves depicted as being nurtured by Hathor, or being suckled and protected by her maternal power.
Another line of evidence comes from the god Osiris, lord of the underworld, as reflected in some of his symbolic imagery of one role of kingship, where part of his divine iconography shows strong associations with animal husbandry. The particular Osirian visual attributes that convey "dominion over Egypt", equated with power in animal husbandry, are the crook and flail Osiris holds in his hands. Although he is not always depicted with crook and flail in his hands and these are not his only attributes of kingship, these two are the most frequent instruments he holds and appear to transfer to subsequent kingship once his iconography is established (noted with the antecedent of the flail in the Narmer Palette, as discussed shortly). The Papyrus Hunefer (Hunefer's Book of the Dead) states of the birth of Osiris, after his father Geb gave way: "You seized the crook and flail while still in the womb, not yet having emerged on earth." While the cult of Osiris is not emphasized early on in Egypt, only rising fully in the 9th-10th Dynasty (circa 2175-1975 BCE), these are nonetheless ancient symbols of kingly power over herd animals. That Osiris is identified with kingship is also well-established, as Silverman asserts: "…in the Pyramid Texts, the earliest large collection of religious inscriptions, the king is usually addressed as the Osiris, king so and so, thus equating the king with that deity in an implied metaphor." The fusion of funerary and fertility connotations made Osiris the god of resurrection, and by the 5th Dynasty deceased rulers were equated with Osiris and his festivals at Abydos, where in myth his head, the most important part of his body, was buried. Early on, Osiris had absorbed the identity of the older shepherd god Anedjti, who may have been a deified prehistoric ruler and whose visual crook (awet) attribute may have been the source of a crook emblem of Osiris.
Plutarch also perceived in Classical antiquity further connection between Osiris and herd animals, specifically in his Moralia, where he attempts to understand Egyptian myth and religion: "[Osiris] delivered the Egyptians from their destitute and brutish manner of living. This he did by showing them the fruits of cultivation…" "In course of time he [Osiris] became king of Egypt, and devoted himself to civilizing his subjects and to teaching them the craft of the husbandman; he established a code of laws and bade men worship the gods." Plutarch also connected Apis (as a bull deity) and Osiris, saying that Apis was the bodily image of Osiris, yet another connection between Osiris and bovids.
Perhaps the most important association between cattle and rulership can be found in the crook and the flail. Because they are Osirian representatives, many dead kings are likewise depicted with crook and flail. As symbols of the power of the king over his human subjects like cattle, the "herdsman of humanity", the crook and flail are images vital to Egyptian kingship. The crook was shaped to grab the neck of herd animals, and its name is associated with the verb "to rule" (HqA, as heka in Egyptian) and is also used in an epithet descriptive name of Osiris. It is thus the strongest evidence for herding and ruling sharing a common domain. The flail (or flagellum) can even be seen considerably early, a little before 3000 BCE, in the slate Narmer Palette, side B (serpopard side, not the smiting side), upper register, shown in the right hand of the king just under his n'r (catfish) mr (chisel) hieroglyph when wearing only the Lower Crown. Naturally, Osiris and kings also often hold the was scepter, a common symbol of power and dominion as well.
According to Shaw and Nicholson, "The most prominent items in the royal regalia were the so-called 'crook' (heka), actually a scepter symbolizing 'government' and the 'flail'…"
Increasing interest in tracing ancient Egyptian divine kingship can be shown in academic addresses, seminar papers and conferences around the world. That Egyptian kingship and animals are inextricably connected deserves further study that will yield more fruitful information about ancient Egypt and the continuity of wealth and power in animal husbandry.
  Ian Shaw and Paul Nicholson. Dictionary of Ancient Egypt. British Museum Press with Harry Abrams, 1995, “animal husbandry†33-4.
  David O’Connor and David Silverman, eds. Ancient Egyptian Kingship. Leiden: E. J. Brill, 1995.
  Philippe Germond, An Egyptian Bestiary: Animals in Life and Religion in the Land of the Pharaohs. London: Thames and Hudson, 2001.
  British Museum, Room 61, Tomb Reliefs of Nebamun, #EA 37976. For an excellent discussion of Nebamun’s motifs, see Richard Parkinson. The Painted Tomb-Chapel of Nebamun. British Museum Press, 2008.
  Germond, long-horned cattle, Saqqara, 6th Dyn. Mastaba of Kastjemni, 53; cattle threshing wheat, 18th Dyn. Gurna, Tomb of Menna, 8.
  ibid., Germond, 185.
  R. O. Faulkner, tr. “The Instruction of Merikare†in W. K. Simpson. The Literature of Ancient Egypt. Yale University Press, 1973, 180-92 (near the end).
  Papyrus Hunefer BM 9901 plate 183, British Museum, London.
  Toby Wilkinson. The Rise and Fall of Ancient Egypt. Bloomsbury Press, 2011, 148-51.
  Silverman, “The Nature of Egyptian Kingship†in O’Connor and Silverman (supra) 1995, 61-2.
  Shaw and Nicholson, 213-4
  Plutarch, De Iside et Osiride (Moralia) 13, 29
  concerning HqA (“to ruleâ€), the related word “herder’s crookâ€and rulership, one of the best sources for related Egyptian topics is www.reshafim.com (from the Reshafim Kibbutz in Israel) which has a well-documented compilation and discussion of references and source texts.
 Shaw and Nicholson, 75
 “Religion and Power: Divine Kingship in the Ancient World and Beyond.†Oriental Institute, University of Chicago, Feb 23-4, 2007; David O’Connor, “Revealing and Preserving the Origins of Egyptian Kingship at Abydos, Egypt.†Stanford Archaeology Center, Stanford University. | <urn:uuid:df3428e6-18a0-4f13-8e23-43ef71dad4cd> | CC-MAIN-2023-06 | http://www.electrummagazine.com/2014/06/egyptian-kingship-and-animal-husbandry/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.942472 | 2,599 | 3.734375 | 4 |
by June Manning Thomas and Marsha Ritzdorf
If urban planning is to support the equitable distribution of public goods and services, it must recognize and address the dismal conditions of millions of Americans who are poor or people of color.
The primary focus of contemporary planners and planning students should be on finding and advocating solutions that help eliminate the problems of today’s cities. Any meaningful solution will need to be grounded in a thorough understanding of the race, gender, and class inequalities of American life. One of the most significant and dramatic stories in the history of twentieth-century U.S. cities has been the growth and evolution of the African American population. In the early 1900s, the African American population was simply one of many ethnic and racial groups living in U.S. cities. By the 1950s, massive emigrations from the rural South to the urban North had changed the complexion of cities. By the 1990s, successive waves of in-migration by rural African Americans and out-migration by mobile whites had created several predominantly black cities.
African Americans became so visible in many central cities that some scholars defined their predominance and spatial isolation as indications of city decline. Indeed, throughout the twentieth century, racial prejudice shaped the lives of Blacks as surely as it shaped metropolitan areas. Long after officially sanctioned racial prejudice subsided, racial oppression and inequality lingered. Poverty grew more concentrated, and the quality of social life unraveled. Physical deterioration became the norm.
The twentieth century also witnessed the evolution of professions that were dedicated to improving urban life and reducing urban decline. Prominent among these was urban planning. Branching off from the municipal reform movement, and away from the social work and housing reform movements, urban planning aimed to create well-planned, orderly cities that allowed people to live free of slums, blight, and physical disorder. As the planning profession evolved, its practitioners attacked various maladies affecting urban areas. They joined efforts to remedy social problems, and they created initiatives designed to redevelop specific areas, such as the central business districts. From the early part of the century, when planning focused on creating land use controls and regulating growth, to the end, when planners did these things plus many more, the profession’s stated goal was to improve the experience of urban life for all residents. However, the reality was often far different.
Throughout the twentieth century, the community of urban African Americans connected with the community of urban planning professionals. At times those connections were sources of conflict and oppression, at other times sources of reform and cooperation. Planning tools were and are often used for the purpose of racial segregation. Examples are exclusionary zoning laws and separatist public housing programs. Urban renewal clearance projects that bulldozed black communities into oblivion could also be classified as oppressive. But these were not the only interactions between the black urban population and the profession. During the 1960s, collective public guilt generated basic changes in urban planning professionals as well as in national policies. Some planners – whose ranks gradually became more diversified racially – dedicated their lives to fighting for the rights of the poor and distressed. Such dedication took the form of “social” or “advocacy” planning, neighborhood planning, or equity planning.
The precise nature of this dualistic relationship of conflict versus cooperation needs further clarification. Few historians of urban African Americans give full and impartial treatment to the role of urban planning. Few historians of U.S. urban planning acknowledge the full influence of race and racial injustice on the profession. Contributions made by African American women to urban planning efforts are underappreciated.
In general, what is needed is an overview of the critical linkages between the urban planning profession and the nation’s most visible racial minority. Race and racial injustice influence all efforts to improve urban society. Urban planning, an active profession, purports to help improve civic life in metropolitan areas. It cannot do so unless its practitioners more clearly understand the historical connections between this people and this field.
Planning and Public Policy
The period after World War II saw two simultaneous processes: (1) the movement of the White middle and working classes to the suburbs, a movement spurred by the return of World War II veterans and the assistance of home mortgage insurance programs, and (2) the consolidation of ghetto boundaries. It is for this era that we have the best documentation concerning the relationship between African American urban life and planning decisions. As several scholars have demonstrated, political leaders’ desire to shape black residence patterns profoundly influenced public housing and urban renewal policies. Just as urban migration of rural blacks and other ethnic minorities was the demographic motivation for racially exclusionary zoning and restrictive covenants during the period between the world wars, the need to contain blacks in restricted sections of cities influenced public policy decisions after World War II.
The movement to the suburbs by the white middle and working classes, which one author calls a true “metropolitan revolution,” clearly established decentralization as the dominant urban pattern for the following decades. This decentralization, however, was exclusionary. For example, Levittown, New York, a well-known suburban community that set the pattern for numerous others, housed 82,000 residents in 1960, not one of whom was African American. Although white families found new opportunities opening up in freshly constructed suburbs, African American families experienced disproportionate overcrowding and limited mobility within the central cities left behind.
A series of federal policies set the stage for these conditions. Urban renewal was one of the most invidious. Often called “Negro removal” by critics, it provides countless examples of the interconnection of racial change with local policy. Urban renewal systematically destroyed many African American communities and businesses and, for most of its history, failed to safeguard the rights and well-being of those forcibly relocated from those homes and businesses. That clearance for urban renewal worked in conjunction with clearance for highway construction only made matters worse. Backed by the federal government, cities simultaneously cleared out slums and displaced racial minorities from prime locations for redevelopment and highway construction. These policies shaped and defined the black ghetto.
The 1960s, the era of civil rebellion, brought several important changes. The widespread civil disorders, which were volatile but predictable responses to long-standing racial oppression, forced significant alterations in federal policies. President Lyndon Johnson, attempting to build a “Great Society,” initiated new programs that focused on eliminating poverty and empowering low-income communities. With the War on Poverty’s community action agencies, citizens gained the power to supervise community improvement directly. Under Model Cities, local citizen governing boards also helped direct local redevelopment and made their own contributions to the redefinition of urban planning.
Well-known planning practitioners began to question the assumptions of traditional land use and redevelopment planning as well as the racial bias inherent in the profession. Proponents of advocacy planning suggested that the appropriate response to inner-city conditions was for planners to stop trying to represent public interest – an impossible task, leading planners to represent the status quo – and to work instead to help empower disenfranchised groups. Another response was for planners to develop “suburban action” programs promoting racial and income integration. Paul Davidoff, premier advocate planner and champion of suburban integration, urged planners to champion non-exclusionary fair housing laws, low and moderate income housing, and progressive zoning and subdivision requirements.
The Housing and Community Development Act of 1974 killed the oppressive urban renewal program, but it also brought the promising Model Cities experiment to a halt. With the 1974 act, which created Community Development Block Grants (CDBGs), the federal government withdrew from high-profile attempts to target funds to distressed central-city efforts, defined and guided by local citizens. Instead, in city after city, citizens who had just begun to exercise some control over the redevelopment of their neighborhoods experienced the shock of government withdrawal. Although in later years the CDBG program somewhat improved on its record of participation, in general the program placed decision making in the hands of city government and dispersed national funding via a formula that spread increasingly scarce redevelopment funds to populous suburbs as well as to a wide range of cities.
Previous efforts to mesh social, economic, and physical development strategies, a mixture allowed under Model Cities, succumbed to the pervasive "bricks and mortar" orientation of the CDBG program. Any illusions that inner-city residents might have had that a benign federal government would "gild" their ghetto died quickly with the unstable funding, unpredictable longevity, and strong downtown focus that characterized urban-related programs such as action grants and economic development assistance funds in the 1970s, 1980s, and early 1990s. The mid-1990s brought promising federal program initiatives, such as Empowerment Zones/Enterprise Communities. But by that time African American families, even those in suburbia, remained highly segregated. They earned less money than others per capita and per family, and experienced much narrower options of residence than did other Americans.
African American Initiatives and Responses
Unfortunately, much of the writing about the relationship between the African American community and urban planning has focused on victimization. Of course, victimization, injustice, and oppression are important parts of the story. But throughout the twentieth century, African Americans have refused to be passive actors in this process. They documented their situation, built indigenous institutions, and undertook initiatives designed to improve community life. Scholars such as W.E.B. DuBois carried out path-breaking research, and organizations such as the National Urban League and the National Association of Colored Women made major contributions – which, while documented in other ways, are undocumented in the annals of planning history – to planning efforts in their own communities.
Early in the century, African American women often focused on the civic improvement of their communities. While they, like white women, had no legal or voting rights in the public world of politics, they were very active. Yet they, like their African American brothers, are absent from the records of their time that planning historians commonly consult. For example, The American City, a periodical that began publication in 1909, was "the" source of information about urban issues, problems, and projects throughout the early part of this century. Between 1909 and 1920, only one article in any way related to African Americans, and it concerned the creation of a segregated low-income housing project. In 1912, an entire issue reported on white women's organizations. Future work will need to look at the contributions of women who participated in projects linked to traditional urban planning, such as housing, parks, land projects, and sanitation, or who made a place for themselves in male-dominated organizations such as the Urban League.
The Urban League exemplified African American leadership and response to planning throughout much of the twentieth century. During the years of migration, local chapters actively sponsored day camps, food drives, employment programs, and numerous other activities. In the 1950s, these chapters were often leaders in the efforts to document the initial abuses of the urban renewal program. The Chicago branch’s 1968 report, The Racial Aspects of Urban Planning: Critique on the Comprehensive Plan of the City of Chicago, clearly identified the role of institutional racism in the planning process and offered proposals for change. As they noted, “Abstract statements about the goal of equality, while welcomed, are no substitute for technical work dealing with the realities of racism.”
By the 1970s, African American communities began to realize that environmental problems in their communities were related to discriminatory exposure to both toxic substances and unwanted land uses. Lead poisoning, especially from exposure to lead-based paint in substandard urban housing, was an issue of social justice that demanded their attention. The combined efforts of inner-city activists and a small group of physicians/scientists ultimately forced the issue onto the public agenda. A Philadelphia coalition brought a lawsuit against the federal Department of Housing and Urban Development (HUD) to ensure that HUD property was inspected, and if necessary, cleaned of all offending lead. Over the next two decades, groups identified myriad other urban environmental issues and added environmental justice to their civil rights agendas.
A range of other kinds of African American self-help efforts have persisted in recent years, particularly community development. Rather than wallow helplessly in defeatism, black politicians, faith-based groups, and community-based organizations in some cities have carried out remarkable, heroic efforts to preserve and improve their communities. These initiatives addressed a myriad of issues, including but not limited to redevelopment, housing rehabilitation, redlining by financial and insurance institutions, commercial development, and social improvement programs for youth and families.
This article is excerpted from Urban Planning and the African American Community, by June Manning Thomas and Marsha Ritzdorf. It is reprinted by permission of Sage Publications.
June Manning Thomas is Professor of Urban and Regional Planning at Michigan State University.
Marsha Ritzdorf was Associate Professor of Urban Affairs and Planning at Virginia Polytechnic Institute and State University until her death last year.
Seafood is actually remarkably safe to eat, accounting for no more than ten percent of all reported food-borne diseases.
Nevertheless, seafood-borne diseases cause a significant number of illnesses and deaths worldwide, and people should be wary of them.
Most health problems associated with seafood are due either to contaminants present in the environment where the seafood (i.e. shellfish or fish) is grown or to improper handling.
Many shellfish, like oysters, mussels, clams, etc., hardly move; hence, they need to filter a lot of water to obtain food. In the process, disease-causing bacteria, viruses and other microorganisms present in their immediate environment tend to accumulate and concentrate in them.
It is thus risky to eat raw or half-cooked shellfish that are grown and harvested in polluted waters, because of the possibility of food poisoning.
Pathogenic bacteria can also grow in stale and poorly handled fish, but fresh and properly prepared deep-sea fish can be eaten raw.
Another health risk that shellfish carry on a seasonal basis is red tide. Red tide, which occurs at certain times of the year, is caused by the proliferation of dinoflagellates, a type of phytoplankton that exudes a lethal toxin. Dinoflagellate toxin is not destroyed by cooking.
In fish, removal of the entrails or abdominal organs before cooking makes the fish ready for human consumption, but there is no way to remove dinoflagellate toxin from shellfish. Hence, when the red tide warning is up, people should refrain from eating shellfish.
The dumping of chemical pollutants into the sea is an additional growing concern associated with seafood, because these chemicals find their way into the food chain and accumulate in the tissues of sea creatures.
Most seafood-borne diseases can be traced to pollution in the area where the sea creatures are harvested. Seafood-borne illnesses are best prevented by proper monitoring of the fish industry by the concerned government agencies.
At the personal level, as consumers, people can help prevent these diseases by observing the following precautions:
1.) Eat only shellfish and fish that are cultured or harvested from known safe areas.
2.) Remove the entrails of fish before storing them in the freezer.
3.) Preferably, shellfish and fish should be cooked thoroughly before they are consumed. Eat seafood raw only if it is fresh and has been handled properly. The only seafood safe to eat raw is deep-sea fish. Freshwater fish have to be cooked because some of the parasites they carry can also cause illness in humans.
4.) Keep raw or live seafood separate from cooked seafood.
5.) Wash knives, chopping boards, and cooking utensils with soap and water before using them to prepare seafood.
6.) Observe proper personal hygiene before handling seafood. | <urn:uuid:e7513189-de77-45da-939c-76dd4cf13a5d> | CC-MAIN-2023-06 | https://affleap.com/how-to-prevent-seafood-borne-diseases/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.952106 | 590 | 3.6875 | 4 |
A new era in the exploration of the planets began when Galileo focused his primitive telescope on the Moon. No longer were the planets mere points of light, but other worlds. The emphasis in planetary astronomy shifted from explaining the planets' motions to studying their natures.
As better telescopes were developed, astronomers produced more accurate maps of the Moon than the sketches of Galileo's day. This is a copy of the earliest known photograph of the Moon, a daguerreotype taken in 1851. The development of photography was important in astronomy. Still, the human eye can discern more detail than could be captured on a photographic plate.
Radar astronomy was born in 1946, when a radio signal was sent from Earth and "bounced" off the surface of the Moon. Radar was first reflected off the surface of Venus in 1961. This 1988 radar map of the surface of Venus was produced by the Arecibo radio telescope in Puerto Rico.
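The core of the technique is simple arithmetic: a pulse travels to the target and back at the speed of light, so the echo delay reveals the distance. Here is a minimal sketch in Python (illustrative only; the distances are rough round values, not measurements from the observations described above):

```python
# Round-trip radar timing: distance is inferred from how long the echo
# takes to return, travelling at the speed of light.

C_KM_S = 299_792.458  # speed of light, km/s

def round_trip_seconds(distance_km):
    """Time for a radar pulse to reach a target and bounce back."""
    return 2.0 * distance_km / C_KM_S

# Approximate distances (assumed round values for illustration).
targets = {
    "Moon (mean distance)": 384_400,
    "Venus (closest approach)": 41_000_000,
}
for name, km in targets.items():
    print(f"{name}: echo returns after {round_trip_seconds(km):.1f} s")
```

The lunar echo returns in about 2.6 seconds, while an echo from Venus at closest approach takes roughly four and a half minutes — and the returning signal weakens steeply with distance, which is why planetary radar demands large, sensitive antennas like Arecibo's.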
All objects with a temperature above absolute zero emit some radiation. Objects hotter than about 1,000° C (1,800° F) emit radiation in the visible wavelengths -- light humans see. Most planetary bodies, however, are cooler than a few hundred degrees and emit very short radio waves (microwaves). The amount of microwave energy emitted by a planet is a measure of its temperature. These microwaves can be detected by sensitive radio telescopes with large antennas. | <urn:uuid:f348d490-8016-4515-b361-ecdc811d4de1> | CC-MAIN-2023-06 | https://airandspace.si.edu/exhibitions/exploring-the-planets/online/tools/earth-based.cfm | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.963276 | 283 | 4.25 | 4 |
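A short sketch of the standard blackbody physics behind this statement (general textbook relations, not taken from the exhibit text): Planck's law gives the radiance at any wavelength, and at microwave wavelengths the emission of a cool body grows almost linearly with temperature, which is what lets a radio telescope read temperature from received power.

```python
import math

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
K = 1.381e-23      # Boltzmann constant, J/K
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T), W per m^2 per steradian per metre."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    x = H * C / (wavelength_m * K * temp_k)
    return a / (math.exp(x) - 1.0)

wavelength = 0.01  # a 1 cm microwave band, typical of radio telescopes
for temp_k in (100, 200, 400):  # plausible planetary temperatures, K
    peak_um = WIEN_B / temp_k * 1e6  # Wien's law: wavelength of peak emission
    print(f"T = {temp_k} K: peak near {peak_um:.0f} micrometres, "
          f"1 cm radiance = {planck_radiance(wavelength, temp_k):.2e}")
```

Doubling the temperature roughly doubles the 1 cm radiance (the Rayleigh-Jeans regime of Planck's law), so the measured microwave power acts as a thermometer.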
ESA’s Asteroid Impact Mission: the reason why
ESA's Asteroid Impact Mission, currently under study for launch in 2020 and arrival in 2022, would be humanity's first probe to a double asteroid system. Targeting an approximately 180-m diameter asteroid – around the same size as the Great Pyramid of Giza – AIM would spend a busy six months gathering data on its surface and inner structure. It would then perform before-and-after measurements as the NASA-led Double Asteroid Redirection Test spacecraft impacts straight into it, in an attempt to change the asteroid's orbital period – marking the very first time that humanity shifts a Solar System object in a measurable way. Success would make it possible to consider carrying out such an operation again if an incoming asteroid ever threatened our planet. The two missions combined are called the Asteroid Impact & Deflection Assessment, or 'AIDA' for short. But why do we need to plan such a ground-breaking experiment? Astrophysicist and Queen guitarist Brian May, ESA astronaut Luca Parmitano, the UK's Astronomer Royal Sir Martin Rees and Canadian astronaut Chris Hadfield share their own thoughts.
Answer: D. using a binary search
In searching for a particular member of an array, the array is examined until the member is found. The linear search, or sequential search, is a very simple algorithm that uses a loop to step sequentially through an array, starting with the first element. It works by comparison: it compares each element with the value being searched for. It stops either when that value has been found or when it reaches the end of the array. The algorithm reaches the end of the array only if the searched-for value is not in the array.
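A minimal sketch of both searches in Python may make the contrast concrete (the function names and test data are illustrative, not part of the original answer). Linear search steps through every element and works on unsorted data; binary search repeatedly halves the range but requires the array to be sorted:

```python
def linear_search(items, target):
    """Compare each element in turn; stop at a match or at the end."""
    for index, value in enumerate(items):
        if value == target:
            return index        # found: stop immediately
    return -1                   # reached the end: target not in the array

def binary_search(sorted_items, target):
    """Halve the search range each step; requires sorted input."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = [3, 9, 14, 27, 31, 42]   # already sorted
print(linear_search(data, 27))  # 3
print(binary_search(data, 27))  # 3
print(linear_search(data, 5))   # -1, returned only after scanning to the end
```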
List of Contents
- What is the meaning of Quality Education?
- What is the need to deliver Quality Education?
- What steps have been taken by the Government for Quality Education?
- What are the gaps in our current education system?
- What are the constraints impeding delivery of quality education?
- What are the remedial measures?
The pandemic highlighted the shortcomings of our education system, which is focused more on rote learning. This system pays little regard to the creativity and mental wellbeing of children, indicating a lack of quality education. Further, education levels are not uniform across regions, and disadvantaged sections often have poor education levels.
The Government has undertaken a plethora of steps, including the formulation of the National Education Policy, 2020, with the vision of delivering quality education to every child. India is also a party to the UN Sustainable Development Goals, whose Goal 4 aims to deliver quality education for all. Nonetheless, some bottlenecks remain that need to be duly addressed.
What is the meaning of Quality Education?
Quality Education is a comprehensive term that includes learners, teachers, learning environment, appropriate curriculum, engaging pedagogy, learning outcomes, continuous formative assessment, and adequate student support.
It warrants inculcation of critical thinking, creativity, scientific temper, communication, collaboration, multilingualism, problem solving skills, ethics, social responsibility, and digital literacy.
Attempts to improve the quality of education will succeed only if they go hand in hand with steps to promote equity and inclusion. This requires schools to be sufficiently equipped and prepared to address the diverse learning needs of all children, with a special focus on children belonging to SCs, STs, minorities, the girl child, etc.
Another dimension of quality is to address the rural-urban divide and regional disparities as also the digital divide.
What is the need to deliver Quality Education?
Better Employment opportunities: It will allow children to get jobs and escape the vicious web of poverty. Further, industry will get a robust supply of qualified personnel. The India Skills Report 2021 estimates that only 45.9% of Indian youth possess sufficient employability skills.
Health and Wellbeing: Quality education covers mental and physical wellbeing, which would improve the health outcomes of the nation. It will also help in reducing the prevalence of suicides among children, especially those caused by severe educational stress.
Reaping the Demographic Dividend: India has more than 50% of its population below the age of 25 and more than 65% below the age of 35. This requires the delivery of quality education to children; otherwise the country must be prepared to face the brunt of a demographic disaster.
Curbing the Regional Divide: Some states, like Uttar Pradesh and Bihar, lag behind states like Kerala and Karnataka in education levels. Further, the delivery of education is better in urban areas than in rural regions. This gap needs to be addressed by focusing on quality education for all.
Tackling Social Problems: The lack of quality education makes children prone to social evils like child labour and child marriage. Ensuring quality education will lead to higher retention and lower dropout rates in schools. As per the latest Unified District Information System for Education Plus (UDISE+ 2019-20) report, nearly 30% of students do not transition from the secondary to the senior secondary level.
Adapting to Technological Advancements: The 21st century will be an era of big data, machine learning (ML), the Internet of Things (IoT) and other technological advancements. This means the curriculum, textbooks, pedagogy, and assessment need to be transformed.
Realization of Fundamental Rights: The Constitution of India provides many fundamental rights, like free speech, equality before the law, freedom of religion, etc. All these rights can be enjoyed in the true sense only when a person has been imparted quality education.
What steps have been taken by the Government for Quality Education?
Right of Children to Free and Compulsory Education Act (RTE), 2009: It provides free and compulsory elementary education to children. It ensures the realization of the fundamental right under Article 21-A.
National Education Policy 2020: It envisions a shift from the traditional teacher-centred approach to a learner-centric one. The policy stresses the core skills that education must develop. These include cognitive skills – both the 'foundational skills' of literacy and numeracy and 'higher-order' skills such as critical thinking and problem solving.
It also focuses on social and emotional skills – also referred to as 'soft skills' – including cultural awareness and empathy, perseverance and grit, teamwork, etc.
Samagra Shiksha Abhiyan: It is an overarching centrally sponsored scheme for school education that sees learning as a continuum from pre-primary to higher secondary, with a focus on contextual, experiential, and holistic learning. It subsumed the three erstwhile Centrally Sponsored Schemes of SSA, RMSA and Teacher Education.
Rashtriya Avishkar Abhiyan (RAA): It aims to connect school-based knowledge to life outside the school, and making learning of Science and Mathematics a joyful and meaningful activity.
Performance Grading Index (PGI): A comprehensive 70 indicator-based matrix has been developed to grade the States/UTs, against certain common benchmarks and provide them a roadmap for making improvements.
National Initiative for School Heads’ and Teachers’ Holistic Advancement (NISHTHA): It is a first of its kind teacher training programme wherein the Government of India, through its academic bodies, NCERT and NIEPA, is taking a lead role in changing the landscape of in-service teacher training.
National Initiative For Proficiency in Reading with Understanding and Numeracy (NIPUN Bharat): It was launched in July 2021, to ensure that every child in the country attains Foundational Literacy and Numeracy (FLN) at Grade 3 by 2026-27.
PM eVidya: It is a comprehensive initiative under the Atma Nirbhar Bharat Programme, which unifies all efforts related to digital/online/on-air education to enable coherent multi-mode access to education.
It includes access to a variety of e-resources in 33 languages, including Indian Sign Language, over DIKSHA (one nation, one digital platform); Swayam Prabha DTH TV channels (one class, one channel, for classes 1 to 12); and extensive use of radio, community radio, and the podcast ShikshaVani.
What are the gaps in our current education system?
Excessive focus on rote learning: The curriculum encourages memorisation of text rather than cultivating a conceptual understanding of issues.
Exams define intelligence: The current system equates passing exams and exam scores with a student's intelligence. There is an excessive focus on completing the exam cycle rather than on the learning experience.
Discourages Creativity: Parents and teachers want to see children become doctors, engineers, bureaucrats, etc. Children are rarely encouraged to pursue creative careers as writers or artists, or to adopt other vocational skills.
Barriers for poor sections: Good-quality private schools are not present in rural regions, while fees are very high in urban regions. Further, the 25% reservation for EWS candidates in private schools has been bypassed by many schools.
Bias against Persons with Disabilities: They are often seen as a liability by many teachers and their special needs are generally ignored.
Coaching Culture: The proliferation of coaching institutions shows the deteriorating quality of education in India. Many schoolteachers also teach in coaching institutions after regular school hours for extra compensation.
Lack of Vernacular content: Good-quality books and materials are still unavailable in the vernacular medium, which creates hardships for many students and impedes learning.
What are the constraints impeding delivery of quality education?
Financial Crunch: A recent World Bank study notes that India spent 14.1% of its budget on education, compared to 18.5% in Vietnam and 20.6% in Indonesia, countries with similar levels of GDP. This hinders the creation of quality infrastructure and the retention of good talent in the education sector.
Quality of Personnel: The quality of teachers in many schools is still not up to the mark. Further, many teachers struggle to deliver lectures through the online medium, as observed during the pandemic.
Digital Divide: The digital systems of many schools and universities use obsolete technology. Further, many universities lack the basic infrastructure to deliver quality education, thereby impeding delivery in hinterland regions. Similarly, many people don't have access to digital devices like mobile phones and internet routers.
Adult Illiteracy: The lack of adult literacy leads individuals to focus on short-term incomes via child labour and to forgo the better long-term career options that come with quality education.
Further, many adults are unable to operate digital devices, which hampered their children's education during the pandemic.
What are the remedial measures?
First, the Government should adopt a new system of education that is fair and robust and removes the dependency on time-tabled exams. This is required to tackle any future pandemics or contingencies like disasters that disrupt the normal cycle. A hybrid mix of online and offline teaching should be promoted.
Second, the focus should be on learning through activities, discovery, and exploration in a child-friendly and child-specific manner.
Third, the assessment of students must be based on an integrated approach rather than mere textbook exams. Under this approach, weightage should be given to indicators like peer interaction, curiosity, and creativity.
Fourth, to implement all these measures there is a need to support the education sector with adequate budgetary resources. Hence, it is important to increase the share of education to 6% of GDP as envisaged by NEP 2020.
The Government should make significant headway beyond earlier policies by putting quality education at the top of the agenda, strengthening the foundations of education, catering to the educational needs of the most disadvantaged, and making India a global leader in education. All this is desired to truly realize the vision of 'Sabka Saath Sabka Vikas'.
29. The word, therefore, or hence, frequently occurs. To express either of these words, the sign ∴ is generally used.
30. If the quotients of two pairs of numbers, or quantities, are equal, the quantities are said to be proportional: thus, if A/B = C/D, then A is to B as C to D. And the abbreviation of the proportion is, A : B :: C : D; it is sometimes written A : B = C : D.
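As a worked numeric instance (added here for illustration; not part of the original text):

```latex
% Illustrative instance of Definition 30: the quotients 8/4 and 6/3
% are both 2, so the four quantities are proportional.
\[
\frac{8}{4} = \frac{6}{3} = 2
\quad\Longrightarrow\quad
8 : 4 :: 6 : 3.
\]
```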
1. "A POINT is that which has position, but not magnitude*." (See Notes.)
2. A line is length without breadth.
"COROLLARY. The extremities of a line are points; and the intersections "of one line with another are also points."
3. "If two lines are such that they cannot coincide in any two points, without coinciding altogether, each of them is called a straight line."
"COR. Hence two straight lines cannot inclose a space. Neither can two
straight lines have a common segment; that is, they cannot coincide "in part, without coinciding altogether."
4. A superficies is that which has only length and breadth.
"COR. The extremities of a superficies are lines; and the intersections of one superficies with another are also lines."
5. A plane superficies is that in which any two points being taken, the straight line between them lies wholly in that superficies.
6. A plane rectilineal angle is the inclination of two straight lines to one another, which meet together, but are not in the same straight line.
N. B. 'When several angles are at one point B, any one of them is expressed by three letters, of which the letter that is at the vertex of the angle, that is, at the point in which the straight lines that contain the angle meet one another, is put between the other two letters, and one of these two is somewhere upon one of those straight lines, and the other upon the other line: Thus the angle which is contained by the straight lines, AB, CB, is named the angle ABC, or CBA; that which is contained by AB, BD, is named the angle ABD, or DBA; and that which is contained by BD, CB, is called the angle DBC, or CBD; but, if there be only one angle at a point, it may be expressed by a letter placed at that point; as the angle at E.'
*The definitions marked with inverted commas are different from those of Euclid.
7. When a straight line standing on another straight line makes the adjacent angles equal to one another, each of the angles is called a right angle; and the straight line which stands on the other, is called a perpendicular to it.
8. An obtuse angle is that which is greater than a right angle.
9. An acute angle is that which is less than a right angle.
10. A figure is that which is enclosed by one or more boundaries.—The word area denotes the quantity of space contained in a figure, without any reference to the nature of the line or lines which bound it.
11. A circle is a plane figure contained by one line, which is called the circumference, and is such that all straight lines drawn from a certain point within the figure to the circumference, are equal to one another.
12. And this point is called the centre of the circle.
13. A diameter of a circle is a straight line drawn through the centre, and terminated both ways by the circumference.
14. A semicircle is the figure contained by a diameter and the part of the circumference cut off by the diameter.
15. Two lines are said to be parallel, when being situated in the same plane, they cannot meet, how far soever, either way, both of them be produced.
16. A plane figure, terminated on all sides by straight lines, is called a rectilineal figure, or polygon, and the lines themselves taken together form the contour, or perimeter of the polygon.
17. The polygon of three sides, the simplest of all, is called a triangle; that of four sides, a quadrilateral; that of five, a pentagon; that of six, a hexagon; and so on.
18. Of three sided figures, an equilateral triangle is that which has three equal sides.
19. An isosceles triangle is that which has only two sides equal.
20. A scalene triangle is that which has three unequal sides.
21. A right angled triangle is that which has a right angle. The side opposite the right angle is called the hypotenuse.
22. An obtuse angled triangle, is that which has an obtuse angle.
23. An acute angled triangle, is that which has three acute angles.
24. Of four sided figures, a square is that which has all its sides equal and all its angles right angles.
25. An oblong, or rectangle, is that which has all its angles right angles, but has not all its sides equal.
26. A lozenge, or rhombus, is that which has all its sides equal, but its angles are not right angles.
27. A parallelogram, or rhomboid, is that which has its opposite sides parallel, but all its sides are not equal, nor its angles right angles.
28. And, lastly, the trapezoid, only two of whose sides are parallel.
29. All other four sided figures are usually called trapeziums.
30. A diagonal is a line which joins the vertices of two angles not adjacent to each other. Thus, BC, in the diagram of Theor. 27. is a diagonal.
31. An equilateral polygon, is one which has all its sides equal; an equiangular polygon, one which has all its angles equal.
32. Two polygons are mutually equilateral, when they have their sides equal to each other, and placed in the same order; that is to say, when following their perimeters in the same direction, the first side of the one is equal to the first side of the other, the second of the one to the second of the other, the third to the third, and so on.
The phrase mutually equiangular has a corresponding signification. In both cases, the equal sides, or the equal angles, are named homologous sides or angles.
33. We shall give the name, equivalent figures, to such as have equal surfaces.
Two figures may be equivalent, though very dissimilar: a circle, for instance, may be equivalent to a square, a triangle to a rectangle.
34. The denomination, equal figures, we shall reserve for such as, when applied to each other, coincide in all their points: of this kind are two circles, which have equal radii; two triangles, which have all their sides equal respectively, &c.
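As an editorial aside (not part of the original text), definition 11 translates naturally into modern set notation. In the sketch below, the symbols are chosen for this restatement rather than taken from the text: Π denotes the plane, O the centre, and r the common distance.

```latex
% Definition 11 restated in set-builder form (editorial sketch):
% a circle is the set of points of the plane at a fixed distance r
% from a fixed interior point O, the centre (definition 12).
\[
  \mathrm{Circle}(O, r) \;=\; \{\, P \in \Pi \;:\; \lvert OP \rvert = r \,\},
  \qquad r > 0 .
\]
```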
POSTULATES.
1. LET it be granted that a straight line may be drawn from any one point to any other point.
2. That a terminated straight line may be produced to any length in a straight line.
3. And that a circle may be described from any centre, at any distance from that centre.
AXIOMS.
1. THINGS which are equal to the same thing, are equal to one another.
2. If equals be added to equals, the wholes are equal.
3. If equals be taken from equals, the remainders are equal.
4. If equals be added to unequals, the wholes are unequal.
5. If equals be taken from unequals, the remainders are unequal.
6. Things which are doubles of the same thing, are equal to one another.
7. Things which are halves of the same thing, are equal to one another.
8. Two magnitudes, lines, surfaces, or solids, are equal, if, when applied to each other, they coincide throughout their whole extent. They then fill the same space.
9. The whole is greater than any of its parts.
10. The whole is equal to the sum of all its parts.
11. All right angles are equal to one another.
12. "Two straight lines which intersect one another, cannot be both "rallel to the same straight line." | <urn:uuid:93162a17-cace-47ba-b91c-b4db6bfffdc3> | CC-MAIN-2023-06 | https://books.google.co.ve/books?id=0XowAQAAMAAJ&pg=PA5&focus=viewport&vq=angle+BAC&dq=related:ISBN8474916712&lr=&output=html_text | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.955254 | 1,821 | 3.65625 | 4 |
What is Lent?
Reprinted from the Episcopal Church Library
Early Christians observed “a season of penitence and fasting” in preparation for the Paschal feast, or Pascha (BCP, pp. 264-265). The season now known as Lent (from an Old English word meaning “spring,” the time of lengthening days) has a long history. Originally, in places where Pascha was celebrated on a Sunday, the Paschal feast followed a fast of up to two days. In the third century this fast was lengthened to six days. Eventually this fast became attached to, or overlapped, another fast of forty days, in imitation of Christ’s fasting in the wilderness. The forty-day fast was especially important for converts to the faith who were preparing for baptism, and for those guilty of notorious sins who were being restored to the Christian assembly. In the western church the forty days of Lent extend from Ash Wednesday through Holy Saturday, omitting Sundays. The last three days of Lent are the sacred Triduum of Maundy Thursday, Good Friday, and Holy Saturday. Today Lent has reacquired its significance as the final preparation of adult candidates for baptism. Joining with them, all Christians are invited “to the observance of a holy Lent, by self-examination and repentance; by prayer, fasting, and self-denial; and by reading and meditating on God’s holy Word” (BCP, p. 265).
Satellites Are Exposing Methane Leaks, and They've Found a Huge One That's Raising Eyebrows
The first satellite designed to monitor the planet’s methane leaks is definitely doing its job: a little-known gas well accident in Ohio is reportedly one of the largest methane leaks ever recorded.
Methane is the greenhouse gas that almost no one seems to talk about, yet per ton it is far more potent than CO2. Scientists also expect that as temperatures rise, the relative increase in methane emissions from environmental sources affected by climate change will outpace that of carbon dioxide.
Why Do We Care About Methane?
Methane emissions come from a variety of sources, almost all of which are amplified or exacerbated by climate change and human activity. These sources include melting permafrost, microorganisms in freshwater systems, gas leaks from oil, gas, and mining sites (among other forms of fossil fuel production), agriculture and livestock farming, landfills and waste, biomass burning, and rice cultivation, to name a few.
Like carbon dioxide, methane occurs naturally and has natural cycles within the atmosphere. Climate change, however, results from overloading the atmosphere with these compounds faster than natural systems can absorb them: humans are putting too much methane into the atmosphere too quickly, and the Earth can't keep up.
One major contributor to the methane problem is leaky and unregulated machinery at fossil fuel sites. When burned for electricity, natural gas is cleaner than coal, producing about half the carbon dioxide. But methane that escapes into the atmosphere unburned can warm the planet more than 80 times as much as the same amount of carbon dioxide over a 20-year period.
A Dutch-American team of scientists recently published a study in the Proceedings of the National Academy of Sciences describing a new space-based technology that detects methane leaks from oil and gas sites in particular.
“We’re entering a new era. With a single observation, a single overpass, we’re able to see plumes of methane coming from large emission sources,” said Ilse Aben, an expert in satellite remote sensing and one of the authors of the new research. “That’s something totally new that we were previously not able to do from space.”
But this technology does more than detect methane: it has already reinforced the view that "methane emissions from oil installations are far more widespread than previously thought," according to a New York Times article.
It is responsible for pointing to major leaks that are a threat to the environment and public health, and it is a good indicator of just how frequently these leaks are happening around the world.
Ohio Site Blowout
One specific event last year in Ohio is causing some major concerns. An accident in February 2018 at a natural gas well run by an Exxon Mobil subsidiary in Belmont County, Ohio, released more methane than the entire oil and gas industries of many nations do in a single year, the research team found. The episode was so bad that about 100 residents within a one-mile radius had to evacuate their homes while workers scrambled to plug the well.
The Exxon subsidiary, XTO Energy, said it could not immediately determine how much gas had leaked at the time. However, the European Space Agency had just launched a satellite with a new monitoring instrument called Tropomi, designed to collect more accurate methane measurements.
The group decided to investigate whether Tropomi could gather an accurate measurement of the Ohio accident. Methane leaks were not a new topic in the natural gas world, and growing scrutiny of the reckless leakage of this colorless, odorless gas made the researchers all the more curious to look into the incident. One New York Times article dives into the growing "leak problem" within the natural gas industry.
The satellite’s measurements of the Ohio site showed that in the 20 days it took for Exxon to plug the well, about 120 metric tons of methane an hour were released. That amounted to twice the rate of the largest known methane leak in the United States, from an oil and gas storage facility in Aliso Canyon, California, in 2015 (though that event lasted for longer and had higher overall emissions).
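For a sense of scale, the figures quoted above can be multiplied out. The short Python sketch below is editorial arithmetic only; the leak rate and duration are the article's reported numbers, and the factor of 80 is the article's own 20-year warming comparison, used here as a lower bound.

```python
# Back-of-the-envelope check of the Ohio blowout figures reported above.
LEAK_RATE_T_PER_H = 120   # metric tons of methane per hour (reported)
DURATION_DAYS = 20        # days until the well was plugged (reported)
GWP20_LOWER_BOUND = 80    # CH4 vs CO2 over 20 years ("more than 80 times")

total_methane_t = LEAK_RATE_T_PER_H * 24 * DURATION_DAYS
co2_equivalent_t = total_methane_t * GWP20_LOWER_BOUND

print(f"Methane released: ~{total_methane_t:,.0f} t")          # ~57,600 t
print(f"20-year CO2-equivalent: ~{co2_equivalent_t:,.0f} t")   # ~4.6 million t
```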
Beyond the havoc the Ohio blowout wreaked on methane emission levels and public health, researchers said it was also a strong indicator that other large leaks could be going undetected.
“When I started working on methane, now about a decade ago, the standard line was: ‘We’ve got it under control. We’re managing it,’” said Dr. Steven Hamburg, chief scientist at the Environmental Defense Fund. “But in fact, they didn’t have the data. They didn’t have it under control, because they didn’t understand what was actually happening. And you can’t manage what you don’t measure.”
Exxon questions whether the satellite readings accurately measure the methane released: spokesperson Casey Norton said the company's own scientists reviewed images and pressure readings from the well and arrived at a smaller estimate of the blowout's emissions. Exxon is currently in touch with the satellite researchers to "talk further to understand that discrepancy and see if there's anything we can learn."
Methane leaks are actually somewhat common, and they pose a threat to both the climate and human health. Miranda Leppla, head of energy policy at the Ohio Environmental Council, said there had been complaints about health issues among the Ohio residents closest to the well, including irritation, dizziness, and breathing problems.
The new satellite collects tens of millions of data points every day, and scientists said a critical task is sifting through them more quickly to identify methane hot spots on Earth. Studies of US oil fields alone have indicated that a small number of high-emission sites are responsible for the bulk of methane releases.
Complicating matters further, methane gas is essentially invisible, so scientists need expensive aircraft and infrared cameras to make it visible. Last week, The New York Times shared a visual investigation that used airborne measurement equipment and advanced infrared cameras to expose six "super emitters" in a West Texas oil field.
In one case, researchers noticed a large leak from a natural gas compressor station in Turkmenistan, in Central Asia. They estimated the emissions were dangerously high, and the leak was stopped after they raised alarms through diplomatic channels.
“That’s the strength of satellites. We can look almost everywhere in the world,” said Dr. Aben, a senior scientist at the Dutch space institute in Utrecht and an author on both papers.
This satellite technology is not foolproof, though. Satellites cannot see beneath clouds, for example, and scientists must do complex calculations to account for the background methane that already exists in the Earth's atmosphere.
Still, these complications do not diminish the value of the methane-detection technology scientists are using. Like nearly all technology, it can always be improved.
There is still a lot more to explore about methane in the atmosphere, its potential contributions and risks, and how common the silent, undetected leaks really are. After all, there are huge environmental and public health consequences to consider.
Dr. Hamburg of the Environmental Defense Fund agrees:
“Right now, you have one-off reports, but we have no estimate globally of how frequently these things happen. Is this a once a year kind of event? Once a week? Once a day? Knowing that will make a big difference in trying to fully understand what the aggregate emissions are from oil and gas.” | <urn:uuid:7d11acdd-16d8-4e39-aff1-a131765dc5d8> | CC-MAIN-2023-06 | https://eponline.com/articles/2019/12/20/satellites-are-exposing-methane-leaks-and-it-found-a-huge-one-thats-raising-eyebrows.aspx?admgarea=Features | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.956752 | 1,681 | 3.671875 | 4 |
Learning micro-teaching is important for every student pursuing a B.Ed. or D.El.Ed., because micro-teaching makes teacher education programs scientific, effective, and meaningful. That is why micro-teaching is practiced in every teacher training institution. To practice micro-teaching you have to prepare a micro lesson plan for the particular subject, which is not an easy task. So here is a model micro-teaching lesson plan for the Skill of Fluency in Questioning in Mathematics, in PDF format.
How to Make a Micro-Teaching Lesson Plan for the Skill of Fluency in Questioning for B.Ed.
There are 10 major skills of micro-teaching practiced in Teacher Training Institutions or B.Ed. Colleges. The skill of fluency in questioning is one of them. Here in this post, we will discuss how to make a proper micro lesson plan of mathematics for the skill of fluency in questioning.
Micro-teaching is practiced in the third semester of the B.Ed. program under Dibrugarh University. This may vary from institution to institution, but the general purpose of micro-teaching is the same everywhere.
You have to make a micro lesson plan for every subject you pursue in your B.Ed. course; that means preparing a micro lesson for each subject for all ten skills. In this post, however, you will only learn about making a micro lesson plan for the skill of fluency in questioning in the pedagogy of mathematics.
Components of The Skill of Fluency in Questioning:
Each skill has a certain number of components, and every component indicates an activity you should perform while practicing that skill. The skill of fluency in questioning has eight components.
Before you start making your mathematics micro lesson plan for this skill, you should understand the meaning of every component, and it also helps to know the purpose of the skill clearly.
Remember to use these components in the proper places in your micro lesson plan; otherwise, your lesson plan may be rejected. The model in this post shows how a proper micro lesson plan is put together.
PDF of Model Micro Lesson Plan For Fluency in Questioning for Pedagogy of Mathematics
As mentioned above, you have to use the components of fluency in questioning in the proper places in your micro plan: each question should match the component you list on the right side of the plan. Please have a look at our model micro plan for the skill of fluency in questioning in mathematics (maths-Fluency-in-Questioning PDF) to understand better.
By observing it carefully, you will easily see how the components are used in a micro lesson plan. Keep in mind that you have to perform your micro-teaching according to the lesson plan; that is, the teacher's activities must follow the components.
The topic of the above mathematics micro lesson plan is "Triangle" from Class VII. You can choose your own topic from any class, or simply use our micro lesson plan for your final submission.
Please note that this micro-teaching lesson plan of mathematics for the skill of fluency in questioning is based on the format provided by Dibrugarh University. Your institution's format may differ slightly, so please check before your final submission.
Radio propagation beacon
A radio propagation beacon is a radio beacon, which is mainly used for investigating the propagation of radio signals. Currently most radio propagation beacons use amateur radio frequencies. They can be found on HF, VHF, UHF, and microwave frequencies. Microwave beacons are also used as signal sources to test and calibrate antennas and receivers. Andy Talbot, G4JNT, gives the following definition for beacons licensed in the Amateur Radio service: A station in the Amateur Service or Amateur Satellite Service that autonomously transmits in a fixed format, which may include repeated data or information, for the study of propagation, determination of frequency or bearing, or for other experimental purposes.
The earliest record of radio propagation beacons goes back to World War II, when the German military operated propagation beacons on wavelengths of approximately 80 m and 10 m. Many propagation beacons were installed during the International Geophysical Year 1957-1958. These included amateur radio beacons OZ7IGY and GB3IGY (later GB3RAL) on 144 MHz, which are still operational, as well as the now defunct DM3IGY in East Germany on 28001 kHz. (19)
The majority of propagation beacons operate in continuous wave (CW or A1A) and transmit their identification (callsign and location) in Morse code. Some of them send long dashes (sometimes at varying power levels) to facilitate signal strength measurement. A small number of beacons transmit Morse code by frequency shift keying (F1A). A few beacons transmit signals in digital modulation modes, like radioteletype (F1B) and PSK31 (G1B).
Most beacons consist of a simple digital keyer, based on discrete digital electronics or a microcontroller, and a low power transmitter or transceiver. The FT-897, a budget HF transceiver produced by Yaesu/Vertex, has a programmable beacon mode and is used in some temporary propagation beacon installations. Recently K6HX published a versatile Morse code keyer design based on the popular Arduino microcontroller platform.
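A beacon keyer reduces to a timing problem. The following Python sketch is illustrative only: it is not the K6HX or any other cited design, and the abbreviated Morse table and sample identification text are placeholders chosen to keep the example short.

```python
# Illustrative Morse beacon keyer timing (not any of the cited designs).
# Timing follows the standard PARIS convention: dit = 1.2 / WPM seconds;
# a dah is 3 dits, intra-letter gap 1 dit, inter-letter 3, inter-word 7.
MORSE = {
    "V": "...-", "D": "-..", "E": ".", "O": "---", "Z": "--..",
    "I": "..", "G": "--.", "Y": "-.--", "7": "--...",
}  # extend with the rest of the alphabet/digits as needed

def key_sequence(text: str, wpm: int = 12):
    """Yield (key_down, duration_s) pairs for a beacon identification."""
    dit = 1.2 / wpm
    words = text.upper().split()
    for w, word in enumerate(words):
        for l, letter in enumerate(word):
            for s, symbol in enumerate(MORSE[letter]):
                if s:
                    yield (False, dit)                      # intra-letter gap
                yield (True, dit if symbol == "." else 3 * dit)
            if l < len(word) - 1:
                yield (False, 3 * dit)                      # inter-letter gap
        if w < len(words) - 1:
            yield (False, 7 * dit)                          # inter-word gap

# Example: drive a transmitter's key line (here just printed).
for down, secs in key_sequence("VVV DE OZ7IGY"):
    print("DOWN" if down else "up  ", round(secs, 3))
```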
Antennas are usually types with low directivity. There are exceptions, however, such as high-power beacons with directional antennas set up specifically for transatlantic VHF propagation experiments.
160 meters beacons
The IARU Region 2 (North and South America) bandplan reserves the range 1999 kHz to 2000 kHz for propagation beacons.
60 meters beacons
In addition to the DARC and RSGB beacon projects on 5195 and 5290 kHz (see below), Eddie Bellerby of UDXF discovered in March 2011 a new CW beacon on 5206 kHz, sending LX0HF, presumably from Luxembourg. Further intelligence indicates that the beacon is operated by Philippe LX2A/LX7I of the Luxembourg Amateur Radio Society. Two more European beacons are listed on 5 MHz: OV1BCN on 5290 kHz, operated by OZ1FJB, and OK1IF on 5258.5 kHz from the Czech Republic, though their current status is unclear.
|Callsign||Frequency||Locator||Notes|
|OK1IF||5258.5 kHz||JO40HG||Inactive|
|OV1BCN||5290.0 kHz||JO55SI||Op OZ1FJB|
|SZ1SV||5398.5 kHz||KM17UX||Op SV1IW & SV1JG. Inactive.|
30 meters beacons
|IK3NWX||10137.3 kHz||JN55VF||nr Monselice, PD 15m asl|
10 meters beacons
Most HF radio propagation beacons are found in the 10 meters (28 MHz) frequency band, where they are good indicators of Sporadic E ionospheric propagation. According to IARU bandplans, the following 28 MHz frequencies are allocated to radio propagation beacons:
|IARU Region||Beacon allocations|
40 MHz beacons
- The first radio propagation beacon on 40 MHz is OZ7IGY in Jystrup, Denmark (JO55WM), which transmits on 40021 kHz. Transmitted power is 22 W to a dipole antenna. Modulation is F1A keying (frequency-shift keying), with a shift of 250 Hz.
- The Radio Society of Great Britain has a license for beacon transmissions from GB3RAL (Didcot, UK) on 40050 & 60050 kHz.
6 meters (50 MHz) beacons
In the 6 meters (50 MHz) band, beacons operate in the lower part of the band, in the range 50000 kHz to 50080 kHz. The ARRL bandplan recommends 50060 to 50080 kHz for beacons in the United States. Due to unpredictable and intermittent long distance propagation, usually achieved by a combination of ionospheric conditions, beacons are very important in providing early warning for 50 MHz openings.
4 meters (70 MHz) beacons
General beacon operations
Numerous beacons have operated on 70 MHz in recent years. Their main purpose is to detect the relatively rare and extreme Es (sporadic E) openings, which exceed 70 MHz.
There is no definite international beacon allocation, because countries have different amateur radio allocations in this band. Generally beacons operate near the bottom end (70.000-70.100 MHz) (11)(12). Most respect the RSGB bandplan, staying below 70.050 MHz.
- 70.000-70.050 MHz: UK beacon allocation, including personal beacons on 70.030 MHz
Special beacon allocations
- USA: 70.005 MHz is allocated to the WE9XFT beacon. It transmits from Bedford, VA, under an FCC experimental license issued to Brian Justin, WA1ZMS, with a power of 3 kW. In 2012 the beacon is due to transmit from the same location under the new callsign WF9XRU.
- Austria: 70.045 MHz is allocated to the OE5QL beacon.
- Hong Kong: 71.757 MHz is allocated to the VR2FOUR beacon.
Beacons on 144 MHz and higher frequencies are mainly used to identify tropospheric radio propagation openings. It is not uncommon for VHF and UHF beacons to use directional antennas. Frequency allocations for beacons on VHF and UHF bands vary widely in different ITU/IARU regions and countries.
|Band||Beacon Sub-band (MHz)|
|IARU R1||IARU R2||IARU R3|
|33 cm||N/A||Varies Locally||N/A|
The current allocation in the United Kingdom, which also reflects IARU Region 1 recommendations, is the following:
|Band||Beacon allocation (kHz)|
ON0EME moon beacon
A beacon specifically for earth-moon-earth (EME or "moonbounce") reception became operational in 2012 in Belgium. The beacon uses call sign ON0EME and transmits on 1296.0 MHz with a very high power of 1000 kW ERP. The antenna is a solid parabolic reflector with a diameter of 3.7 m.
SHF and microwave beacons
In addition to identifying propagation, microwave beacons are also used as signal sources to test and calibrate antennas and receivers. SHF beacons are not as common as beacons on the lower bands, and beacons above the 3 cm (10 GHz) band are unusual.
|Band||Beacon Sub-band (MHz)|
|IARU R1||IARU R2||IARU R3|
|1.2 cm||Beacons are rare|
Optical and infrared beacons
Recently some groups of radio amateurs, especially in Britain, have been experimenting with two-way communications on optical wavelengths. This activity has led to the design and installation of a few beacons operating on optical wavelengths. These beacons transmit modulated light using high intensity LEDs and are used mainly for equipment setting and calibration. An interesting example is the optical beacon located at GB3CAM (Wyton, UK) operating at 628 nm.
License-free experimental beacons
These are extremely low power experimental beacons which operate legally without a license on specific bands, which are reserved for very short range radio transmissions or for industrial, scientific and medical devices (ISM) and in which a limited level of radiated RF energy is allowed. They are operated as radio propagation experiments by radio amateurs and other radio hobbyists.
|Type||Frequencies||Countries||FCC Part 15 rules|
|LowFER||160-190 kHz||USA, Canada||§ 15.217|
|MedFER||510 & 1704 kHz||USA, Canada||§ 15.219|
|HiFER||13553-13567 kHz||USA, Canada||§ 15.225|
|49ers|| 49846 kHz|
Most radio propagation beacons are operated by individual radio amateurs or amateur radio societies and clubs. As a result, there are frequent additions and deletions to the lists of beacons. There are, however a few major projects coordinated by organizations like the International Telecommunications Union and the International Amateur Radio Union.
IARU Beacon Project
The International Beacon Project (IBP), which is coordinated by the Northern California DX Foundation (NCDXF) and the International Amateur Radio Union (IARU), consists of 18 HF propagation beacons worldwide, which transmit in turns on 14100 kHz, 18110 kHz, 21150 kHz, 24930 kHz, and 28200 kHz. The IARU/NCDXF beacons follow the schedule below, which repeats every 3 minutes:
|Slot||DXCC entity||Call||Location||Latitude||Longitude||Grid Sq||14100||18110||21150||24930||28200||Operator|
|01||United Nations||4U1UN||New York City||40º 45' N||73º 58' W||FN3ØAS||00:00||00:10||00:20||00:30||00:40||UNRC|
|02||Canada||VE8AT||Eureka, Nunavut||79º 59' N||85º 57' W||EQ79AX||00:10||00:20||00:30||00:40||00:50||RAC|
|03||United States||W6WX||Mt. Umunhum||37º 09' N||121º 54' W||CM97BD||00:20||00:30||00:40||00:50||01:00||NCDXF|
|04||Hawaii||KH6WO||Laie||21º 38' N||157º 55' W||BL11AP||00:30||00:40||00:50||01:00||01:10||(Off)|
|05||New Zealand||ZL6B||Masterton||41º 03' S||175º 36' E||RE78TW||00:40||00:50||01:00||01:10||01:20||NZART|
|06||Australia||VK6RBP||Rolystone||32º 06' S||116º 03' E||OF87AV||00:50||01:00||01:10||01:20||01:30||WIA|
|07||Japan||JA2IGY||Mt. Asama||34º 27' N||136º 47' E||PM84JK||01:00||01:10||01:20||01:30||01:40||JARL|
|08||Russia||RR9O||Novosibirsk||54º 59' N||82º 54' E||NO14KX||01:10||01:20||01:30||01:40||01:50||SRR|
|09||Hong Kong||VR2B||Hong Kong||22º 16' N||114º 09' E||OL72BG||01:20||01:30||01:40||01:50||02:00||HARTS|
|10||Sri Lanka||4S7B||Colombo||6º 6' N||80º 13' E||NJ06CC||01:30||01:40||01:50||02:00||02:10||RSSL|
|11||South Africa||ZS6DN||Pretoria||25º 54' S||28º 16' E||KG44DC||01:40||01:50||02:00||02:10||02:20||ZS6DN|
|12||Kenya||5Z4B||Kariobangi||1º 15' S||36º 53' E||KI88KS||01:50||02:00||02:10||02:20||02:30||ARSC|
|13||Israel||4X6TU||Tel Aviv||32º 03' N||34º 46' E||KM72JB||02:00||02:10||02:20||02:30||02:40||IARC|
|14||Finland||OH2B||Lohja||60º 19' N||24º 50' E||KP2Ø||02:10||02:20||02:30||02:40||02:50||SRAL|
|15||Madeira||CS3B||Santo da Serra||32º 43' N||16º 48' W||IM12OR||02:20||02:30||02:40||02:50||00:00||ARRM|
|16||Argentina||LU4AA||Buenos Aires||34º 37' S||58º 21' W||GFØ5TJ||02:30||02:40||02:50||00:00||00:10||ARC|
|17||Peru||OA4B||Lima||12º 04' S||76º 57' W||FH17MW||02:40||02:50||00:00||00:10||00:20||RCP|
|18||Venezuela||YV5B||Caracas||10º 25' N||66º 51' W||FK6ØNJ||02:50||00:00||00:10||00:20||00:30||RCV|
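Because the cycle is strictly time-driven, a listener can compute which beacon should be audible without consulting the table. The Python sketch below is an editorial illustration of the arithmetic (18 slots of 10 seconds repeating every 3 minutes), with the slot order taken from the table above; it is not official IBP software.

```python
import datetime

CALLS = ["4U1UN", "VE8AT", "W6WX", "KH6WO", "ZL6B", "VK6RBP", "JA2IGY",
         "RR9O", "VR2B", "4S7B", "ZS6DN", "5Z4B", "4X6TU", "OH2B",
         "CS3B", "LU4AA", "OA4B", "YV5B"]          # slot order from the table
BANDS_KHZ = [14100, 18110, 21150, 24930, 28200]   # each beacon steps upward

def beacon_on(freq_khz: int, when: datetime.datetime) -> str:
    """Which IBP beacon is (nominally) transmitting on freq_khz at `when` UTC."""
    f = BANDS_KHZ.index(freq_khz)
    seconds = (when.minute * 60 + when.second) % 180   # 3-minute cycle
    slot = seconds // 10                               # 18 slots of 10 s
    return CALLS[(slot - f) % 18]

# Example: at hh:00:00 UTC, 4U1UN starts the cycle on 14100 kHz,
# then moves up to 18110 kHz for the next 10-second slot.
print(beacon_on(14100, datetime.datetime(2023, 1, 1, 0, 0, 0)))   # 4U1UN
print(beacon_on(18110, datetime.datetime(2023, 1, 1, 0, 0, 10)))  # 4U1UN
```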
The original NCDXF/IARU beacon project, coordinated by John W6ISQ, consisted of nine 100 W beacons which operated only on 14100 kHz on a coordinated 10 minute sequence. The beacons used to send a longer callup sequence, like "QST DE 4U1UN/B BEACON" followed by dashes with 100 W, 10 W, 1 W, and 100 mW, finally ending with "4U1UN/B SK". The original beacons were 4U1UN/B, W6WX/B, KH6O/B, JA2IGY, 4X6TU, OH2B, CT3B, ZS6DN and LU4AA. This network evolved into its current format with 18 beacons on five frequencies around 1999.(15) The current beacons consist of a Kenwood TS-50 transceiver, a beacon controller, a vertical antenna and a GPS unit.
ITU sponsored beacons
As part of an International Telecommunications Union-funded project, radio propagation beacons were installed by national authorities at Sveio, Norway (callsign LN2A, 59.60420N - 5.291670E) and at Darwin, Australia (callsign VL8IPS, 12.60420S - 131.29200E). The beacons operated on frequencies 5471.5 kHz, 7871.5 kHz, 10408.5 kHz, 14396.5 kHz, and 20948.5 kHz.(6) (15) Since 2002, there have been no reception reports for these beacons and the relevant ITU web pages have been removed. (7) (20)
HF Field-Strength measurement campaign
For a number of years, ITU-R Study Group 3 has been promoting a world-wide HF field-strength measurement campaign, the impetus for which arose from WARC HFBC-87 and the request for improved accuracy in HF propagation prediction. At that time, the Study Group recognised that significant improvements in HF propagation prediction methods needed a substantial body of new measurement data and to that end, administrations and organisations were invited to participate in the measurement campaign, either by installing suitable transmitters or by collecting long-term data from appropriate receiving systems. The campaign is specified in Recommendation ITU-R P.845 'HF field-strength measurement' and comprises a world-wide network of transmitters and receivers using coded transmissions on pre-determined frequencies.
The reasons for the campaign and the continuing need for participation in it, are underlined in Resolution ITU-R 27 (HF field-strength measurement campaign). So far, regular transmissions are being provided by the Administrations of Australia and Norway. Details of the transmitter in Norway, operated by the Norwegian Telecommunications Authority and Telenor Broadcasting, are given below:
Radio Beacon LN2A
Administrations and organizations participating in the work of ITU-R are invited to consider the possibility of participating in the campaign, either through the provision of transmissions or by the collection of field strength measurement data, both in accordance with the specifications given in Recommendation ITU-R P.845. For further details on the campaign, including the availability of a suitable receiving system, please contact the ITU-R Counsellor for Study Group 3 (Dr. Kevin A. Hughes) at ITU Headquarters, in Geneva.
The Norwegian Telecommunications Authority and Telenor Broadcasting would be pleased to acknowledge reception reports of LN2A with a QSL card.
The contact address is:
Norwegian Telecommunications Authority (Att. AYO/TF)
Radio Beacon VL8IPS
This beacon was established by IPS Radio and Space Services in conjunction with the Royal Australian Navy.
- Identification signal (in Morse code): VL8IPS
- Location: Humpty Doo, near Darwin, Northern Territory, 12 deg 36 min S - 131 deg 16 min 51 sec E
- Hours of transmission: 24 hours per day
- Assigned frequencies: 5470 kHz, 7870 kHz, 10407 kHz, 14395 kHz and 20945 kHz
- Transmitter: Rockwell Collins HF-8022
- Transmitter power: approximately 2 kW on all frequencies
- Antenna: AEA 628D biconical monopole
- Mode: suppressed carrier SSB (USB & LSB) with the reference frequencies (suppressed carrier frequencies) 1225 Hz below the assigned frequencies, with the FSK "mark" 800 Hz above reference frequency and the FSK "space" 1650 Hz above the reference frequency.
- Signal duration and format: as specified in Recommendation ITU-R P.845; 4 min for each frequency, 20 min for all five frequencies according to the following schedule:
|Reference frequency (kHz)||Minutes after each hour|
|5470||00 - 20 - 40|
|7870||04 - 24 - 44|
|10407||08 - 28 - 48|
|14395||12 - 32 - 52|
|20945||16 - 36 - 56|
- Reception reports may be sent to [email protected]
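The 20-minute transmission cycle tabulated above can also be computed directly. This is an editorial Python sketch of the schedule arithmetic (4-minute slots, three passes per hour), not software associated with the beacon:

```python
# Scheduled VL8IPS reference frequency for a given minute past the hour,
# per the table above: 4 minutes per frequency, 20-minute repeating cycle.
FREQS_KHZ = [5470, 7870, 10407, 14395, 20945]

def vl8ips_freq_khz(minute: int) -> int:
    return FREQS_KHZ[(minute % 20) // 4]

print(vl8ips_freq_khz(9))    # 10407 (slot starting at minute 08)
print(vl8ips_freq_khz(44))   # 7870  (slot starting at minute 44)
```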
DARC beacon project
The Deutscher Amateur Radio Club (DARC) sponsors two beacons which transmit from Scheggerott, near Kiel (54.68750N - 9.791670E, JO44VQ). These beacons are DRA5 on 5195 kHz and DK0WCY on 10144 kHz. In addition to identification and location, every 10 minutes these beacons transmit solar, geomagnetic and ionospheric bulletins. Transmissions are in Morse code (CW) for aural reception, RTTY (45 baud 170 Hz at HH+10) and PSK31 (at HH+50). DK0WCY operates also a limited service beacon on 3579 kHz at 0720-0900 and 1600-1900 local time.
RSGB 5 MHz beacon project
The Radio Society of Great Britain (RSGB) operates three radio propagation beacons on 5290 kHz, which transmit in sequence, for one minute each, every 15 minutes. The project includes GB3RAL near Didcot (51.56250N - 1.291670W, IO91IN), GB3WES in Cumbria (54.56250N - 2.6250W, IO84QN) and GB3ORK in the Orkney Islands (59.02080N - 3.208330W, IO89JA).
Beacon GB3RAL, which is located at the Rutherford-Appleton Laboratory, also transmits continuously on 28215 kHz and on a number of low VHF frequencies (40050, 50053, 60053 and 70053 kHz).
A radio propagation beacon with callsign NAF was installed in 1983 at Cape Prince of Wales, AK. It transmitted both CW and FSK identification with 100 W to a three-band fan dipole on 5604, 11004 and 16804 kHz. The project, which included reception sites at Fairbanks, AK, Seattle, WA, State College, PA and San Diego, CA, was coordinated by the U.S. Naval Security Group Command; its purpose was to verify and calibrate HF propagation prediction software. (15) It is not known when the project was terminated.
Another propagation beacon was installed in 1991 at the Arctic Submarine Laboratory at Cape Prince of Wales, AK. The beacon operated on 25545 kHz and transmitted the Morse code letter "R". A reception facility existed at Fairbanks, AK, some 900 km away. The R beacon was used to study aurora and sporadic E events at high geographical latitudes.(18)
WSPRnet
This is a large-scale amateur radio propagation beacon project which uses the WSPR (Weak Signal Propagation Reporter) transmission scheme available with the WSJT software suite, created by Joe Taylor, K1JT. The loosely coordinated beacon transmitters and receivers, collectively known as the WSPRnet, report the real-time propagation characteristics of a number of frequency bands and geographical locations via the Internet. The WSPRnet website provides detailed propagation report databases and real-time graphical maps of propagation paths. The WSPR network operates on the following amateur radio frequencies (USB dial settings in kHz): 136.0, 502.4, 1836.6, 3592.6, 5287.2, 7038.6, 10138.7, 14095.6, 18104.6, 21094.6, 24924.6, 28124.6, 50293.0, 70028.6 and 144489.0.
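A side note on those dial settings: WSPR stations transmit within an audio passband above the USB dial frequency, so the actual RF frequency is slightly higher than the dial reading. The sketch below assumes the usual 1400-1600 Hz WSPR audio window, which is a convention of the mode rather than something stated in this article.

```python
# WSPR RF transmit frequency from a USB dial setting (convention: the
# audio tone falls in a 200 Hz window centred 1500 Hz above the dial).
def wspr_rf_khz(dial_khz: float, audio_hz: float = 1500.0) -> float:
    assert 1400.0 <= audio_hz <= 1600.0, "outside the WSPR audio window"
    return dial_khz + audio_hz / 1000.0

print(wspr_rf_khz(14095.6))   # 14097.1 kHz, centre of the 20 m WSPR segment
```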
The future of radio propagation beacons
It seems that international organizations, government departments and research institutes are no longer interested in HF radio propagation research, so in future propagation beacons are likely to be operated only as part of the amateur radio service.
A slow process is underway to supplement Morse code (CW) identification, which is mostly suited to aural reception, with digital modulation patterns. The RSGB beacons on 5290 kHz already transmit such a code for 30 seconds in each transmission. At the 2011 RSGB Convention, Bo Hansen, OZ2M, presented the case for adding machine-generated modulation to most radio propagation beacons in order to enable automatic monitoring.
Beacon timing is also being modernized: when several beacons share a frequency, they are synchronized by electronic clocks locked to GPS satellite transmissions.
Notes and references
- Andy Talbot, G4JNT: "Amateur Beacons", Radio User, ISSN 1748-8117, 3(5), pp.56-58 (May 2008).
- Andy Talbot, G4JNT: "Amateur Beacons", Radio User, ISSN 1748-8117, 3(8), pp. 30-33 (August 2008)
- New IARU Region 2 bandplan introduced in January 2008
- Amateur Radio UK VHF Bandplan, Great Yarmouth Radio Club
- International Beacon Project by the Northern California DX Foundation (2008)
- HF 0-20 MHz beacons
- ITU Resolution ITU-R 27/1993: HF Field-strength measurement campaign (PDF)
- Aurora beacon DKØWCY by Deutscher Amateur-Radio-Club e.V. (DARC), 2004.
- Pat Hawker, G3VA: "The DK0WCY/DRA5 Propagation Beacons", Technical Topics Scrapbook - All 50 years, Radio Society of Great Britain, ISBN 9781-9050-8639-9, pp. 98 (2008)
- Mike Willis, G0MJW: "The GB3RAL VHF Beacon cluster", RadCom, 84(04), Radio Society of Great Britain, pp. 65-59, April 2008.
- The Four meters website: 70 MHz beacon list
- The Four meters website: RSGB 4m bandplan
- Southgate Amateur Radio Club: Luxembourg 60m beacon LX0HF
- Luxembourg: Une balise sur 60m LX0HF Radioamateurs-Online, March 11, 2011.
- G. Jacobs, W3ASK, T.J. Cohen, N4XX and R.B. Rose, K6GKU: "The New Shortwave Propagation Handbook", CQ Communications, Inc., New York, ISBN 0-945016-11-8, pp. 5-17, 5-18. (1995).
- Stuart Wisher, G8CYW: "More adventures in optical communications", RadCom, 88(05), Radio Society of Great Britain, pp. 41, May 2012
- Joe Lynch, N6CL: "VHF Plus", CQ Amateur Radio", 88(07), pp. 81, July 2012.
- Rose, R., Hunsucker, R.D. and Lott G.K.: "Results from a year-long Auroral-E measurement campaign", Naval Command, Control and Ocean Surveillance Center, San Diego, CA, April 1993.
- Martin Harrison, G3USF: "Getting started in... beacons, part 1", RadCom, 89(02), Radio Society of Great Britain, pp. 22, February 2013.
- Radio beacon
- JG2XA: An HF Doppler investigation beacon project.
- Letter beacon
- High Frequency Beacon
Currently there are two regularly updated international beacon lists, compiled by Martin G3USF and Joost, ZS5S. Both lists are available on-line. An additional online list by WJ5O contains only 28 MHz (10 meter) beacons.
- Martin Harrison, G3USF: Worldwide List of HF Beacons.
- Martin Harrison, G3USF: Worldwide List of 50 MHz Beacons.
- Joost Schuitemaker, ZS5S: List of active HF Amateur Radio Beacons.
- Joost Schuitemaker, ZS5S: Additional HF Beacon Information.
- Joost Schuitemaker, ZS5S: Inactive HF Beacons.
- William H Hays, WJ5O: Ten meter propagation beacons
- IARU/NCDXF International Beacon Project
- Ken Reitz, KS4ZR: "Exploring the World of 10 Meter Beacons", Monitoring Times, May 2007, pages 14-16.
- R.Wilkinson, G6GVI, S.Cooper, GM4AFF, & B. Hansen, OZ2M: "The 70 MHz Beacon List", The Four Metres Website, 2008.
- John Jaminet, W3HMS and Charlie Heisler, K3VDB: "Building a beacon for 2401 MHz", CQ VHF, 10(3), CQ Communications, Inc, ISSN 1085-0708, pages 44–46, 2007.
- Andrew Talbot, G4JNT: Design and building of the 5 MHz beacons, GB3RAL, GB3WES and GB3ORK.
- Andrew Talbot, G4JNT: The Next Generation of Beacons for the 21st century (PPT format).
- UK Microwave Group (UKMuG): UK Amateur Radio & Microwave Beacons.
- GB3VHF – a beacon designed for the 21st Century
- IK0WRB beacon keyer, based on a PIC16F84 microcontroller.
- Aurora Beacon DK0WCY
- OV1BCN: a new HF propagation beacon on 5290.5 kHz.
- BEACONCLUSTER worldwide beacon maps by Richard Kaminski ON4CJU.
- K6HX beacon keyer using an Arduino microcontroller board.
- WB0RIO Morse Code Beacon Keyer: a beacon keyer based on CMOS digital components, by G. Forrest-Cook, WB0RIO (1996).
- Beacons a Bunch: software resources for monitoring IARU/NCDXF beacons, compiled by Jeff Dinkins, AC6V.
- Low power 628nm (red) light beacon, co-sited with GB3CAM beacons at Wyton
- ON0EME moon beacon status
FCC rules, §97.203 Beacon station.
- 10 Meter Amateur Radio Propagation CW Beacon Demo by KI7F (Youtube video).
- 10 Meter beacon equipment: Modified CB radio using a Freakin' Beacon controller (Youtube video).
- Arduino Morse Beacon Keyer by Mark VandeWettering K6HX (Youtube video).
This article contains textual material from Wikipedia (TM). Wikipedia texts are licensed under the Creative Commons Attribution-Share Alike license.
In short: you are free to distribute and modify the text as long as you attribute its author(s) or licensor(s). If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.
Wikipedia article: Radio_propagation_beacon
The Stone Age is a broad prehistoric period that covers a significant portion of human history. It is divided into three main periods: the Paleolithic, Mesolithic and Neolithic. The Paleolithic, or Old Stone Age, is the earliest of these and dates back to around 2.5 million years ago. It is characterized by the use of stone tools and the hunting and gathering lifestyle of early humans. The Mesolithic, or Middle Stone Age, saw the development of more sophisticated stone tools and the beginning of the practice of agriculture. The Neolithic, or New Stone Age, marks the emergence of settled communities, the development of pottery, and the domestication of animals.
During the Paleolithic period, early humans lived as nomadic hunter-gatherers. They created stone tools, such as hand axes, to aid in hunting and gathering food. They also created cave paintings, such as those found at Lascaux and Altamira, which depict animals, hunting scenes, and abstract symbols. The Paleolithic period is also known for the development of the first human societies and the emergence of language and culture.
The Mesolithic period, which lasted from around 10,000 BCE to 5,000 BCE, saw the development of more sophisticated stone tools, such as microliths, and the beginning of the practice of agriculture. People began to settle in one place and form communities, which led to the development of trade and the exchange of goods. The Mesolithic period also saw the emergence of new technologies, such as the bow and arrow, which greatly improved hunting efficiency.
The Neolithic period, which began around 5,000 BCE and lasted until around 2,000 BCE, marked the emergence of settled communities, the development of pottery, and the domestication of animals. This period also saw the rise of the first civilizations, such as those in Egypt and Mesopotamia, and the emergence of complex systems of government and religion.
The Stone Age was followed by the Bronze Age, which lasted from around 2,500 BCE to 1,200 BCE, and the Iron Age, which began around 1,200 BCE and lasted until around 600 BCE. During the Bronze Age, humans began to use bronze, an alloy of copper and tin, to make tools and weapons. This led to the development of more advanced societies, with the emergence of organized armies and large-scale construction projects. The Iron Age saw the widespread use of iron, which was stronger and more durable than bronze, and led to even more advanced technologies, such as plows, which greatly improved agricultural productivity.
The Stone Age, Bronze Age, and Iron Age are considered the three major stages of human development and are integral to understanding human history. The Stone Age is particularly important as it marks the beginning of human civilization and the emergence of culture, language and technology. The advancements made during the Stone Age, such as the development of stone tools and the invention of agriculture, paved the way for the great civilizations and empires of the Bronze and Iron Ages.
Today, we can visit many Stone Age sites around the world, such as Skara Brae in Scotland and the Carnac stones in France, to get a glimpse of the lives of our ancient ancestors. We can also see replicas of the famous cave paintings of the Lascaux and Chauvet caves, which let us experience these masterpieces without damaging the originals.
Knowledge and understanding of the Stone Age is also important for the UPSC examination. It is important to know the different Stone Age tools, paintings, and materials, how they affected society, and how they evolved in later periods.
In addition to the historical significance, the study of the Stone Age also provides insight into the human condition and our relationship with the natural world. It teaches us about the ingenuity and resourcefulness of our ancestors and the challenges they faced in survival. The study of the Stone Age also highlights the importance of preserving and protecting our cultural heritage, as many of these sites and artifacts are at risk of damage or destruction due to natural causes or human intervention.
In conclusion, the Stone Age is an essential period in human history that laid the foundation for the great civilizations and empires of the past. It teaches us about the ingenuity and resourcefulness of our ancestors, as well as the importance of preserving and protecting our cultural heritage. The study of the Stone Age is also crucial for the UPSC examination, as it is an important subject in the Indian history syllabus, and a firm grasp of its key events and developments makes the period easy to learn and understand.
Type of facility
Conventional Hydro consists of a dam or diversion that diverts water to a powerhouse either at the base of the dam or further downstream via tunnels, conduits, and/or canals.
Pumped Storage is a method of generating power by moving water between an upper and a lower reservoir: lower-priced off-peak power is used to pump water to the upper reservoir, and power is then generated during times of high demand, when prices are higher. Pumped storage hydro (PSH) projects are either open-loop (connected to a naturally flowing water feature) or closed-loop (not continuously connected to a naturally flowing water feature).
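The pumping-versus-generating economics described above can be sketched with simple arithmetic. The following Python example is illustrative only; the prices, volume, head, and round-trip efficiency are assumed numbers, not data for any actual project.

```python
# Illustrative pumped-storage arbitrage for one pump/generate cycle.
RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def cycle_profit(volume_m3, head_m, eta_rt=0.78,
                 offpeak_price=30.0, peak_price=90.0):
    """Rough profit for one cycle.

    volume_m3 -- water moved between the reservoirs
    head_m    -- elevation difference between upper and lower reservoir
    eta_rt    -- assumed round-trip efficiency, split evenly here
                 between pumping and generating losses
    prices    -- assumed $/MWh for pumping (off-peak) and selling (peak)
    """
    potential_mwh = RHO * G * volume_m3 * head_m / 3.6e9  # joules -> MWh
    energy_bought = potential_mwh / (eta_rt ** 0.5)       # pumping losses
    energy_sold = potential_mwh * (eta_rt ** 0.5)         # turbine losses
    return energy_sold * peak_price - energy_bought * offpeak_price

# 1 million m^3 over a 300 m head, with the assumed prices above.
print(f"${cycle_profit(1_000_000, 300):,.0f} per cycle")
```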
Mode of hydropower operation
Canal/Conduit generation is smaller-scale generation that is often added to existing irrigation and water-supply canals. On an even smaller scale, microturbines can be added to water and wastewater lines.
Peaking (Load Following) generation follows demand to maximize revenue from the project. This form of generation requires storage capacity so that water can be saved during times of low power demand and used to generate later, when demand (and price) is higher. This type of operation can result in erratic and rapid flow fluctuations downstream of the powerhouse, as well as rapid fluctuations of reservoir levels. Intermediate peaking projects tend to have smaller reservoirs and occasional releases, and are operated to moderate the intensity of peaking for hydro generation.
Reregulating is the process of using a storage reservoir to capture erratic upstream flows created by peaking generation and then generating in a baseload fashion where the flows change gradually.
Run-of-river generation occurs where a project lacks storage capacity and needs to generate in real time using whatever inflow the project is receiving from upriver. This type of generation may be considered baseload generation as it occurs 24/7 and the quantity of generation is controlled by natural climate and weather cycles.
Run-of-river/Peaking generation is a hybrid of peaking and run-of-river when a project has limited storage capacity and is mostly operated in a run-of-river mode with the addition of limited peaking generation.
Run-of-river/Upstream Peaking generation is a project without storage capacity and its inflow and outflow is controlled by upstream power generation.
Juneteenth marks a seminal moment in American history. The celebration, which takes place on June 19, commemorates the emancipation of those who were enslaved in the U.S.
While chattel slavery’s end is often tied to President Abraham Lincoln’s declaration of the Emancipation Proclamation on Jan. 1, 1863, about 250,000 slaves in Texas didn’t learn of their freedom until June 19, 1865, when Union General Gordon Granger arrived with an army to liberate them and announced that “[the] people of Texas are informed that, in accordance with a proclamation from the Executive of the United States, all slaves are free.” Since then, Juneteenth (short for June 19th) has become known as the country’s second independence day.
Liberation is core to the Innocence Project’s mission. In commemoration of Juneteenth, we offer five ideas for being a better ally in the struggle for racial justice and criminal justice reform, drawn from our five pillars of work: Exonerate, Improve, Reform, Support, and Educate.
The Innocence Project works to free and exonerate wrongfully convicted people and to reform the systems that lead to these injustices.
Similarly, it’s important for allies to proactively engage in breaking down systemic barriers that prevent vulnerable people from receiving fair and equal treatment by the country’s policing systems and criminal legal institutions.
Racial disparities within our criminal justice systems — from the stark difference in the arrests of Black and white people to the disproportionate sentencing of Black and brown people — highlight the extent of the discrimination. According to an ABC News analysis of 2018 arrest data voluntarily reported to the FBI by thousands of police departments across the country, for instance, Black people were, on average, five times more likely than white people to be arrested. Furthermore, a study published by the Proceedings of the National Academy of Sciences found that Black women and men, along with American Indian and Alaska Native women and men, were more likely than white women and men to be killed by police. Black men, for example, were 2.5 times more likely to be killed by police than white men, while American Indian women were between 1.1 times and 2.1 times more likely to be killed by police than white women.
The injustice has also manifested in the treatment of Black and brown defendants in court. According to a 2017 report from the United States Sentencing Commission, for example, Black men convicted of crimes continued to receive longer sentences than similarly situated white men. Black and Latinx people are also more likely than white people to be denied bail, face a higher cash bail, and be detained as a result of not being able to post bail, the Sentencing Project notes. More concerningly, Black people are the most likely to be wrongfully convicted for crimes they didn’t commit, according to the National Registry of Exonerations.
Addressing some of these issues begins with holding those in power accountable and making sure that people who are pushing for fair and equitable criminal justice systems are in those positions of power. Raising concerns at community board meetings or taking part in civilian review boards establishes a level of police oversight. Moreover, voting for candidates with a commitment to reform who are looking to fill local judiciary, prosecutor, or attorney general positions can help ensure that criminal legal systems are guided by those who better understand the impact of the law on marginalized communities.
In addition to providing the wrongly convicted with legal representation, the Innocence Project works through legal systems to improve the law and its practice. Attorneys take a range of approaches to help establish legal precedent in areas that are prone to inaccuracies (such as the use of unreliable forensic evidence and eyewitness testimony). Addressing these root causes of injustices is part of the organization’s larger endeavor to improve the systems for everyone and particularly those who are most vulnerable.
In the fight to improve criminal justice systems on a broader scale, allies need to first understand that the perception and treatment of arrested individuals too often depends on how they look and where they come from. For instance, people who live in neighborhoods with high levels of punitive police surveillance and fewer financial resources (many of whom are working-class people of color) are more likely to be arrested multiple times and experience racism. Those who live in higher-income neighborhoods and can hire their own attorneys, on the other hand, are more likely to be given a second chance by law enforcement.
Being a powerful advocate for criminal justice reform involves learning about racial and class biases. Allies who want to improve criminal justice systems might start by strengthening their understanding of how racial discrimination, classism, and the legal systems intersect in the U.S. and how they have, in turn, marginalized at-risk populations.
The Innocence Project engages in policy work, collaborating with Congress, state legislatures, and local officials to pass laws and policies that limit wrongful convictions. The work touches on issues including — but not limited to — police deception, misapplication of forensic science, proper compensation for exonerees, and access to post-conviction DNA testing. Reforming these issues through large-scale advocacy guarantees that everyone — not just the organization’s clients — are afforded a degree of justice.
Allies should consider how they can broaden their allyship to help those outside their personal circle. Recent racial violence across the U.S., alongside increasing political polarization, has highlighted a desperate need for reform in every facet of American society. Police murders of Black and brown people, for instance, have drawn attention to the lack of guidelines and laws that hold law enforcement accountable. In response, advocacy groups, together with local and state officials, have worked together to revamp police practices. In May, for example, Illinois passed legislation that banned the use of deceptive police tactics in the interrogation of minors. The law was rooted in the work and expertise of the Innocence Project, the Illinois Innocence Project, the Office of Cook County State’s Attorney Kim Foxx, and the Center on Wrongful Convictions at Northwestern University School of Law. Similar bills have also been introduced in New York and Oregon.
Pushing for national reform on issues like police and prosecutorial accountability — whether in the form of signing petitions, urging local officials to support a bill, or raising awareness through grassroots campaigns — benefits everyone.
The Innocence Project fights for the exoneration of its clients and provides support for exonerees who have spent many years behind bars and may struggle with rebuilding their lives upon their release. The organization’s social work department addresses exonerees’ needs on an individual basis, ranging from locating family numbers to finding housing.
One of the Innocence Project’s major efforts involves mandating restitution to freed, innocent individuals for the time they spent imprisoned. This year, for example, the organization helped lead efforts in Maryland and Idaho to compensate those who were wrongfully convicted at the state median household income (currently $85,000) and $70,000 respectively for each year of imprisonment. The new Idaho law also includes provisions that both provide restitution for pretrial detention and award wrongfully convicted people $25,000 for each year spent on parole or probation.
Supporting exonerees or backing organizations that provide the necessary resources to underserved populations is a critical role that all allies can play. This support, which can take many forms including fundraising or volunteer work, can go a long way in making criminal justice systems more equitable.
The work of the Innocence Project’s science and research team largely informs the organization’s reform efforts. Not only does the team push for a science-based evaluation of common forensic techniques and the inclusion of scientific evidence that may have been previously unavailable, it also provides resources on wrongful convictions to researchers and lawyers.
In allyship, education is critical as well. Being an effective ally consists of not only taking action but also proactively educating oneself. It is important to note that it is not the responsibility of those who have faced or are facing injustices in the criminal justice systems to educate allies about their experiences — allies should recognize the power they have to undertake that education themselves.
For those who are looking for a good place to start, consider one of the Innocence Project’s reading lists on wrongful convictions here. | <urn:uuid:05e03674-e8c0-4044-92b0-3ef5a251954d> | CC-MAIN-2023-06 | https://innocenceproject.org/on-juneteenth-here-are-5-ways-to-be-a-better-ally/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.959756 | 1,727 | 3.859375 | 4 |
Underwater Archaeologists May Have Discovered the Oldest Shipwreck in Lake Erie
After an ill-fated journey hauling boulders sank it, the Lake Serpent is at last ready to tell its story
The Lake Serpent, an eight-year-old, 47-foot schooner, left Cleveland in September 1829 for the 55-mile trip to the Lake Erie Islands. Upon arriving at the island rich with limestone, the ship’s crew collected a load of stone to return to Cleveland. (Four years later, the island would be bought by a pair of brothers, Datus and Irad Kelley. It’s been known as Kelleys Island since.)
The ship never made it back, one of thousands to sink on the Great Lakes; the bodies of Captain Ezra Wright and his brother Robert washed ashore in Lorain County, just west of Cleveland. The Lake Serpent was lost at the bottom of the lake, seemingly forever.
On Friday, however, the National Museum of the Great Lakes, located in nearby Toledo, announced that the Serpent may have been found, and it is believed to be the oldest-known shipwreck in Lake Erie.
The history of the Great Lakes is a microcosm of the history of the United States. Command of the Great Lakes was an important front in the War of 1812, and small outposts dotted around them grew into some of the nation’s biggest cities — Detroit, Chicago, Buffalo and Milwaukee. The lakes became relatively inexpensive methods to ship cargo, from taconite pellets from Minnesota’s Mesabi Iron Range to grain from America’s breadbasket.
Read more on smithsonianmag.
Soil seed bank under variegated thistle does not explain thistle dominance
Keywords: Variegated thistle, Silybum marianum, soil seed bank
Variegated thistle (Silybum marianum) is a large, spiny annual that often forms dense monospecific communities on dry ridges and sunny hillslopes. The owner of a typical Poverty Bay hill-country farm with persistent variegated-thistle infestations reported that winter applications of herbicide were ineffectual in the long term as more variegated thistles simply recolonised the sprayed sites. An absence of preferred species, particularly perennial ryegrass (Lolium perenne) and legumes (Lotus and Trifolium spp.), in the soil seed bank under dense thistle populations may explain the persistence of these monospecific populations. To test this hypothesis, soil samples were collected from a dense and sparse variegated-thistle population in each of seven paddocks and incubated in a glasshouse. Emerged seedlings were identified and counted. The incubation was repeated three times. Total soil seed numbers were similar under both the dense and sparse populations with similar numbers of preferred legumes under both. However, there were significantly more perennial ryegrass seeds under the dense variegated-thistle populations compared with the sparse ones. Domination of thistles in densely infested patches was not due to lack of preferred species, or indeed other weed species, in the soil seed bank.
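The dense-versus-sparse contrast described above is a paired design: one dense and one sparse site per paddock. Below is a minimal sketch, using invented seedling counts purely for illustration (the study’s actual data and analysis are not reproduced here), of how paired emergence counts might be compared.

```python
# A minimal sketch of a paired dense-vs-sparse comparison across paddocks.
# The counts are invented for illustration; they are not the study's data.
from scipy import stats

# Hypothetical emerged perennial-ryegrass seedlings per soil sample,
# one dense-thistle and one sparse-thistle site in each of seven paddocks.
dense  = [42, 35, 51, 28, 39, 46, 33]
sparse = [20, 18, 27, 15, 22, 25, 17]

t, p = stats.ttest_rel(dense, sparse)  # paired t-test across the paddocks
print(f"paired t = {t:.2f}, p = {p:.4f}")
```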
In news- A recent study published in Nature Climate Change notes that the ocean circulation known officially as the Atlantic Meridional Overturning Circulation (AMOC) is losing its stability.
What does the study say?
Its findings support the assessment that the AMOC decline is not just a fluctuation or a linear response to increasing temperatures but likely means the approaching of a critical threshold beyond which the circulation system could collapse.
About Atlantic Meridional Overturning Circulation (AMOC)-
- The AMOC is a large system of ocean currents.
- It is the Atlantic branch of the ocean conveyor belt or Thermohaline circulation (THC), and distributes heat and nutrients throughout the world’s ocean basins.
- AMOC carries warm surface waters from the tropics towards the Northern Hemisphere, where it cools and sinks.
- It then returns to the tropics and then to the South Atlantic as a bottom current.
- From there it is distributed to all ocean basins via the Antarctic circumpolar current.
Impact of AMOC’s collapse-
- Without a proper AMOC and Gulf Stream, Europe will be very cold.
- Gulf Stream, a part of the AMOC, is a warm current responsible for mild climate at the Eastern coast of North America as well as Europe.
- AMOC shutdown would cool the northern hemisphere and decrease rainfall over Europe.
- It can also have an effect on El Niño.
- AMOC collapse brings a prominent cooling over the northern North Atlantic and neighbouring areas.
- Sea ice increases over the Greenland-Iceland-Norwegian seas and to the south of Greenland, and a significant southward rain-belt migration over the tropical Atlantic.
- Freshwater from melting Greenland ice sheets and the Arctic region can make the circulation weaker because it is not as dense as saltwater and doesn’t sink to the bottom (a simple density sketch follows this list).
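A rough way to see why this freshening matters is a linearized equation of state for seawater, rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)). The sketch below uses typical textbook coefficients, not values taken from the studies discussed in this article.

```python
# A minimal sketch of why freshening weakens sinking: a simplified linear
# equation of state for seawater. Coefficients are typical textbook values.
RHO0  = 1027.0    # kg/m^3, reference density
ALPHA = 2.0e-4    # 1/degC, thermal expansion coefficient
BETA  = 7.6e-4    # 1/(g/kg), haline contraction coefficient
T0, S0 = 10.0, 35.0  # reference temperature (degC) and salinity (g/kg)

def density(T, S):
    """Linearized seawater density in kg/m^3."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

# Cold North Atlantic surface water, before and after dilution by meltwater:
print(density(T=5.0, S=35.0))  # salty -> about 1028.0 kg/m^3, sinks readily
print(density(T=5.0, S=33.0))  # fresher -> about 1026.5 kg/m^3, resists sinking
```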
Another study in February had revealed that the AMOC had been relatively stable until the late 19th century. With the end of the Little Ice Age in about 1850, the ocean currents began to decline, with a second, more drastic decline following since the mid-20th century.
Reasons for the slowdown of AMOC-
- Climate models have long predicted that global warming can cause a weakening of the major ocean systems of the world.
- In July 2021, researchers noted that part of the Arctic’s ice cover called the “Last Ice Area” had also begun to melt.
- The freshwater from the melting ice reduces the salinity and density of the water. Now, the water is unable to sink as it used to and weakens the AMOC flow.
- A 2019 study suggested that the Indian Ocean may also be contributing to the slowdown of the AMOC: as the Indian Ocean warms faster and faster, it generates additional precipitation.
- With so much precipitation in the Indian Ocean, there will be less precipitation in the Atlantic Ocean, leading to higher salinity in the waters of the tropical portion of the Atlantic.
- This saltier water in the Atlantic, as it comes north via AMOC, will get cold much quicker than usual and sink faster.
Tonight: Our Moon is at its perigee for this lunar cycle (Image: Sophie Allen)
The moon is currently in its waxing crescent phase, meaning only 9% of the moon’s near side is visible tonight. But in the coming weeks it will move toward the full moon phase, while also moving farther away due to its elliptical orbit.
Tonight, at 01:31 GMT, the moon is at its perigee for this month. This means that the moon is at its closest point to the Earth. The moon tonight is about 362,826 km away, which sounds far, but compared to its median distance for this month (about 383,874 km), it is quite close. During the next lunar cycle, the moon will be at its perigee on December 24 at 08:27 GMT and will be 358,270 km from Earth.
Perigee is important for astronomical observation because lunar features such as the maria are slightly easier to see. Observers with binoculars could even see some of the larger craters, such as Tycho and Copernicus. These are among the main features of the lunar surface, and their history is incredibly interesting. Viewers with telescopes may not notice much more detail than usual, as telescopes are already extremely powerful, but some may see features along the dark side of the terminator (the apparent line that separates the light and dark halves).
Like most celestial objects, the moon doesn’t have a perfectly circular orbit. Instead, it has an elliptical orbit. This means that there is a point where the moon is closest to the Earth, and this point is called perigee.
Contrary to popular belief, the moon does not orbit the Earth 12 times a year (once a calendar month); its orbital period is actually 27.3 days. Thus, the moon orbits the Earth about 13 times a year. This 27.3-day period is a sidereal month. The moon is also tidally locked with the Earth, which puts it in synchronous rotation: the moon’s orbital period equals its rotation period, the time it takes the moon to rotate 360 degrees on its axis. It is because of this tidal locking that we always see the near side of the moon from Earth.
This does not mean that we can only see 50% of the lunar surface. Lunar libration allows us to see about 59% of the lunar surface per cycle. Lunar libration occurs in latitude and longitude, which is why, during phase animations, the moon appears to nod and shake its “head” at the same time. Lunar libration in latitude is caused by the inclination of the moon’s orbital plane, which is tilted 5.1 degrees to the ecliptic. Libration in longitude is caused by the varying speed of the moon along its elliptical orbital path, whose slower stretches allow us to see slightly around the moon’s eastern and western limbs.
The opposite of perigee is apogee. This is when the moon is at a point in its orbital period where it is farthest from Earth. The moon was last at its apogee on November 14, and was 404,921 km away. It reaches its apogee on December 12, at 00:28 GMT.
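Using the two distances quoted in this article, a short calculation shows how much bigger the moon looks at perigee than at apogee. The mean lunar diameter used below is the standard value of roughly 3,475 km; everything else comes from the figures above.

```python
# A minimal sketch comparing the moon's apparent size at this cycle's
# perigee with its size at apogee, using the distances quoted above.
import math

MOON_DIAMETER_KM = 3474.8   # mean lunar diameter (standard value)
perigee_km = 362_826        # tonight's perigee distance
apogee_km  = 404_921        # the November 14 apogee distance

def angular_size_deg(distance_km: float) -> float:
    """Apparent angular diameter in degrees at a given Earth-moon distance."""
    return math.degrees(2 * math.atan(MOON_DIAMETER_KM / (2 * distance_km)))

at_perigee = angular_size_deg(perigee_km)   # ~0.549 degrees
at_apogee  = angular_size_deg(apogee_km)    # ~0.492 degrees
print(f"~{(at_perigee / at_apogee - 1) * 100:.0f}% larger at perigee")  # ~12%
```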
Our moon is responsible for our tides and has absorbed countless meteor impacts over its history. The moon has an incredible history, one that astronomers are still learning about as they develop and refine theories of how it formed.
I’m writing a review of the “robust” australopithecines, and I’m reminded of how drastically our understanding of these hominins has changed in just the past decade. Functional interpretations of the skull initially led to the common wisdom that these animals ate lots of hard foods, and had the jaws and teeth to cash the checks written by their diets.
While anatomy provides evidence of what an animal could have been eating, there is more direct evidence of what animals actually did eat. Microscopic wear on teeth reflects what kinds of things made their way into an animal’s mouth, presumably as food, and so provides a rough idea of what kinds of foods an animal ate in the days before it died. Microwear studies of A. robustus from South Africa had confirmed previous wisdom: larger pits and more wear complexity in A. robustus than in the earlier, “gracile” A. africanus suggested more hard objects in the robust diet (e.g., Scott et al., 2005). A big shock came a mere 8 years ago with microwear data for the East African “hyper robust” A. boisei: molars had many parallel scratches and practically no pitting, suggestive of a highly vegetative diet (Ungar et al. 2008).
Stable carbon isotope analysis, which assesses what kinds of plant-stuffs were prominent in the diet when skeletal tissues (e.g. teeth) formed, further showed that the two classically “robust” hominins (and the older, less known A. aethiopicus) ate different foods. Whereas A. robustus had the carbon isotope signature of an ecological generalist, A. boisei had values very similar to gelada monkeys who eat a ton of grass/sedge. GRASS!
While microwear and isotopes don’t tell us exactly what extinct animals ate, they nevertheless are much more precise than functional anatomy and help narrow down what these animals ate and how they used their environments. This highlights the importance of using multiple lines of evidence (anatomical, microscopic, chemical) to understand life and ecology of our ancient relatives.
Cerling TE, Manthi FK, Mbua EN, Leakey LN, Leakey MG, Leakey RE, Brown FH, Grine FE, Hart JA, Kaleme P, Roche H, Uno KT, & Wood BA (2013). Stable isotope-based diet reconstructions of Turkana Basin hominins. Proceedings of the National Academy of Sciences, 110 (26), 10501-6 PMID: 23733966
Grine FE, Sponheimer M, Ungar PS, Lee-Thorp J, & Teaford MF (2012). Dental microwear and stable isotopes inform the paleoecology of extinct hominins. American Journal of Physical Anthropology, 148 (2), 285-317 PMID: 22610903
Kimbel WH, Rak Y, & Johanson DC (2004). The Skull of Australopithecus afarensis. Oxford University Press.
Robinson J (1954). Prehominid dentition and hominid evolution. Evolution, 8 (4). DOI: 10.2307/2405779
Ungar PS, Grine FE, & Teaford MF (2008). Dental microwear and diet of the Plio-Pleistocene hominin Paranthropus boisei. PloS One, 3 (4). PMID: 18446200
Loneliness can have a significant impact on mental health. It is not uncommon for people who feel lonely to also experience depression, anxiety, low self-esteem, and a range of other mental health issues. In fact, research has shown that loneliness can be as harmful to health as smoking 15 cigarettes a day, and can increase the risk of early mortality by as much as 26%.
There are a number of reasons why loneliness can have such a negative impact on mental health. For one, loneliness can lead to a lack of social support, which is important for coping with life's challenges and stressors. In addition, loneliness can also lead to a lack of meaningful social interactions and activities, which can contribute to feelings of isolation and disconnection.
It is important to recognize the signs of loneliness and take steps to address it. Some ways to combat loneliness and improve mental health include:
Connecting with others: Make an effort to reach out to friends, family, and other supportive individuals in your life.
Participating in activities: Join a club, volunteer, or participate in other activities that allow you to interact with others and pursue your interests.
Seeking professional help: If you are struggling with loneliness and it is impacting your mental health, consider seeking the help of a mental health professional. Therapy and counseling can be very helpful in addressing loneliness and improving mental health.
Practicing self-care: Engage in activities that promote physical and emotional well-being, such as exercise, meditation, and hobbies.
Remember, it is normal to feel lonely at times, but it is important to take action to address it and improve your mental health.
(1) Under continuous pressure from local Black professional men during the 1920s, the Prince Edward County School Board reluctantly added high school grades to the all-Black Mary E. Branch Elementary School in 1930. Even then, the professionals themselves initially paid the teachers’ salaries. The blame for such slow and inadequate effort was always placed on the lack of funds. Although it was true that financing problems existed, all-White schools tended to fare better. The financial problems faced by Southern school systems were in fact exacerbated by the policy of separating Black and White students, when integrated schools would have been more cost-effective.
During the 1930s, the National Association for the Advancement of Colored People (NAACP) began a strategy of collecting information to prove that “separate” was not “equal.” In Virginia as elsewhere, curricula quality, bus transportation, buildings and equipment were being challenged as inadequate. Their goal was to win cases protesting the injustice of Plessy v. Ferguson in courts at the local level and then to take those cases on appeal to the nation’s highest court in the hopes of invalidating the ruling. Their legal strategy attacked racial discrimination in the public schools based on the unequal facilities provided for Black students.
What is it?
Plague is a life-threatening infection caused by the organism Yersinia pestis. There are three types of plague. Bubonic plague is the most common type in humans. Infected fleas transmit Y. pestis primarily among rodents. When an outbreak kills many rodents, infected fleas can jump to other animals and humans, spreading the infection. Improved living conditions and health services have made human outbreaks uncommon, but occasional plague cases occur.
Concern exists about the use of plague as a biological weapon. Plague bacteria could be put into a form that might be sprayed through the air, infecting anyone inhaling it and causing pneumonic plague. This form affects your lungs and can spread from person to person.
Fortunately, when given promptly, antibiotics usually effectively treat plague.
There are three types of plague: bubonic, septicemic and pneumonic. Signs and symptoms vary depending on the type of plague. It's possible to develop more than one type.
Signs and symptoms of bubonic plague generally appear within two to eight days of a plague-infected fleabite. After you're bitten, the bacteria travel through your lymphatic system, infecting the first lymph node they reach. The resulting enlarged lymph node (bubo) is usually 1 to 10 centimeters in diameter, swollen, painful and warm to the touch. It can cause so much pain that you can't move the affected part of your body. The bubo usually develops in your groin, but may also appear in your armpit or neck, depending on where the flea bit you.
Signs and symptoms of bubonic plague include:
- Buboes — swollen, painful, warm lymph nodes
- Sudden onset of fever and chills
- Fatigue or malaise
- Muscle aches
Septicemic plague occurs when plague bacteria multiply in your bloodstream. If septicemic plague occurs as a complication of bubonic plague, buboes may be present.
Signs and symptoms include:
- Fever and chills
- Abdominal pain, diarrhoea and vomiting
- Bleeding from your mouth, nose or rectum, or under your skin
- Blackening and death of tissue (gangrene) in your extremities, most commonly your fingers, toes and nose
Pneumonic plague — which can occur as a complication of another type of plague or by inhaling infectious droplets coughed into the air by a person or animal — is the least common form of plague. But it's also the most rapidly fatal. Early signs and symptoms, which generally occur within a few hours to a few days after inhaling contaminated droplets, include:
- High fever
- Signs of pneumonia, including chest pain, difficulty breathing and a cough with bloody sputum
- Nausea and vomiting
Pneumonic plague progresses rapidly and may cause respiratory failure and shock within two days of infection. If antibiotic treatment isn't initiated within a day after signs and symptoms first appear, the infection is likely to be fatal.
Plague has afflicted humans throughout history. The cause of plague, the Yersinia pestis bacterium, was discovered in 1894 by Alexandre Yersin. Soon after, scientists realised that fleas transmitted the bacteria. Three types of plague exist.
Bubonic plague is the most common type of plague in humans. It's usually caused by a bite from an infected flea. Y. pestis bacteria primarily infect animals such as squirrels, rabbits and prairie dogs. You may become infected by a fleabite if you're in close contact with such animals. The bacteria can also enter through a cut in your skin if you handle these animals. Domestic cats that come into contact with infected animals also may transmit the infection to humans.
Septicemic plague occurs when plague bacteria multiply in your bloodstream. This happens when bacteria transmitted by a fleabite enter directly into your bloodstream, or as a complication of bubonic or pneumonic plague.
Secondary pneumonic plague can develop if you're infected with another type of plague. In this case, the infection spreads to your lungs, causing a virulent pneumonia that can often be fatal. Primary pneumonic plague can occur when you inhale droplets coughed into the air by a person or animal with pneumonic plague.
Plague as a bioterrorism agent
Plague is also one of a number of potential agents of bioterrorism, along with anthrax, smallpox, botulism, tularemia and nerve gases. It's possible that plague bacteria could be turned into an aerosol and then be spread over large populations.
Certain factors may put you at greater risk of plague:
- Location. Naturally occurring plague outbreaks are most common in rural areas and in urban areas characterized by overcrowding, poor sanitation and a high rat population. Outbreaks can happen at any time of year. Plague is present on most continents other than Australia. The greatest number of human plague infections occurs in countries such as Madagascar, Tanzania and the Democratic Republic of Congo. However, the largest concentration of infected animals is in the United States — particularly in New Mexico, Arizona and Colorado — and in the former Soviet Union.
- Time of year. Most plague infections occur from May to October. During these months, infected rodents and fleas are most active and people are more often outside and exposed to them.
- Contact with certain animals. Rats, squirrels, rabbits and prairie dogs are common sources of infection. Domestic cats also may become infected by such animals and pose a transmission risk to humans. The disease usually spreads through fleabites, but you can also contract plague after being exposed to an infected animal that may have coughed infectious droplets into the air or through a break in your skin after handling an animal with plague. Groups at increased risk include veterinarians, cat owners, hunters, campers and hikers in areas with recent plague outbreaks among animals.
Complications of plague may include:
- Gangrene of your fingers and toes resulting from clots in the small blood vessels of your extremities
- Severe shock
- Sudden, severe lung failure (acute respiratory distress syndrome, or ARDS)
- Bloodstream infection (septicemia)
- Inflammation of the membranes and fluid surrounding your brain and spinal cord (meningitis)
With prompt treatment, the overall fatality rate from plague is less than 15 percent. Without treatment, mortality rates can be as high as 60 percent for bubonic plague and 100 percent for pneumonic plague. Death can occur within days after symptoms first appear if treatment doesn't begin promptly.
Your doctor may suspect plague if you live in a high-risk region or report a suspicious exposure. With the exception of a visible bubo, signs and symptoms often mimic other, much more common infectious diseases.
You'll likely be asked to describe the type and severity of your symptoms and tell your doctor about your recent history, including whether you've been exposed to sick animals or travelled to areas with plague.
Bubo and respiratory fluid examination
If plague is suspected, your doctor may confirm the diagnosis through microscopic examination of fluid extracted from your bubo, bronchi or trachea. Needle aspiration is used to obtain fluid from your bubo. Fluid is extracted from your airways using endoscopy. In this procedure, a thin, flexible tube is inserted through your nose or mouth and down your throat. A suction device is sent down the tube to extract a fluid sample from your airways.
Your doctor may also test blood drawn from your veins to diagnose plague. Y. pestis bacteria generally are present in your bloodstream only if you have septicemic plague.
Wood Cut. A form of relief engraving, where the parts of the image that are white or uncolored are carved away from the wood. An image is drawn directly onto a section of wood, or onto paper that is then transferred to a section of wood, and carving is done parallel to the grain. If one thinks of a tree – it is cut lengthwise to form long planks. This is the type of lumber used for a wood cut. Because of the direction of the grain, wood cuts did not hold up to modern printing presses and could not be combined with the metal or movable type used in modern printing. Harper’s Weekly did not use wood cuts.
Wood Engraving: A relief printing process, like wood cuts; however, wood engravings were carved from the cross-cut section of a hardwood tree trunk (boxwood was preferred). By carving on the end of the grain, the engraver enjoyed much more flexibility with tools and could execute very fine lines. Cutting the wood in this manner, perpendicular to the grain, allowed the block to be inserted alongside the metal and movable type of the era. The compatibility of engraving and type made wood engraving the established printing process for nearly half a century. This was the image reproduction process used by Harper’s Weekly and the method by which Thomas Nast learned his trade. See a video demonstration of wood engraving here.
Lithography was the process of applying wax or grease onto a stone (less often a metal plate) and carving an image out of the surface application, and creating an image. A description below from the Philadelphia Print Shop, Ltd. :
The process is based on the principle that grease and water do not mix. To create a lithograph, the stone or plate is washed with water –which is repelled by the crayon– and then with ink –which is absorbed by the crayon. The image is printed onto the paper from the stone or plate, which can be re-inked many times without wear. A chromolithograph is a colored lithograph, with at least three colors, in which each color is printed from a separate stone and where the image is composed from those colors. A tinted lithograph is a lithograph whose image is printed from one stone and which has wash color for tinting applied from one or two other stones. Lithography is a planographic process and so no platemark is created when a lithograph is printed.
Lithography was invented by Alois Senefelder in 1798 but didn’t come into general use until the 1820s. After that time lithography quickly replaced intaglio processes for most illustrative and commercial applications, for the design was easier to apply to the stone or plate, it was much easier to rework or correct a design, and many more images could be produced without loss of quality than in any of the intaglio processes.
Lithographs and Chromolithographs were used to print the cartoons and colored cartoons for The San Francisco Illustrated Wasp, and its artist George F. Keller. Their experience with this form of printmaking originated from making cigar box labels.
For an excellent series on the history of printmaking, which includes details and examples of all these processes, I highly recommend viewing Richard Benson’s videos for the MOMA. The videos are available in abbreviated segments (highlights), following the progressive history of printmaking, or you may view the eight-hour comprehensive look (which is broken up into segments for more defined viewing). The technique of Thomas Nast is explained at the 17-minute mark.
Special Needs Students' Executive Function (P-12)
Presented by Kathryn Phillips
Specifically Designed for Special Education Staff, Speech Language Pathologists, Occupational Therapists, General Education Classroom Teachers, School Psychologists, Counselors, and Other Educators Working with Students in Preschool - Grade 12 Who Have Trouble Organizing Themselves for School Success
- Dozens of practical strategies that can be used to help students with special needs who have difficulty maintaining attention and organizing their time, tasks, personal space, and materials
- Practical ways you can adapt your instruction to enhance students’ ability to develop and use key executive function skills in reading, writing, math, study skills, and projects
- Help your students with special needs improve in these key executive function areas: organization, time management, study skills, task completion, impulse control, emotional self regulation, anger management, social skills, and memory
- Demonstrations, activities, examples, checklists, and much more, including a comprehensive digital resource handbook you can take back and begin using immediately with your students
Practical Ideas and Strategies
There has been a marked increase in the diagnoses of our students who have weaknesses in executive function skills. Common characteristics can include difficulty with task initiation, prioritization, completion, and the ability to think in an organized way to manage belongings, schedules and assignments. In this fast-paced seminar, Kathryn Phillips will demonstrate how to recognize and assess the impact on behavior and learning and most importantly, give you a toolbox filled with practical strategies to help your students with executive function difficulties. You won’t want to miss this day filled with highly effective ideas and interventions to help your students become more independent and develop greater executive control of their time, tasks and materials.
Ten Key Benefits of Attending
- Practical Strategies to Address Executive Function Weaknesses that Prevent Students from Finding Success in School
What skills should we expect at certain ages and how can we help students who don’t gain these vital executive function skills? Learn how you can recognize and strategize to teach your students who struggle to think and act in an organized way to manage their time, tasks, schedules, assignments, and behavior
- Strategies to Help Your Students Improve in Key Executive Function Areas
Executive functioning helps students to complete assignments, manage time, control impulsive behavior, have appropriate social behaviors, and organize their brains for learning … Learn strategies to help your students who have difficulty in these areas so they can experience success and become more independent
- Practical Ideas for Your Late, Lost and Unprepared Students
Your students may appear to be unmotivated and apathetic, but we now know that many lack basic executive function skills … Learn practical strategies to build executive functioning skills in students who lack them
- Executive Function Skills to Increase Student Success in Reading, Writing and Math
Learn how executive function skills impact specific academic areas … Strategies you can use immediately to develop skills that will help students organize information for learning
- How Executive Function Skills Impact Student Behavior and What You Can Do About It
Understand and learn practical solutions for impulse control, self-regulation and self-management … Help your students develop situational awareness to stop, think and plan before they respond negatively
- Discover Practical Strategies to Organize, Plan and Prioritize
You can help students process information in a more organized and logical way to select the tools and strategies they will need in order to plan for success
- Ways to Adapt Your Instruction and Classroom Structure
- Specially designed instruction, sample IEP goals, apps, tools, and accommodations for students who struggle with executive function demands
- Discover the Connection to Brain Research: What it Teaches Us about Best Practices for Instruction
Executive function work is all based on current research about how the brain takes in, processes and stores information … Learn the practical application of this research and how it will greatly benefit your students
- Tools and Strategies to Teach Independence and Emotional Regulation
Learn how to help students become more independent with strategies that teach steps in planning, implementing the plan and self-evaluating when finished … Strategies students can use for emotional regulation
- Receive an Extensive Digital Resource Handbook
Each participant will receive a comprehensive digital resource handbook developed specifically for this seminar filled with strategies, ideas and research-based techniques that will support you when you return to your classroom and school
Outstanding Strategies You Can Use Immediately
- Strengthen the EXECUTIVE FUNCTION SKILLS of your students with special needs
- Dozens of practical strategies designed to increase attention, focus and impulse control
- Recognize and strategize to teach your students who struggle to think and act in an organized way to manage their time, tasks, schedules, assignments, and behavior
- Strategies for co-teaching, inclusive and general education classrooms
- Executive function skills to increase student success in social emotional functioning
- Flexible problem-solving strategies to fit the needs of specific students
- Emotional regulation strategies you can use immediately
- Simple yet effective systems for study skills
- Memory strategies for studying, test-taking, homework, and long-term project planning
- Clearly define key executive function skills and how they impact academic and social success
- Low-prep strategies you can use immediately in the classroom or resource room
- Proven ideas to help students plan their homework, manage short- and long-term projects/assignments and carry out tasks to completion
- Set up all your students for success in an inclusive classroom
- Dozens of practical strategies to teach students to remember, manipulate information, self-monitor, and self-check
A Message From Your Seminar Leader
Do you ever hear any of these statements about your students?
- “He’s just not motivated.”
- “She doesn’t seem to care about anything.”
- “He’s smart enough but he just won’t do the work.”
- “If only she would pay attention …”
- “He explodes over anything!”
Over the past decade research has exploded in the diagnosis and treatment of students who have difficulties in executive functioning. Executive dysfunction is thought to be the underlying neurological difficulty in disorders such as ADHD, autism spectrum disorders, traumatic brain injury, drug and alcohol exposure, behavioral and emotional disorders, as well as learning disabilities. The exciting news is that current research clearly indicates that this deficit can be effectively addressed with proper interventions.
In this stimulating and interactive seminar, designed for Preschool-Grade 12 inclusive and special education settings, you will learn how to recognize executive functioning deficits, assess their impact on learning and behavior, gain a toolbox of practical strategies for working with students, and learn how to integrate these strategies into core curriculum areas. You will leave with dozens of next-day ideas for writing, math, reading, study skills, long-term projects, and test-taking. Strategies in self-awareness, work completion, task initiation, planning, organizing, and goal setting will also be shared as well as ideas for impulse control, motivation, self-regulation, and more!
Don’t miss this opportunity to understand how executive functioning or dysfunction makes or breaks students’ ability to be successful in school, both academically and socially. Come and learn new strategies and interventions that will make a significant difference for all your students.
P.S. I know you have the choice in choosing a professional development day that will meet your needs for the year. I promise, you will not be disappointed!
Who Should Attend
Special Education Staff, Speech Language Pathologists, Occupational Therapists, General Education Classroom Teachers, School Psychologists, Counselors, and Other Educators Working with Students in Preschool - Grade 12 Who Have Trouble Organizing Themselves for School Success
Special Benefits of Attending
Extensive Resource Handbook
Each participant will receive an extensive digital resource handbook giving you access to countless strategies. The handbook includes:
- Step-by-step strategies for meeting the needs of your students with executive function deficits
- Multiple resources and next day ideas for organization, impulse control, memory, behavioral regulation, and attention/concentration
ASHA - CEUs
ASHA-Required Disclosure Statement for Kathryn Phillips:
Presenter for the Bureau of Education & Research and receives honorarium compensation. Independent contractor for teacherspayteachers.com and receives financial compensation.
No relevant nonfinancial relationships exist.
Semester Credit Option
Up to four graduate level professional development credits are available with an additional fee and completion of follow-up practicum activities. Details for direct enrollment with University of Massachusetts Global, a nonprofit affiliate, will be available at this program.
Meet Inservice Requirements
At the end of the program, each attendee will receive a certificate of participation that may be used to verify hours of participation in meeting continuing education requirements.
Root canals are tiny passageways that branch off from beneath the top of the tooth, coursing their way vertically downward, until they reach the tip of the root. All teeth have between one and four root canals. Many tooth problems involve infections that spread to the pulp, which is the inner chamber of the tooth containing blood vessels, nerves, and other tissues. When the infection becomes worse, it can begin affecting the roots. A traumatic injury to a tooth can also compromise the pulp, leading to similar problems. A diseased inner tooth brings a host of problems including pain and sensitivity as the first indications of a problem. However, inside of the tooth, a spreading infection can cause small pockets of pus to develop, which can lead to an abscess. Root canal therapy is a remarkable treatment with a very high rate of success and involves removing the diseased tissue, halting the spread of infection and restoring the healthy portion of the tooth. In fact, root canal therapy is designed to save a problem tooth; before the procedure was developed and gained acceptance, the only alternative for treating a diseased tooth was extraction.
Root canal treatment usually entails one to three visits. During the first visit, a small hole is drilled through the top of the tooth and into the inner chamber. Diseased tissue is removed, the inner chamber cleansed and disinfected, and the tiny canals reshaped. The cleansed chamber and canals are filled with an elastic material and medication designed to prevent infection. If necessary, the drilled hole is temporarily filled until a permanent seal is made with a crown. Most patients who have root canal experience little or no discomfort or pain and enjoy a restored tooth that can last almost as long as its healthy original.
In some cases, it may be necessary to extract a tooth. This can happen for a variety of reasons such as cases where a deciduous “baby” tooth is reluctant to fall out, a severely broken down and non-restorable tooth is present, or a “wisdom tooth” is poorly positioned and unable to fully erupt into place.
Tooth extraction procedures today are far less painful than ever before, thanks to powerful anesthetics and sedatives. In many cases, a patient who has a tooth pulled experiences little or no discomfort, and only minor bleeding.
Before a tooth is extracted, the area surrounding the tooth is numbed with a topical or injectable anesthetic such as Novocaine.
Patients with extracted teeth sometimes need to take an antibiotic, and at the very least, take precautions following the procedure to ensure that infection doesn't occur.
Smoking, vigorous brushing and rinsing, and drinking liquids through straws are discouraged during the post-operative period because they hinder healing and may cause the wound to open. Cold compresses applied to the outside cheek near the extraction area can help reduce any swelling and promote faster healing.
Chimpanzees are one of our closest extant (living) relatives. Let us look closer at their classification and evolutionary history to understand how this came to be. Ordering animals into a particular classification is the way by which scientists organize and understand how animals are related to one another.

Chimpanzee: Kingdom- Animalia; Phylum- Chordata; Class- Mammalia; Order- Primates; Family- Hominidae; Genus- Pan; Species- Pan troglodytes

Human: Kingdom- Animalia; Phylum- Chordata; Class- Mammalia; Order- Primates; Family- Hominidae; Genus- Homo; Species- Homo sapiens
Chimpanzees and humans are in the same kingdom, phylum, class, order, and family. This means that they are both animals that possess backbones, are endothermic (can maintain their own body temperature), and have mammary glands as well as hair or fur. They both belong to the order Primates, which includes all primate species, and to the same family, which includes all of the great apes: chimpanzees, bonobos, gorillas, orangutans, and humans. Humans and chimpanzees differ only in their genus and species, which simply means that humans and chimpanzees are not exactly identical in all characteristics, but that they are exceedingly similar in many ways. But exactly how did humans and chimpanzees become so closely related?
Most classification over the years has been done based on homologous, or shared similar, traits between animals. This makes it easier for scientists to determine what we humans have in common with other species, but it is not the only way to see the close relatedness of humans and chimpanzees. Humans and chimpanzees share a common ancestor, a relative who lived before them and from which both lineages evolved. This common ancestor existed somewhere between 8 and 5 million years ago; just think about how long ago that was! Evolution is the process by which our genetic material, the part of biology that makes each of us unique, changes through time. Humans and chimps diverged from one another roughly 6 million years ago.
There is evidence, in the form of fossils and genetic composition, which supports the theory that, along with the bonobo, the chimpanzee is our closest living relative. Fossil evidence has come from all over the world to support the evolution of humans from other hominid species, but there has been a particular hot spot in Africa. This is very interesting, since many primate species, including the chimpanzees and gorillas, are found in Africa. Olduvai Gorge, Laetoli, and the Great Rift Valley are all very famous paleontological sites where fossils have been found. Many fossils show a mosaic of human-like and ape-like features, giving us another reason to believe that we are very closely related to chimps. A common African origin, from which we all seem to stem, supports the notion that we are very closely related to chimpanzees.
In addition to the fossil evidence that links humans to the same lineage chimpanzees come from, there is also DNA evidence. DNA is found in the chromosomes within the cells of our bodies, and it holds the genetic information that makes each one of us unique. Scientists have been able to extract DNA from the cells of humans and chimpanzees alike and determine that we share about 98% of our genes. This means that only about 2% of the genetic material differs between you and a chimpanzee!
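To illustrate what a figure like “sharing 98% of our genes” means in practice, here is a minimal sketch of percent sequence identity. The two short sequences are invented purely for the example; real comparisons align billions of bases across whole genomes.

```python
# A minimal sketch of percent sequence identity between two aligned
# DNA sequences. The toy sequences below are invented for illustration.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions at which two equal-length sequences match."""
    assert len(seq_a) == len(seq_b), "toy example assumes aligned sequences"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

human = "ATGGCTAGGCTTACAGGATCCTAGGACTTCGAATGCCTTAGGACTACGAT"
chimp = "ATGGCTAGGCTTACAGGATCCTAGAACTTCGAATGCCTTAGGACTACGAT"
print(f"{percent_identity(human, chimp):.1f}% identical")  # 98.0% identical
```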
Message from CAC'ers
While in Africa, we reflected upon where and from whom we as humans originated. The feeling of staring into the eyes of a chimpanzee, which evolved from the same ancestor that we evolved from, was remarkable. It was like staring into the eyes of a person whose looks have changed over a few years: you may not always recognize the outward appearance of a haircut, hair color, or body size, but the eyes never change and remain constant over time. Standing in Africa, where we as the human race evolved, while looking into the similar but never-before-seen eyes of a closely related chimpanzee was amazing. It felt like visiting our homeland and meeting the relatives who live in this far-off land, relatives we had never before met.
A biopsy of a lymph node is the examination of an extracted piece of the node under the microscope. The lymph nodes produce white blood cells that fight infections.
Biopsies are used to determine why lymph nodes have enlarged; they also help in determining whether tumors in lymph nodes are malignant.
A number of conditions can produce abnormal values in the biopsy.
Lymph node biopsies are used to determine the spread of cancer.
There are two types of lymph node biopsies: needle biopsy and open (excisional) biopsy.
After an excisional biopsy, the patient will take a few days to recover because of the incisional wound; the doctor will prescribe some medicines and advise on post-surgery care.
The record for the coldest temperature ever achieved has been broken with the cooling of rubidium gas to 38 picokelvins (3.8 × 10⁻¹¹ K). The work could lead to new insights into quantum mechanics.
Temperature is a measure of the energy in atoms' or molecules' vibrations. The lowest temperature theoretically possible is absolute zero – 0 K or −273.15 ºC (−459.67 ºF) – which would require a complete cessation of movement. That's probably impossible practically, but for decades physicists have shown we can get very, very close by using lasers to damp atomic motion.
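To get a feel for how little motion is left near these temperatures, consider the ideal-gas root-mean-square speed, v_rms = sqrt(3 * k_B * T / m). A Bose-Einstein condensate is not a classical ideal gas, so the sketch below is only an order-of-magnitude illustration.

```python
# A minimal sketch of the temperature-motion link: ideal-gas rms speed.
# Applying it at 38 pK is an order-of-magnitude illustration only.
import math

K_B    = 1.380649e-23   # Boltzmann constant, J/K
M_RB87 = 1.443e-25      # mass of one rubidium-87 atom, kg

def v_rms(temperature_k: float) -> float:
    """Root-mean-square speed sqrt(3*k_B*T/m) for a rubidium-87 atom."""
    return math.sqrt(3 * K_B * temperature_k / M_RB87)

print(f"{v_rms(300):.0f} m/s at room temperature")            # ~293 m/s
print(f"{v_rms(3.8e-11) * 1e3:.2f} mm/s at 38 picokelvins")   # ~0.10 mm/s
```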
In Physical Review Letters, German scientists have reported getting closer to absolute zero than ever before.
Professor Ernst Rasel of Leibniz Universität Hannover and co-authors placed 100,000 rubidium atoms inside a magnetic trap at the top of the University of Bremen's 110 meter (360 feet) tall Bremen drop tower. The trap forms what is called a “matter-wave lens” that by focusing the atoms at infinity cools them to the point that they became a Bose-Einstein Condensate (BEC), a state of matter where collections of atoms can show quantum behavior as if they were a single subatomic particle/wave.
Turning off the trap allows the condensate to expand in all directions, cooling it further. The BEC was then allowed to free-fall down the length of the tower while detectors observed its behavior.
The whole process lasts just two seconds, although modeling suggests 17 seconds is possible, and the authors hope to exploit this longer timeline to explore BEC behavior with the distortions of vibrations removed.
In an accompanying viewpoint article the University of Portsmouth's Dr Vincenzo Tamma, who was not involved with the research, said the work could “test gravity at the quantum level.” Interference patterns in the BEC are determined in part by gravitational effects. With inconsistencies between our understanding of quantum physics and general relativity's description of gravity representing perhaps physics' greatest unsolved puzzle, the work provides an opportunity to explore physics at its most fundamental. Tamma also sees potential for the technique to search for certain forms of dark matter.
A hundred thousand atoms may sound like a lot, but it's actually about 50 million times fewer than make up the head of a pin, give or take a bit for variation in atomic, and pin-head, size. The coldest temperature achieved in something you could see was set when a 400 kilogram (882 pound) block of copper was cooled to 0.006 K. To get there, researchers required lead that had been mined thousands of years ago, giving time for radioactive isotopes formed through exposure to other radioactive elements in the ore to decay. This was provided through the fortuitous (for us) discovery of a Roman galley that sank off the coast of Sardinia bearing Spanish lead intended for use in the Roman civil wars.
[H/T: Popular Mechanics]
The Boer strategy
The Boers were fighting a primarily defensive war, occupying territory in Natal and the Cape Colony, and besieging British-held towns, to prevent the British from bringing the war to Republican soil. Colesberg was an important town – the last settlement in the Northern Cape before the railway crossed into the Free State.
Initially, only a small commando under Chief Commandant Esaias Grobler, who hailed from Philippolis in the Free State, held the northern frontier at the Orange River, north of Colesberg. Grobler commanded the southern Free State or central front – an area which included the towns of Philippolis, Fauresmith and Jagersfontein. He was later joined by General Hendrik Schoeman of the Transvaal, General Piet de Wet and General Koos de la Rey.
The defence of the Free State’s southern border was initially conducted by 2,500 Free State burghers, divided into three units to defend the three main bridges into the Free State – Aliwal North, Bethulie and Norvalspont. There was also a unit at “Colesberg bridge”, on the road between Philippolis and Colesberg, but this was not seen as a major risk, possibly because it was located 30 km north of Colesberg itself. The other bridges were much more vulnerable. Donkerpoort station, just west of the present-day Gariep Dam town, was an important collection point for the Boer commandos.
The Boers’ next step was a major strategic one – the invasion of the Cape Colony itself, in order to stem the British advance far away from the Free State border. Another justification for the invasion was that the Cape Colony had betrayed its neutrality by hosting British troops. But neither Grobler nor Schoeman was keen to act aggressively; they just wanted to conduct a holding operation to prevent British encroachment.
General Clements’s retreat
Clements realised that he could not hold the line around Colesberg, and withdrew to Arundel, leaving a great deal of booty at Slingersfontein camp for the Boers. In the last two days, 63 men under Clements had lost their lives or were seriously injured, and many had been captured by the Boers.
Disaster befell the Wiltshire Regiment, because they did not receive clear instructions on the retreat. By the time they arrived at Rensburg Station, it was already taken by the Boers, and the Wiltshires came under heavy fire. After a three-mile running battle, 57 Wiltshires were killed, including Major Macmullen; and 103 were captured.
De la Rey was now clearly on the offensive, and on 19 February, he attacked a British unit at Rietfontein, near Arundel. He then moved southwards to threaten Clements’ base at Noupoort. But De la Rey’s offensive was doomed. Roberts’ strategy of attacking the Free State from the far south-west (the “Great Flank March”) was working. Bloemfontein was now under threat from the south-west. The Free State government recalled General Piet de Wet and numerous Free State units, in order to assist General Cronje at Paardeberg. In fact, the Boers started withdrawing from Colesberg itself.
In retrospect, this was a great strategic blunder on the part of the Boer High Command. De la Rey was very effective on the Southern Front, inspiring his men and inflicting losses on the British. Arundel, Noupoort and De Aar, as the lifeblood of the British war supplies, should have been a key target, striking Roberts in his rear. But the sudden helter-skelter recall of so many men to ward off Roberts’s march to Bloemfontein, meant that the Boers effectively lost two fronts – the western as well as the southern fronts. In effect, this decision sealed the fate of the two Republics, as Lord Roberts’s advance on Bloemfontein and Pretoria became unstoppable.
General Clements now started pushing northwards. On 24 February, he attacked the Boer positions at Kuilfontein Farm. The Boers made a determined stand, and in particular, the German commando distinguished itself. Each side had about 30 casualties. The British then reoccupied Vaalkop Hill (southwest of Colesberg), Rensburg siding (south of Colesberg) and Taaiboschlaagte (south-east of Colesberg).
On 26 February, General Grobler and General Lemmer retreated to Colesberg.
On 28 February, Clements’s forces marched into Colesberg unopposed. Clements moved northwards from Colesberg, reaching the Orange River bridge at Norvalspont (en route to Springfontein) on 3 March 1900.
General Hermanus Lemmer withdrew all his forces to the Free State side of the Orange River by 6 March, and blew up the Norvalspont bridge. The bridge was soon repaired by the approaching Inniskillings and Australian Regiment. On 7 March, General Lemmer’s rear guard crossed the Colesberg bridge (30 km from Philippolis), and blew up the bridge to slow down the British advance.
On 8 March 1900, General Clements occupied Norvalspont, just south-west of the present-day Gariep Dam, and Colonel Brabant, head of the Cape Colonial forces, occupied Jamestown. On 11 March, General Gatacre reached the Bethulie road bridge and managed to seize it before the Boers could blow it up. By now, the Boer retreat had become a headlong flight. Many Cape rebels laid down their arms and took an oath of allegiance to the Queen before returning to their farms.
Clements crossed the Orange River at Norvalspont on 15 March. General Gatacre’s route was further east, and he reached Bethulie on 13 March. The two generals then met up at Springfontein.
These British units started their march to Bloemfontein. The southern Free State towns were captured effortlessly by the British, and a period of British control ensued.
“With Roberts at Bloemfontein, Gatacre at Springfontein, Clements in the south-west, and Brabant at Aliwal, the pacification of the southern portion of the Free State appeared to be complete” (Arthur Conan Doyle).
Genl Freek Grobler’s offensive in the west
On the same day, 12 February, General "Groot Freek" Grobler attacked a force of almost 300 Victorians, South Australians, Inniskillings and Wiltshires, at Windmill Camp and Pink Hill in the west. The Boer force consisted mainly of men from Waterberg and Zoutpansberg in the Transvaal, and was based at Bastard's Nek, on the Petrusville road.
Pink Hill was strenuously defended by the Australians, and they kept the Boers at bay until the infantry could withdraw. Major Eddy and two other officers were killed, and two others wounded. "As an exhibition of resolute courage on the part of comparatively untrained troops, this performance of the Australians is well worthy of mention" (Amery Vol. 3, p. 466). The Boers secured the hill in the late afternoon, but were too exhausted to follow up their success.
The stand made by the Worcesters and Australians played a major role in maintaining the front.
General Clements in Colesberg
Tasmanians, Westralians, and the death of Mr Lambie
By early February, Lord Roberts was preparing for his "Great Flank March" from Hopetown to Bloemfontein. For this, he needed a substantial number of men from the Colesberg front, so General French and several units joined him on the Western Front. French was replaced by General Clements, who had to maintain the Colesberg front with drastically reduced numbers of men – while at the same time convincing the Boers that Colesberg/Norvalspont would be the main point of attack. And in fact, the Boers concentrated more burghers at Colesberg, because they were duped by Roberts's plan.
Due to the Boer activity near the Oorlogspoort River in the east, Major Stubbs and the Worcestershires occupied a hill (henceforth called "Stubbs Hill", 18 km south-east of Colesberg) on 6 February. The Boers under General Celliers continued their reconnaissance along the river, with the aim of outflanking the British in the east. The Boers drove some Tasmanians from the hills around Vergelegen Farm. This was the Tasmanians' first experience of being under fire, and they had to retire by galloping back to Jasfontein under heavy fire. The same happened to an Australian detachment of Victoria Mounted Rifles.
On the same day (6 February), the West Australians were under fire for the first time. A group of 80 Westralians under Major Moor formed part of a force sent out towards Potfontein (25 km south-east of Colesberg). These men came under very heavy fire. Moor narrowly escaped capture, after he gave his horse to another man; he was saved by his subaltern on another horse.
During this frenetic episode, the only casualty was a non-combatant, Mr Lambie, an Australian correspondent who had accompanied the troops. Mr Hales of the Daily News stayed with his friend, and was taken prisoner. He was subsequently released. (Lambie's grave is now in Colesberg cemetery.)
The Colesberg stalemate
The entire Colesberg front now (early February 1900) stretched across about 50 km. From west to east were the following British units:
Between Windmill Camp and Maeder’s Farm, with posts at Hobkirk’s Farm and Bastard’s Nek, 9 km west of Colesberg: Wiltshires, Inniskillings, South Australians and Victorians
Coleskop, 4 km west of Colesberg: The Bedfordshires and two guns
Kloof Camp, 3 km north-west of Colesberg: Wiltshires and New South Wales Mounted Infantry
McCracken’s Hill, 2 km west of Colesberg: The “immovable” Berkshires
Porter's Hill, 2 km south of Colesberg: The Bedfordshires, led by Col Carter of the Wiltshires
Rensburg Siding, 15 km south of Colesberg (British HQ): General Clements, with some Bedfordshires, Australians and Royal Engineers
Slingersfontein, 15 km south-east of Colesberg: The Worcestershires, the Royal Irish, some Inniskillings, the West Australians
Jasfontein, 18 km south-east of Colesberg: A few Rimington Guards
The Boer Commandos were placed, from west to east:
The Boers in the west: From Colesberg to Plessis Poort, and covering the road to Philippolis, with 1500-2000 men; they were led by General “Groot Freek” Grobler, a Transvaler
The Boers in the centre (within Colesberg): General Piet de Wet (Free State) and General Schoeman (Transvaal)
The Boers in the east: Genl de la Rey (Transvaal), with Genl Lemmer and Genl Cilliers, and Commandant van Dam of the Johannesburg Police.
New Zealand Hill
The British attack on the eastern side of Colesberg: New Zealand Hill
After the disaster in the west, French now decided to attempt an attack on Colesberg from the east. Colonel Porter and his 6th Dragoon Guards occupied Slingersfontein Farm on 9 January. He was accompanied by the New South Wales Lancers, New Zealanders, and some Yorkshires. Slingersfontein became an important British base.
Now General de la Rey arrived in Colesberg, having fought successfully at Magersfontein. He blocked French’s ambitions to get across the railway east of Colesberg, and this raised Boer morale.
The Yorkshires held a high, steep hill (afterwards known as "New Zealand Hill"). At daybreak on 15 January, the Boers subjected the Yorkshires to a heavy fire. Under cover of the fire, a party of 50 Boers climbed the steep north-western end of the hill and overran the Yorkshires. It was at this crucial stage that Captain WN Madocks of the New Zealand unit took command of the wavering Yorkshires and, giving the order "Fix bayonets – charge!", rushed forward and re-captured one of the sangars. Then the New Zealanders rushed in, and the Boers retreated. This was a highly impressive performance by a junior officer.
The next day, on 16 January, the New South Wales Lancers under Lieutenant WV Dowling faced disaster to the east of Slingersfontein, when they were cut off by a party of 35 burghers (Pretoria Police) under Lieut PC de Hart. The Australians were blocked by a wire fence, and then made for a small hill nearby. A rapid firing duel ensued (The Friend, 23 February), and four Australians were killed. Lieut Dowling and several other Australians were wounded, and 18 Australians were forced to surrender. After being captured, Dowling was treated in Dr Towart's Field Hospital (Blackie de Swardt, 963 Days at the Junction, p. 37).
The British defeat at Suffolk Hill
By now, French was ready for another assault on Colesberg, mainly from the west. He wanted to capture "Grassy Hill", north-east of Colesberg (the hill has since been known as Suffolk Hill).
On the 5th of January, Colonel Watson of the Suffolks requested permission from General French to launch a night attack, instead of the early-morning attack that French had contemplated. French rather reluctantly agreed. That night, the Suffolks set off with bayonets fixed (and rifles unloaded) up the gentle western slope, not expecting any opposition. But the Heilbron commando and the newly-arrived Transvalers were close by, and opened a withering fire.
Faced with this unexpected resistance, Colonel Watson wanted his men to go down the slope to regroup, and so he gave the order "Retire"; but this caused most of the men to rush blindly down the hill. Those who gallantly tried to carry out bayonet charges were generally mowed down. Even worse, in the confusion the Suffolks came under fire from their own artillery. By 5.30 am, the surviving Suffolks had surrendered to the Boers. Watson was killed, and most of his officers and men were killed, wounded or captured.
LS Amery (vol 3, p 138) described the next day: “The victors treated their wounded prisoners well, and were most sympathetic and courteous to the British burial party … They readily gave their help, and a pathetic scene took place at the open graveside. A grey-headed burgher asked leave to make an address. In a rough, simple way he deprecated war and the sacrifice of human life, and prayed for the time when all men should live at peace with each other. Then the assembled burghers sang a psalm”.
“Suffolk Hill” was a huge setback for the British. After that defeat, French concentrated on consolidating his forces.
The British decoy operations in the west
While General French tried to outflank the Boers east of Colesberg, he launched two decoy operations, to keep the Boers busy elsewhere - south and east of Colesberg.
In the first decoy, Colonel Porter of the 6th Dragoon Guards took over a small hill south of Colesberg, which was then used as an observation post. This hill was henceforth known (by the British) as "Porter's Hill". Porter and the New Zealanders then launched a blistering attack on Skietberg Hill, the mountain just south of Colesberg, where the Boers were ensconced. The British also controlled Coleskop (Towerberg), from where they shelled Colesberg. This turned into an artillery duel, with the Boers using the 15-pounder which they had recently captured at Stormberg.
In the second decoy, Major Rimington and his Rimington Guards moved out from Jasfontein farm, south-east of Colesberg, in the early morning of 1 January. They went around the east of Colesberg to attack Achtertang Siding. Commandant du Toit of the Philippolis Commando tried to head them off, but his Boers were repulsed. It became a stalemate, with the Boers refusing to give ground.
A thrilling sideshow was the fight to secure a derailed British train, on 2 January.
Coleskop, Kloof Ridge, Porter's Hill, and Gibraltar Hill
Genl French continued the pressure on the Boers from the western side of Colesberg. On 4 January, he occupied several strategic points, including the very high hilltop, Coleskop (Towerberg).
In the meantime, General Piet de Wet, Commandant du Toit (of Philippolis) and their Free Staters valiantly counter-attacked on the western side of the front. Their force included about 700 men (largely Transvalers) and 4 guns. They attacked the Inniskillings and Suffolks at Kloof Ridge. The Inniskilling Dragoons, led by Captain Edmund Arthur Herbert, saw some action here, dashing from Porter's Hill on the south side of Colesberg, past Coleskop to the west, and on to Gibraltar Hill, which was occupied by the Boers. Although they came under heavy fire, they managed to drive the Boers from Gibraltar Hill. The Australians lost several men in this engagement.
But General Schoeman failed to provide support to De Wet, and the Boers on Gibraltar Hill became trapped under artillery fire from the Suffolks as well as the 10th Hussars who came from Maeder’s Farm. Capt HB de Lisle’s 2nd Mounted Infantry stormed the outcrop; after a heavy fight, and losses on both sides, many Boers retreated, and some had to surrender. De Lisle’s talented leadership and the excellent shooting by his men contributed to this victory. The Boers became very demoralised.
The British attack on the western side of Colesberg: McCracken's Hill and Gibraltar Hill
On 30 December, French concentrated his forces at Rensburg siding, about 10 km south of Colesberg. This little station was occupied by the Berkshire Regiment.
On 1 January 1900, General French began a multi-pronged attack on the western side. This was led by Colonel Fisher of the 10th Hussars (also known as the Prince of Wales's Own Regiment) and Major McCracken of the Berkshire Regiment. Their main opponents were the Heilbron Commando and the Bethlehem Commando. Fisher moved his unit to Maeder's Farm, west of Colesberg, in the early morning of 1 January. At the same time, McCracken and the Berkshires captured "McCracken's Hill". They scattered the Heilbron commando, who withdrew to Gibraltar Koppie. But Fisher failed to carry the flank movement. The Berkshires sat tight on McCracken's Hill, but could not make any progress northwards.
Driving west from the Engen Garage – Coleskop is on the right, and McCracken's Hill is on the left.
General French's offensive: 1 January 1900
General French was now ready for a major offensive in the Colesberg area. He planned to use a highly mobile force to harass the Boer positions, convoys and communication lines in order to work round their flanks. This would force them to retreat into Colesberg and eventually to withdraw across the Orange River, and the British would then occupy Colesberg.
French based his force at Noupoort, to protect the railway. His force consisted of the Second Berkshires, the 6th Dragoon Guards, as well as 75 New South Wales Lancers and the First New Zealand Contingent. Gradually, he grew his forces to about 2000, with the addition of the 1st Suffolks in early December. Soon after, the force was joined by the Inniskilling Dragoons and the 10th Hussars.
From Noupoort, French moved his forces northwards to Arundel, which became an important British base for the next six weeks. (Today, Arundel is a deserted little siding, 20 km south of Colesberg, on the N9.)
Genl Piet de Wet’s offensive
On 13 December 1899, General Piet de Wet was sent to Colesberg to stiffen Schoeman's resolve. His headquarters were on the farm Kuilfontein (owned by Thomas Plewman, but abandoned in early November 1899).
Piet De Wet attacked Col. Porter at Arundel Station, with two cannons. Porter quickly reinforced his unit with artillery and mounted troops, and forced De Wet to retire. At Rensburg, General French also beat back a Boer attack. A few days later, Piet de Wet attacked French’s outpost at Vaalkop (a hill south-east of Kuilfontein) with field artillery, forcing the British (the 10th Hussars) to retire to Arundel.
The terrain around Colesberg is extremely rugged – apparently Lord Kitchener described it as "the most infernal country he ever saw". Neither side could defeat the other among those rocky outcrops, so each side tried to outflank the other. This meant that the Colesberg front now began to expand eastward and westward – a wide front which would eventually span almost 50 km, from Jasfontein in the east to Bastard's Nek in the west.
The British strategy
At the very earliest stage of the war, the British anticipated that they would launch their main attack against the Boer republics along the central railway, via Colesberg, into the Free State, as well as from Natal, the Queenstown direction, and from Kimberley. In mid-December 1899, the British were defeated in three important battles (Colenso in Natal, Stormberg, and Magersfontein near Kimberley). This was the infamous "Black Week", which caused consternation in Britain. Some of its best regiments had been defeated by bands of fighting farmers! The Boers' mobility on horseback made "one Boer worth three or four English".
These defeats left the Colesberg front as the main British thrust northwards.
But not for long.
When Lord Roberts was appointed as supreme commander of the British forces, in December 1899, he decided to abandon all these proposed directions of advance. Instead, he would move to the Orange River (near Hopetown), and then strike directly across the Southern Free State towards Bloemfontein – the first thrust that would depart from the main railway lines.
This “Great Flank March” was extremely difficult (with no railway to rely on for supplies); it was also spectacularly successful. On 13 March 1900, Bloemfontein fell to Lord Roberts.
While Roberts was planning for his Great Flank March, the Colesberg “theatre of war” remained a tough test of wills between the British and the Boers. But increasingly, it became an elaborate decoy, to keep the Boer commandos busy, while Roberts gathered his forces and supplies further west, near Hopetown.
It certainly kept the Boers busy. The British were led by two very resourceful officers: initially General John French, and subsequently General Ralph Clements. General Schoeman was completely outclassed by General French. While French was strengthening his force and occupying strategic hills, Schoeman with his 4000 Boers remained passively at Colesberg.
It took two very good commanders, Piet de Wet and Koos de la Rey, and up to 7000 burghers to block the British advance.
Initially, the main British units in the area were the Yorkshire Light Infantry and the Scottish Black Watch. On the 1st December 1899 the first English reinforcements, made up of 400 men of the New Zealand Mounted Rifles under the command of Major Robin, arrived at Naauwpoort.
By 5th December 1899, Colonel TC Porter had arrived with a battalion of Suffolks, the second half of the battalion of the Black Watch, three squadrons of the Carabiniers, R & O Batteries of Horse Artillery, and two breech-loading 15-pounders.
The initial Boer advance southwards
The British forces on the Central Front were quite unprepared for the rapid Boer incursions. On 4 November, the Yorkshire Light Infantry abandoned Noupoort station and retired to Middelburg. The Boers seemed unstoppable, and martial law was declared in the northern parts of the Cape Colony.
The Boers occupied Aliwal North (13 November) and Colesberg (14 November). Everywhere, the Boers were greeted enthusiastically by local Afrikaners, but resentfully by English-speakers. In Colesberg, 160 men joined the Boer force, and the Boers arrested prominent English-speakers and kept them in the local jail – for a total of 96 days.
Then the Boers occupied Burgersdorp, Jamestown, Venterstad, Lady Grey and Barkly East. The Boers made their most southerly base at Arundel siding (20 km south of Colesberg). But the Boers, under the over-cautious General Schoeman, failed to occupy the important railway junction town of Noupoort! This was a major mistake, as the British built up Noupoort as their main base.
Schoeman had major disagreements with other Boer commanders (such as Piet de Wet) who wanted to pursue a more aggressive approach. Convinced that a large force under General French was bearing down on them from the south, Schoeman abandoned Rensburg station and fell back to Colesberg, leaving Rensburg siding available for the British as their northern base camp.
The Boer invasion of the Cape Colony
In October 1899, a total of 2 500 burghers (Boer fighters) massed on the Orange River. They were fighting a defensive war, to prevent the British from bringing the war to Republican soil.
Colesberg was an important town – the northernmost town on the central front of the Cape Colony. The Boers expected a major British advance along the railway line. Initially, the Colesberg part of the front would be held by about 5 000 men under Lieut-General John French – a British commander who would go on to have a brilliant career in this war, as well as in World War I.
On 1 November 1899, the Free Staters, under General Esias Renier Grobler, occupied the Norvalspont railway bridge and Colesberg road bridge (on the road between Colesberg and Philippolis). The advance guard invaded the Cape Colony. On 4 November, two Free State commandos and some ZAR burghers invaded the Colony near Norvalspont, and destroyed railway bridges and telegraph lines.
Brother of Christian de Wet, and a dynamic leader in the Colesberg front
Commander of Transvaal burghers in Colesberg
Commanding Free State forces in Colesberg
Just east of the railway siding - the camp was here
Solitary, abandoned and atmospheric
Blindfolded black people brought into the camp
The old station sign
Always looming on the horizon ...
Travelling west from the Engen Garage
He led the charge on Suffolk Hill
A lonely monument to the fallen soldiers
A steep climb
Driving out westwards from Colesberg - about 5 km
Protective walls at the summit
The Suffolk Memorial blends into the vast landscape
President Steyn's proclamation, 20 October 1899: "As Great Britain is at present at war with the people of the Orange Free State, and the territory of the Cape Colony is used as a basis of war operations against this State, without the wish of the peaceable inhabitants of the Colony being known, I have therefore commanded my officers to cross over into the territory of the Cape Colony with no other object than for the defence of my land and my people and for the preservation of our independence".
Australia Hill and Worcester Hill
On 9 February, De la Rey went on the offensive. He attacked on a wide front, reaching west and east of Colesberg. The West Australian Mounted Infantry (“Westralians”) and the Inniskilling Dragoons – who later occupied Philippolis – were trapped on Stubbs Hill.
The Australians were forced back to Australian Hill (20 km south-east of Colesberg) and stayed put until sunset, when they managed to fall back in small groups. In effect, the Boer advance was stopped by the determination of this handful of Westralians, supported by four guns. Several British soldiers surrendered to the Boers, and Sergeant Hensman and Private Conway were killed. General Clements was highly complimentary, referring to the courage and determination shown by a party of 20 men of the West Australians under Captain Moor: "By their determined stand against 300 or 400 men, they entirely frustrated the enemy's attempt to turn the flank of the position".
On 11 February, Commandant Celliers shelled the large British camp at Slingersfontein.
In the very early morning of 12 February, the Worcester hills (or "Keeromskop") were attacked by Transvalers and Bethlehem burghers, while the rest of the Boer force kept up a heavy fire against the hill. The hill was held by Worcestershires under Captain Hovell and Brevet-Major Stubbs. As a result of the heavy fighting, the British had to retreat from the crests of the hills. But as day broke, the Worcesters' fine musketry drove the Boers off the hill, and the Boers withdrew in the late afternoon. Major Stubbs and Lieut-Col Coningham were killed. At the same time, other Boer units were threatening Slingersfontein Camp, Jasfontein and New Zealand Hill.
"The lion of the Western Transvaal"
Issue-Based Science | Developed by SEPUP | Lawrence Hall of Science
If you're familiar with current NGSS work, you know the use of phenomena is a big deal. The issues in SEPUP middle and high school programs provide context for relevant and connected phenomena within each unit. An issue is not just an event or process that links scientific content – it is a big idea that has meaning in students' lives and is anchored in real-world events. These issues answer the "why" for students: "Why does this matter?"
Issue-oriented science forms the foundation of SEPUP’s curriculum materials and it is the only secondary science program to do so.
What will students be doing?
Students using Lab-Aids programs quickly come to expect that they will be purposefully active during science. Science and Engineering Practices are a regular part of their week and are the main vehicle for content as students collect data, engage in relevant dialog, and keep notes both on what they've learned and their understanding as it develops.
Three Dimensional Learning
Connecting the three dimensions
Correlation documents, like those below, have value during a curriculum review, but no matter how thorough they are, it is difficult to visualize the connectedness of the three dimensions in such a format. Enter the Learning Pathway – a tool used by SEPUP during development of its middle school program, Issues and Science: Designed for the NGSS, to make sure the curriculum truly weaves the dimensions together.
Equitable access to learn
Both middle and high school units are student-driven, consistently fostering student questions and making sure that students feel their questions are leading the learning experience. All units have embedded opportunities for students to demonstrate their understandings and abilities in a variety of ways, including some that don't rely on English speaking or writing skills.
Engaging in Argument from Evidence
Middle School Courses | Massachusetts
Issues and Science has earned the second-highest overall score among middle school science curricula reviewed by EdReports. EdReports found Issues and Science to fully meet expectations for three-dimensional learning and assessment, to present phenomena and problems as directly as possible, and to use the assessment system to show evidence of increasing student sophistication in the content from grade six to grade eight.
Look through a typical unit
Take ten minutes to watch an overview of one unit, Energy, from Issues and Science, developed by SEPUP. The first five minutes cover the unit's anchoring phenomenon, the unit issue, and a brief visit to each lesson, while the second half takes a deep dive into one activity. This is a great way to quickly see how SEPUP's NGSS units are structured.
Recommended Middle School Programs for Massachusetts
High School Courses | Massachusetts
SEPUP Science and Global Issues: Biology
SEPUP Science and Sustainability
Living on Earth
Feeding the World
Using Earth's Resources
Moving the World
Material World: A Global Family Portrait by Peter Menzel
EDC Earth Science
Hydrosphere: Water in Earth's Systems
Atmosphere and Climate
Earth's Place in the Universe
The Rock Cycle
PROFESSIONAL DEVELOPMENT AND TEACHER SUPPORTS
Professional development is a critical component of any new instructional material implementation, perhaps now more than ever. Lab-Aids works to build internal leadership and long-term sustainability of the program. This professional development additionally supports a deepened understanding of the new standards and how they strive to serve all students.
Science educators have come to trust Lab-Aids as a valuable resource for engaging and worthwhile professional development – whether it's an on-site implementation training, the Summer Academy to develop internal leadership, or a day focused on new standards or best-practice strategies.
Online Portal Trial
All programs allow teachers online access to the Teacher Edition, student books, relevant student sheets and visual aids, editable PowerPoints, assessment tools, and additional web content. This can be customized for each district's needs.
We welcome the opportunity to address any questions, should they arise.
A research team at the Icahn School of Medicine at Mount Sinai has identified certain sub-populations of brain cells in the prefrontal cortex that are needed for normal sociability in adults and are also profoundly vulnerable to social isolation in juvenile mice. The prefrontal cortex is a key brain region that regulates social behavior. The study, conducted in mice, shows that the effects are long-lasting, and it also points the way to potential treatments.
Loneliness and isolation are both recognized as serious threats to mental health. Young people are feeling an increasing sense of isolation, even as the world becomes ever more connected through digital platforms. The COVID-19 pandemic, which has forced many countries around the world to implement school closures and social distancing, increases the need for a better understanding of the mental health consequences of loneliness and social isolation.
Research has shown that being socially isolated during childhood is damaging to adult brain function and behaviors across mammalian species. However, the core neural circuit mechanisms are poorly understood.
The research team's discovery sheds light on a previously unrecognized role of these sub-populations of brain cells in the prefrontal cortex. The cells are known as medial prefrontal cortex neurons projecting to the paraventricular thalamus, the area of the brain that relays signals to specific components of the brain's reward circuitry. If the team's findings can be reproduced in humans, they could lead to treatments for psychiatric disorders associated with isolation.
The team also demonstrated that the vulnerable circuit they identified is a promising target for treatments of social behavior deficits. By stimulating the specific prefrontal circuit projecting to the thalamic area in adulthood, they were able to rescue sociability deficits caused by social isolation during childhood.
The team discovered that in male mice, two weeks of social isolation immediately after weaning leads to a failure to activate medial prefrontal cortex neurons projecting to the paraventricular thalamus during periods of social exposure in adulthood. They found that childhood isolation led not only to reduced excitability of the prefrontal neurons projecting to the paraventricular thalamus, but also to increased inhibitory input from other related neurons. This suggests that a circuit mechanism underlies the sociability deficits caused by childhood isolation.
To determine whether acutely restoring the activity of the prefrontal projections to the paraventricular thalamus is sufficient to ameliorate sociability deficits in adult mice that had undergone juvenile social isolation, the researchers used a technique known as optogenetics, which enabled them to selectively stimulate these projections in freely moving animals with pulses of light.
In addition, they used chemogenetics in the study. Chemogenetics allows non-invasive chemical control over populations of cells.
By employing both techniques, the team was able to substantially increase social interaction in the mice once light pulses or the activating drugs were given to them. The team confirmed the presence of social behavior deficits just before stimulation, and when they checked behavior during stimulation, they found that the deficits were reversed.
Since deficits in social behavior are a common feature of many neurodevelopmental and psychiatric disorders, such as schizophrenia and autism, identifying these specific prefrontal neurons points toward therapeutic targets for improving social behavior deficits shared across a spectrum of psychiatric disorders. The circuits identified in this study could potentially be modulated using techniques such as transcranial magnetic stimulation and/or transcranial direct current stimulation.
My children love to match the pebble shapes and then sound the letters out. This could easily be chalked onto a desk or floor. Extend the activity by thinking of something beginning with that sound, or perhaps by making CVC words with the pebbles. Enjoy!
We're Going on a Phonics Hunt
A lovely activity which is great fun and which children really enjoy. It can help to build vocabulary, sparks lots of ideas when solving the clues, and is great for practising those initial sounds. Indoors or outdoors, make it fun!
Phonic Sorting Plates
Fun ways to practise initial sounds – object sorting, or you can make it a treasure hunt. You could follow it up with tracing the letters, chalking the letters outside or looking for them in a story. Whatever you do, make it fun!
Early phonics is all about recognising letter shapes and then remembering their sounds. Here's a fun way for children to seek out letters, colour-mark them and practise the sounds they make. Challenge them if you are able. This can easily be adapted for more challenging sounds and special friends: th, sh, ch, ng, nk etc.
Read it, make it, write it!
An idea I've used many times. Simply write out your words, use cut up post-its or use magnetic letters for the child to make the words and then bits of paper for the child to practise writing the word. Gauge the words to where your child is with their learning, but add in a challenge. Use the word challenge as children love to know you're making it more tricky for them.
Name Recognition & Writing
Many little ones will be starting school in the next couple of weeks and being able to recognise their name is really helpful. Don't worry about the writing unless your little one wants to have a go, focus on the recognising.
Although this is labelled a writing post, it could easily be used to increase or improve oracy skills. Speech, language and vocabulary are crucial for younger children to develop and as they grow they will form a solid foundation for writing. Let your little one choose objects for you and vice versa. What incredible stories could you/they create together or independently? For those able to write, encourage creativity, fun and allow imaginations to run wild.
Contractions can be quite tricky for children. Here's a way to practise them – an activity you can keep coming back to.
How about giving this a go this week? It can be as easy or as challenging as you decide. Whatever the topic, think about vocabulary together, then let them have a go. For a younger child, perhaps create it together, encouraging and using their ideas. If you ask the children to use their names, ask them to think of a positive about themselves for each letter. You could simplify the whole activity by just using one word per letter, e.g. S – sunshine, P – plant, R – rainbows, I – insects, N – nests, G – growing.
A nice activity to develop language, fine motor and mathematical skills. Ask your little one how they think the buttons should be sorted and then start sorting. Encourage new words and the many different ways to sort. Challenge by adding some counting: which group has the most/fewest buttons, how many more are in one group than another, etc.
A nice activity for practising letter sounds, reading and thinking about words. You could make it more challenging by using 'special friends' such as sh, ch, th, ng, nk or remove more letters so that they have to form more or perhaps all of the word through sounding out. Make it a game, make it fun, and give lots of praise!
World Book Day
World Book Day 2021 is on the 4th March; just over a week away. Whether you're at home or in school why not celebrate the joy of books and enjoy the day. Here are a few ideas.
Tricky Word Hearts
An idea that was given to me by a friend; tried and tested, and it was great. The heart hunt was the first bit of fun, followed by the excitement of discovering who had a 'secret' word. As they scribbled their colour over the top, they enthusiastically tried to read the word. Then they were more than happy to try a sentence, and then wanted to hunt some more. Lots of love for tricky words!
Matching Lowercase & Uppercase Letters
A fun way to teach capital and lowercase letter matching. Make it fun and swap roles, with you doing the matching of letters. Make a few 'accidental' mistakes so that your little one gets the chance to correct you. Add some challenge with a timer: how quickly can they match the letter pairs? Extend the activity by writing out the letter pairs. Enjoy!
Phonics and Blending Words
Nice way to practise the blending of words and to recognise letter sounds. You could use pictures from old magazines. This activity could be extended to writing a label or drawing the objects.
Rhyming Word Post Boxes
Rhyming words can be great fun, but grasping the concept can be tricky for little ones. Try to explain what a rhyming word is, relating it to a book if possible. I love Julia Donaldson, so I often use her books to help children understand how rhyming words work. Ask your little one to think of rhyming words for themselves, write them down and then show them the word endings – what do they notice? I've used 3 old shoe boxes, which I've made into post boxes, but you could easily just list the words. Enjoy!
Tricky Word Twister
The weekend is upon us, so why not indulge in a little family fun? Indoors or outdoors, adapt this entertaining game and practise some 'tricky words'. Lots of fun for everyone!
Make a Time Capsule
As we're in the strangest of times, why not make a capsule with your little one, recording the time. Children could add a newspaper, family picture, a letter to their older self, a handprint and a drawing. Just a few ideas, I'm sure you'll think of lots of others.
Show & Tell
Such a wonderful activity for all ages. It builds confidence, pride and self esteem in a child. It also helps to build relationships. Be patient with this activity, give your little one time to think about their toy/object. Allow them time to speak and then ask open questions so they can elaborate more. Swap over and allow the child to listen and then question. Enjoy!
Great activity for maintaining focus while practicing phonics. You could create alternative xylophones by using saucepans, tins, bottles etc. Be creative!
A fun way for children to blend early words and to practice letter sounds. Make it more challenging by adding in 'special friend' sounds sh, ch, th, ng, nk; ship, chin, thin, ring, pink. Make it fun!
An easy way to keep the phonics going at home. Ask your little one to make the words and then swap over and let them be the teacher. As the 'learner' make a few mistakes so they can correct you. Builds confidence and allows the child to know its OK to make mistakes, it's the trying that counts.
Against the Clock
Make it interesting, make it fun! This activity incorporates maths alongside reading. Great for all ages and abilities and is great to play with siblings as well as mum and dad. Enjoy!
Matching Objects to Initial Sounds
Taking the learning outdoors makes it discreet – children often don't know they are even doing it because the learning environment has changed, the four walls have gone. We're looking at early phonics today: letters which are often written incorrectly and sounds which are often mixed up. Practise doesn't always make perfect, but it always makes better!
This activity allows lots of opportunities to work with letters and words: letter recognition, blending and reading words, moving on to matching words in sentences, reading a sentence and then constructing their own sentences. Indoors or outdoors, make the learning fun, make it different.
Why is story time important for children? Story time helps children develop a lifetime love of books and reading and builds early language and literacy skills. Being part of story time also extends reading experiences by incorporating music and movement. Dances and movements are often encouraged during the story, which builds memory through repetition. Bringing your child to story time also introduces them to being part of a group and helps with school readiness. Story time develops phonological awareness and listening skills.
Story time lasts anywhere from 20-30 minutes. Preschool story time is designed for ages birth-5, and parents remain with the children. Opportunities for parents to participate and interact with the children and the story are encouraged. Parents are the child's first and best teacher! Throughout the toddler and preschool years, your child is learning critical language and enunciation skills. By listening to you read One Fish Two Fish Red Fish Blue Fish, your child is reinforcing the basic sounds that form language. "Pretend reading"—when a toddler pages through a book with squeals and jabbers of delight—is a very important pre-literacy activity. As a preschooler, your child will likely begin sounding out words on their own. For more information contact the Children's Librarian, Ashley Bishop, by email at [email protected]
Many Washington DC homes have hot water heating systems, also called hydronic systems. They make use of water’s excellent efficiency for transferring heat. Hot water circulates through the house in a network of pipes that connect to radiators or baseboard convectors that transfer the heat to the air. Return pipes cycle the water back to the boiler to be heated again.
The heart of the system is the hot water boiler. As you might expect, boilers get their name because they boil water to produce heat. That doesn't mean there's a cauldron of water bubbling away inside the boiler's walls: water inside the boiler is contained entirely within coils of pipe, and burners beneath the coils heat the water as it circulates through them. The burners can be gas or oil fired, or electric.
When a pot of water boils on a stove, it sends a lot of heat and steam into the air. Put a lid on the pot and the pressure from the boiling water lifts the lid to allow the steam to escape. Now imagine water being heated inside the coils above the burner. As the pressure builds it has nowhere to go so it drives the water out of the coils and into the network of pipes connecting to the radiators. As it circulates, the hot water pushes the cooler water through the pipes and back down to the boiler. The movement of the water through the system may be assisted by a motor-driven circulating pump connected to the return pipe where it enters the boiler. The pump creates negative pressure that helps cycle the water away from the boiler, through the pipes and radiators, and back to the boiler.
The basic operation of hydronic systems may sound simple, but safely and efficiently controlling it requires a series of sophisticated components. As the water is piped away from the burners and out of the boiler it flows through a valve connected to an expansion tank, which allows the water to expand as it heats. The expansion tank is a large, cylindrical object that hangs off the pipe exiting the boiler.
The large pipe heading away from the expansion tank is divided into a series of smaller pipes, each of which is connected to a zone valve, a small metal box with electrical wires attached to it. The zone valves are wired to the thermostats in the house. When the temperature at a thermostat drops below the set point, the thermostat sends a signal to open its zone valve. Hot water from the boiler then flows through the valve and into the series of pipes and radiators that serve that zone. There may be one, two, or several zone valves, depending on the number of zones in the house.
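The sequence just described – a thermostat calls for heat, its zone valve opens, and the boiler fires – amounts to simple on/off control with a deadband to prevent rapid cycling. The sketch below illustrates that logic only; it is not any manufacturer's control firmware, and the zone names, set points, and deadband are made-up values.

```python
# Minimal sketch of hydronic zone-control logic. Zone names, set points,
# and the deadband are hypothetical values for illustration only.

DEADBAND = 1.0  # degrees F of hysteresis, to avoid rapid valve cycling

class Zone:
    def __init__(self, name, set_point):
        self.name = name
        self.set_point = set_point
        self.valve_open = False

    def update(self, room_temp):
        # Call for heat below (set point - deadband); close above set point.
        if room_temp < self.set_point - DEADBAND:
            self.valve_open = True
        elif room_temp > self.set_point:
            self.valve_open = False
        return self.valve_open

def burner_should_fire(zones):
    # Simplification: the aquastat fires the burner whenever at least one
    # zone calls for heat (a real aquastat also regulates water temperature).
    return any(z.valve_open for z in zones)

zones = [Zone("first floor", 68.0), Zone("second floor", 65.0)]
readings = {"first floor": 66.2, "second floor": 66.5}

for z in zones:
    z.update(readings[z.name])

print("Burner on:", burner_should_fire(zones))  # True: first floor calls for heat
```

In this example only the first-floor zone opens its valve (66.2 degrees is below its 67-degree call-for-heat threshold), so only that loop receives hot water while the second-floor valve stays closed.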
The boiler is also connected to the house water supply so it can be refilled if it loses water. The water supply is usually a small diameter copper pipe exiting the boiler and connected to a shut off valve. The shut off valve also has a pipe leading away from the boiler and connecting to the house’s cold water supply. There is also a pressure relief valve attached to another small diameter copper pipe running out of the boiler. The valve relieves excessive water pressure that builds up inside the boiler.
Oil heat boilers have an electric burner motor that pumps fuel oil out of the tank and into the boiler. The burner motor is attached to the boiler and should have a red reset button that pops up when the boiler shuts down from a malfunction. Gas powered boilers have a smaller gas valve that regulates the flow of gas into the boiler.
Exhaust gases from the burned fuel are vented through a large diameter stack rising up from the center of the boiler and into the chimney. The exhaust stack may be sealed or it may run into an even larger diameter vent with a cone-shaped skirt.
The aquastat is the electrical switching device that ignites the burners when a zone control sends a signal to the boiler calling for heat. The aquastat may be housed in a small metal box attached to the boiler, or it may be inside the boiler. In either case, it will have thick electric cables leading into it. The last critical component on the boiler is the pressure/temperature gauge. If a problem arises with the heating system, it allows the homeowner or the Washington DC boiler technician to determine if the boiler is overheating, losing pressure, or not functioning.
To schedule the annual maintenance for your boiler today, give Polar Bear Air Conditioning a call! | <urn:uuid:a87ac660-56d7-4a97-bdce-abe995fd4a5f> | CC-MAIN-2023-06 | https://www.polarbearairconditioning.com/blog/tag/boiler-maintenance/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00800.warc.gz | en | 0.936551 | 911 | 3.59375 | 4 |
Education & Teaching degrees teach students the skills needed to instill the next generation of adults with the knowledge they require to succeed. Education degrees are tailored toward the level at which the student wishes to teach (elementary education, K-12, etc.). They delve into topics like psychology, management, and general education to give students a comprehensive education, thus allowing them to teach in schools.
* Teaching English as a Second or Foreign Language
* Teacher Education and Professional Development
* Student Counseling and Personnel Services
* Special Education and Teaching
* Education, General
* Variety of knowledge including history, English, and mathematics through liberal arts classes
* Methods in curriculum development and techniques for evaluating the efficacy of the curriculum
* Styles of observation and analysis of different teaching methods
* Extensive knowledge on the subject for the curriculum of focus
* Exposure to courses in child psychology, especially dealing with their stages of growth and development
* Effective methods for classroom management and discipline
* Ability to master techniques of instructional methods to successfully teach children
Those earning an associate degree can find employment as preschool or kindergarten teachers. Graduates with a bachelor's degree in education can work in elementary, middle, and high schools. And those earning a master's or doctoral degree have a wider array of career opportunities: they can become administrators within schools or for whole districts, as well as professors at colleges or universities.
Primary and Secondary educators typically work in public or private schools. Post-secondary educators, on the other hand, can work in local, state, and private colleges and universities. These individuals also have the chance to work in research environments, collecting data for academic papers.
With education remaining a pillar of society, the Bureau of Labor Statistics projects a seven-percent increase in the employment of primary educators between 2016 and 2026. Secondary and post-secondary teachers follow with 8 and 15-percent growth, respectively.
Reflecting this growth is the earning potential in the field of education. The Bureau reported that primary educators earned a median salary of $56,900 in 2017, with secondary and post-secondary educators following at $59,170 and $76,000 annually. While these figures represent national averages, it's important to note that additional compensation can be earned through continued education and credentials, as well as years of experience.
Responsible decision-making is the ability to make choices that are good for you and for others. It is also taking into account your wishes and the wishes of others. The ability to understand yourself, your actions, how your actions affect others, and what is socially acceptable all go into the responsible decision-making process. Throughout high school, your teen will become more and more independent until they are ready to leave your household. By continuing to support your teen and allowing them more responsibility and room to make their own decisions, you can put them on a path to success after high school.
The high school years are a time of great personal development as teens are further developing their identities, preparing for adulthood, and gaining more independence. Encouraging your teen’s social and emotional development is still important at this age, as these skills can be developed throughout life. While your teen is becoming more independent, it is important to remember that you are still needed. Reminding your teen that you care can go a long way in keeping them on track and planning for the future.
Your high-schooler should be able to identify legal issues related to substance use, like drunk driving.
Your high-schooler should be able to understand the impact of their choices on others. For example, they should know how picking on a classmate or friend will hurt that classmate.
Your teen should also be able to realize that what is right might not always be popular. For example, they may want to make friends with a transfer student while their peers decide to use the new kid as a target for bullying. If your child chooses to befriend the student anyway, they're showing that they are capable of making responsible decisions. Of course, your teen is still learning and growing. Be prepared for them to make great choices one day and awful ones the next as they continue to develop this skill.
Keep in mind that all adolescents have different social and emotional tendencies and behaviors and develop at different rates. The concepts highlighted in this section are based on the five sets of competencies developed by the Collaborative for Academic, Social, and Emotional Learning (CASEL). If you have concerns about your adolescent's development, please contact your healthcare provider, their teacher, or their school counselor.
Learn more about how to support your teen with our 12th-grade decision-making tips page.
Parent Toolkit resources were developed by NBC News Learn with the help of subject-matter experts, including Maurice Elias, Director, Rutgers Social-Emotional and Character Development Lab; Jennifer Miller, Author, Confident Parents, Confident Kids; and Thomas Hoerr, Emeritus Head of School, New City School.
Human impacts in biodiversity hotspots
Island species and ecosystems are highly threatened by human activities and are often confined to small patches of remaining native vegetation. Human impact is not only a present-day phenomenon: it began several centuries (and in some cases millennia) ago, when people first settled these previously uninhabited islands. People removed the native vegetation cover to start agricultural practices, hunted species to extinction, and introduced exotic species. But why are some islands more impacted than others by human activities?
To answer this question, an international research team studied 30 islands in five archipelagos in the Atlantic Ocean: the Azores, Madeira, the Canary Islands, Cape Verde, and the Gulf of Guinea Islands. The researchers performed a statistical analysis on several variables related to topography, climate, human activities and demography. 'Our results show that those islands with a relatively large extent of remaining native ecosystems generally have a more rugged topography, which suggests that biodiversity on islands with inaccessible landscapes is sheltered from human activities,' explains Sietze Norder, first author of the study and researcher at both the Institute for Biodiversity and Ecosystem Dynamics (University of Amsterdam, the Netherlands) and the Centre for Ecology, Evolution and Environmental Changes – cE3c (University of Lisbon and University of the Azores, Portugal).
Although topography seems to play a major role, the modern patterns of native vegetation might also partly reflect changes in demography and socioeconomic trends since first human settlement. Therefore the research team gathered data to reconstruct historical demographic and socioeconomic changes in these archipelagos over the last centuries. This historical information (qualitative data) was used to contextualize the statistical findings (quantitative data).
Previous studies that relied either on qualitative approaches or quantitative approaches sometimes reached contrasting conclusions about the relative importance of environmental and societal drivers of land cover change. Norder: ‘Our study shows that interdisciplinary approaches that integrate quantitative and qualitative information have great potential to enhance our understanding of human-environment interactions’.
Similar to the islands in the Eastern Atlantic, islands worldwide have been largely transformed by human activities. Human impacts are not restricted to the removal of native vegetation, but include other changes as well, such as the introduction of exotic species, extinction of unique island species, and abiotic aspects such as soil erosion. ‘Rather than solely registering these changes on individual islands, the next step is to assess for different regions across the globe how and why human impacts on biodiversity differ,’ concludes Norder.
Sietze J. Norder, Ricardo F. de Lima, Lea de Nascimento, Jun Y. Lim, José María Fernández-Palacios, Maria M. Romeiras, Rui Bento Elias, Francisco J. Cabezas, Luís Catarino, Luis M. P. Ceríaco, Alvaro Castilla-Beltrán, Rosalina Gabriel, Miguel Menezes de Sequeira, Kenneth F. Rijsdijk, Sandra Nogué, W. Daniel Kissling, E. Emiel van Loon, Marcus Hall, Margarida Matos, Paulo A. V. Borges: 'Global change in microcosms: environmental and societal predictors of land cover change on the Atlantic Ocean Islands', in: Anthropocene (May 2020). https://doi.org/10.1016/j.ancene.2020.100242
Wheat is mainly grown for use in human food production. The use of wheat in swine feeds is restricted to times when wheat is competitively priced with corn or other grains. The high price of corn has increased the discussion about the potential use of other grains, like wheat, in swine feeds. It is important to understand some of the limitations of using wheat in swine diets in order to make proper feeding decisions when it is economically advantageous to use wheat.
There are two types of wheat typically available to swine producers: hard red winter wheat and soft red winter wheat. Pennsylvania, Ohio, Illinois, and Indiana are leading producers of soft red winter wheat varieties, which are milled into cake, cracker, and biscuit flours. In the Central and Great Plains states like Kansas, Oklahoma, Texas, and Nebraska, hard red winter wheat is grown for use in breads.
A nutrient comparison of hard red winter wheat, soft red winter wheat, and corn is shown in Table 1.
Table 1. Nutrient Content of Corn, Hard Red Winter Wheat, and Soft Red Winter Wheat
| Nutrient | Corn | Hard Red Wheat | Soft Red Wheat |
|---|---|---|---|
| Crude protein, % | 8.0 | 13.1 | 10.6 |
| Crude fat, % | 3.8 | 1.9 | 1.7 |
| Av. phosphorus, % | 0.05 | 0.20 | 0.15 |
Wheat contains less energy but more protein and lysine than corn. Hard red winter wheat contains more total phosphorus than corn, and both wheat types contain more available phosphorus than corn. Hard red winter wheat contains more protein and lysine than soft red winter wheat but less energy.
Although wheat contains more protein and lysine than corn, the balance of amino acids in wheat protein is relatively poor. Swine diets formulated with wheat should therefore be balanced on a lysine basis, not a crude protein basis. Replacing corn with wheat on an equal-protein basis lowers the dietary lysine content of the feed and results in slower gains and poorer feed efficiency. Research from the University of Kentucky suggests wheat has a feeding value similar to corn when diets are formulated on an equal lysine and energy basis. The feeding value of the two wheat types appears to be similar for starter and grow-finish pigs.
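As a rough illustration of why lysine, not crude protein, should drive the substitution, the sketch below compares a corn-based and a wheat-based grain-plus-soybean-meal mix blended to the same crude protein level. The lysine and soybean meal values are typical book values assumed purely for illustration; they are not figures from this article, and actual diets should be formulated from analyzed values.

```python
# Minimal sketch of why wheat diets should be balanced on lysine, not protein.
# Lysine and soybean meal (sbm) values are assumed typical book values,
# NOT from this article -- always formulate from your own feed analysis.
CP  = {"corn": 8.0, "wheat": 13.1, "sbm": 47.5}   # % crude protein, as-fed
LYS = {"corn": 0.26, "wheat": 0.38, "sbm": 3.02}  # % lysine, as-fed

def diet(grain: str, grain_pct: float, sbm_pct: float) -> tuple[float, float]:
    """Crude protein % and lysine % contributed by the grain + sbm portion."""
    cp  = (CP[grain] * grain_pct + CP["sbm"] * sbm_pct) / 100.0
    lys = (LYS[grain] * grain_pct + LYS["sbm"] * sbm_pct) / 100.0
    return cp, lys

# A corn diet and a wheat diet mixed to roughly the same ~16% crude protein:
corn_cp, corn_lys   = diet("corn", 75.0, 21.0)
wheat_cp, wheat_lys = diet("wheat", 85.0, 11.0)
print(f"corn diet : {corn_cp:.1f}% CP, {corn_lys:.2f}% lysine")
print(f"wheat diet: {wheat_cp:.1f}% CP, {wheat_lys:.2f}% lysine")
# Nearly equal protein (16.0% vs 16.4%), but the wheat diet comes up short on
# lysine (0.66% vs 0.83%) because wheat protein is poorly balanced -- hence
# formulate to a lysine target instead of a crude protein target.
```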
Wheat available for use in animal feed is often grain that has been rejected for human food production. Low test weight, sprouted kernels, and the presence of mycotoxins are all factors that prevent the use of wheat in human foods. These same factors can reduce the nutritional value of wheat for swine or even make it unsuitable for swine diets.
Wheat stressed by weather or disease often has a low test weight, and as the bushel weight of wheat decreases, so does its energy content. If a swine diet contains low test weight wheat and the reduced energy is not taken into account, pigs compensate by consuming more feed. Growth rate is often unaffected, but feed efficiency suffers. Low test weight wheat can be used in swine diets, but the reduction in energy must be accounted for to prevent a drop in pig performance. Fat can be supplemented, or the low test weight wheat can be blended with normal test weight grain to make up for the reduced energy content. The price paid for low test weight wheat should also reflect its reduced energy content.
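One way to apply the blending advice is to solve for the fraction of normal test weight grain needed to bring the mix back to a target energy level. The sketch below does this with hypothetical metabolizable energy (ME) values chosen purely to illustrate the arithmetic; the article does not give energy figures.

```python
# Hypothetical ME values (kcal/lb, as-fed) chosen only to illustrate the
# blending arithmetic -- substitute analyzed values for your own grain.
ME_NORMAL = 1500.0  # normal test weight grain
ME_LOW    = 1380.0  # low test weight wheat
ME_TARGET = 1460.0  # energy level the diet was formulated around

def normal_grain_fraction(me_low: float, me_normal: float, me_target: float) -> float:
    """Fraction x of normal grain so that x*me_normal + (1-x)*me_low == me_target."""
    if not me_low <= me_target <= me_normal:
        raise ValueError("target energy must lie between the two grain energies")
    return (me_target - me_low) / (me_normal - me_low)

x = normal_grain_fraction(ME_LOW, ME_NORMAL, ME_TARGET)
print(f"Blend {x:.0%} normal grain with {1 - x:.0%} low test weight wheat")
# -> Blend 67% normal grain with 33% low test weight wheat
```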
High rainfall just before harvest can cause wheat to sprout in the head. Sprouted grain typically contains less energy than sound grain, so the feeding recommendations for sprouted wheat are similar to those for low test weight wheat.
Fungal diseases can also reduce the feeding value of wheat. Scab can be caused by several fungi in the genus Fusarium. Kernels infected with scab tend to be shriveled and chalky white, and some will be pinkish in color. Zearalenone and vomitoxin (DON) are the mycotoxins most often associated with scabby wheat. Zearalenone is commonly associated with reproductive problems in swine, and vomitoxin in feed typically reduces feed consumption. The level of each of these mycotoxins in the complete feed should be less than 1 ppm. Since wheat available for animal feed has typically been rejected for human food use, it is important to check its mycotoxin levels.
Wheat containing garlic bulblets cannot be used for human consumption, and garlic-contaminated wheat is subject to a rather severe price discount. The performance of grow-finish pigs does not appear to be affected by garlicky wheat containing up to 160 bulblets per pound. Wheat severely contaminated with garlic (more than 600 bulblets per pound) is unpalatable to young pigs and can impart a garlicky flavor to pork. However, even severely contaminated wheat can be diluted with other grains to overcome these problems.
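The same dilution arithmetic applies to both the 1 ppm mycotoxin limit and the garlic thresholds above. A minimal sketch follows, assuming the other ingredients in the diet contribute none of the contaminant; the contaminated-wheat levels used are hypothetical examples, not test results.

```python
# Minimal contaminant-dilution sketch, assuming the rest of the diet
# contributes none of the contaminant. The 1 ppm and 160 bulblets/lb limits
# come from the text above; the wheat contamination levels are hypothetical.
def max_inclusion(wheat_level: float, feed_limit: float) -> float:
    """Largest wheat fraction so that wheat_level * fraction <= feed_limit."""
    return min(1.0, feed_limit / wheat_level)

# Vomitoxin: wheat tests at 3.0 ppm; the complete feed must stay under 1 ppm.
print(f"max wheat (vomitoxin): {max_inclusion(3.0, 1.0):.0%} of the diet")

# Garlic: wheat carries 600 bulblets/lb; keep the mix at or below 160/lb.
print(f"max wheat (garlic):    {max_inclusion(600.0, 160.0):.0%} of the diet")
# -> 33% and 27% respectively; apply the strictest of all such limits.
```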
From a manufacturing standpoint, wheat ground too finely becomes very floury and can be somewhat unpalatable to pigs. Feeds containing finely ground wheat may also flow poorly in feeders, and the incidence of stomach ulcers may increase. Wheat should therefore be ground coarsely, although each kernel must still be broken. A hammer mill with a ¼ inch screen opening and a reduced hammer speed can produce a desirable particle size. If all else fails, the amount of wheat in the diet can be limited to reduce the handling difficulties associated with finely ground wheat.
Wheat can be successfully used in swine diets. Keep the following points in mind when considering the use of wheat in swine feeds.
1. The decision to use wheat should be based on economics.
2. Formulate diets containing wheat on a lysine basis rather than a crude protein basis.
3. The test weight of wheat should be determined, and wheat should be examined for sprouted grains and the presence of garlic bulblets.
4. Wheat should be tested for zearalenone and vomitoxin. The complete feed should contain less than 1 ppm of each of these mycotoxins.
5. Coarsely grind wheat and make sure every kernel is broken.
6. Replace only a portion of your grain with wheat if handling finely ground wheat is a potential problem.
7. If you switch from corn to wheat, gradually increase the level of wheat in the diet to help pigs adapt to wheat-containing diets.
The main reason there is so much plastic in the oceans is the way our society currently functions.

People produce more plastic than ever before. They create trash because they buy goods wrapped in plastic and throw the packaging away. Some people do not bother with recycling, so the packaging is simply discarded and never recycled.

According to the EPA, the ocean is also one of the biggest destinations for discarded plastic. Thousands of plastic bags and bottles float in the ocean because people do not dispose of their trash properly.
Sources Of Plastic And How They Get Into The Ocean
Plastic is used extensively as packaging material, disposable cutlery, cups, and even children's toys, and we generate millions of metric tons of plastic waste every single year.
Plastic is a man-made material, derived from petroleum and natural gas, that contains chemical additives. According to a recent study, the plastic in the world's oceans will outweigh the fish within just a few decades. The worst plastic pollution is driven by lack of awareness, poor education, and lack of leadership.
Of the 9.1 billion tons of plastic that have been produced, only 9 percent has been recycled. The rest ends up in landfills or the oceans, and of course the ocean litter ends up in the stomachs of marine animals and birds.
Even if the flow of litter were stopped at the shoreline, the plastic already in the ocean would remain an immense problem, since plastics are slow to break down. In fact, they never fully break down: over time they fragment into tiny particles that birds and fish often mistake for food.
How Does Plastic End Up In The Ocean And What Can Be Done About It
It is widely agreed that about 80 percent of ocean plastic comes from land-based sources, with plastic litter from shipping and fishing accounting for the remaining 20 percent. Beyond this broad split, the specific sources of ocean plastic become harder to pin down.
From Land To Sea
It may seem surprising that even litter from cities and inland towns contributes to ocean plastic. The key fact is that the sea lies downhill from virtually every water source: rubbish that enters a stream or river can readily make its way to the ocean.

Take, for example, a single plastic bottle in a city. The bottle misses the garbage can on the street, is blown into a storm drain, enters a river, and is eventually carried out to sea.
There Are Numerous Sources Of Land-Based Litter:
- Litter dropped on the ground, which can escape from garbage bins and wash into storm water drains.
- Trash that ends up on the coast, either dumped on the beach directly or left there by poor waste management practices.
- Litter blown into rivers from overflowing bins.
- Leakage from waste management systems, e.g. landfill sites, especially those near rivers or the coastline.
- Disposal of waste items, such as wet wipes and other sanitary goods, directly into drains and sewers.
The Effects Of Ocean Plastic Pollution
Ocean plastic pollution is a very serious problem because it kills large numbers of sea animals. Whales and other marine mammals, for example, die when they swallow plastic, which often lodges in the throat or gut.

Plastic also clogs the ocean and smothers coral reefs, which are found both in shallow waters and in the deep ocean. It kills fish and other aquatic life as well, when it gets caught in their mouths, blocks their gills, or leaches toxins into the water.
Trash dumped on land is another big problem, because powerful winds can pick it up and carry it into the ocean.
Finally, ocean plastic pollution may even damage human health: plastic toxins can be absorbed by fish and then by the people who eat those fish. The consequences of plastic pollution are therefore serious and widespread, even though the waste itself is easy to overlook, scattered as it is across such a vast ocean.
The Solutions To Ocean Plastic Pollution
As far as waste management is concerned, there are three main things that can be done:
1. Be deliberate in your choices and refuse single-use plastic items. The majority of ocean plastic pollution comes from land, so when you need something packaged, choose sustainable options.
2. Join beach clean-ups. Beach clean-ups are a hands-on way to take action and reduce ocean plastic pollution by collecting litter for proper disposal. Plastic pollution is a much more serious problem than many people realize: it harms the environment, wildlife, and sea animals, which cannot digest plastic.
3. Buy reusable bags for the grocery store. A great deal of plastic bag pollution originates at grocery stores, and reusable bags are far better for the environment.
Explanation: The dark-floored, 95 kilometer wide crater Plato and sunlit peaks of the lunar Alps (Montes Alpes) are highlighted in this sharp digital snapshot of the Moon's surface. While the Alps of planet Earth were uplifted over millions of years as continental plates slowly collided, the lunar Alps were likely formed by a sudden collision that created the giant impact basin known as the Mare Imbrium or Sea of Rains. The mare's generally smooth, lava-flooded floor is seen below the bordering mountain range. The prominent straight feature cutting through the mountains is the lunar Alpine Valley (Vallis Alpes). Joining the Mare Imbrium and northern Mare Frigoris (Sea of Cold) the valley extends toward the upper right, about 160 kilometers long and up to 10 kilometers wide. Of course, the large, bright alpine mountain below and right of the valley is named Mont Blanc. The tallest of the lunar Alps, it reaches over 3 kilometers above the surface. Lacking an atmosphere, not to mention snow, the lunar Alps are probably not an ideal location for a winter vacation. Still, a 150 pound skier would weigh a mere 25 pounds on the Moon.
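That last figure is just the ratio of surface gravities: the Moon's gravity is about one sixth of Earth's, so weights scale by roughly 1/6. A quick sketch of the arithmetic, using standard gravity values that are general knowledge rather than figures from the caption:

```python
# Quick check of the skier's weight on the Moon, using standard surface
# gravities (general-knowledge values, not taken from the caption above).
G_EARTH = 9.81  # m/s^2
G_MOON  = 1.62  # m/s^2

def weight_on_moon(weight_on_earth_lb: float) -> float:
    """Weight scales with surface gravity; the skier's mass is unchanged."""
    return weight_on_earth_lb * (G_MOON / G_EARTH)

print(f"{weight_on_moon(150):.0f} lb")  # -> 25 lb
```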