Dr Francis Leneghan gives a talk on Beowulf, one of the most important works in Anglo-Saxon literature.
The title of this collaborative project, 'Great Writers Inspire', naturally raises several questions, the most important of which is, 'What is a writer?' In his talk on the Old English poem Beowulf, Francis Leneghan addresses that very concern. The term 'author' did not carry the same fixed meaning in the Anglo-Saxon period that it does today. Beowulf could have existed in a multitude of versions, depending on how many Anglo-Saxon poets (scops) were around to interpret and re-tell the tale, much like the many interpretations of Shakespeare's 'Romeo and Juliet'. Leneghan also discusses the poet's fondness for poetic license and embellishment, a quality his own character, Beowulf, shares. The poet invites the audience to consider the complex role of oral poetry, and how the audience, both Anglo-Saxon and modern, should interpret this work. Is this particular poet's rendition of events true? Is Beowulf a less trustworthy individual because he embellishes the nature of the fight when reporting back to Hygelac's court? Every performance and reading reshapes the poem and how we approach it, even today. The Beowulf-poet, in a sense, is more a collective noun than an individual author. And if you want to hear what Old English sounds like, listen to 2:25-2:32, where Leneghan speaks the opening lines of this extraordinary poem, or 6:27-7:01, the section in which Hrothgar's poet praises Beowulf by comparing him to the valiant men of old.
- Download: 1-leneghan-beowulf.mp4 Video
Ukrainian-Soviet War, 1917–21
Ukrainian-Soviet War, 1917–21. A military struggle for control of Ukraine waged intermittently in 1917–21 by Ukrainian independentist forces and pro-Bolshevik elements seeking to establish Soviet rule. The struggle began shortly after the October Revolution of 1917. Notwithstanding the creation of the Ukrainian National Republic (UNR) on 20 November 1917, the Bolsheviks planned to seize power in Ukraine with the aid of Russian or Russified urban elements, Russian garrisons, and army units stationed near the front. Their armed uprising in Kyiv on 11 December 1917 was unsuccessful, however, and the Bolshevized army units were deported from Ukraine in stages. A pro-Bolshevik force under Yevheniia Bosh moving in on Kyiv was also disarmed by Ukrainian troops under Pavlo Skoropadsky near Zhmerynka and then sent off to Russia.
December 1917 to April 1918. Hostilities broke out in Ukraine after a series of diplomatic maneuvers. On 17 December 1917 the Petrograd-based Council of People's Commissars issued an ultimatum demanding that Bolshevik troops be granted the legal right to be stationed on Ukrainian soil. The ultimatum was rejected by the UNR. The Bolsheviks countered by proclaiming their own Ukrainian government (see the People's Secretariat) based in Kharkiv on 25 December, and then proceeded with a campaign to establish effective military control over Ukraine. The Ukrainian forces at that time consisted of a small volunteer detachment and several battalions of the Free Cossacks. The pro-Soviet forces in Ukraine included Russian army regulars stationed at the front, a number of garrisoned units, and Red Guard detachments composed of laborers from Kharkiv gubernia and the Donbas. Their main strength, however, lay in a large force of Red Guards from Russia, which had been stationed along the Ukrainian border. On 25 December that 30,000-strong army, led by Volodymyr Antonov-Ovsiienko, set off in four groups from Homel and Briansk toward Chernihiv–Bakhmach, Hlukhiv–Konotop, and Kharkiv–Poltava–Lozova.
The invasion by pro-Soviet forces was accompanied by uprisings initiated by local Bolshevik agitators in cities throughout Left-Bank Ukraine. The Bolshevik forces occupied Kharkiv (26 December), Lozova and Katerynoslav (now Dnipro, 9 January 1918), Oleksandrivske (now Zaporizhia, 15 January), and Poltava (20 January). The Briansk group captured Konotop (16 January) and Hlukhiv (19 January). On 27 January the Bolshevik army groups converged on Bakhmach and then set off under the command of Mikhail Muravev to take Kyiv.
The Central Rada prepared for the defense of the capital by sending advance forces of volunteers to Poltava and Bakhmach. One of those, the Student Battalion of Sich Riflemen, was annihilated by a vastly larger (4,000 troops) Bolshevik force at the Battle of Kruty, 130 km northeast of Kyiv, on 29 January. As the Soviet advance continued, an attempt was made to take Kyiv through an uprising organized by non-Ukrainian workers based at the Arsenal plant. Fighting broke out on 29 January and continued until 4 February, when the revolt was put down by a newly formed contingent of the Sich Riflemen and the Free Cossacks. Meanwhile the Bolshevik expeditionary force continued to move on the capital from Bakhmach and Lubny. On 8 February the Ukrainian government was forced to evacuate the city. Soviet troops under Mikhail Muravev's command entered Kyiv on 9 February and then carried out brutal reprisals against the Ukrainian civilian population.
After taking Kyiv the Bolsheviks launched an offensive in Right-Bank Ukraine, where they were engaged in battle mainly with Free Cossack forces. They moved into Volhynia and Podilia (led by the former Russian Seventh Army), where they took Proskuriv, Zhmerynka, Koziatyn, Berdychiv, Rivne, and Shepetivka and forced the Ukrainians back to a Zhytomyr–Korosten–Sarny defensive line.
The tide turned following Ukraine's signing of the Peace Treaty of Brest-Litovsk and the entry of German and Austrian troops into the conflict in late February as allies of the Central Rada. Under the Ukrainian command of Gen K. Prisovsky and Symon Petliura, the combined force drove the Bolshevik troops out of Right-Bank Ukraine's centers, such as Zhytomyr, Berdychiv, Koziatyn, and Bucha, before retaking Kyiv on 1 March. Through March and April the German and Austrian armies took control of Left-Bank Ukraine, and the troops of Petro Bolbochan and Volodymyr Sikevych took the Crimea and the Donets Basin. Alarmed by the changed military situation, Vladimir Lenin ordered his representative in Ukraine, Grigorii Ordzhonikidze, to Ukrainize (at least ostensibly) the predominantly Russian forces of Volodymyr Antonov-Ovsiienko and Mikhail Muravev in a bid for more popular support. The maneuver proved unsuccessful. Continuing military setbacks gave Soviet Russia little choice but to comply with the articles of the Treaty of Brest-Litovsk and to sign a preliminary peace with the Ukrainian government on 12 June 1918.
December 1918 to December 1919. The second phase of the Ukrainian-Soviet War began with the fall of the German-supported Hetman government to the forces of the Directory of the Ukrainian National Republic. The Bolsheviks took advantage of the unsettled situation by forming a Provisional Workers' and Peasants' Government of Ukraine on 20 November 1918 and starting a military advance into Ukraine in December with an army led by Volodymyr Antonov-Ovsiienko, Joseph Stalin, and Volodymyr Zatonsky. The Directory protested the aggression, with diplomatic notes sent to the Soviet government on 31 December 1918 and on 3, 4, and 9 January 1919. Not having received a reply, the Directory was compelled to declare war against Russia on 16 January. The Ukrainian forces at that time consisted of two regular troop formations, the Zaporozhian Corps and the Sich Riflemen, as well as partisan detachments (see Partisan movement in Ukraine, 1918–22) led by otamans, such as Nestor Makhno, Nykyfor Hryhoriv, and Danylo Zeleny. The otamans, however, were politically unreliable and occasionally sided with the Bolsheviks.
In December 1918 and January 1919 the Bolshevik expeditionary force, aided by some of the otamans, captured Left-Bank Ukraine, and on 5 February it closed in on Kyiv, where it forced the Ukrainian government once more to flee from the capital. The Soviet attack proceeded on several fronts. A northern group moved along a Mozyr–Korosten and Lunynets–Sarny–Rivne line in an attempt to cut off the Army of the Ukrainian National Republic from the Ukrainian Galician Army (UHA) to the west. A southern group proceeded from the Kremenchuk-Katerynoslav region through Znamianka toward the Birzula–Koziatyn–Zhmerynka line in an effort to cut off the UNR troops from possible reinforcement by Entente forces. At a critical moment Otaman Nykyfor Hryhoriv threw his support behind them. The third Bolshevik army group proceeded from Kyiv to the Berdychiv–Koziatyn–Zhmerynka line in an effort to keep the northern and southern wings of the UNR Army divided.
The UNR army launched a counteroffensive in March, in which it defeated the Soviet forces along the Berdychiv–Koziatyn line and advanced almost to Kyiv, thereby effectively cutting off any possibility that the Soviets might march through Romania to Hungary in order to aid the Béla Kun regime. The Bolshevik forces retaliated in April (after the withdrawal of Entente troops) by marching on Zhmerynka and dividing the UNR army's southern flank from the force's main body. The southern group subsequently lost the support of Otaman Omelian Volokh and was forced to retreat into Romania, where it was disarmed (eventually returning through Galicia to Volhynia). At the same time, the UNR army was pushed back to a small parcel of territory approx 40–50 km wide in the Dubno-Brody region of southwestern Volhynia. Its position was weakened further with a coup attempt by one of its commanding officers, Volodymyr Oskilko.
The UNR Army's fortunes improved as Ukrainian peasants, disgruntled by the Bolsheviks' anti-Ukrainian policy and high requisition quotas, started to replenish insurgent ranks. But before the army itself could regroup, it faced an assault by Polish forces in the Lutsk region and advances from the Red Army in the north and southeast that took Rivne, Shepetivka, Proskuriv, and even Kamianets-Podilskyi. The UNR then reached a peace agreement with the Poles and reorganized its army into four groups—the Sich Riflemen and the Zaporozhian Corps, Volhynian Corps, and Southwestern Corps—with a total of approx 15,000 soldiers. In early June the UNR forces launched an offensive which retook Podilia and Kamianets-Podilskyi. The Red Army retaliated at the end of the month with a campaign that regained Proskuriv (5 July) and approached Kamianets-Podilskyi, which had been made the UNR's provisional capital. The UNR was then strengthened by the arrival of Yurii Tiutiunnyk with troops formerly under Nykyfor Hryhoriv, who had worked his way through the Reds' southern flank. The UNR Army launched a campaign which pushed the Bolshevik forces back to the Horodok [see Horodok (Khmelnytskyi oblast)]–Yarmolyntsi–Sharhorod–Dunaivtsi–Nova Ushytsia–Vapniarka line before being joined by UHA troops who had crossed the Zbruch River on 16–17 July; their arrival brought together a combined Ukrainian force of nearly 85,000 regulars and 15,000 partisans.
The subsequent campaign to take Kyiv proceeded with victories in Vinnytsia (12 August), Khmilnyk, Yaniv, Kalynivka, and Starokostiantyniv (14 August), Berdychiv (19 August), and Zhytomyr (21 August). On 31 August the Ukrainian troops entered Kyiv, only to discover that soldiers from Anton Denikin's Volunteer Army had arrived at the same time. Hostilities between the two forces were narrowly averted when the combined Ukrainian forces pulled out of the city. The Bolsheviks took advantage of the Ukrainians' standoff with Denikin's troops to move some of their forces from the Katerynoslav region to Zhytomyr. Meanwhile the leadership of the UNR and UHA split over how to deal with Denikin, a situation exacerbated by an outbreak of typhus among the troops. The UHA leadership finally made a separate peace with the Volunteer Army on 6 November. The military situation had worsened as Bolshevik forces, which had made substantial gains in Right-Bank Ukraine's areas formerly controlled by Denikin's troops, and the Poles moved into the western reaches of Ukraine. By the end of November the government and the UNR Army found themselves hemmed in by Soviet, Polish, and Volunteer Army troops. At a conference on 4 December the army decided to suspend regular military operations in favor of underground partisan warfare.
December 1919 to November 1920. The UNR Army under the command of Mykhailo Omelianovych-Pavlenko carried out an underground operation known as the First Winter Campaign in the Yelysavethrad (now Kropyvnytskyi) region against the Soviet 14th Army from 6 December 1919 to 6 May 1920. In addition to that action the UNR government concluded the Treaty of Warsaw on 22 April, and then launched a joint offensive with Polish troops against the Bolsheviks. By 7 May a Ukrainian division under the command of Marko Bezruchko had entered Kyiv, but the success was short-lived. A Red Army counteroffensive led by Semen Budenny pushed the combined forces back across the Zbruch River and past Zamość toward Warsaw. After the decisive battle of 15 September the Polish-Ukrainian forces threw the Bolshevik contingent back as far as the Sharhorod–Bar–Lityn line in Podilia. The Poles concluded a separate peace with the Soviets on 18 October. The 23,000-strong UNR force continued fighting until 21 November, when its position became untenable. The UNR Army crossed the Zbruch River into Polish-controlled Galicia, where it was disarmed and its soldiers placed in internment camps.
November 1921. The final military action of the UNR against the Soviets was a raid in November 1921 known as the Second Winter Campaign. The intent of the action was to provide a catalyst for the formation of partisan groups which would incite a general uprising against the Bolsheviks in Ukraine. The commander of the action was Yurii Tiutiunnyk. Two expeditionary forces were established, Podilia (400 men) and Volhynia (800 men). The Podilia group advanced as far as the village of Vakhnivka, in the Kyiv region, before returning to Polish territory through Volhynia on 29 November. The Volhynia group took Korosten and advanced as far as the village of Leonivka in the Kyiv region. On its return march it was intercepted by a Bolshevik cavalry force under the command of Hryhorii Kotovsky, however, and routed in battle near Mali Mynky on 17 November. Of the 443 soldiers captured by the Soviets 359 were shot on 23 November near the town of Bazar, in the Zhytomyr region, and 84 were passed on to Soviet security forces.
The Second Winter Campaign brought the Ukrainian-Soviet War to a definite conclusion. The partisan movement in Ukraine, 1918–22 remained active until mid-1922, but conventional military action by regular troops had ceased.
Kapustians’kyi, M. Pokhid ukraïns’kykh armii na Kyïv–Odesu v 1919 r.: Korotkyi voienno-istorychnyi ohliad, 3 pts (Lviv 1921–2; 2nd edn, Munich 1946)
Tiutiunnyk, Iu. Zymovyi pokhid 1919–1922 rr. (Kolomyia 1923)
Dotsenko, O. Litopys ukraïns’koï revoliutsiï, vol 2, bks 4–5 (Lviv 1923–4; repr, Philadelphia 1988)
Antonov-Ovseenko, V. Zapiski o grazhdanskoi voine, 4 vols (Moscow–Leningrad 1924–33)
Bezruchko, M. Sichovi stril’tsi v borot’bi za derzhavnist’ (Kalisz 1932)
Shandruk, P. (ed). Ukraïns’ko-moskovs’ka viina 1920 r. v dokumentakh (Vienna 1933)
Stefaniv, Z. Ukraïns’ki zbroini syly 1917–1921 rr., 3 vols (Kolomyia 1934–5)
Dotsenko, O. Zymovyi pokhid (6.XII.1919–6.V.1920) (Warsaw 1932; Kyiv 2001)
Omelianovych-Pavlenko, M. Zymovyi pokhid (Prague 1940)
Skaba, A.; et al (eds). Ukraïns’ka RSR v period hromadians’koï viiny 1917–1920 rr., 3 vols (Kyiv, 1967–70)
Udovychenko, O. Ukraïna u viini za derzhavnist’ (Winnipeg 1954)
Mirchuk, P. Ukraïns’ko-moskovs’ka viina 1917–1919 (Toronto 1957)
Shankovs’kyi, L. Ukraïns’ka Armiia v borot’bi za derzhavnist’, 1917–1920 (Munich 1958)
Udovychenko, O. Tretia zalizna diviziia, 2 vols (New York 1971, 1982)
La guerre polono-soviétique de 1919–1920 (Paris 1975)
Palij, M. The Ukrainian-Polish Defensive Alliance, 1919–1921: An Aspect of the Ukrainian Revolution (Edmonton–Toronto 1995)
Procyk, A. Russian Nationalism and Ukraine: The Nationality Policy of the Volunteer Army During the Civil War (Edmonton–Toronto 1995)
Tynchenko, Ia. Ukraïns’ke ofitserstvo: Shliakhy skorboty ta zabuttia. Chastyna 1, biohrafichno-dovidkova (Kyiv 1995)
Hrynevych, L. ‘Viis’kove budivnytstvo v Radians’kii Ukraïni (1917–pochatok 30-kh rokiv XX st.,’ in Istoriia ukraïns’koho viis’ka, 1917–1995, ed. Ia. Dashkevych (Lviv 1996)
Tynchenko, Ia. Persha ukraïns’ko-bil’shovyts’ka viina (hruden’ 1917–berezen’ 1918) (Lviv 1996)
Holubko, V. Armiia Ukraïns’koi Narodnoi Respubliky 1917–1918: Utvorennia ta borot’ba za derzhavu (Lviv 1997)
Kuchabsky, V. Western Ukraine in Conflict with Poland and Bolshevism, 1918–1923 (Edmonton–Toronto 2009)
[This article originally appeared in the Encyclopedia of Ukraine, vol. 5 (1993).] | <urn:uuid:01f6a721-c79a-4aa3-9e11-3fbf326824af> | CC-MAIN-2023-06 | http://www.encyclopediaofukraine.com/display.asp?page=2&ffpath=pages%5CU%5CK%5CUkrainian6SovietWar1917hD721.htm | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.941525 | 4,002 | 3.84375 | 4 |
The volcano of Yellowstone National Park has made headlines with research that presents a possibility that this supervolcano may erupt much sooner than expected.
According to one Fox News report: Arizona State University researchers have analyzed minerals around the supervolcano at Yellowstone National Park and have come to a startling conclusion. It could blow much faster than previously expected, potentially wiping out life as we know it.
According to National Geographic, the researchers, Hannah Shamloo and Christy Till, analyzed minerals in fossilized ash from the most recent eruption. What they discovered surprised them – the changes in temperature and composition only took a few decades, much faster than the centuries previously thought.
“We expected that there might be processes happening over thousands of years preceding the eruption,” Till said in an interview with the New York Times.
The supervolcano last erupted about 630,000 years ago, according to National Geographic. Prior to that, it was 1.3 million years ago, per a report from ZME Science.
If another eruption were to take place, the researchers found that the supervolcano would spare almost nothing in its wrath. It would eject 2,500 times more material than Mount St. Helens did in 1980 and could cover most of the contiguous U.S. in ash, possibly plunging the planet into a volcanic winter.
Despite the concerns, researchers state that more research needs to be done before a definite conclusion can be drawn.
Researchers state that in June 2017 the supervolcano was hit with 464 earthquakes. They go on to state, however, that these earthquake swarms aren’t anything to be alarmed about.
NASA is currently working on a proposed solution to save mankind and life as we know it if an eruption were to occur. Click here to read more on NASA’s proposed plan.
Must Know Safety Tips In The Case Of A Volcanic Eruption
As with any other natural disaster, it is important to know what to do before, during, and after a volcanic eruption. To bring you the most detailed information possible on safety precautions, I took to the internet to find the best, and I came across this gem!
Before an Eruption:
- Be prepared to take shelter or evacuate and review your plans with family members.
- Pick a safe place to meet.
- Put together an emergency supply kit.
If you evacuate:
- Tune in to the radio or television for volcano updates. If told to evacuate, do so. It can be dangerous to wait out an eruption.
- Listen for disaster sirens and warning signals.
- Take only essential items. Be sure to pack at least a one-week supply of prescription medications.
- Fill your vehicle’s gas tank.
- If no vehicle is available, make arrangements with friends or family for transportation, or follow authorities’ instructions on where to obtain transportation.
- Turn off the gas, electricity and water.
- Disconnect appliances to reduce likelihood of electrical shock when power is restored.
- Follow designated evacuation routes and expect heavy traffic and delays.
If you take shelter:
- Keep listening to your radio or watch television until you are told all is safe or you are told to evacuate. Local authorities may evacuate specific areas at greatest risk in your community.
- Close and lock all windows and outside doors.
- Place damp towels at door thresholds and other draft sources. Tape drafty windows.
- Turn off all heating and air conditioning systems and fans.
- Close fireplace and furnace dampers.
- Organize your emergency supplies and make sure all household members know where the supplies are located.
- Fill your clean water containers.
- Fill sinks and bathtubs with water as an extra supply for washing.
- Make sure the radio is working.
- Go to an interior room without windows that is above ground level.
- Ensure pets and livestock have clean food, water and shelter.
- Store all vehicles and machinery in a garage or other shelter.
- Call your emergency contact – a friend or family member who does not live near the volcano – and have the phone available if you need to report a life-threatening condition. Remember that communication services may be overwhelmed or damaged during an emergency.
During an Eruption:
- Don’t panic – stay calm.
- Follow evacuation orders, if issued by authorities.
- Stay indoors.
- Avoid areas downwind and river valleys downstream of the volcano.
- If outside, seek shelter (e.g. car or building).
- Keep doors, windows, dampers and ventilation closed until the ash settles.
- Use a respiratory mask, handkerchief or cloth over your nose and mouth.
- Do not tie up phone lines with non-emergency calls.
- Listen to your local radio for information on the eruption and cleanup plans.
- If there is ash in your water, let it settle and then use the clear water. Water contaminated by ash will usually make drinking water unpalatable before it presents a health risk.
- You may eat vegetables from the garden, but wash them first.
- Remember to help your neighbors who may require special assistance – infants, elderly people and people with access and functional needs.
After an Eruption:
- Go to a designated public shelter if you feel it is unsafe to remain in your home.
- Stay indoors until the ash has settled, unless there is a danger of the roof collapsing.
- Keep all heating and air conditioning units and fans turned off, and windows, doors, and fireplace and woodstove dampers closed.
- Clear heavy ash from flat or low-pitched roofs and rain gutters.
- Let family members know you are safe.
- Listen to the radio, watch TV or check the Internet often for official updates and information about air quality, drinking water and road conditions.
- Avoid running vehicle engines. Volcanic ash can clog engines, damage moving parts and stall vehicles.
- Avoid driving in heavy ash fall unless absolutely required. If you need to drive, keep the speed down to 50 km per hour or slower.
- Protect yourself from ash by wearing long-sleeved shirts and pants, and using goggles and a respiratory mask.
What are your thoughts on the new research found on this supervolcano? Share your thoughts with us in the comment section below.
Check out our previous article on tips for surviving natural disasters: Surviving Natural Disasters: Safe Points in the Household | <urn:uuid:d15de5ec-659f-455c-ae3c-874f4266f107> | CC-MAIN-2023-06 | https://blog.gunassociation.org/yellowstone-supervolcano/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.926762 | 1,346 | 3.9375 | 4 |
This course will focus on introductory chemical principles, including periodicity, chemical bonding, molecular structure, equilibrium and the relationship between structure and properties. Students will explore stoichiometric relationships in solution and gas systems which are the basis for quantifying the results of chemical reactions. Understanding chemical reactivity leads directly into discussion of equilibrium and thermodynamics, two of the most important ideas in chemistry. Equilibrium, especially acid/base applications, explores the extent of reactions while thermodynamics helps us understand if a reaction will happen. The aim of the laboratory will be to develop your experimental skills, especially your ability to perform meaningful experiments, analyze data, and interpret observations. This is a required course for Chemistry majors, but also satisfies UWE requirements for non-majors.
1. Atomic structure, Periodic table, VSEPR, Molecular Orbital theory, and biochemistry:
- Introduction: why chemistry in engineering? Concept of atom, molecules, Rutherford’s atomic model, Bohr’s model of an atom, wave model, classical and quantum mechanics, wave particle duality of electrons, Heisenberg’s uncertainty principle, Quantum-Mechanical Model of Atom, Double Slit Experiment for Electrons, The Bohr Theory of the Hydrogen atoms, de Broglie wavelength, Periodic Table.
- Schrodinger equation (origin of quantization), Concept of Atomic Orbitals, representation of electrons move in three-dimensional space, wave function (Y), Radial and angular part of wave function, radial and angular nodes, Shape of orbitals, the principal (n), angular (l), and magnetic (m) quantum numbers, Pauli exclusion principle.
- Orbital Angular Momentum (l), Spin Angular Momentum (s), spin-orbit coupling, HUND’s Rule, The aufbau principle, Penetration, Shielding Effect, Effective Nuclear Charge, Slater’s rule.
- Periodic properties, Ionization Energies of Elements, Electron affinities of elements, Periodic Variation of Physical Properties such as metallic character of the elements, melting point of an atom, ionic and covalent nature of a molecule, reactivity of hydrides, oxides and halides of the elements.
- Lewis structures, Valence shell electron pair repulsion (VSEPR), Valence-Bond theory (VB), Orbital Overlap, Hybridization, Molecular Orbital Theory (MO) of homo-nuclear and hetero-nuclear diatomic molecules, bonding and anti-bonding orbitals.
- Biochemistry: Importance of metals in biological systems, Fe in biological systems, Hemoglobin, Iron Storage protein - Ferritin
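The Bohr theory of the hydrogen atom covered in this unit lends itself to a quick numerical illustration. The sketch below is illustrative only; the constant and function names are my own, not part of the course material:

```python
RYDBERG_EV = 13.605693  # ionization energy of hydrogen from n = 1, in eV

def bohr_energy_ev(n):
    """Energy of the n-th Bohr level of hydrogen: E_n = -13.6 eV / n^2."""
    return -RYDBERG_EV / n ** 2

def photon_energy_ev(n_upper, n_lower):
    """Energy of the photon emitted when the electron drops n_upper -> n_lower."""
    return bohr_energy_ev(n_upper) - bohr_energy_ev(n_lower)

# The first Lyman line (n = 2 -> 1) carries about 10.2 eV:
print(round(photon_energy_ev(2, 1), 2))  # 10.2
```

A useful sanity check is that the levels converge to zero as n grows, which is why the ionization energy from the ground state equals the full 13.6 eV.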
2. Introduction to various analytical techniques:
UV-Visible Spectroscopy, IR Spectroscopy, NMR spectroscopy, X-Ray crystallography
Spectroscopy: Regions of Electromagnetic Radiation, Infra-Red (IR) Spectroscopy or Vibrational Spectroscopy of Harmonic oscillators, degree of freedom, Stretching and Bending, Infrared Spectra of different functional groups such as OH, NH2, CO2H etc., UV-Vis Spectroscopy of organic molecules, Electronic Transitions, Beer-Lambert Law, Chromophores, principles of NMR spectroscopy, 1H and 13C-NMR, chemical shift, integration, multiplicity,
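To make the Beer-Lambert law mentioned above concrete, here is a minimal sketch; the function names and the example molar absorptivity are invented for illustration, not taken from the syllabus:

```python
def absorbance(molar_absorptivity, path_length_cm, concentration_molar):
    """Beer-Lambert law: A = epsilon * l * c (A is dimensionless)."""
    return molar_absorptivity * path_length_cm * concentration_molar

def concentration_from_absorbance(A, molar_absorptivity, path_length_cm):
    """Invert the law to recover concentration from a measured absorbance."""
    return A / (molar_absorptivity * path_length_cm)

# A dye with epsilon = 15,000 L/(mol*cm) at 2e-5 M in a 1 cm cuvette:
A = absorbance(15000, 1.0, 2e-5)
print(round(A, 3))  # 0.3
```

This inversion is how UV-Vis measurements are used in practice: measure A at a wavelength where epsilon is known, then solve for c.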
X-ray crystallography: X-ray diffraction, Bragg’s Law, Crystal systems and Bravais Lattices
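Bragg's law (n·lambda = 2d·sin theta) can likewise be sketched in a few lines. The Cu K-alpha wavelength used below is a standard value, but the function itself is only an illustration:

```python
import math

def bragg_angle_deg(wavelength, d_spacing, order=1):
    """Solve n*lambda = 2*d*sin(theta) for theta, in degrees.

    wavelength and d_spacing must share the same unit (e.g. angstroms).
    """
    s = order * wavelength / (2.0 * d_spacing)
    if not 0.0 < s <= 1.0:
        raise ValueError("no diffraction: n*lambda/(2d) must lie in (0, 1]")
    return math.degrees(math.asin(s))

# Cu K-alpha radiation (1.5406 A) diffracting off planes with d = 3.135 A:
theta = bragg_angle_deg(1.5406, 3.135)
```

Setting lambda = d gives sin(theta) = 1/2, i.e. theta = 30 degrees, which is a handy sanity check.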
3. The Principles of Chemical Equilibrium, kinetics and intermolecular forces:
- Heat & Work; State Functions
- Laws of thermodynamics
- Probability and Entropy
- Thermodynamic and Kinetic Stability
- Determination of rate, order and rate laws
- Free Energy, Chemical Potential, Electronegativity
- Phase Rule/Equilibrium
- Activation Energy; Arrhenius equation
- Catalysis: types; kinetics and mechanisms
- Inter-molecular forces
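The Arrhenius equation listed above, k = A*exp(-Ea/(R*T)), is easy to explore numerically. A minimal sketch follows; the activation energy used is an arbitrary example value, not one from the course:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(pre_factor, activation_j_per_mol, temperature_k):
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return pre_factor * math.exp(-activation_j_per_mol / (R * temperature_k))

def speedup(activation_j_per_mol, t1_k, t2_k):
    """Factor by which the rate grows when T rises from t1_k to t2_k (same A)."""
    return (rate_constant(1.0, activation_j_per_mol, t2_k)
            / rate_constant(1.0, activation_j_per_mol, t1_k))

# With Ea = 50 kJ/mol, warming from 25 C to 35 C roughly doubles the rate,
# the classic rule of thumb that a 10-degree rise doubles a reaction rate:
factor = speedup(50_000, 298.15, 308.15)
```

Note how the speedup depends only on Ea and the two temperatures; the pre-exponential factor cancels out.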
4. Introduction to organic chemistry, functional groups and physical properties of organic compounds, substitution and elimination reactions, name reactions, and stereochemistry
Texts & References:
- Chemical Principles - Richard E. Dickerson, Harry B. Gray, Jr. Gilbert P. Haight
- Valence - Charles A. Coulson [ELBS /Oxford Univ. Press]
- Valence Theory - J. N. Murrell, S. F. A. Kettle, J. M. Tedder [ELBS/Wiley]
- Physical Chemistry - P. W. Atkins [3rd Ed. ELBS]
- Physical Chemistry - Gilbert W. Castellan [Addison Wesley, 1983]
- Physical Chemistry: A Molecular Approach -Donald A. McQuarrie, J.D . Simon
- Inorganic Chemistry: Duward Shriver and Peter Atkins.
- Inorganic Chemistry: Principles of Structure and Reactivity by James E. Huheey,
- Ellen A. Keiter and Richard L. Keiter.
- Inorganic Chemistry: Catherine Housecroft, Alan G. Sharpe.
- Atkins' Physical Chemistry, Peter W. Atkins, Julio de Paula.
- Strategic Applications of Named Reactions in Organic Synthesis, Author: Kurti Laszlo et al.
- Classics in Stereoselective Synthesis, Author: Carreira Erick M & Kvaerno Lisbet
- Molecular Orbitals and Organic Chemical Reactions Student Edition, Author: Fleming Ian
- Logic of Chemical Synthesis, Author: Corey E. J. & Xue-Min Cheng
- Art of Writing Reasonable Organic Reaction Mechanisms /2nd Edn., Author: Grossman Robert B.
- Organic Synthesis: The Disconnection Approach/ 2nd Edn., Author: Warrer Stuart & Wyatt Paul
Other reading materials will be assigned as and when required. | <urn:uuid:0085a721-3949-471a-a8c6-44e4a3336225> | CC-MAIN-2023-06 | https://chemical.snu.edu.in/undergraduate/major/btech-in-chemical-engineering/degree-requirement?qt-core_elective_courses=0 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.752332 | 1,251 | 3.90625 | 4 |
A more complicated explanation follows, but simply put, the McGurk effect happens when our eyes tell our ears what they're hearing. It's a wonderful demonstration of how vision and hearing are paired in language. In the video that follows, the language that is 'seen' is the movement of the mouth. As we know, spoken words include both the sound component and the visual elements of how the lips and face move. In the end, vision must agree with what is said; if there is a disagreement, the eyes overrule the ears.
From: http://en.wikipedia.org/wiki/McGurk_effect May 24, 2013:
“The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. People who are used to watching dubbed movies may be among people who are not susceptible to the McGurk effect because they have, to some extent, learned to ignore the information they are getting from the mouths of the “speakers”. If a person is getting poor quality auditory information but good quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect. People who are better at sensory integration have been shown to be more susceptible to the effect. Many people are affected differently by the McGurk effect based on many factors, brain damages or disorders.” | <urn:uuid:112288d4-58f0-4ecc-ad93-497195edf64a> | CC-MAIN-2023-06 | https://drboulet.com/the-mcgurk-effect-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.936681 | 362 | 3.84375 | 4 |
Our deltas are sinking, though some are not. Land that people have lived on for generations is disappearing as they watch. They need help, but what can be done? NASA is entering the picture.
Erosion, sinking land and sea rise from climate change have killed the Louisiana woods where a 41-year-old Native American chief played as a child. Not far away in the Mississippi River delta system, middle-school students can stand on islands that emerged the year they were born. NASA is using high-tech airborne systems along with boats and mud-slogging work on islands for a $15 million, five-year study of these adjacent areas of Louisiana. One is hitched to a river and growing; the other is disconnected and dying. Scientists from NASA and a half-dozen universities from Boston to California aim to create computer models that can be used with satellite data to let countries around the world learn which parts of their dwindling deltas can be shored up and which are past hope. “If you have to choose between saving an area and losing another instead of losing everything, you want to know where to put your resources to work to save the livelihood of all the people who live there,” said lead scientist Marc Simard of NASA’s Jet Propulsion Laboratory. (Source: nola.com)
Oceans are rising, but deltas are sinking. River deltas are home to millions and provide fish and other foodstuffs to the nation. These are two different effects of climate change.
To figure out where to shore up dying deltas, NASA is studying water flowing in and out of Louisiana’s Atchafalaya and Terrebonne basins, sediment carried by it, and plants that can slow the flow, trap sediment and pull carbon from the air. Louisiana holds 40% of the nation’s wetlands, but they’re disappearing fast — about 2,000 square miles (5,180 square kilometers) of the state have been lost since the 1930s. That’s about 80% of the nation’s wetland losses, according to the U.S. Geological Survey. Using two kinds of radar and a spectrometer that measures more colors than the human eye can distinguish, high-altitude NASA airplanes have been collecting information such as water height, slope, sediment, and the types and density of plants. Some measurements are as precise as a couple of centimeters (less than an inch). On boats and islands, scientists and students from across the country take samples and measure everything from currents to diameters of trees. Their findings will be used to calibrate the airborne instruments. “I’ve been working here 15 years, and one of the toughest parts about working in a delta is you can only touch one little piece of it at any one time and understand one little piece of it at one time,” said Robert Twilley, a professor of oceanography and coastal sciences at Louisiana State University. “Now we have the capability of working with NASA to understand the entire delta.”
Louisiana has 40% of the nation’s wetlands but has suffered 80% of the nation’s wetland losses, with more than 2,000 square miles gone since the 1930s. In this study, technology is playing a big part.
The Mississippi River drains 41% of the country but no longer does the land building it used to do.
The Mississippi River drains 41% of the continental United States, collecting 150 million tons (130 million metric tons) of sediment per year. But, largely because of flood-prevention levees, most sediment shoots into the Gulf of Mexico rather than settling in wetlands. “Deltas are the babies of the geological timescale. They are very young and fragile, in a delicate balance of sinking and growing,” NASA states on the Delta-X project website. In geological time, young means thousands of years. On that scale, Louisiana’s Wax Lake Delta is taking its first breaths. It dates to 1942, when the Army Corps of Engineers dug an outlet from the lake to reduce flood threats to Morgan City, about 20 miles (32 kilometers) away. Sediment from the Atchafalaya River filled the lake, then began creating islands in the Gulf. The new islands are thick with black willows and, in spring, thigh-high butterweed topped with small yellow flowers. Older wetlands in areas surveyed by Delta-X aircraft are more diverse, their soil rich with humus from generations of plants. Along nearby Hog Bayou, blue buntings and scarlet tanagers dart through magnolia branches and skinks skitter up trees. In swamps, ospreys nest atop bald cypresses and alligators float in the water below.
Twilley has also served as executive director of Louisiana Sea Grant College Program.
In addition to working at LSU, Twilley has spent about nine years as executive director of Louisiana Sea Grant College Program, which uses the Wax Lake Delta as a classroom for middle- and high-school students. “We take kids and make them stand on land that was formed the year they were born.” Twilley said. In contrast, the adjacent Terrebonne Basin is shrinking so rapidly that the government is paying to move the Isle de Jean Charles band of Biloxi-Chitimacha-Choctaw Indians from a vanishing island to higher ground. That band isn’t the only Native American group losing ground. “The wooded areas we used to run through as children — they’re dead,” said Chief Shirell Parfait-Dardar of the Grand Caillou/Dulac Band of Biloxi-Chitimacha Indians, based less than 50 miles (80 kilometers) from the Wax Lake Delta. “Ghost forests” are common in degrading deltas where salt water intrudes as land sinks and erodes, LSU’s Twilley said.
Louisiana has two projects that would use Atchafalaya River sediment to build up the Terrebonne Basin. The state’s Coastal Protection and Restoration Authority (CPRA) says these projects are more than a year off.
Delta-X’s study gets downright granular. A California Institute of Technology team that studies how sediment moves and is deposited on Earth and other planets will analyze the amounts of sediment in high- and low-tide water samples, breaking the particles down into about 100 sizes. One way LSU researchers measure how much land has been formed by sediment involves sprinkling white feldspar dust on the ground. They return to see how deeply it’s buried by new sediment. They do that by injecting liquid nitrogen into hollow tubes to freeze the dirt and muck around them. When the tubes are pulled up, the frozen “popsicles” show a white ring. They measure from there to the top. In the Terrebonne Basin, such sedimentation can’t keep up with subsidence and sea level rise. “Thus the wetlands basically drown,” Twilley said.
The study sent out boats and planes in the March/April time frame and will do the same in September. Two satellites will be launched and will use radar.
To gauge how plants affect water movement, long-wavelengths of L-band radar can measure water level changes in open and vegetated channels, NASA’s Simard said. And high-frequency Ka-band radar can measure surface height of open water, showing how it slopes — and where it’s moving. “All of the tools they’re bringing to bear is really impressive,” said Indiana University sedimentary geologist Douglas Edmonds, who is not part of the project but has worked with many of the researchers. “The project itself is putting a finger on a really essential question for a lot of deltas around the world — how this deltaic land is formed and what processes take it away,” he said.
NASA is bringing its technology to the table, and by working with universities it will be able to bring this study to the people. Hopefully, good suggestions and ideas that can be implemented will come out of it. | <urn:uuid:f9a4bf39-c0d9-4dd8-b4a4-467cd27980c7> | CC-MAIN-2023-06 | https://gnoicc.org/2021/06/29/nasa-studying-our-deltas-in-trouble/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.953536 | 1,890 | 3.625 | 4 |
Listed below are the titles of the various presentations that ING offers along with a brief description. Presentations are available in 45-to-90-minute formats.
To schedule a presentation, complete the form at the bottom of this page. Please allow two weeks’ advance notice for scheduling, and at least 45 minutes for a presentation to allow time for questions and answers. If your request is less than two weeks away or if you have questions about our online scheduling system, please feel free to contact us at 408-296-7312 extension 160 or email [email protected].
Getting to Know Muslim Americans and Their Faith
This presentation begins with basic terminology and demographics of Muslims in the United States and the world, followed by a brief history of Muslim Americans and notable figures today in various sectors, including entertainment, academia, and government. It then provides an overview of traditions and practices of Islam including major Muslim holidays. It also addresses Islamophobia and its impact as well as common misconceptions about Muslims and their views of other religions, including Judaism and Christianity. Our approach in this conversation is non-essentialist, taking into consideration the vast diversity of Muslims in the United States and around the world.
A History of Muslims in America
Most Americans are unaware of the long history of Muslims in the United States. This presentation covers that history beginning with the substantial and documented presence of Muslims among enslaved Africans in the Americas. It then describes the rediscovery of Islam among African Americans in the 20th century as well as among Latinos and Whites. It also highlights Muslim influences on American culture including in music, cuisine, and architecture, and the successive waves of immigration that brought Muslims to our country beginning in the late nineteenth century. The presentation concludes by highlighting notable Muslim Americans today.
Muslim Contributions to Civilization
A majority of Americans are unaware of Muslim influences in our lives, from foods and drinks we all enjoy such as coffee and hummus, to algebra which we all struggle with. This presentation shows how Muslims have both been influenced by and contributed to other cultures in numerous ways. Particularly during the medieval Golden Age of Islam, Muslims made major contributions in diverse fields, including art, architecture, music, mathematics, medicine, astronomy, philosophy, and literature, as well as in hygiene, cuisine, clothing, and furniture.
Muslim Women Beyond the Stereotypes
The role of Muslim women and Islam’s view of women is one of the most widely misunderstood and misrepresented aspects of the religion and its practitioners. This presentation describes the diversity of the lived experiences and statuses of Muslim women today throughout the world, and highlights examples of notable Muslim women in various fields, including over a dozen female heads of state. It also describes normative Quranic and prophetic teachings about the roles and responsibilities of Muslim women and explains issues such as the headscarf and gender relations.
Islamophobia and Its Impact
This presentation begins by defining and examining both historical and contemporary sources that contribute to Islamophobia, including colonialism, orientalism, racism, and religious and ethnic nationalism. It then looks at how Islamophobia manifests in both Western and non-Western nations. It then examines Islamophobia in the United States and the many ways it is disseminated, including through the media, Hollywood, politicians, the internet, social media, and video games. It describes the impact of Islamophobia on Muslims in schools, the workplace and other institutions and concludes with strategies for countering Islamophobia through education and interfaith engagement. This presentation is available in one-to-four-hour formats and more suitable for high school and university level classrooms as well as other adult settings.
Ramadan and Fasting
This presentation provides an overview on the topic of Ramadan and fasting, including a description of the month and the lunar calendar, the purpose, and goals of fasting, how the fast functions, a look at a typical day in the life of a fasting person, and exemptions from fasting. The presentation also describes some of the challenges for fasting students and employees and how best to accommodate them and concludes with a description of the holiday when the end of the fast is celebrated, Eid ul-Fitr.
If you are looking for a topic that is not included above, you may request a custom presentation. After you make this selection, we will contact you to discuss the details of your custom presentation.
Diversity Training for Professional Groups
ING provides diversity training that can be included in your long-term DEIB (Diversity, Equity, Inclusion and Belonging) programs in your organization. Our trainings are specifically tailored for the following professional groups:
Educators in K-12, educators in colleges and universities, corporate staff, law enforcement personnel, healthcare providers, dentists, government employees, court administrators, and non-profit organizations. Click here for more information. | <urn:uuid:e5be3602-de8f-44e7-9e98-80dd1d8f00db> | CC-MAIN-2023-06 | https://ing.org/schedule-speakers/educational-presentations-and-panels-for-schools-and-community-groups/supplementing-education-about-muslims-and-islam/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.941567 | 976 | 3.59375 | 4 |
How do you put quotation marks in HTML?
The <q> tag is used to add short quotations in HTML. Just keep in mind that if the quotation goes on for multiple lines, use the <blockquote> tag. Browsers usually insert quotation marks around the q element. You can also use the cite attribute to indicate the source of the quotation in URL form.
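A minimal sketch of both elements (the cite URLs are placeholders, not from the original answer):

```html
<!-- Short inline quotation: browsers typically render it with quotation marks -->
<p>She said <q cite="https://example.com/talk">less is more</q> and moved on.</p>

<!-- Longer quotation: rendered as an indented block -->
<blockquote cite="https://example.com/article">
  <p>This longer passage, quoted from another source, can run to
  several lines and even several paragraphs.</p>
</blockquote>
```

Note that the cite attribute records the source URL for machines and tools; browsers do not display it.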
How do you code quotation marks?
To place quotation marks in a string in your code
- In Visual Basic, insert two quotation marks in a row as an embedded quotation mark.
- Insert the ASCII or Unicode character for a quotation mark.
- You can also define a constant for the character, and use it where needed.
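The same embedding problem exists in HTML attributes, where a literal double quote inside a double-quoted value is written with the &quot; reference (a sketch, not part of the original Visual Basic answer):

```html
<!-- Embedded double quotes via the &quot; character reference -->
<p title="He said &quot;hello&quot; to everyone">Hover over this text.</p>

<!-- Or switch the outer quotes to single quotes -->
<p title='He said "hello" to everyone'>Same idea with a single-quoted attribute.</p>
```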
What do quotation marks mean in HTML?
Double quotes are used for strings (i.e., “this is a string”) and single quotes are used for a character (i.e., ‘a’, ‘b’ or ‘c’).
What is the HTML code for question mark?
|HTML Entity (hex)||&#x3f;|
|HTML Entity (named)||&quest;|
|URL Escape Code||%3F|
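For reference, the question mark can be written literally in HTML text, or with its hex (&#x3f;) or named (&quest;) character reference; in a URL it is escaped as %3F when it is data rather than the query separator. A small sketch (example.com is a placeholder):

```html
<p>Literal: ?</p>
<p>Hex reference: &#x3f;</p>
<p>Named reference: &quest;</p>
<!-- %3F keeps the ? inside the query value instead of starting a new query -->
<a href="https://example.com/search?q=what%3F">Search for “what?”</a>
```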
How do you use double quotes in HTML?
Right Double Quotation Mark
- UNICODE: U+201D
- HEX CODE: &#x201D;
- HTML CODE: &#8221;
- HTML ENTITY: &rdquo;
- CSS CODE: \201D (used as content: "\201D";)
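A small sketch using those codes; &ldquo; / &#8220; are the matching left-hand marks, and the class name is made up for illustration:

```html
<!-- Right double quotation mark via named and numeric references -->
<p>Named: &ldquo;quoted&rdquo;</p>
<p>Numeric: &#8220;quoted&#8221;</p>

<style>
  /* The CSS escape \201D from the list, used in generated content */
  .fancy::before { content: "\201C"; }
  .fancy::after  { content: "\201D"; }
</style>
<p class="fancy">CSS-generated curly quotes</p>
```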
What do quotes mean in coding?
In computer programming, quotes contain text or other data. For example, in the below print statement, what you’re printing to the screen is often surrounded by quotes. If surrounded by a single quote instead of a double quote, the string is treated as a literal string in many languages.
What is full stop in HTML?
The named HTML entity for it is &period;. The full stop (Commonwealth English), period (North American English) or full point (.) is a punctuation mark. In Anglophone countries, it is used for the decimal point and other purposes, and may be called a point. In computing, it is called a dot.
What do quotation marks do in HTML?
The quotation elements in HTML are used to insert quoted text in a web page, that is, portions of text different from the normal text in the web page. Below are some of the most used quotation elements of HTML: the <q> element: The <q> element is used to set a short piece of text inside quotation marks.
What does blockquote mean in HTML?
The HTML <blockquote> element indicates that the enclosed text is an extended quotation. A URL for the source of the quotation may be given using the cite attribute, while a text representation of the source can be given using the <cite> element.
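Both mechanisms together might look like this (the URL and title are placeholders):

```html
<blockquote cite="https://example.com/source-page">
  <p>An extended quotation taken from another document.</p>
</blockquote>
<p>From <cite>A Hypothetical Book Title</cite>.</p>
```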
Why is blockquote used in HTML?
The blockquote element is used to indicate the quotation of a large section of text from another source. Using the default HTML styling of most web browsers, it will indent the right and left margins both on the display and in printed form, but this may be overridden by Cascading Style Sheets (CSS).
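One possible CSS override of that default indentation (a sketch; any rule targeting blockquote works):

```html
<style>
  /* Replace the default left/right margins with a simple border accent */
  blockquote {
    margin: 0;
    padding-left: 1em;
    border-left: 3px solid #999;
  }
</style>
<blockquote>
  <p>This quotation is no longer indented on both sides.</p>
</blockquote>
```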
How do you use HTML?
- Step 1: Open Notepad (PC) Windows 8 or later:
- Step 1: Open TextEdit (Mac) Open Finder > Applications > TextEdit.
- Step 2: Write Some HTML. Write or copy the following HTML code into Notepad:
- Step 3: Save the HTML Page. Save the file on your computer.
- Step 4: View the HTML Page in Your Browser.
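The code block that originally accompanied step 2 did not survive extraction; any minimal page will do, for example:

```html
<!DOCTYPE html>
<html>
<head>
  <title>My First Page</title>
</head>
<body>
  <h1>Hello</h1>
  <p>Saved from Notepad or TextEdit and opened in a browser.</p>
</body>
</html>
```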
What is & in HTML?
& is HTML for “Start of a character reference”. &amp; is the character reference for “An ampersand”. &current is not a standard character reference and so is an error (browsers may try to perform error recovery but you should not depend on this).
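A literal ampersand in page text should be written as &amp;; an unrecognized sequence like &current gets error-recovered by browsers into &curren; (the currency sign ¤) plus a leftover t, which is how such text ends up mangled. A sketch:

```html
<!-- Write &amp; when you mean a literal ampersand -->
<p>Fish &amp; chips</p>
<!-- &curren; is a valid named reference: the currency sign -->
<p>Currency sign: &curren;</p>
```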
What is block quotes in HTML?
HTML <blockquote> Tag. The <blockquote> tag in HTML is used to display long quotations (a section that is quoted from another source). It changes the alignment to make it distinct from other text. It contains both opening and closing tags.
What is BDO tag in HTML?
BDO stands for Bi-Directional Override. The <bdo> tag is used to override the current text direction.
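A minimal use of the element:

```html
<!-- dir="rtl" forces right-to-left rendering of the enclosed text -->
<p><bdo dir="rtl">This text is rendered right to left.</bdo></p>
```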
What is the example of HTML?
HTML (Hypertext Markup Language) is the code that is used to structure a web page and its content. For example, content could be structured within a set of paragraphs, a list of bulleted points, or using images and data tables.
Is HTML a coding?
Technically, HTML is a programming language. In fact, HTML stands for Hypertext Markup Language. While HTML and CSS are declarative, most coding is computational – and it’s what most other coding languages are designed for.
What are blocked quotes?
The block quote is used for direct quotations that are longer than four lines of prose, or longer than three lines of poetry. A block quote is always used when quoting dialogue between characters, as in a play. The block format is a freestanding quote that does not include quotation marks.
Why is BDO used in HTML?
<bdo>: The Bidirectional Text Override element. The HTML <bdo> element overrides the current directionality of text, so that the text within is rendered in a different direction.
When to use quotation marks when quoting?
Quotation marks are used when you are quoting something or someone in exact words. When you pick up an exact bunch of words, say, from someone’s speech or a magazine article, you put those words in quotation marks.
When to use quotation marks?
Quotation marks (also known as speech marks, quotes or inverted commas) are used to set off direct speech and quotations. In academic writing, you need to use quotation marks when you quote a source. This includes quotes from published works and primary data such as interviews.
How do you punctuate quotation marks?
Use a single quotation mark to begin it and punctuate it as you would a regular quote, with the period coming before you close the quotation with a second single quotation mark. Normally, a quote within a quote like this will be brief, since people usually use only short direct quotes from another person.
What are some examples of quotation mark?
Examples of Quotation Marks: Quotation Marks Example: The letter said, “We are pleased to inform you that you’ve been accepted into the English department’s graduate school program.”. Quotation Marks Example: Alec wrote, “I’ll be arriving on February 20th at O’Hare Airport. | <urn:uuid:20bb26f5-c64f-43e1-a76e-8bf5ae2182a8> | CC-MAIN-2023-06 | https://missionalcall.com/2020/08/10/how-do-you-put-quotation-marks-in-html/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.869267 | 1,450 | 3.953125 | 4 |
Many of our leading researchers have noted that alongside the need for much faster emissions reductions, we need to start pulling CO2 out of the atmosphere. That greenhouse gas removal is essential to achieve net zero carbon emissions, stabilise the climate, and perhaps even to help us orient towards future green jobs and industries. But first, it’s important to understand the natural cycle of carbon – how it flows across the globe – how nature already recycles carbon atoms. And then, what that might teach us about how humans could intervene to actually remove carbon from the atmosphere.
Here to guide us through that is Ros Rickaby, Chair of Geology in Oxford’s Department of Earth Sciences.
Read more about Oxford’s latest climate and biodiversity research at http://bit.ly/trueplanet | <urn:uuid:340f5882-301f-4178-b07d-0cc088f6f5c7> | CC-MAIN-2023-06 | https://modconpak.com/Blog/5857 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.918561 | 161 | 4.1875 | 4 |
University of California, Berkeley, chemists have discovered a way to simplify the removal of toxic metals, like mercury and boron, during desalination to produce clean water, while at the same time potentially capturing valuable metals, such as gold.
Desalination — the removal of salt — is only one step in the process of producing drinkable water, or water for agriculture or industry, from ocean or waste water. Either before or after the removal of salt, the water often has to be treated to remove boron, which is toxic to plants, and heavy metals like arsenic and mercury, which are toxic to humans. Often, the process leaves behind a toxic brine that can be difficult to dispose of.
The new technique, which can easily be added to current membrane-based electrodialysis desalination processes, removes nearly 100% of these toxic metals, producing a pure brine along with pure water and isolating the valuable metals for later use or disposal.
“Desalination or water treatment plants typically require a long series of high-cost, pre- and post-treatment systems that all the water has to go through, one by one,” said Adam Uliana, a UC Berkeley graduate student who is first author of a paper describing the technology. “But here, we have the ability to do several of these steps all in one, which is a more efficient process. Basically, you could implement it in existing setups.”
The UC Berkeley chemists synthesized flexible polymer membranes, like those currently used in membrane separation processes, but embedded nanoparticles that can be tuned to absorb specific metal ions — gold or uranium ions, for example. The membrane can incorporate a single type of tuned nanoparticle, if the metal is to be recovered, or several different types, each tuned to absorb a different metal or ionic compound, if multiple contaminants need to be removed in one step.
The polymer membrane laced with nanoparticles is very stable in water and at high heat, which is not true of many other types of absorbers, including most metal-organic frameworks (MOFs), when embedded in membranes.
The researchers hope to be able to tune the nanoparticles to remove other types of toxic chemicals, including a common groundwater contaminant: PFAS, or polyfluoroalkyl substances, which are found in plastics. The new process, which they call ion-capture electrodialysis, also could potentially remove radioactive isotopes from nuclear power plant effluent.
In their study, to be published this week in the journal Science, Uliana and senior author Jeffrey Long, UC Berkeley professor of chemistry, demonstrate that the polymer membranes are highly effective when incorporated into membrane-based electrodialysis systems — where an electric voltage drives ions through the membrane to remove salt and metals — and diffusion dialysis, which is used primarily in chemical processing.
“Electrodialysis is a known method for doing desalination, and here we are doing it in a way that incorporates these new particles in the membrane material and captures targeted toxic ions or neutral solutes, like boron,” Long said. “So, while you are driving ions through this membrane, you are also decontaminating the water for, say, mercury. But these membranes can also be highly selective for removing other metals, like copper and iron, at high capacity.”
Global water shortages require reusing wastewater
Water shortages are becoming commonplace around the world, including in California and the American West, exacerbated by climate change and population growth. Coastal communities are increasingly installing plants to desalinate ocean water, but inland communities, too, are looking for ways to turn contaminated sources — groundwater, agricultural runoff and industrial waste — into clean, safe water for crops, homes and factories.
While reverse osmosis and electrodialysis work well for removing salt from high-salinity water sources, such as seawater, the concentrated brine left behind can have high levels of metals, including cadmium, chromium, mercury, lead, copper, zinc, gold and uranium.
But the ocean is becoming increasingly polluted by industry and agricultural runoff, and inland sources even more so.
“This would be especially useful for those areas that have low levels of contaminants that are still toxic at these low levels, as well as different wastewater sites that have lots of types of toxic ions in their streams,” Long said.
Most desalination processes remove salt — which exists largely as sodium and chlorine ions in water — using a reverse osmosis membrane, which allows water through, but not ions, or an ion exchange polymer, which allows ions through, but not water. The new technology merely adds porous nanoparticles, each about 200 nanometers in diameter, that capture specific ions while allowing the sodium, chlorine and other non-targeted charged molecules to pass through.
Long designs and studies porous materials that can be decorated with unique molecules that capture targeted compounds from liquid or gas streams: carbon dioxide from power plant emissions, for example. The nanoparticles used in these polymer membranes are called porous aromatic frameworks, or PAFs, which are three-dimensional networks of carbon atoms linked by compounds made up of multiple ring-shaped molecules — chemical groups referred to as aromatic compounds. The internal structure is related to that of a diamond, but with the link between carbon atoms lengthened by the aromatic linker to create lots of internal space. Various molecules can be attached to the aromatic linkers to capture specific chemicals.
To capture mercury, for example, sulfur compounds called thiols, which are known to tightly bind mercury, are attached. Added methylated sulfur groups enable capture of copper, and groups containing oxygen and sulfur capture iron. The altered nanoparticles make up about 20% of the weight of the membrane, but, because they are very porous, account for about 45% of the volume.
Calculations suggest that a kilogram of the polymer membrane could strip essentially all of the mercury from 35,000 liters of water containing 5 parts per million (ppm) of the metal, before requiring regeneration of the membrane.
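As a rough back-of-the-envelope check on that figure (assuming “5 ppm” means 5 mg of mercury per liter of water; the study’s exact capacity numbers may differ):

```latex
5\,\mathrm{ppm} \approx 5\,\mathrm{mg/L}, \qquad
5\,\mathrm{mg/L} \times 35{,}000\,\mathrm{L} = 175{,}000\,\mathrm{mg} = 175\,\mathrm{g}
```

That is, each kilogram of membrane would hold roughly 175 g of captured mercury by the time regeneration is needed.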
Uliana showed in his experiments that boric acid, a compound of boron that is toxic to crops, can be removed by these membranes, though with diffusion dialysis that relies on a concentration gradient to drive the chemical — which is not ionic, like metals — through the membrane to be captured by the PAF nanoparticles.
“We tried different types of high-salinity water — for example, groundwater, industrial wastewater and also brackish water — and the method works for each of them,” he said. “It seems to be versatile for different water sources; that was one of the design principles we wanted to put into this.”
Uliana also demonstrated that the membranes can be reused many times — at least 10, but likely more — without losing their ability to absorb ionic metals. And membranes containing PAFs tuned to absorb metals easily release their absorbed metals for capture and reuse.
“It is a technology where, depending on what your toxic impurities are, you could customize the membrane to deal with that type of water,” Long added. “You may have problems with lead, say, in Michigan, or iron and arsenic in Bangladesh. So, you target the membranes for specific contaminated water sources. These materials really knock it down to often immeasurable levels.”
Long and Uliana’s collaborators were Jeffrey Urban and postdoctoral fellow Ngoc Bui of Lawrence Berkeley National Laboratory and Jovan Kamcev and Mercedes Taylor of UC Berkeley. The work was supported by the U.S. Department of Energy and the National Science Foundation. | <urn:uuid:1af35e9f-0450-415b-ad19-6867d6490dab> | CC-MAIN-2023-06 | https://news.berkeley.edu/2021/04/15/improved-desalination-process-also-removes-toxic-metals-to-produce-clean-water/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.941148 | 1,587 | 3.921875 | 4 |
Tutankhamun or Tutankhamen (in Egyptian: twt-ˁnḫ-ı͗mn, meaning the living image of Amun or in honor of Amun) was an Egyptian pharaoh. He reigned between 1332 BC and 1323 BC.
His birth name was Tutankhaton. He was the son of Amenhotep IV, who for the first time in Egypt established the monotheistic worship of Aten. When his father died, he married Ankhesenamen, his half-sister by another mother, and ascended the throne. In the early years of his reign there was a return to Egypt's ancient polytheistic religion, and he took the name Tutankhamun instead of Tutankhaton. Thus the Aten religion founded by Amenhotep IV died out. The age of Tutankhamun passed peacefully. After this king, who died at a very young age, Ay, who had been vizier to his father and regent to Tutankhamun himself, came to the throne by marrying the widowed queen.
His tomb, located in the Valley of the Kings, was discovered in 1922 by Howard Carter. Except for Tutankhamun's mummy, the objects exhumed from it are exhibited in the Cairo museum. Treasures from the tomb were exhibited in London in 1972 and later in the USA.
The legend of Tutankhamun
The tomb of King Tutankhamun is quite lavish compared to the tombs of other kings. The reason for Tutankhamun's unusual death at a young age is not known even today, and he seems to have been buried in a hurry. According to some researchers, the tomb was originally being prepared for a noble but was used for Tutankhamun when he died unexpectedly. However, since the mummy's skull is damaged behind the left ear, some Egyptian scientists hold the thesis that Tutankhamun's general, Horemheb, might have struck the back of the skull with a hard object in order to take over the administration.
The tomb of Tutankhamun consists of two rooms and a staircase leading down to the first room. In the first room, a horse-drawn chariot, the throne of Tutankhamun, and priceless objects that Tutankhamun used while alive were found. When this room was found, Howard Carter and his colleagues, who reasoned that it had to be a tomb because it was located in the Valley of the Kings, tapped the walls of the room and searched for hollow spaces behind them. Finally, a gap was found and the wall was broken through. Behind the wall was a new room holding a huge sealed wooden box. Howard Carter said the seal was the best thing he had ever seen, or would ever see, in his life. Inside, in a sarcophagus, the solid gold coffin glowed even by candlelight. Even though this discovery gave Howard Carter a good career, he died in poverty and obscurity, with only a couple of people attending his funeral.
Talk of a curse began when Carter's beloved canary was, for no known reason, killed by a cobra, the snake considered a symbol of Egypt. After a while, the death of Lord Carnarvon, who had paid for the excavation works, from blood poisoning caused a great stir in Cairo, and there was an influx of tourists. In addition, the deaths of some people who had entered the tomb, from febrile illnesses, fed a superstition called the curse of the pharaoh.
One hieroglyphic inscription reportedly found on the pharaoh's sarcophagus draws particular attention: whoever touches the pharaoh's grave will be surrounded by the wings of death.
- Father: Amenhotep IV (Akhenaten)
- Mother: Princess Kia
- Siblings: Smenkhkare
- Spouse: Ankhesenpaaten
- Sons: none
- Daughters: none
- Birth name: Tutankhaton
- Self-chosen name: Tutankhamun
- Throne name: Neb-cheperu-Rê (Neb-xprw-Ra)
A new model for dark matter
Dark matter remains one of the greatest mysteries of modern physics. It is clear that it must exist, because without dark matter, for example, the motion of galaxies cannot be explained. But it has never been possible to detect dark matter in an experiment.
Currently, there are many proposals for new experiments: They aim to detect dark matter directly via its scattering from the constituents of the atomic nuclei of a detection medium, i.e., protons and neutrons.
A team of researchers—Robert McGehee and Aaron Pierce of the University of Michigan and Gilly Elor of Johannes Gutenberg University of Mainz in Germany—has now proposed a new candidate for dark matter: HYPER, or "HighlY Interactive ParticlE Relics."
In the HYPER model, some time after the formation of dark matter in the early universe, the strength of its interaction with normal matter increases abruptly—which on the one hand, makes it potentially detectable today and at the same time can explain the abundance of dark matter.
The new diversity in the dark matter sector
Since the search for heavy dark matter particles, so-called WIMPs (weakly interacting massive particles), has not yet led to success, the research community is looking for alternative dark matter particles, especially lighter ones. At the same time, one generically expects phase transitions in the dark sector; after all, there are several in the visible sector, the researchers say. But previous studies have tended to neglect them.
"There has not been a consistent dark matter model for the mass range that some planned experiments hope to access. However, our HYPER model illustrates that a phase transition can actually help make the dark matter more easily detectable," said Elor, a postdoctoral researcher in theoretical physics at JGU.
The challenge for a suitable model: If dark matter interacts too strongly with normal matter, its (precisely known) amount formed in the early universe would be too small, contradicting astrophysical observations. However, if it is produced in just the right amount, the interaction would conversely be too weak to detect dark matter in present-day experiments.
"Our central idea, which underlies the HYPER model, is that the interaction changes abruptly once—so we can have the best of both worlds: the right amount of dark matter and a large interaction so we might detect it," McGehee said.
And this is how the researchers envision it: In particle physics, an interaction is usually mediated by a specific particle, a so-called mediator—and so is the interaction of dark matter with normal matter. Both the formation of dark matter and its detection function via this mediator, with the strength of the interaction depending on its mass: The larger the mass, the weaker the interaction.
The mediator must first be heavy enough so that the correct amount of dark matter is formed and later light enough so that dark matter is detectable at all. The solution: There was a phase transition after the formation of dark matter, during which the mass of the mediator suddenly decreased.
"Thus, on the one hand, the amount of dark matter is kept constant, and on the other hand, the interaction is boosted or strengthened in such a way that dark matter should be directly detectable," Pierce said.
New model covers almost the full parameter range of planned experiments
"The HYPER model of dark matter is able to cover almost the entire range that the new experiments make accessible," Elor said.
Specifically, the research team first considered the maximum cross section of the mediator-mediated interaction with the protons and neutrons of an atomic nucleus to be consistent with astrophysical observations and certain particle-physics decays. The next step was to consider whether there was a model for dark matter that exhibited this interaction.
"And here we came up with the idea of the phase transition," McGehee said. "We then calculated the amount of dark matter that exists in the universe and then simulated the phase transition using our calculations."
There are a great many constraints to consider, such as a constant amount of dark matter.
"Here, we have to systematically consider and include very many scenarios, for example, asking the question whether it is really certain that our mediator does not suddenly lead to the formation of new dark matter, which of course must not be," Elor said. "But in the end, we were convinced that our HYPER model works."
The research is published in the journal Physical Review Letters.
More information: Gilly Elor et al, Maximizing Direct Detection with Highly Interactive Particle Relic Dark Matter, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.031803
Journal information: Physical Review Letters
Provided by University of Michigan | <urn:uuid:246df91a-48ff-4475-aef6-b1930dc936a1> | CC-MAIN-2023-06 | https://phys.org/news/2023-01-dark.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.93259 | 982 | 3.5 | 4 |
The discovery of 92 nesting sites with a total of 256 fossilized dinosaur eggs is an incredible feat in and of itself. But, the nests and eggs are helping researchers better understand one of the largest dinosaurs that once roamed across India.
According to a recent study from the University of Delhi, India, published in PLOS One, a team of paleontologists uncovered the nesting sites in the Lameta Formation — an area of the Narmada Valley in central India and a hotbed for dinosaur fossils, especially from the Late Cretaceous Period. The eggs and nests belonged to one of the largest dinosaurs ever to live — the titanosaurs. This sauropod (long-neck herbivore) had a stockier frame and a wider stance than other typical sauropods.
Read More: Did Humans and Dinosaurs Ever Live Together?
An Egg-celent Discovery in Fossils
Thanks to the recent findings, paleontologists can peer into the past and learn more about the nesting habits of the titanosaurs.
"Together with dinosaur nests from Jabalpur in the upper Narmada valley in the east and those from Balasinor in the west, the new nesting sites from Dhar District in Madhya Pradesh (Central India), covering an east-west stretch of about 1000 km (about 600 miles), constitute one of the largest dinosaur hatcheries in the world," says co-author and research team leader Guntupalli V.R. Prasad, in a press release.
After analyzing the nests, the study authors identified six different species of titanosaur eggs — indicating that there may have been a wider diversity of titanosaurs in the area than previously thought — based on fossil records.
Modern-Day Relatives to Dinosaurs
According to the study, the nest layout indicates that the titanosaur may have laid its eggs in shallow pits and then buried them, as modern-day crocodiles do. However, there was also evidence of the "egg-in-egg" phenomenon caused by counter-peristalsis contraction, a condition seen in chickens in which a formed egg retracts into the hen's oviduct and a second egg then forms around it.
The nests also indicate that the titanosaur may have had a similar physiology to modern birds, where they sequentially laid their eggs. The close nesting proximity of these dinosaurs is similar to modern-day birds like great egrets, cormorants and brown pelicans.
The researchers also noted that due to the close proximity of the nests, the adult titanosaurs may have left the hatchlings to fend for themselves.
With these findings, researchers have gained valuable insights into these massive dinosaurs.
"Our research has revealed the presence of an extensive hatchery of titanosaur sauropod dinosaurs in the study area and offers new insights into the conditions of nest preservation and reproductive strategies of titanosaur sauropod dinosaurs just before they went extinct," says Harsha Dhiman, lead author of the study in a press release. | <urn:uuid:6fb20ee5-7757-4da1-9008-1f4d976fc1cf> | CC-MAIN-2023-06 | https://preview.discovermagazine.com/planet-earth/dinosaur-hatchery-with-92-nests-and-over-250-eggs-uncovered-in-india | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.940328 | 619 | 3.9375 | 4 |
Among the many aspects of jazz influenced by Latin music is its technique. This article covers the elements that make this type of music so special and the people who shaped it. You'll also learn who Latin jazz's most famous players were, and how to identify what makes this style so unique.
How did Latin music influence jazz?
In the 1930s, Latin musicians from Cuba began decamping to the United States, where they interacted with stars of the big band and bebop scenes. Many of these musicians brought with them Afro-Cuban percussion instruments, such as the clave. Some of these musicians were particularly influential in defining the sound of Latin jazz.
Cuban musicians such as pianist Chucho Valdes and alto saxophonist Yosvany Terry forged new styles of Latin jazz. Other notable Cuban musicians included pianist Omar Sosa and drummer Dafnis Prieto. Eventually, percussionists became prominent soloists in jazz.
A complex process shaped the history of Latin American music. A mixture of influences from Africans and Europeans created a unique musical style that is still recognizable today. The result was a unique sound that brought attention to Latin American music. The music of the Caribbean and the Americas is constantly evolving.
Afro-Cuban jazz is the first form of Latin jazz. It incorporates Afro-Cuban clave-based rhythms with jazz harmonies and improvisation techniques. The genre was born in 1947 in New York City, when Mario Bauza and Frank “Machito” Grillo formed the Machito and his Afro-Cubans. The combination of these music styles resulted in complex new soundscapes, and the percussionists regained a central role in jazz.
What technique elements are used in Latin jazz?
Latin jazz evolved from the fusion of Cuban and American musical styles. Its early days in New Orleans were characterized by a syncopated rhythm, which some musicians referred to as the "Spanish tinge" of jazz. Early 20th-century musicians adapted Cuban instruments and rhythms, including the habanera, a syncopated four-beat pattern. For instance, the song "St. Louis Blues" featured an early version of the habanera rhythm.
The Latin jazz genre is also known as afro-Cuban jazz. This jazz style combines the rhythms of Cuba with the percussion instruments of Central and South America. It also contains African and European elements. It has become a popular form of jazz in the United States, with a wide variety of styles to choose from.
While the Latin jazz genre is a wide-ranging style with many variations, its roots can be traced back to the countries of Brazil and the Caribbean. This region is home to many music and dance styles, including samba and Bossa Nova.
What is special about Latin jazz?
There are many different types of Latin jazz. Some are characterized by a Latin tinge while others are less distinctly distinct. But one common characteristic of both is improvisation, and the combination of jazz and Latin rhythms is what makes Latin jazz special. Here are 10 artists who put their own spin on this cocktail of jazz improvisation and Latin rhythm.
Latin jazz is sometimes referred to as Afro-Cuban jazz, as it draws heavily from the popular music of Cuba. This type of jazz also draws influences from Brazilian samba and bossa nova. This style of jazz is often characterized by the clave, a five-stroke rhythmic pattern that forms the heartbeat of Afro-Caribbean music.
Its origins can be traced to a jazz band led by Mario Bauza, a Cuban trumpeter, composer and bandleader who moved to the United States and worked with artists such as Dizzy Gillespie and Ella Fitzgerald. Many consider his 1943 composition "Tanga" the first Latin jazz tune. "Tanga" employs the 2:3 clave pattern, which Bauza introduced into the music.
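The five-stroke clave is easiest to grasp written out. The sketch below lays the son clave on a 16-step grid spanning two measures (8 steps per measure in cut time); the stroke positions follow common textbook notation for son clave and are not taken from this article:

```python
# Sketch: the son clave as a 16-step grid over two measures
# (8 steps per measure in cut time). Stroke positions follow
# standard notation for son clave; "x" = stroke, "." = rest.

def clave_grid(strokes, steps=16):
    """Render a set of stroke positions as a step-grid string."""
    return "".join("x" if i in strokes else "." for i in range(steps))

SON_3_2 = {0, 3, 6, 10, 12}                # three strokes, then two
SON_2_3 = {(s + 8) % 16 for s in SON_3_2}  # swap the two measures

print("3-2 son clave:", clave_grid(SON_3_2))  # x..x..x...x.x...
print("2-3 son clave:", clave_grid(SON_2_3))  # ..x.x...x..x..x.
```

Reading the grids side by side makes the "2:3" naming concrete: the same five strokes, with the two-stroke measure played first.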
Who influenced Latin jazz?
Latin jazz is a musical genre that traces its roots to Cuba, which has a vibrant cultural history. The Afro-Latin rhythms, such as the clave, played a central role in shaping this music. In the early twentieth century, Cuban musicians influenced the development of jazz in the United States. These musicians brought with them percussion instruments from their native country and shaped the sound of the genre. The result was the Latin jazz style known as son cubano.
The style is also influenced by Brazilian music. The country is home to the largest music industries in the world, with a total revenue of almost $300 million USD in 2018. Many Latin jazz styles originated in Brazil, including the Bossa Nova and Samba. These styles of music became popular in the United States and across Latin America.
Chano Pozo Gonzalez and Mario Bauza were two of the first major figures in the development of Afro-Cuban music. These musicians formed an orchestra that combined jazz arrangements with Afro-Cuban percussion rhythm. This led to the creation of the Tanga, a song that is now widely regarded as the first true Latin jazz piece.
What scale is used in Latin jazz?
When performing Latin jazz, musicians often play the A harmonic minor scale, a natural minor scale with a raised seventh degree and a dark feel. The clave, by contrast, is not a scale but a rhythmic pattern, often written out over two measures of cut time.
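The raised (sharp) seventh mentioned above can be made concrete. The sketch below builds the natural minor scale from its whole/half-step pattern and then raises the seventh degree; the interval pattern is standard music theory rather than anything stated in the article:

```python
# Sketch: derive A harmonic minor by raising the seventh degree
# of the natural minor scale (standard music theory, shown here
# to illustrate the "raised seventh" described above).

NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def natural_minor(root):
    """Natural minor: whole/half-step pattern W-H-W-W-H-W-W."""
    steps = [2, 1, 2, 2, 1, 2, 2]  # semitones between scale degrees
    idx = NOTES.index(root)
    scale = [root]
    for s in steps:
        idx = (idx + s) % 12
        scale.append(NOTES[idx])
    return scale

def harmonic_minor(root):
    """Harmonic minor: natural minor with the 7th degree raised a semitone."""
    scale = natural_minor(root)
    seventh = NOTES.index(scale[6])
    scale[6] = NOTES[(seventh + 1) % 12]
    return scale

print(harmonic_minor("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G#', 'A']
```

The G raised to G# is what gives the scale its characteristic dark, exotic color.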
Latin jazz is composed of a variety of rhythms from Latin American and African countries. It typically uses jazz harmonies, but can also include elements from other Latin American traditions. In particular, the rhythms in this style come from Cuba and the Caribbean. A few European and African elements are also incorporated.
How do you make Latin jazz?
There is no single answer to the question, “How do you make Latin jazz?” There are many styles and genres of jazz. One of the most popular is Latin jazz, which is the product of a mix of various styles from Central and South America. Although there are many distinct styles and genres within Latin jazz, some common elements can be found in many styles.
One example of this is the tresillo-habanera rhythm, which Jelly Roll Morton called a “Spanish tinge”. It is also commonly found in the 1914 “St. Louis Blues” written by W.C. Handy, and in the 1936 song “Caravan” by Juan Tizol.
Another form of Latin jazz, Afro-Brazilian jazz, emerged in the 1960s. It introduced new rhythms, such as samba and bossa nova, and a more mellow sound; Brazilian guitarist Joao Gilberto was a key figure. Salsa was born in New York in the 1970s and is one of the most popular styles of Latin music in the world, with bandleader Tito Puente and percussionist Ray Barretto among its best-known figures.
What defines the music of Latin America?
Latin American music is characterized by a variety of themes, many of which can be traced back to the continent’s African heritage. These themes range from the sentimental to the politically charged. Some Latin American songs are based on themes of migration, while others are based on pastoral landscapes and descriptions of local mythology. Regardless of the underlying themes, many Latin American songs feature an emphasis on gender and spirituality.
In ancient times, the music of the region was created by the meeting of different cultures and languages. These cultures developed musical instruments and the music is still very much influenced by them today. Andean music, for example, is characterized by the use of flutes and other instruments to convey messages.
Latin American music has a wide range of genres. Some of the most popular styles are merengue and tango, which originated in the Dominican Republic and spread to other regions of the Americas. Tango and samba, which have African origins, were also heavily influenced by European immigrants. Many other styles of Latin music were influenced by African and Native American music.
Where Did jazz Come From?
The first recordings of jazz were made in New Orleans in the early twentieth century. This city’s population was more diverse than anywhere else in the South, and this combination of African and European musical traditions made jazz. The music, which was originally aimed at dancing, spread throughout the city and throughout the United States. After the first recordings were made, jazz began to gain international popularity.
As jazz spread throughout the world, it brought new musical influences and cultures together. Many of the early jazz musicians were African Americans, and the genre evolved from slave songs and spirituals. Some of the first great jazz soloists were black, including Louis Armstrong. He played trumpet and performed songs like Dippermouth Blues and the Working Man’s Blues. Many musicians from around the world contributed to jazz, so the genre is constantly changing and incorporating new cultural influences.
Although jazz emerged in the United States, some of its elements came from elsewhere in the Americas, such as Brazil. In Brazil, many Black people were emancipated in the early nineteenth century and took part in their own cultural development, yet they remained isolated from the European establishment and were often unable to practice their native musical traditions. In the United States, enslaved people were exploited for their ability to play instruments.
Character education requires strong community; it requires the support of families, educators and the students themselves. Developing a moral compass is a learned behaviour over time. It requires incidental and direct teaching, pro-social role models, positive problem solving processes, compassion and empathy. Problem solving means making choices and accepting the consequences when things don’t turn out the way you want them to. When children solve problems and learn from their mistakes they build resiliency.
Positive social behaviours are taught and reinforced consistently in the home, school and community. The transfer of positive social and conflict-resolution skills being taught at school will be enhanced if students are encouraged to apply them in a variety of extracurricular and community situations. Adults model appropriate language, actions, and use children's misbehaviour as an opportunity to teach social and conflict-resolution skills, rather than as an opportunity to punish.
The ability to express thoughts and feelings constructively is a necessary skill in building relationships and managing conflict. Self-esteem is built as children develop competency, take responsibility for their language and actions, and learn to resolve problems in a positive way. Developing positive relations with a variety of people fosters a respect for diversity and helps children look at differing perspectives. Children are encouraged to show respect for others and learn how to work together to become responsible citizens. These beliefs drive our school discipline procedures and cause us to learn a better way to handle conflicts and make responsible choices.
There are many ways of showing confidence and assertiveness. If someone is bothering you, try using our IMATT strategy:
- Ignore the person
- Move away from the person bothering you
- Ask the person to stop
- Tell the person to stop
- Tell an adult
When possible we encourage the children to try the first four steps before coming to an adult. It builds positive social skills. We know that there are times when adult help is needed immediately. We want you to know that any student can depend on any adult at Evergreen School! Teachers and other staff members will continuously assist students in developing positive character qualities and when needed, conflict or problem solving strategies.
Circle of Courage at Evergreen School
This means that we have a shared responsibility to create an environment where everyone has a strong sense of “Belonging”. At Evergreen School every child is included; they work and play in flexible, collaborative groupings.
At Evergreen School every child has a sense of “Mastery”. They are deeply involved in their work and know why it matters for them. Evergreen students are enthusiastic learners. Learning at Evergreen School is inquiry rich. When problem solving, students are encouraged to explore different solutions. The learning is meaningful and engaging; our students are personally invested in their learning.
Both parents and teachers help students become responsible for the choices that they make. Our students demonstrate increased “Independence” as they are able to do so.
At Evergreen School, we are thoughtful, caring and considerate of others. We build a sense of “Generosity” by unconditionally giving to others. | <urn:uuid:cfa71d4e-c514-4de4-809b-742fc8c5dffa> | CC-MAIN-2023-06 | https://school.cbe.ab.ca/school/evergreen/teaching-learning/program-approach/pages/default.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.958261 | 640 | 3.640625 | 4 |
Water is one of the few resources which is indispensable for life. Therefore it must be used with responsibility. Depending on the source, different contaminants can be found in the water we use. Water from wells can be virtually free of particles. However, water from a surface-water source like a river has to be purified and cleaned in order to be made suitable for consumption and use. In the USA, around 76 billion gallons of water are pumped from the ground for various uses on a daily basis.
Groundwater can get polluted in various ways. The most common culprits are leaking underground storage tanks, landfills and hazardous waste sites. Wastewater which is not treated properly at treatment facilities is also another source.
There are two methods of cleaning water, chemically or by natural means. The water for drinking, bathing and washing is treated in a water treatment plant. This water is cleaned through several steps, the first of these being screening. Here the water flows through a pipe with a screen, which acts as a sifter to remove the larger objects in it.
Then there is flocculation, or clarification, in which chemicals are added that cause the smaller particles not removed by screening to clump together and settle out. The third process is filtration, wherein the water passes through fine sand, which traps whatever remains of the particles and the chemicals used in the second step.
The final step is chlorination. Chlorine is added to the water to protect against any bacteria or other pollutants that may still be present. At all stages of this process, samples of the water are taken and tested to assess whether the procedure is effective and the water is being cleaned properly.
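The four-stage sequence above can be pictured as a pipeline in which each stage removes a different class of contaminant. The toy sketch below models that idea; the stage names and the contaminant each one targets come from the article, while the data representation is purely illustrative:

```python
# Toy model of the four treatment stages described above.
# Raw water is represented as a set of contaminant classes;
# each stage removes the class it targets, in order.

STAGES = [
    ("screening",    "large objects"),
    ("flocculation", "small particles"),
    ("filtration",   "residual chemicals"),
    ("chlorination", "bacteria"),
]

def treat(water):
    """Run raw water (a set of contaminant classes) through every stage."""
    for stage, removes in STAGES:
        if removes in water:
            water = water - {removes}  # this stage removes its target class
            print(f"{stage}: removed {removes}")
    return water

raw = {"large objects", "small particles", "residual chemicals", "bacteria"}
print("remaining contaminants:", treat(raw))  # set()
```

The ordering matters in the real process just as in the sketch: each stage assumes the coarser material has already been taken out by the stages before it.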
The natural cleaning of water takes place as it moves from the ground, lakes, oceans and plants and gets transformed into clouds. As water travels through the ground, it gets filtered in a natural way, much like how it is in the filtration process where it is passed through sand. Water also gets naturally purified as it flows through some kinds of ecosystems, especially in the wetlands.
Dangerous chemicals, bacteria and other pollutants can be removed from water by a new technology known as nanotechnology. Nanotechnology is said to be much more effective and less expensive than the conventional methods of water purification. A team at the Ian Wark Research Institute at the University of South Australia has suggested that nanotechnology could solve the global problem of safe drinking water. Active particles of silica called Surface Engineered Silica (SES) have been tested to show that they could remove pathogens, viruses and biological molecules effectively. This innovative new technology for cleaning water could help avert diseases and provide safe drinking water for millions of people all over the world.
About the Author
Paul Favors is a full-time SEO Consultant and freelance writer who operates a private web consulting firm. Paul holds a B.A. in Communications Studies from the University of Alabama at Birmingham and has been a professional writer for 3 years - two of those years as regular Demand Studios contributor. | <urn:uuid:72285415-981f-44b8-b757-452d3cc954c4> | CC-MAIN-2023-06 | https://sciencing.com/water-cleaned-5158828.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.964933 | 625 | 3.734375 | 4 |
Conjunctivitis is an inflammation or infection of the conjunctiva, the thin transparent layer of tissue that lines the inner surface of the eyelid and covers the white part of the eye. Conjunctivitis, often called “pink eye,” is a common eye disease, especially in children. It may affect one or both eyes. Some forms of conjunctivitis can be highly contagious and easily spread in schools and at home. While conjunctivitis is usually a minor eye infection, sometimes it can develop into a more serious problem.
Conjunctivitis may be caused by a viral or bacterial infection. It can also occur due to an allergic reaction to irritants in the air like pollen and smoke, chlorine in swimming pools, and ingredients in cosmetics or other products that come in contact with the eyes. Sexually transmitted diseases like Chlamydia and gonorrhea are less common causes of conjunctivitis.
People with conjunctivitis may experience the following symptoms:
- A gritty feeling in one or both eyes
- Itching or burning sensation in one or both eyes
- Excessive tearing
- Discharge coming from one or both eyes
- Swollen eyelids
- Pink discoloration to the whites of one or both eyes
- Increased sensitivity to light
What causes conjunctivitis?
The cause of conjunctivitis varies depending on the offending agent. There are three main categories of conjunctivitis: allergic, infectious and chemical:
- Allergic Conjunctivitis occurs more commonly among people who already have seasonal allergies. At some point they come into contact with a substance that triggers an allergic reaction in their eyes.
- Giant Papillary Conjunctivitis is a type of allergic conjunctivitis caused by the chronic presence of a foreign body in the eye. This condition occurs predominantly with people who wear hard or rigid contact lenses, wear soft contact lenses that are not replaced frequently, have an exposed suture on the surface of the eye, or have a glass eye.
- Bacterial Conjunctivitis is an infection most often caused by staphylococcal or streptococcal bacteria from your own skin or respiratory system. Infection can also occur by transmittal from insects, physical contact with other people, poor hygiene (touching the eye with unclean hands), or by use of contaminated eye makeup and facial lotions.
- Viral Conjunctivitis is most commonly caused by contagious viruses associated with the common cold. The primary means of contracting this is through exposure to coughing or sneezing by persons with upper respiratory tract infections. It can also occur as the virus spreads along the body’s own mucous membranes connecting lungs, throat, nose, tear ducts, and conjunctiva.
- Ophthalmia Neonatorum is a severe form of bacterial conjunctivitis that occurs in newborn babies. This is a serious condition that could lead to permanent eye damage unless it is treated immediately. Ophthalmia neonatorum occurs when an infant is exposed to Chlamydia or gonorrhea while passing through the birth canal.
- Chemical Conjunctivitis can be caused by irritants like air pollution, chlorine in swimming pools, and exposure to noxious chemicals.
How is conjunctivitis diagnosed?
Conjunctivitis can be diagnosed through a comprehensive eye examination. Testing, with special emphasis on evaluation of the conjunctiva and surrounding tissues, may include:
- Patient history to determine the symptoms the patient is experiencing, when the symptoms began, and the presence of any general health or environmental conditions that may be contributing to the problem.
- Visual acuity measurements to determine the extent to which vision may be affected.
- Evaluation of the conjunctiva and external eye tissue using bright light and magnification.
- Evaluation of the inner structures of the eye to ensure that no other tissues are affected by the condition.
- Supplemental testing may include taking cultures or smears of conjunctival tissue, particularly in cases of chronic conjunctivitis or when the condition is not responding to treatment.
Using the information obtained from these tests, your optometrist can determine if you have conjunctivitis and advise you on treatment options.
How is conjunctivitis treated?
Treatment of conjunctivitis is directed at three main goals:
- To increase patient comfort.
- To reduce or lessen the course of the infection or inflammation.
- To prevent the spread of the infection in contagious forms of conjunctivitis.
The appropriate treatment for conjunctivitis depends on its cause:
- Allergic conjunctivitis – The first step should be to remove or avoid the irritant, if possible. Cool compresses and artificial tears sometimes relieve discomfort in mild cases. In more severe cases, non-steroidal anti-inflammatory medications and antihistamines may be prescribed. Cases of persistent allergic conjunctivitis may also require topical steroid eye drops.
- Bacterial conjunctivitis – This type of conjunctivitis is usually treated with antibiotic eye drops or ointments. Improvement can occur after three or four days of treatment, but the entire course of antibiotics needs to be used to prevent recurrence.
- Viral Conjunctivitis – There are no available drops or ointments to eradicate the virus for this type of conjunctivitis. Antibiotics will not cure a viral infection. Like a common cold, the virus just has to run its course, which may take up to two or three weeks in some cases. The symptoms can often be relieved with cool compresses and artificial tear solutions. For the worst cases, topical steroid drops may be prescribed to reduce the discomfort from inflammation, but they do not shorten the course of the infection. Some doctors may perform an ophthalmic iodine eye wash in the office in hopes of shortening the course of the infection. This newer treatment has not yet been well studied, so there is no conclusive evidence of its success.
- Chemical conjunctivitis – Treatment for chemical conjunctivitis requires careful flushing of the eyes with saline and may require topical steroids. The more acute chemical injuries are medical emergencies, particularly alkali burns, which can lead to severe scarring, intraocular damage or even loss of the eye.
Contact Lens Wearers
Contact lens wearers may need to discontinue wearing their lenses while the condition is active. Your doctor can advise you on the need for temporary restrictions on contact lens wear.
If the conjunctivitis developed due to wearing contact lenses, your eye doctor may recommend that you switch to a different type of contact lens or disinfection solution. Your optometrist might need to change your contact lens prescription to a type of lens that you replace more frequently, to prevent the conjunctivitis from recurring.
Practicing good hygiene is the best way to control the spread of conjunctivitis. Once an infection has been diagnosed, follow these steps:
- Don't touch your eyes with your hands.
- Wash your hands thoroughly and frequently.
- Change your towel and washcloth daily, and don't share them with others.
- Discard eye cosmetics, particularly mascara.
- Don't use anyone else's eye cosmetics or personal eye-care items.
- Follow your eye doctor's instructions on proper contact lens care.
You can soothe the discomfort of viral or bacterial conjunctivitis by applying warm compresses to your affected eye or eyes. To make a compress, soak a clean cloth in warm water and wring it out before applying it gently to your closed eyelids.
For allergic conjunctivitis, avoid rubbing your eyes. Instead of warm compresses, use cool compresses to soothe your eyes. Over-the-counter eye drops are available: antihistamine eye drops should help to alleviate the symptoms, and lubricating eye drops help to rinse the allergen off the surface of the eye.
See your doctor of optometry when you experience conjunctivitis to help diagnose the cause and the proper course of action.
JEM is designed to instill and enhance students’ reading habits by providing a variety of interesting and relevant articles and exercises. It is a great source of edutainment that can be used by students independently, alongside use as a classroom reader for more intensive reading development.
Articles use graded language for distinct levels, from B1 (intermediate) to C1 (advanced).
The articles reflect a wide range of topics, text types, and genres. Magazine content includes original short stories written by specialist ELT authors. Each article contains a glossary to help explain more advanced vocabulary items or technical terminology. The length of the articles varies according to the level and genre.
The articles are written based on the themes below to cater for readers with different interests.
Science and technology
Art and Design
There are also fun language exercises and games, such as 'Word Search' and 'Spot the Differences', included in the magazine.
What is an Interrupt
An interrupt is an event that changes the program flow, i.e. the instruction stream being executed by the CPU. Interrupts are generated by various devices connected to the CPU, or are caused by conditions (such as bugs) within the software itself. Interrupts are a way for hardware to signal to the processor.
Interrupts and Exceptions
- Exceptions (synchronous interrupts):
Caused by software and produced by the control unit of the CPU.
Example: a bug in software, or a page fault. The kernel handles these exceptions by following the steps defined (in kernel code) to recover from such a condition.
- Interrupts (asynchronous interrupts):
Caused by hardware devices.
Example: a key press or mouse movement by the user.
Interesting Points about Interrupts
- Interrupts are asynchronous, and they can be nested.
- An interrupt can occur while the kernel is handling another interrupt. While the kernel is executing a critical region, interrupts are disabled, and the critical region is kept as small as possible.
- By disabling interrupts, the kernel guarantees that an interrupt handler will not preempt the critical code.
- Interrupts and exceptions are identified by a number between 0 and 255.
- The code executed by an interrupt handler is not a process switch; rather, it runs at the expense of the process that was running when the interrupt was received.
- Interrupt handling is critical for the kernel, but it can take a long time in the case of slow I/O devices. Hence interrupt handling is divided into two parts:
- Urgent: the kernel executes this right away.
- Bottom halves: deferred for later execution (using techniques such as softirqs, tasklets, and task/work queues).
Classification of Interrupts
Maskable interrupts:
- These are interrupt requests issued by I/O devices.
- A maskable interrupt can be in one of two states: masked or unmasked.
- The vectors of maskable interrupts can be altered by programming the interrupt controller.
Non-maskable interrupts:
- They are always recognized by the CPU.
- The vectors of non-maskable interrupts are fixed.
Classification of Exceptions
- Processor-detected exceptions:
- The CPU detects an anomalous condition while executing an instruction.
- Example: page faults.
- These are further divided into faults, traps, and aborts:
- Faults: can be corrected and, once corrected, the program can be resumed.
- Traps: are reported immediately at the next instruction; used mainly for debugging.
- Aborts: severe errors, such as hardware failures; the process is terminated on receiving the corresponding signal.
- Programmable exceptions:
- Often called software interrupts.
- Used to implement system calls and for debugging.
| Vector | Exception | Type |
|---|---|---|
| 0 | Divide by zero error | Fault |
| 1 | Debug | Trap or Fault |
| 7 | Device not available | Fault |
| 9 | Coprocessor segment overrun | Abort |
| 11 | Segment not present | Fault |
| 12 | Stack segment fault | Fault |
| 15 | Reserved by Intel | |
| 19 | SIMD floating point exception | Fault |
_A page fault occurs when a process tries to access a page in its address space that is not currently in RAM. While the kernel is handling this exception, it may suspend the current process and switch to another process until the page is available in RAM. The process switch is done because of the high latency of RAM (200 ns, or several hundred CPU cycles)._
- Each hardware device connected to a computer has a single output line known as its Interrupt Request (IRQ) line.
- All the IRQ lines are connected to a hardware circuit called the Programmable Interrupt Controller (PIC).
- The PIC monitors the IRQ lines for raised signals.
- If multiple signals are raised simultaneously, the signal with the lower pin number is selected.
- When a signal is raised, its vector is stored, the vector is sent to the CPU, and a signal is raised on the CPU's INTR pin until the CPU acknowledges it.
Image credit: By Jfmantis – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18168230
APIC – Advanced Programmable Interrupt Controller
In modern multiprocessor systems, there is a local APIC chip per CPU. The APIC has the following components:
- 32-bit registers
- An internal clock
- A local timer device
- Two additional lines, LINT0 and LINT1
Image credit: [Intel Software Developer manual vol 3.](https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf)
Categories of Interrupts
- I/O Interrupts
- Timer Interrupts
- Interprocessor interrupts
- Critical: actions that are executed within the interrupt handler immediately.
- Noncritical: actions that are quick to finish and hence are also executed by the interrupt handler immediately.
- Noncritical deferrable: actions that may be delayed for a long time interval without affecting the kernel operations.
The kernel tries to distribute the IRQ signals coming from the hardware devices in a round-robin fashion among all the CPUs.
The interrupts coming from external hardware can be distributed within CPUs in following ways
- Static Distribution
- Dynamic Distribution
The kernel provides functionality to redirect all interrupts to a particular CPU. This is achieved by modifying the Interrupt Redirection Table entries of the I/O APIC. The IRQ affinity of a particular interrupt can also be changed by writing a new CPU bitmap mask into the /proc/irq/n/smp_affinity file.
When an interrupt is received, the kernel runs the interrupt handler (or interrupt service routine) code. These are C functions. A data structure named the Interrupt Descriptor Table (IDT) associates each interrupt or exception vector with the address of the corresponding interrupt or exception handler. This table must be properly initialized before the kernel enables interrupts.
- Interrupt handling should be fast, but there may be a large amount of work involved; hence the handling is divided into two parts:
- Top half: executed immediately, performing time-critical work such as acknowledging the interrupt.
- Bottom half: work that can be deferred, such as communicating with I/O.
- As stated above, interrupt handling has two parts: critical and non-critical (deferred) handling.
- Softirqs, tasklets, work queues, etc. are mechanisms for processing the deferred part of interrupt handling, also called bottom halves.
| Softirqs | Tasklets |
|---|---|
| Statically allocated. | Can also be allocated and initialized at runtime. |
| Reentrant functions that must explicitly protect their data structures with spin locks. | Do not need synchronization, because the kernel handles that for them. |
| Provide the least serialization. | Tasklets of the same type are always serialized: the same type of tasklet cannot be executed by two CPUs at the same time. |
| | Easy to code. |
- Work queues defer work into a kernel queue.
- Functions in work queues run in process context, and hence can block or sleep.
- A function in a work queue is executed by a kernel thread, so there is no User Mode address space to access.
- Exceptions raised by the CPU are handled by Linux as error conditions.
- The kernel sends a signal to the process to notify it of the erroneous condition.
- Steps taken to handle an exception:
- Save the registers to the kernel stack.
- Invoke a C-level function to handle the exception.
- Call the ret_from_exception() function and exit.
Signals are software-generated interrupts. A signal is generated for a process (or sent to a process) when the event that causes the signal occurs. When the signal is generated, the kernel usually sets a flag of some form in the process table. A signal is delivered to a process when the action for the signal is taken. Between the time of generation and delivery, the signal is pending.
When a process receives a signal, it can do one of the following:
- Ignore the signal: all signals except SIGKILL and SIGSTOP can be ignored.
- Catch the signal: invoke a callback on receiving the signal. Again, SIGKILL and SIGSTOP cannot be caught or blocked.
- Apply the default action.
On abnormal termination of a process, a memory image of the process (a core file) is stored in the working directory of the process.
Reentrant functions are functions that are guaranteed to be safe to call from within a signal handler; these are also known as async-signal-safe functions. Code that relies on non-reentrant state should block signals before entering a critical region.
As this blog is about volcanic and seismic activity, a word or two on what a volcano is might be helpful.
A volcano is defined by the Oxford English Dictionary as a mountain or hill with a crater or vent through which rocks, rock fragments, lava, hot vapour and/or gases are, or have been, erupted through the Earth's crust. However, said mountain or hill may be quite small, even just a depression or a rupture in the Earth's surface.
The island of Vulcano is the source of the term "volcano" itself. Vulcano is located in the Tyrrhenian Sea, north of Sicily, and is made up of several active volcanoes, including calderas.
So what causes the lava and other matter to be erupted? What is the Earth's crust? These are some of the questions we will look at in this blog.
Heat generated at the Earth's core drives the geological processes which result in volcanic activity. To start, we will look at the basics of the Earth's composition, magma, and the source of the energy that enables eruptions to occur.
Basics of the Earth’s Composition
This is a very basic description of the Earth’s composition. The Earth is made up of three main parts: the core, the mantle and the crust. The core is very hot and temperatures decrease towards the Earth’s surface. Most of the Earth is solid, only the outer core is liquid. Evidence for this structure has been gleaned from seismic studies, notably how the different wave types generated by an earthquake pass through the Earth, geophysics and the study of rocks.
The radius of the Earth is around 6,378 km; in other words, the centre of the Earth's core can be found 6,378 km down. The core makes up around one third of the Earth's mass. It is made up of an outer core, which starts at around 2,900 km down, and an inner core, which starts at around 5,100 km down.
Material in the core is too dense to make its way to the surface, so there is some uncertainty over its composition. What we do know is inferred from geophysical studies of the Earth and the chemical analysis of meteorites. During the Earth’s formation, as rocks and fragments combined to form the planet, denser matter sunk towards the core under gravitational and other forces. Iron is the chief component of the core, with nickel at the inner core and a lighter element in the outer core (possibly, oxygen, sulphur, carbon, hydrogen or potassium). The iron in the core and the electrical currents in the molten outer core are the source of the Earth’s magnetic field.
Seismic studies have shown that the outer core is impermeable to earthquake shear waves (S waves) so acts like a liquid. Whether or not a layer is liquid or solid is down to the balance between temperature, pressure and chemical composition: while the inner core is around 4,700°C, immense pressure keeps the rock solid.
The mantle is composed of solid rocky materials that are less dense that the outer core; it makes up two thirds of the Earth’s mass. Density differences mean that the mantle is a distinct layer from the outer core. The most abundant elements in the mantle are silicon and oxygen, that form silicates. The mantle is made up of around 45% silica. Magnesium and iron are the third and fourth most abundant elements. Many other elements are to be found in the mantle, but these tend to be depleted near the boundary with the crust.
The composition of the mantle is inferred from xenoliths (small fragments of rock) contained in some basalt magmas and kimberlites. Whether or not these are representative of the mantle as a whole or just the fragments that have been erupted is open for debate.
The upper mantle is joined to the crust; the combined layer is referred to as the lithosphere. Below the lithosphere, also in the upper mantle, is the asthenosphere. The asthenosphere, being weaker than the lithosphere, enables lithospheric slabs to move around (plate tectonics). The asthenosphere moves at the rate of a few centimetres a year from a process called solid-state convection; hot mantle rises, transfers heat to the lithosphere and the resulting cooled mantle sinks. The heat in the lithosphere is dissipated through conduction or via rising magma.
The lithosphere is around 120 km thick. Its boundary with the asthenosphere is defined by the temperature at which rocks become ductile, around 1,350°C.
The crust is a silicate rich brittle layer covering the mantle; it comprises less than 0.5% of the Earth’s mass. There are two types of crust: oceanic crust, c. 6 km to 11 km thick, mostly basalt, which makes up ocean floors; and, continental crust, c. 25 km to 90 km thick, composed of igneous rocks (granite and andesite), sedimentary rocks and metamorphic rocks, which, as the name suggests, make up the continents and the continental shelves. Igneous rocks are those resulting from volcanic processes. Sedimentary rocks are those made up of fragments produced by erosion or decay of rocks on the surface. Metamorphic rocks are sedimentary or igneous rocks altered by changes in temperature and / or pressure.
Magma is the molten rock from either the mantle or the crust, itself, that makes its way through the crust to where it may be erupted as lava at a volcano or volcanic fissure. Magma and lava are the same rock: it is magma until it is erupted; and, lava is the erupted matter.
The composition of the magma and how it is generated determine the eruptive style of the volcano: e.g. effusive or explosive.
The energy required for matter to be erupted is heat from the Earth's core. The core contains radioactive materials, and their radioactive decay generates heat. Most heat today is generated by four long-lived radioactive isotopes: two uranium isotopes, 235U and 238U; one thorium isotope, 232Th; and one potassium isotope, 40K. Additional heat came from the decay of the shorter-lived aluminium isotope, 26Al, earlier in the planet's formation. Asteroid bombardment has also added kinetic energy.
So we now have the Earth's crust, magma and heat. What happens next? Watch this space.
The Armchair Volcanologist
24 June 2020
© Copyright remains with the author; all rights reserved. 2020.
Technological Innovation Website Editor – 08/01/2022
Illustration and image (detail) of atoms swimming in a liquid.
[Imagem: University of Manchester]
Liquids come into contact with solids all the time, creating essential interfaces for everything from life-sustaining biological processes to industrial processes, batteries, fuel cells and just about anything else we can imagine.
However, we know very little about this fundamental phenomenon.
What we do know is that when a solid surface comes into contact with a liquid, both substances change their configuration in response to each other’s proximity – how, how quickly, and how each substance comes to behave are all still enigmas to be solved.
“Given the broad industrial and scientific importance of this behavior, it is really surprising how much we still have to learn about the fundamentals of how atoms behave on surfaces in contact with liquids. One of the reasons why so much information is lacking is the lack of techniques capable of producing experimental data for solid-liquid interfaces,” explained Professor Sarah Haigh from the University of Manchester (UK).
The good news is that Professor Haigh’s team has just created a new “nano-petri dish”, allowing for the first time to observe how individual atoms of a solid behave as that solid comes into contact with a liquid.
This is the liquid cell, which allows viewing the solid-liquid interface.
[Imagem: Daniel J. Kelly et al. – 10.1021/acs.nanolett.7b04713]
The team began by stacking layers of a well-studied two-dimensional material, molybdenite. Then they drilled holes in this molybdenum disulfide, covered one side with graphene, inserted liquid, and then capped the other side with graphene as well—the researchers call this device a “dual graphene liquid cell.”
These graphene windows made it possible to create precisely controlled liquid layers, making it possible for the first time to film individual atoms “swimming” surrounded by the liquid.
By analyzing how atoms move, and comparing the images with the theories, the researchers were able to understand the effect of the liquid on atomic behavior.
They found, for example, that the liquid accelerates the movement of atoms, and also that it changes the atom’s preferred resting places in relation to the underlying solid – which was not quite what the theories expected.
“In our work, we have shown that misleading information is provided if atomic behavior is studied in a vacuum, rather than using our liquid cells,” explained researcher Nick Clark, referring to transmission electron microscopy, a technique that allows us to visualize and analyze individual atoms but which requires a high-vacuum environment – and the structure of materials changes in a vacuum.
The team believes that its cells with transparent windows should have a widespread impact on the development of green technologies, such as hydrogen production.
“This is a landmark achievement and just the beginning – we are already looking to use this technique to support the development of materials for the sustainable chemical processing needed to achieve the world’s zero net emissions goals,” said Clark.
Article: Tracking single adatoms in liquid in a Transmission Electron Microscope
Authors: Nick Clark, Daniel J. Kelly, Mingwei Zhou, Yi-Chao Zou, Chang Woo Myung, David G. Hopkinson, Christoph Schran, Angelos Michaelides, Roman Gorbachev, Sarah J. Haigh
Vol.: 18, 2, 1168-1174
Article: Nanometer Resolution Elemental Mapping in Graphene-Based TEM Liquid Cells
Authors: Daniel J. Kelly, Mingwei Zhou, Nick Clark, Matthew J. Hamer, Edward A. Lewis, Alexander M. Rakowski, Sarah J. Haigh, Roman V. Gorbachev
Magazine: Nano Letters
By Hannah Packman, NFU Communications Coordinator
Last month on the Climate Column, we introduced the idea of prescribed grazing, a conservation practice in which the frequency and intensity of grazing, as well as the density and placement of livestock, are regulated with a specific goal in mind. Prescribed grazing can be carried out in a number of different ways, depending on the size and type of operation, topography, climate, season, and desired outcomes.
Rotational grazing is one of the most common forms of prescribed grazing. Under rotational grazing, as the U.S. Department of Agriculture (USDA) Natural Resources Conservation Service (NRCS) describes it, “only one portion of pasture is grazed at a time while the remainder of the pasture ‘rests.’” Before incorporating rotational grazing into their management plans, producers must use fences to partition their grazing land into smaller subdivisions, known as “paddocks.” Livestock are then herded from paddock to paddock and allowed to graze for a specified amount of time, allowing the rest of the land to rejuvenate during that period.
By allowing the forage to regrow, rotational grazing offers a number of conservation benefits. For one, it can decrease the risk of soil erosion. Healthy and robust forage has a deep root system, which can stabilize soil, as well as vegetative cover, which can protect soil from wind and water. Furthermore, moving livestock frequently can prevent soil compaction, which in turn increases the soil's infiltration capacity. This provides additional conservation benefits; greater infiltration capacity inhibits the occurrence of runoff, which may carry plant nutrients, manure, and pesticides into nearby surface water. Ground water quality may benefit as well; rotationally grazed land does not require as many nutrient inputs, and deeper roots can absorb nutrients further down in the soil, both of which decrease the quantity of contaminants entering ground water.
Rotational grazing does not merely offer conservation benefits. Many producers choose to implement the practice because of the economic efficiency it affords. Forage raised in this system is typically healthier, more resilient, and more abundant than those grown in a continuous system, which can save farmers money on feed and other inputs. Additionally, the start-up costs and maintenance expenses are low, as are the time requirements, when compared to a confinement system that necessitates significant infrastructure and time spent feeding livestock. And wildlife can benefit as well; like many conservation practices, rotational grazing can bolster wildlife habitats by allowing native species to grow undisturbed.
Have you used prescribed grazing on your operation? If so, how has it benefitted you? Share your thoughts in the comments section below!
Like what you've read? Check out our Climate Leaders home page, join the conversation in the NFU Climate Leaders Facebook Group, and keep up-to-date with NFU climate action by signing up for the mailing list.
Context: The Non-Aligned Movement (NAM) is an important topic for UPSC GS Paper 2.
The Non-Aligned Movement (NAM) was created and founded during the collapse of the colonial system and the independence struggles of the peoples of Africa, Asia, Latin America, and other regions of the world, at the height of the Cold War.
During the early days of the Movement, its actions were a key factor in the decolonization process, which led later to the attainment of freedom and independence by many countries and peoples and to the founding of tens of new sovereign States. Throughout its history, the Movement of Non-Aligned Countries has played a fundamental role in the preservation of world peace and security.
While some meetings with a third-world perspective were held before 1955, historians consider that the Bandung Asian-African Conference is the most immediate antecedent to the creation of the Non-Aligned Movement.
This Conference was held in Bandung on April 18-24, 1955 and gathered 29 Heads of states belonging to the first post-colonial generation of leaders from the two continents with the aim of identifying and assessing world issues at the time and pursuing out joint policies in international relations.
The principles that would govern relations among large and small nations, known as the "Ten Principles of Bandung", were proclaimed at that Conference. Such principles were adopted later as the main goals and objectives of the policy of non-alignment. The fulfillment of those principles became the essential criterion for Non-Aligned Movement membership; it is what was known as the "quintessence of the Movement" until the early 1990s.
In 1960, in the light of the results achieved in Bandung, the creation of the Movement of Non-Aligned Countries was given a decisive boost during the Fifteenth Ordinary Session of the United Nations General Assembly, during which 17 new African and Asian countries were admitted. A key role was played in this process by the then Heads of State and Government Gamal Abdel Nasser of Egypt, Kwame Nkrumah of Ghana, Shri Jawaharlal Nehru of India, Ahmed Sukarno of Indonesia and Josip Broz Tito of Yugoslavia, who later became the founding fathers of the movement and its emblematic leaders.
Six years after Bandung, the Movement of Non-Aligned Countries was founded on a wider geographical basis at the First Summit Conference of Belgrade, which was held on September 1-6, 1961. The Conference was attended by 25 countries: Afghanistan, Algeria, Yemen, Myanmar, Cambodia, Sri Lanka, Congo, Cuba, Cyprus, Egypt, Ethiopia, Ghana, Guinea, India, Indonesia, Iraq, Lebanon, Mali, Morocco, Nepal, Saudi Arabia, Somalia, Sudan, Syria, Tunisia, Yugoslavia.
The founders of NAM preferred to declare it a movement rather than an organization, in order to avoid the bureaucratic implications of the latter.
The membership criteria formulated during the Preparatory Conference to the Belgrade Summit (Cairo, 1961) show that the Movement was not conceived to play a passive role in international politics but to formulate its own positions in an independent manner so as to reflect the interests of its members.
Thus, the primary objectives of the non-aligned countries focused on:
the support of self-determination
national independence and the sovereignty and territorial integrity of States
opposition to apartheid; non-adherence to multilateral military pacts and the independence of non-aligned countries from great power or block influences and rivalries;
the struggle against imperialism in all its forms and manifestations;
the struggle against colonialism, neocolonialism, racism, foreign occupation and domination;
disarmament; non-interference into the internal affairs of States and peaceful coexistence among all nations;
rejection of the use or threat of use of force in international relations;
the strengthening of the United Nations
the democratization of international relations;
socioeconomic development and the restructuring of the international economic system
international cooperation on an equal footing.
During the 1970s and 1980s, the Movement of Non-Aligned Countries played a key role in the struggle for the establishment of a new international economic order that allowed all the peoples of the world to make use of their wealth and natural resources and provided a wide platform for a fundamental change in international economic relations and the economic emancipation of the countries of the South.
During its nearly 50 years of existence, the Movement of Non-Aligned Countries has gathered a growing number of States and liberation movements which, in spite of their ideological, political, economic, social and cultural diversity, have accepted its founding principles and primary objectives and shown their readiness to realize them.
The ten principles of Bandung:
Respect for fundamental human rights and for the objectives and principles of the Charter of the United Nations.
Respect for the sovereignty and territorial integrity of all nations.
Recognition of the equality among all races and of the equality among all nations, both large and small.
Non-intervention or non-interference into the internal affairs of another -country.
Respect of the right of every nation to defend itself, either individually or collectively, in conformity with the Charter of the United Nations.
a. Non-use of collective defense pacts to benefit the specific interests of any of the great powers.
b. Non-use of pressure by any country against other countries.
Refraining from carrying out or threatening to carry out aggression, or from using force against the territorial integrity or political independence of any country.
Peaceful solution of all international conflicts in conformity with the Charter of the United Nations.
Promotion of mutual interests and of cooperation.
Respect of justice and of international obligations.
Inspired by the principles and purposes brought to the Non-Aligned Movement by the Bandung principles and by the First NAM Summit in Belgrade in 1961, the Heads of State and Government of the member countries of the Non-Aligned Movement adopted, at the 14th Summit in Havana, the following purposes and principles of the movement in the present international juncture:
a. To promote and reinforce multilateralism and, in this regard, strengthen the central role that the United Nations must play.
b. To serve as a forum of political coordination of the developing countries to promote and defend their common interests in the system of international relations.
c. To promote unity, solidarity and cooperation between developing countries based on shared values and priorities agreed upon by consensus.
d. To defend international peace and security and settle all international disputes by peaceful means in accordance with the principles and the purposes of the UN Charter and International Law.
e. To encourage relations of friendship and cooperation between all nations based on the principles of International Law, particularly those enshrined in the Charter of the United Nations.
f. To promote and encourage sustainable development through international cooperation and, to that end, jointly coordinate the implementation of political strategies which strengthen and ensure the full participation of all countries, rich and poor, in international economic relations, under equal conditions and opportunities but with differentiated responsibilities.
g. To encourage the respect, enjoyment and protection of all human rights and fundamental freedoms for all, on the basis of the principles of universality, objectivity, impartiality and non-selectivity, avoiding politicization of human rights issues, thus ensuring that all human rights of individuals and peoples, including the right to development, are promoted and protected in a balanced manner.
h. To promote peaceful coexistence between nations, regardless of their political, social or economic systems.
i. To condemn all manifestations of unilateralism and attempts to exercise hegemonic domination in international relations.
j. To coordinate actions and strategies in order to confront jointly the threats to international peace and security, including the threats of use of force and the acts of aggression, colonialism and foreign occupation, and other breaches of peace caused by any country or group of countries.
k. To promote the strengthening and democratization of the UN, giving the General Assembly the role granted to it in accordance with the functions and powers outlined in the Charter and to promote the comprehensive reform of the United Nations Security Council so that it may fulfill the role granted to it by the Charter, in a transparent and equitable manner, as the body primarily responsible for maintaining international peace and security.
l. To continue pursuing universal and non-discriminatory nuclear disarmament, as well as a general and complete disarmament under strict and effective international control and, in this context, to work towards the objective of arriving at an agreement on a phased program for the complete elimination of nuclear weapons within a specified time frame, to prohibit their development, production, acquisition, testing, stockpiling, transfer, use or threat of use, and to provide for their destruction.
m. To oppose and condemn the categorization of countries as good or evil based on unilateral and unjustified criteria, and the adoption of a doctrine of pre-emptive attack, including attack by nuclear weapons, which is inconsistent with international law, in particular, the international legally-binding instruments concerning nuclear disarmament and to further condemn and oppose unilateral military actions, or use of force or threat of use of force against the sovereignty, territorial integrity and independence of Non-Aligned countries.
n. To encourage States to conclude agreements freely arrived at, among the States of the regions concerned, to establish new Nuclear Weapons-Free Zones in regions where these do not exist, in accordance with the provisions of the Final Document of the First Special Session of the General Assembly devoted to disarmament (SSOD.1) and the principles adopted by the 1999 UN Disarmament Commission, including the establishment of a Nuclear Weapons Free Zone in the Middle East. The establishment of Nuclear Weapons-Free Zones is a positive step and important measure towards strengthening global nuclear disarmament and non-proliferation.
o. To promote international cooperation in the peaceful uses of nuclear energy and to facilitate access to nuclear technology, equipment and material for peaceful purposes required by developing countries.
p. To promote concrete initiatives of South-South cooperation and strengthen the role of NAM, in coordination with G.77, in the re-launching of North-South cooperation, ensuring the fulfillment of the right to development of our peoples, through the enhancement of international solidarity.
q. To respond to the challenges and to take advantage of the opportunities arising from globalization and interdependence with creativity and a sense of identity in order to ensure its benefits to all countries, particularly those most affected by underdevelopment and poverty, with a view to gradually reducing the abysmal gap between the developed and developing countries.
r. To enhance the role that civil society, including NGOs, can play at the regional and international levels in order to promote the purposes, principles and objectives of the Movement.
Current challenges facing the NAM include:
the necessity of protecting the principles of International law,
eliminating weapons of mass destruction,
combating terrorism, defending human rights,
working toward making the United Nations more effective in meeting the needs of all its member states in order to preserve international peace, security and stability, as well as realizing justice in the international economic system.
India, a founding and the largest member of NAM, was an active participant in NAM meetings till the 1970s, but India's inclination towards the erstwhile USSR created confusion among smaller members. This led to the weakening of NAM, and small nations drifted towards either the US or the USSR.
The subsequent disintegration of the USSR led to a unipolar world order dominated by the US. India's New Economic Policy and its inclination towards the US raised questions over India's seriousness about non-alignment.
The Prime Minister of India skipped the 17th Non-Aligned Movement (NAM) summit held in Venezuela in 2016; it was only the second such instance when an Indian head of government did not participate in a NAM summit.
Moreover, NAM continued losing relevance for India in a unipolar world, especially after founding members failed to support India during crises. For instance, during the 1962 war with China, Ghana and Indonesia adopted explicitly pro-China positions. During the 1965 and 1971 wars, Indonesia and Egypt took an anti-India stance and supported Pakistan.
India in particular, but also most other NAM countries, have integrated themselves to varying degrees into the liberal economic order and have benefited from it.
India is a member of the G20, has declared itself a nuclear weapons power, and has for all practical purposes abandoned the call for global nuclear disarmament.
India has also engaged with new and old global powers. India's joining of the Quadrilateral Security Dialogue, a coalition seen by many as a counterforce to China's rise in the Indo-Pacific, and of the China-led Shanghai Cooperation Organisation shows India's balancing approach in the new world order.
India is striving hard for a multipolar world order and asserting itself as one of its players. A multipolar world order is very much close to NAM principles.
Relevance of NAM
NAM continues to hold relevance as a platform and due to its principles.
World peace - NAM has played an active role in preserving world peace. It still stands by its founding principles, ideas and purpose, i.e., to establish a peaceful and prosperous world. It opposed the invasion of any country, promoted disarmament and supported a sovereign world order.
Territorial integrity and sovereignty - NAM stands by this principle and has repeatedly proved its relevance in preserving the independence of every nation.
Third World nations - Third World countries, which have long been fighting socio-economic problems after prolonged exploitation by developed nations, have found in NAM a protector against Western hegemony.
Support of UN - NAM's total strength comprises 118 developing countries, most of them members of the UN General Assembly. They represent two-thirds of the General Assembly's members; hence NAM members act as an important voting bloc in the UN.
Equitable world order - NAM promotes equitable world order. It can act as a bridge between the political and ideological differences existing in the international environment.
Interest of developing countries - If disputes arise between developed and developing nations on any issue of concern (for example, at the WTO), NAM acts as a platform that negotiates and concludes disputes peacefully, securing favourable decisions for its member nations.
Cultural diversity and human rights - In an environment of gross human rights violations, NAM can provide a platform to raise such issues and resolve them through its principles.
Sustainable development - NAM supported the concept of sustainable development and can lead the world toward sustainability. It can be used as a larger platform to build consensus on burning global issues like climate change, migration and global terrorism.
Economic growth - The countries of NAM have inherent assets, such as favourable demography, demand and location. Cooperation can lead them to higher and sustainable economic growth, and can be an alternative to regional groupings like the TPP and RCEP.
Last NAM Summit - Virtual NAM Summit:
The virtual Non-Aligned Movement (NAM) Contact Group Summit on “United against Covid-19” through video conferencing was held recently.
The meeting was convened at the initiative of President Ilham Aliyev of Azerbaijan, in his capacity as chair of the Non Aligned Movement.
Moreover, 30 Heads of State and other leaders joined the Summit. The Summit was also addressed by the President of the UN General Assembly and the World Health Organisation (WHO) chief.
It was the first time that Prime Minister Narendra Modi participated in a NAM Summit since he assumed office in 2014.
Prime Minister Narendra Modi had earlier become the first Indian Prime Minister to skip the NAM Summits of 2016 and 2019.
Adoption of the Declaration:
The Summit adopted a Declaration underlining the importance of international solidarity in the fight against Covid-19.
Creation of Task Force:
It also announced the creation of a ‘Task Force’ to identify needs and requirements of member States.
A common database reflecting countries' basic medical, social and humanitarian needs in the fight against Covid-19 will be created.
EDITORIAL: NAM at 60 marks an age of Indian alignment
The birth anniversary of Jawaharlal Nehru this month and the 60th anniversary of the Non-Aligned Movement prompt reflection on Nehru’s major contribution to the field of international relations.
The concept of not aligning a country's policy with others can be traced to the Congress of Vienna (1814-15), when the neutrality of Switzerland, by which that country would keep out of others' conflicts, was recognised.
One world and free India
Mahatma Gandhi, icon of Indian Independence, believed in non-violent solutions and spirituality, with India having a civilising mission for mankind which accorded well with Nehru’s desire to innovate in world politics and his conception of modernity.
In 1946, six days after Nehru formed the national government, he stated, “we propose... to keep away from the power politics of groups aligned against one another... it is for One World that free India will work.”
Nehru, the theoretician, saw world problems as interlinked; not a binary of right and wrong, but as a practical person, his instructions to delegates at international meetings were to consider India’s interests first, even before the merits of the case; this was the paradox of a moral orientation in foreign policy and the compulsions of the real world.
In essence, Indian non-alignment’s ideological moorings began, lived and died along with Nehru’s idealism, though some features that characterised his foreign policy were retained to sustain diplomatic flexibility and promote India while its economic situation improved sufficiently to be described as an ‘emerging’ power.
Nehru was opposed to the conformity required by both sides in the Cold War, and his opposition to alliances was justified by American weapons to Pakistan from 1954 and the creation of western-led military blocs in Asia.
Non-alignment was the least costly policy for promoting India’s diplomatic presence, a sensible approach when India was weak and looked at askance by both blocs, and the best means of securing economic assistance from abroad.
India played a lone hand against colonialism and racism until many African states achieved independence after 1960.
India played a surprisingly prominent role as facilitator at the 1954 Geneva Peace Conference on Indochina, whereafter non-alignment appeared to have come of age.
The difficulty was always to find a definition of this policy, which caused a credibility gap between theory and practice. In the early years, there was economic dependence on donor countries who were nearly all members of western military pacts.
Indian equidistance to both Koreas and both Vietnams was shown by India recognising neither; yet it recognised one party in the two Chinas and two Germanies, and the Treaty of peace, friendship and cooperation between India and the Union of Soviet Socialist Republics of 1971, fashioned with the liberation war of Bangladesh in view, came dangerously close to a military alliance.
When Yugoslavia and Egypt became non-aligned by defying the great powers and convened the first Summit Conference of the Non-Aligned Movement in 1961, Nehru, who never endorsed confrontational methods, became a third but hesitant co-sponsor, because in theory, a coalition or movement of non-aligned nations was a contradiction in terms.
According to then Defence Minister Krishna Menon’s epigram, true non-alignment was to be non-aligned towards the non-aligned.
Nehru’s misgivings were confirmed when only two members, Cyprus and Ethiopia, of the conference supported India in the war with China.
Among the Non-Aligned Movement’s members was a plenitude of varying alignments, a weakness aggravated by not internalising their own precepts of human rights and peaceful settlement of disputes on the grounds of not violating the sacred principle of sovereign domestic jurisdiction.
Other failures were lack of collective action and collective self-reliance, and the non-establishment of an equitable international economic or information order. The Movement could not dent, let alone break, the prevailing world order.
The years following Nehru’s death saw the atrophy of his idealism, and non-alignment during his successors moved from pragmatism under Indira Gandhi and opportunism after the dissolution of the former Soviet Union, to the semi-alignment of today.
Prime Minister Narendra Modi’s party, by ideology, inclination and threat perception, is inclined to greater alignment with the United States whether under the nebulous rubric of the Indo-Pacific or otherwise.
Longevity of organisations
The Centre for Policy Research produced a document in 2012 titled 'Non-alignment Mark 2.0' which left no trace; the same body's paper this year, 'A rethink of foreign policy', elides it altogether.
Every international organization has a shelf life, though many survive for years in semi-neglect.
The League of Nations was given the coup de grâce after seven years of inactivity only in 1946, even after the United Nations had come into being.
The Commonwealth will last only as long as the British find it useful. It is hard to see any future for Brazil-Russia-India-China-South Africa (BRICS) or its various institutional offspring, given the state of India-China relations.
The South Asian Association for Regional Cooperation (SAARC) has faded into oblivion.
Few among even our serving diplomats could tell what transpired at the last Non-aligned Conference or where the next will be held, while the symbolic anniversary unanimously agreed upon in 1981, 'The First of September, Day of Non-alignment', has come and gone unnoticed.
In conclusion, the Non-Aligned Movement, faced with goals yet to be reached and the many new challenges that are arising, is called upon to maintain a prominent and leading role in current international relations in defense of the interests and priorities of its member states and for the achievement of peace and security for mankind. | <urn:uuid:cd72c098-41a9-4698-a0aa-ac6bba3524b4> | CC-MAIN-2023-06 | https://www.aspireias.com/current-affairs-news-analysis-editorials/Non-Aligned-Movement-Detailed-Overview | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.949907 | 4,603 | 3.765625 | 4 |
Everything we do is ‘behaviour’, and everybody communicates through their behaviour. A person with Autism may behave in ways that are unexpected or unusual in order to communicate effectively in a situation.
For example, a person needing help might suddenly call out, stand near someone or touch them on the arm. In this situation, all these behaviours would communicate the same intention or purpose, however, some of these behaviours may be considered more appropriate than others, depending on the context. (For example, calling out may be an acceptable behaviour in a park but not in a library).
Behaviour is contextual. It’s more than just what the person does. It involves the interaction between the person, their specific situation, their environment, and the other people around them. So, to understand behaviour we need to look beyond what we merely see on the surface. It’s important to remember that the underlying difficulties related to Autism characteristics can contribute to challenging behaviour, but challenging behaviour in and of itself is not a core feature of Autism and is not synonymous with Autism.
Behaviour Has a Purpose
Behaviour is a form of communication that can convey an important message. There is always a reason for the way we behave.
At times behaviours may be viewed as challenging to manage. However, behaviours don’t happen because the person is intentionally trying to be ‘difficult’, and they generally don’t “come out of the blue” even if the cause isn’t immediately obvious. Challenging behaviour often indicates that the person is unable to cope at that moment and can’t express why in a typical way. It’s frequently the result of a clash between the demands of the situation and the person’s skills to respond, and is influenced by how they feel, what’s happened before and what is happening around them at the time.
The impact of these behaviours is what makes them challenging, both for the person and those around them.
Behaviour is called ‘challenging’ because it challenges those who support the person to understand why it is happening (like parents, carers, teachers and professionals). Challenging behaviours are sometimes called ‘Behaviours of Concern’. Behaviours become concerning if they impact the quality of a person’s life or put them or those around them at risk.
Challenging behaviour usually has two main functions: to get something or to get away from something (like an item, an activity, a sensation, attention or a person).
It is always important to rule out pain or illness as possible contributors to unexpected behaviours.
Imagine you suddenly feel pain while having some dental work done. If you couldn’t use words to express yourself, how would you let the dentist know? More than likely you’d use behaviour to communicate, like grimacing, groaning, waving your hand or jumping out of your seat.
To prevent a behaviour from occurring (or to reduce the likelihood of it happening again), we first need to understand the purpose of the behaviour. To understand why a behaviour happens, we need to consider all the contributing factors — the person’s Autism characteristics; their skills; the environment that they are in; the expectations placed on them; and the other people involved in the situation. | <urn:uuid:23907392-2f9c-4a3e-b6f7-c3ea09c7641f> | CC-MAIN-2023-06 | https://www.autism.org.au/what-is-autism/understanding-behaviour/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.952611 | 683 | 3.734375 | 4 |
We're examining factors that can influence canola winter survival. Winter survival is complicated as stand losses can be caused by one or more abiotic and biotic stresses, including poor plant establishment, low temperatures, duration of cold temperatures, wind desiccation, dry soils, soil heaving, and damage by diseases and pests. These environmental factors, in addition to a cultivar’s freezing tolerance and its ability to cold harden, will ultimately determine whether a crop will survive cold temperatures.
Winter Hardening Process
In order to survive the winter, canola must go through a hardening process. This begins in the rosette stage in the late fall after several days of near-freezing temperatures (about 35°F). At these temperatures, plant growth is slowed, resulting in smaller cells with a higher concentration of soluble substances more resistant to frost damage.
A few hard freezes (about 26°F) are beneficial for halting leaf growth and for hardening to "set in." Longer acclimation periods with fewer diurnal swings in temperatures above and below freezing are beneficial to hardening and increased freezing tolerance in plants. Hardened winter canola can endure a certain amount of time with temperatures at or below 0°F. However, extended periods of temperatures at or below 0°F, especially without snow cover, can be detrimental to survival.
In Figure 1, winter canola is pictured in a plot near Manhattan, Kan., on the morning of Feb. 15, 2021. At the time, overnight low temperatures had fallen below 0°F for four consecutive nights. A low temperature of -18°F was recorded on the night of Feb. 16. Fortunately, the plots experienced nearly 100% survival because the limited snow cover helped insulate them from the bitter cold.
Figure 1. Winter canola nursery under snow cover near Manhattan, Kan., on the morning of February 15, 2021. Picture by Mike Stamm, K-State Research and Extension.
"Un-Hardening” of Canola
Ultimately, it may not be the cold temperatures per se that cause winter kill but the rapid fluctuations in temperature, which can be a common occurrence in Kansas during the winter. “Un-hardening” of canola is accelerated when temperatures increase to 60°F or above for an extended period of time (approximately 2 weeks). Un-hardening is a loss of freezing tolerance. However, the effect of fluctuating temperatures and un-hardening during the winter is complex.
Research conducted by K-State indicates winter warming trends can actually have a positive effect on winter survival in some ways. Green leaf tissue may have increased metabolic activity, rejuvenating the overwintering plants. This partly explains why plants growing in the field can survive colder temperatures than plants acclimated at continuous cold temperatures in a controlled environment. If the warming trend is followed by a gradual cool down and no stem elongation occurs, then plants can re-harden. In addition, as long as low nighttime temperatures accompany warmer daytime temperatures, the rate of un-hardening should be slowed.
Winter Hardiness Traits in Canola Cultivars
Winter hardiness is an important trait to consider when selecting a cultivar for any cropping system. Differences exist, however, so decisions should be based on results from multiple years and locations. A good rule of thumb to follow is to only select cultivars that show at least 60% or greater survival scores on a consistent basis across site years.
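As an illustration of that rule of thumb, a consistency screen over multi-year, multi-location trial results might look like the following sketch. The cultivar names, survival scores, and data layout here are hypothetical, not taken from an actual variety trial:

```python
# Screen cultivars for consistent winter survival across site-years.
# Rule of thumb from the text: only select cultivars showing at least
# 60% survival on a consistent basis (here: in every site-year tested).

def consistent_survivors(trials, threshold=60.0):
    """trials: iterable of (cultivar, site_year, survival_pct) records."""
    by_cultivar = {}
    for cultivar, site_year, survival in trials:
        by_cultivar.setdefault(cultivar, []).append(survival)
    return sorted(
        name
        for name, scores in by_cultivar.items()
        if all(score >= threshold for score in scores)
    )

trials = [
    ("Hybrid A", "Manhattan-2020", 92.0),
    ("Hybrid A", "Manhattan-2021", 75.0),
    ("Hybrid B", "Manhattan-2020", 88.0),
    ("Hybrid B", "Manhattan-2021", 41.0),  # one poor year disqualifies it
    ("Hybrid C", "Hutchinson-2021", 65.0),
]

print(consistent_survivors(trials))  # ['Hybrid A', 'Hybrid C']
```

A real screen would of course handle missing data and weight results across more site-years than this toy example.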
To increase canola’s consistency in the southern Great Plains region, the canola breeding program at K-State continues to select and incorporate winter hardiness traits. Breeding accessions possessing longer vernalization periods are being crossed into the germplasm pool. One theory on improving winter hardiness is that canola can harden more easily after a winter warming trend prior to the vernalization requirement being reached. Therefore, extending the vernalization requirement may allow plants to withstand more variations in temperature during the winter months.
Two important phenotypic defenses against winterkill are a flat, prostrate growth habit, which keeps the crown protected at the soil surface, and the ability to avoid fall stem elongation. The K-State breeding program continues to select for both winter protecting traits among its breeding materials. Another beneficial trait could be the semi-dwarfing growth habit. The crowns of semi-dwarf hybrids are thicker and more compact (shorter internodes), and held closer to the soil surface. The breeding program continues to evaluate the semi-dwarfing trait for potential usefulness in future hybrids.
New Research on Winter Survival
A recent review conducted by a team of researchers from K-State (Dr. M. Secchi, former PhD student; M. Stamm; and Dr. Ciampitti) in collaboration with other industry partners provided new insights on the role of environmental variables on winter canola survival. The main objective of this study was to improve our understanding of the impact of meteorological factors on survival of winter canola, in addition to providing an assessment of the risks for winter kill. Research data was obtained from the National Winter Canola Variety Trial from 2003 until 2018 (190 site-years) and auxiliary meteorological data over the last 40 years. Key findings of this study are summarized below.
Environment was the main factor explaining the variation in winter survival, accounting for 71% of the variation on this variable. Overall winter survival averaged 84%, but a large range of variation across all site-years was present. The main meteorological variables explaining mean winter survival were the number of days with temperatures between 14°F and 5°F, the number of cycles when temperatures fluctuated above or below 32°F, and wind chill temperature during the cold period (i.e. the time between the first and last date when average daily temperature reaches 32°F).
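Two of the predictors named above (the count of days in the 5°F to 14°F band and the count of temperature fluctuations around 32°F) can be illustrated with a short sketch over a daily temperature series. This is a simplified, hypothetical computation, not the study's actual methodology, and it omits the wind chill predictor, which would also require wind-speed data:

```python
# Illustrative counts of two meteorological predictors of winter kill
# from daily (low, high) temperatures in degrees Fahrenheit:
#   1. days whose low falls between 5°F and 14°F (inclusive)
#   2. freeze-thaw fluctuations, approximated here as days on which
#      the temperature crosses 32°F (low below, high above)

def winterkill_predictors(daily_temps):
    """daily_temps: list of (low_f, high_f) tuples, one per day."""
    cold_band_days = sum(1 for low, _ in daily_temps if 5 <= low <= 14)
    freeze_thaw_days = sum(1 for low, high in daily_temps if low < 32 < high)
    return cold_band_days, freeze_thaw_days

# Five hypothetical January days:
january = [(10, 28), (2, 25), (12, 40), (30, 45), (6, 20)]
print(winterkill_predictors(january))  # (3, 2)
```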
Lastly, variety selection is a key factor for improving the probabilities of obtaining better winter survival. Most of the current varieties and hybrids available fall into either the semi-tolerant or semi-susceptible characterizations, indicating there is room to improve winter survival traits in winter canola. This information will be valuable in assessing new growing environments for winter canola and will aid breeding programs in evaluating the impact of environment on selection for this trait.
Assessing Winter Canola Stands
After average daily temperatures warm to approximately 40°F, producers can begin evaluating their stands for winter kill. When evaluating winter survival, look for green leaf tissue at the center of the rosette. If green leaf tissue is present and the crown (stem) is firm when squeezed, it is likely the crop will resume active growth as temperatures rise and day length increases. The root may be examined as well for firmness and vigor.
If temperatures warm for several days and the crowns remain limp and fleshy, this could be indication that cold temperature damage has occurred. Remember that the crop can sustain some winter stand loss and still produce an acceptable yield as long as the losses are evenly distributed across the field. Normally, a final winter survival assessment can be made after the danger of further stand loss has passed, which is usually mid-March to early-April in Kansas. As long as the center crown and root remain green and firm, the crop has the potential to recover.
Winter survival will depend on the ultimate cold temperature, the duration and fluctuations of those temperatures, and the variety selected, among other meteorological factors. Improving our understanding on the main factors affecting winter survival is critical for consistent canola production. | <urn:uuid:a0283c08-e321-40eb-b206-e793105a9dcb> | CC-MAIN-2023-06 | https://www.covercropstrategies.com/articles/2616-how-canola-survival-is-influenced-by-climate-meteorological-factors | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.922084 | 1,559 | 3.859375 | 4 |
The National Fire Protection Association (NFPA) reports that there were over 490,500 structure fires in the United States in 2020. Fires are more common in the fall and winter seasons, with December and January being the peak months. It is important to be aware of the most common causes of fire, and take the appropriate precautions to reduce the risk of a fire happening to you.
Cooking Fires– The most common place for a fire to start is the kitchen. In most cases, food or cooking equipment catches fire while cooking and quickly gets out of control. When cooking, always make sure someone is paying attention. This will greatly reduce the risk of fire.
Smoking– Smoking, especially in bedrooms, should be off limits. If a cigarette is not properly put out, it can flare up and come into contact with flammable materials. Over 70% of fatal fires start in the bedroom. If you do smoke in your home, make sure to keep ashtrays empty; the more butts in the tray, the greater the risk of a fire.
Candles– An unattended candle can ignite nearby materials and spread fast. Keep candles away from flammable materials: curtains, books, etc. When purchasing candles, choose ones confined in a jar or dish. An open candle (decoration style) can heat up on the side, causing deformation and spreading of wax, which can result in a fire, or just a plain mess. To avoid the flame altogether, consider using a candle warmer.
Christmas Tree & Decoration Fires– There is nothing better than having a real tree for Christmas, hanging up all the decorations and stringing the lights. But this Kodak moment can easily become a nightmare before Christmas. Before setting up the tree, cut 1 inch off the trunk, then remove any dead branches that you see. Don't let your tree go without water; always keep it hydrated. The worst time for a house fire is while we are sleeping, so make sure to disconnect the lights before going to bed.
Children Playing with Fire– To ensure everyone's safety, make sure all matches and lighters are out of reach of children. Continue to remind them about fire safety: stop, drop and roll; know where to find a fire extinguisher; and have an exit strategy. Always make sure your smoke alarms are working properly.
Electrical Equipment/Faulty Wiring/Lighting– Homes with inadequate wiring can cause fires from electrical hazards. Here are some signs to pay attention to in your home:
1. Fuses blow or trip the circuit frequently.
2. Lights dim if you use another appliance.
3. For an appliance to work, you have to disconnect another appliance.
4. You have to use extension cords extensively.
If you suspect you have faulty or bad wiring, a licensed electrician can do an inspection.
Barbeques/Grills– Make sure you clean removable parts with soapy water. Spray the connections with soapy water to check for potential leaks; if bubbles form when you open the gas, there is a leak. Keep barbeques away from your home, deck rails, tablecloths and tree limbs.
Portable Heaters– Keep portable heaters at least 3-4 feet away from anything that could easily catch fire, such as furniture, curtains, clothes, blankets and even yourself. If you have a furnace, get it inspected once a year to make sure it is working to safety standards.
To protect your valuables, documents, jewelry, etc, consider purchasing a fireproof safe to save specific belongings. In some situations, fires are unavoidable. If your home or business has fire or water damage, contact Crew Construction and Restoration for our board up and restoration services. Our certified, trained professionals will work with you from start to finish. | <urn:uuid:e80a8534-7ae8-41a8-9705-cf8b9ee384cc> | CC-MAIN-2023-06 | https://www.crew3r.com/common-causes-of-a-house-fire/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.928678 | 797 | 3.859375 | 4 |
Geology, the study of Earth, is a broad and complex discipline. In this lesson, you reviewed some geologic concepts, including the difference between the three basic rock types. You read briefly about plate tectonics, geologic time, the rock cycle, and various related processes, focusing on weathering, erosion, and deposition. I asked you to consider several questions regarding the role of parent material, erosion, and deposition in soil formation. Finally, various types of geologic maps were explored; the most important lesson to draw here is how accessible and useful these resources are.
Reminder—Complete all of the lesson tasks!
You have finished Lesson 8. Double-check the list of requirements on the Lesson 8 Overview page to make sure you have completed all of the activities listed there before beginning the next lesson.
Tell us about it!
If you have anything you'd like to comment on or add to the lesson materials, feel free to share your thoughts with Tim. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own classroom? | <urn:uuid:6eaed025-8606-4eaa-97ec-f1d73a9644ab> | CC-MAIN-2023-06 | https://www.e-education.psu.edu/earth530/content/l8_p9.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.955545 | 234 | 3.859375 | 4 |
A large and potentially unstable Antarctic glacier may be melting farther inland than previously thought, according to new research.
This melting could affect the stability of another large glacier nearby—an important finding for understanding and projecting ice sheet contributions to sea-level rise.
The findings come from radar data collected at the same locations in 2004, 2012, and 2014, each revealing details of the glaciers miles below the surface. The surveys show that ocean water is reaching beneath the edge of the Pine Island Glacier about 7.5 miles farther inland than indicated by previous observations from space.
The team also found that the Southwest Tributary of Pine Island Glacier, a deep ice channel between the two glaciers, could trigger or accelerate ice loss in Thwaites Glacier if the observed melting of Pine Island Glacier by warm ocean water continues down the ice channel.
“This is a potentially really dynamic place between these two glaciers, and this is somewhere where further study is really warranted,” says lead author Dustin Schroeder, an assistant professor of geophysics at the School of Earth, Energy & Environmental Sciences at Stanford University. “If this tributary were to retreat and get melted by warm ocean water, it could cause the melt beneath Pine Island to spread to Thwaites.”
Sea-level rise has become a major global concern based on research showing extra ocean water from melting glaciers could swamp coastal areas around the world, contaminate drinking and irrigation water, threaten wildlife populations, and hurt the economy. This new perspective on the Southwest Tributary shows melting beneath Pine Island may be currently or imminently causing the melting of Thwaites and speeding the rate of sea-level rise.
“These results show that the ocean is really starting to work on the edge of this glacier, which means that we’re likely at the onset of it having an impact,” Schroeder says.
The Thwaites and Pine Island glaciers in the Amundsen Sea Embayment are known as outlet glaciers, or channels of ice that flow out of an ice sheet. In recent years, they have become the focus of large international research efforts to better understand their potential impacts on sea-level rise. But measurements of the same areas over time are rare due to the high cost of building and operating airborne radars that collect information underneath ice.
Looking at these two glaciers as a system involved a time-consuming process of building algorithms that interpret airborne data gathered from planes flying at different heights with unique radar systems, Schroeder says. Researchers analyzed 2004 data from a University of Texas survey using the UTIG HiCARS radar system and 2012 and 2014 data from University of Kansas surveys using the CReSIS MCoRDS radar system.
“Our group is a combination of glaciologists and radar engineers, so we’re particularly suited to the challenge of taking these very different radar systems and trying to figure out what you can see between them,” says Schroeder, who is also a faculty affiliate with the Stanford Woods Institute for the Environment.
The process has shifted Schroeder’s outlook on how to approach collecting data about glaciers.
“Even as we map and fill in the coverage, we should have in our portfolio of observations repeat coverage, as well, which is something that as a radar-sounding community we really haven’t traditionally prioritized,” Schroeder says.
The findings appear in the Annals of Glaciology. Additional coauthors are from the University of Kansas, the University of Texas, and the Natural Environment Research Council’s British Antarctic Survey.
The research was partially supported by a grant from the NASA Cryospheric Sciences Program.
Source: Stanford University | <urn:uuid:6eeee94b-1c9f-4884-a167-257a997b84de> | CC-MAIN-2023-06 | https://www.futurity.org/glaciers-melting-antarctica-1665532/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.944725 | 756 | 4.03125 | 4 |
National Save Your Hearing Day is a holiday that’s observed every year on May 31st. It’s a holiday that recognizes the importance of protecting your hearing and the hearing of your family.
Hearing loss is a condition that can result from environmental factors, illness, genetic factors, neurobiological disorders, accidents, and a ton of other factors. That’s why it’s important for all of us to eliminate as many of these risk factors as possible to ensure that we can keep our hearing throughout our entire life.
Some Informative Facts About Hearing
Below are some facts about hearing that we feel everyone should be interested in learning about. Let’s take a few moments and cover the following facts before we start talking about how this day should be observed.
- In World War I, parrots were used on the Eiffel Tower in Paris to listen for enemy aircraft.
- By the age of 65, one in three adults will experience some form of hearing loss.
- Most hearing loss happens to people who are under the age of 65.
- Wearing headphones for 60 minutes will increase the bacteria in a person’s ear by 700 times.
- Sound travels at about 761 miles per hour.
- The number one cause of hearing loss is exposure to sounds above 85 decibels.
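That 85-decibel figure is the basis of a widely used rule of thumb: under the NIOSH recommendation, about 8 hours a day at 85 decibels is considered the safe limit, and the safe time halves for every 3 decibels above that. A quick sketch of that arithmetic (the 8-hour and 3-dB parameters are the NIOSH values, not numbers from this article):

```python
def safe_exposure_hours(level_db, ref_db=85.0, ref_hours=8.0, exchange_db=3.0):
    """Safe daily listening time under a 3-dB exchange rate (NIOSH-style rule)."""
    return ref_hours / (2 ** ((level_db - ref_db) / exchange_db))

for level in (85, 88, 91, 100):
    print(f"{level} dB -> {safe_exposure_hours(level):.2f} hours")
```

At 100 decibels, that leaves only a quarter of an hour of safe listening per day, which is why loud music through headphones is singled out above.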
Observing National Save Your Hearing Day
This day should serve as a reminder for everyone to get their ears checked on a regular basis and to use ear protection when using noisy pieces of equipment or while in noisy environments.
People should also make sure to avoid listening to loud music, especially while wearing headphones. And finally, people should take the time to use the hashtag #NationalSaveYourHearingDay to spread the word about this holiday.
When is National Save Your Hearing Day?
| This year (2023) | May 31 (Wednesday) |
| Next year (2024) | May 31 (Friday) |
| Last year (2022) | May 31 (Tuesday) | <urn:uuid:ac92de74-b563-4a53-bbe3-354ba532a2f8> | CC-MAIN-2023-06 | https://www.holidayscalendar.com/event/national-save-your-hearing-day/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.916227 | 442 | 3.53125 | 4 |
Learn how the American idea of government evolved from a revolutionary response to monarchy to a union of states. The sources will illustrate the effort taken to reach “a more perfect union” through a close read of our founding documents. Students will dig into the preambles and introductory text of the Declaration of Independence, Articles of Confederation, and the United States Constitution.
Students will be able to...
- Articulate the evolution of the goals and purpose of American government.
- Connect the context of the documents and the authors' stated expectations for government.
- Explain the role of popular sovereignty, consent of the governed, and rights in the development of American government.
- Use evidence from informational texts to support analysis and answer questions. | <urn:uuid:9d408543-c367-43eb-a460-8fd166a22eb4> | CC-MAIN-2023-06 | https://www.icivics.org/node/2533016/resource?back-ref-search=&back-ref-filter=curriculum_unit%3A674%2Cgrades%3A41572%2Cresources%3A41600%2Cresources%3A41605%2Ctags%3A41579%2Ctags%3A41585%2Ctags%3A41589 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.883686 | 157 | 4.375 | 4 |
Women have long served the U.S. military, at first in support and civilian roles; not until WWI could women enlist. This DBQuest looks at the changing roles of women in the military, focusing on the post-World War II period to the present. Students will examine Congressional testimony in support of women serving as permanent members of the Armed Forces, an oral history from the first Black woman to graduate from the U.S. Naval Academy, and a 2015 press release announcing that all roles in the Armed Forces would now be open to women. Students will also consider the challenges women faced as barriers to their participation were lifted.
Students will be able to...
- Use evidence from informational texts to support analysis and answer questions
- Identify why some people are called to serve in times of conflict
- Discuss how women’s roles in the military changed after World War II | <urn:uuid:78f01d94-c5c2-43b9-8434-a1af0002f8fe> | CC-MAIN-2023-06 | https://www.icivics.org/node/3132524/resource?back-ref-search=&back-ref-filter=content_type%3Aeyes_on_prize_module%2Ccontent_type%3Aweb_quest%2Ccurriculum_unit%3A3016%2Ctags%3A41584%2Ctags%3A41585 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.9628 | 186 | 3.671875 | 4 |
Expansionary fiscal policy plays an important role in economic policy. This post gives you the definition and an example of expansionary fiscal policy. You will also learn how fiscal policy affects the IS-LM model and how to graph this change.
In this post, we cover fiscal policy: what it is, what measures a state can take to remedy an economic downturn, what expansionary fiscal policy is and what its goals are, and how fiscal policy fits in as a sub-area of financial policy.
What is expansionary fiscal policy?
Expansionary fiscal policy is used in times of economic weakness, even before an outright recession is reached (technically, two consecutive quarters of GDP decline), and its basic objective is to stimulate growth and consumption by increasing aggregate demand.
When the economy is weak, consumption declines; companies’ inventories pile up and they suffer excess production capacity, so employment and wages fall. With less employment and lower wages, demand suffers further, and the country’s output, its GDP, falls. Workers are laid off, and factories close because what they produce is not sold.
To combat this negative economic scenario, governments have expansionary fiscal policy as a weapon.
What are the objectives of fiscal policy?
Among the objectives of fiscal policy, we can find the following:
- Stimulate the growth of the domestic economy and protect it from the changes of the economic cycles.
- Increase the country’s growth capacity through spending on R+D+i, education, investment in infrastructure, etc.
- Redistribute income between territories and people.
- Protect basic public services such as health and education.
- Maintain employment, reduce unemployment and seek to approach full employment.
- Guarantee minimum income levels for citizens.
- Control inflation and stabilize prices by reducing public spending and raising taxes.
- Promote investment in the private sector and facilitate investment in the public sector.
How is expansionary fiscal policy implemented?
1.- Increasing public spending
Public spending is increased mainly through the construction of more infrastructure (roads, railways, airports, hospitals or schools) and through aid to families and companies, known as transfers: direct subsidies, or soft loans for certain family circumstances or for business projects that normally create employment. It usually works in the short term; with this type of action, the economy rebounds thanks to increased demand.
2.- Tax reduction
The tax reduction is made for the entire population, and the tax usually adjusted by lowering its rates is the personal income tax (in Spain, the IRPF), which falls on income from work, that is, on payroll. Other kinds of taxes can also be adjusted, such as those on donations, inheritance, savings or investment: for example, lowering the tax burden on stock market capital gains depending on the holding period, or rewarding the reinvestment of dividends.
The objective of such a tax cut is for citizens to have more disposable income to spend, thereby contributing to reactivating the economy through greater aggregate demand. If the money is in citizens’ pockets, it will end up producing benefits for society: purchases increase sales in industry and services, which then require more personnel, reducing unemployment, and with it the social spending it entails, while meeting the increase in demand.
However, expansionary fiscal policy should only be used in the short term, since it has pernicious effects on the economy in the long term. Its main problem is that it generates a deficit, simply because the State earns less than it spends.
To cover this deficit, states must issue debt, which must be repaid to investors with interest. In the end, if the debt skyrockets, much public spending will go to interest payments on the public debt instead of to reactivating the economy.
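A toy calculation makes this debt-service problem concrete. Suppose the State runs a fixed primary deficit every year and borrows to cover both that deficit and the interest on the debt already issued (the 50-unit deficit and 3% interest rate below are illustrative assumptions, not figures from any real budget):

```python
def debt_path(primary_deficit, rate, years):
    """Debt stock and annual interest bill when deficits pile up year after year."""
    debt, path = 0.0, []
    for _ in range(years):
        interest = debt * rate                # interest due on existing debt
        debt += primary_deficit + interest    # new borrowing covers both
        path.append((debt, interest))
    return path

for year, (debt, interest) in enumerate(debt_path(50.0, 0.03, 10), start=1):
    print(f"year {year:2d}: debt = {debt:7.1f}, interest bill = {interest:5.1f}")
```

Even at a modest rate, the interest bill grows every year and claims an ever-larger share of spending, which is exactly the crowding-out of useful spending described above.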
Which of the following is an example of an expansionary fiscal policy?
Tax reductions and greater public spending are the two main manifestations of expansionary fiscal policy. Both measures aim to boost aggregate demand while reducing budget surpluses or adding to deficits. Typically, they are used during recessions to accelerate recovery, or when a recession threatens, to prevent one.
According to traditional macroeconomic theory, the government should employ fiscal policy to offset the natural decline in expenditure and economic activity during a recession. Consumers and corporations reduce spending and investment as the economy worsens, and this pullback worsens the situation for businesses and starts a vicious cycle that can be hard to break.
- Tax reductions and greater government expenditure are two instances of expansionary fiscal policy.
- Recessions and high unemployment can be stopped or prevented with the help of expansionary fiscal policy.
- A drawback of expansionary fiscal policy tax cuts is that they must eventually be undone.
- The Economic Stimulus Act of 2008 permitted the government to put money into consumers’ pockets to encourage expenditure.
- According to John Maynard Keynes, fiscal policy is essential for reducing the negative effects of slowing expenditure and economic activity.
Expansionary fiscal policy graph
Let’s look at a concrete example using the IS-LM model. The graphic shows the basic IS-LM diagram. The IS curve represents equilibrium in the goods market, and the LM curve equilibrium in the money market. Total national income Y is plotted on the x-axis, and the interest rate r on the y-axis. The economy is in equilibrium where the IS and LM curves intersect.
Now let’s imagine that we are in an economic downturn. According to Keynes’s theory, we should pursue expansionary fiscal policy to boost the economy. This can be implemented, for example, by building new roads or renovating universities.
Effect of expansionary fiscal policy
For a better understanding, let’s look at the IS equation, the goods-market equilibrium condition:

Y = C(Y − T) + I(r) + G

As you can see, increasing government spending G increases the right-hand side of the equation, so the IS curve shifts outward. This raises national income from Y to Y′: the economy grows. Incidentally, the same result could also be achieved by reducing government revenue, for example, by lowering taxes. You can easily see this again from the IS equation:
Since the taxes T have a negative sign, the right-hand side of the IS equation increases because fewer taxes are “deducted”.
As you can see in the graphic, both the interest rate and national income increase. Of course, the higher interest rate could also dampen investment in the medium term. Supporters of Keynesian theory usually invoke the multiplier effect here: accordingly, the positive impact on national income is greater than the negative impact caused by the interest rate rise.
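The shift can also be checked numerically. Below is a minimal linear version of the IS and LM relations discussed above; every parameter value is an illustrative assumption chosen for round numbers, not an estimate of any real economy:

```python
def is_lm_equilibrium(G, T, M, a=100.0, b=0.75, e=200.0, d=25.0, k=0.5, h=50.0):
    """Solve a linear IS-LM system by substitution.
    IS (goods market):  Y = a + b*(Y - T) + e - d*r + G
    LM (money market):  M = k*Y - h*r
    Returns equilibrium income Y and interest rate r."""
    Y = (a - b * T + e + d * M / h + G) / (1 - b + d * k / h)
    r = (k * Y - M) / h
    return Y, r

Y0, r0 = is_lm_equilibrium(G=300.0, T=200.0, M=400.0)   # baseline
Y1, r1 = is_lm_equilibrium(G=350.0, T=200.0, M=400.0)   # more government spending
Y2, r2 = is_lm_equilibrium(G=300.0, T=150.0, M=400.0)   # or a tax cut of the same size
print(f"baseline: Y = {Y0:.0f}, r = {r0:.2f}")
print(f"G + 50:   Y = {Y1:.0f}, r = {r1:.2f}")
print(f"T - 50:   Y = {Y2:.0f}, r = {r2:.2f}")
```

With these assumed parameters, a 50-unit spending increase raises income from 1300 to 1400 and the interest rate from 5 to 6: both Y and r rise, exactly the movement described in the text, and the rise in r is what partially crowds out investment. The equal-sized tax cut raises income by less (to 1375), since only the fraction b of the cut is actually spent.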
Expansionary fiscal policy chain of effects
State expenditures are increased → the State, for example, renovates universities → income for construction companies and their employees increases → construction companies buy new machines with that income → the machine manufacturers, in turn, buy more raw materials, and so on → in the end, national income increases by a multiple of the original state expenditure. This process is called the multiplier effect.
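The rounds of re-spending in this chain can be added up directly. With a marginal propensity to consume (MPC) of b, an injection ΔG generates ΔG + b·ΔG + b²·ΔG + …, which converges to ΔG/(1 − b). The 100-unit injection and 0.8 MPC below are illustrative assumptions:

```python
def spending_rounds(injection, mpc, rounds):
    """Total income generated after successive rounds of re-spending."""
    total, spend = 0.0, injection
    for _ in range(rounds):
        total += spend    # this round's spending becomes someone's income
        spend *= mpc      # a fraction of it is re-spent in the next round
    return total

injection, mpc = 100.0, 0.8
print(spending_rounds(injection, mpc, 5))      # after five rounds
print(spending_rounds(injection, mpc, 1000))   # converges toward...
print(injection / (1 - mpc))                   # ...the limit injection / (1 - mpc)
```

After only five rounds the original 100 units have already generated about 336 units of income, and the process converges toward 500, a multiplier of five.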
The same logic transfers easily to restrictive fiscal policy, which works in exactly the opposite way: reducing government spending or increasing taxes shifts the IS curve to the left, lowering national income and interest rates.
Examples of fiscal policy during COVID-19
To lessen the COVID-19 pandemic’s negative effects on health and the economy, nations all over the world implemented a wide range of budgetary measures in 2020. Entering 2021, there was still uncertainty over how the epidemic would develop.
Despite the high number of COVID-19 cases in several nations, effective vaccines had been licensed and distributed. The emergence of novel virus strains that spread swiftly and easily, and that may be linked to a higher risk of death, added to the uncertainty about how quickly the pandemic could be contained.
American Fiscal Policy
The U.S. government passed three primary relief packages and one supplemental package in March and April 2020. After the supplemental package’s adoption in April (also known as “phase 3.5”), Congress took no significant action on COVID-19 stimulus or relief for several months.
To alleviate the effects of COVID-19, the Member States of the European Union have resorted to increasing public spending, one of the main fiscal policy tools.
Fiscal policy, supported by monetary policy, has been the main defence against the havoc the pandemic has wreaked on the economy. In our country, the pandemic has caused the largest increase in public spending of the democratic era.
In July 2020, the European Council agreed on an exceptional temporary recovery instrument known as Next Generation EU, endowed with 750 billion euros for all Member States. This Recovery Fund guarantees a coordinated European response on the Member States’ fiscal policy to face the pandemic’s economic and social consequences.
According to the IMF, without fiscal support, the devastation to production and employment would have been three times more intense globally.
Types of fiscal policy
There are two types of fiscal policy: expansionary and restrictive. When taxes are reduced or public spending is increased to stimulate aggregate demand for goods and services, the policy is expansionary. Conversely, fiscal policy is said to be restrictive when we increase taxes or reduce public spending to achieve the opposite objectives.
Expansionary fiscal policy: by increasing public spending or lowering taxes, the disposable income of consumers and companies increases. This gives rise to an increase in consumption and investment, which translates into higher total (aggregate) demand for goods and services, and in turn into growth in production, employment and prices.
Restrictive fiscal policy: If we increase taxes or reduce public spending, the immediate effect is a fall in consumers’ and companies’ disposable income, which translates into a decrease in consumption and investment. Consequently, aggregate demand decreases and, with it, production, employment and prices.
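The tax channel in these two definitions can be sketched with an assumed Keynesian consumption function (the autonomous-consumption and MPC values below are illustrative, not empirical estimates):

```python
def consumption(income, taxes, autonomous=50.0, mpc=0.75):
    """Keynesian consumption function: C = autonomous + mpc * (Y - T)."""
    return autonomous + mpc * (income - taxes)

income = 1000.0
scenarios = [("baseline", 200.0),
             ("expansionary (tax cut)", 150.0),
             ("restrictive (tax rise)", 250.0)]
for label, taxes in scenarios:
    c = consumption(income, taxes)
    print(f"{label:24s} disposable = {income - taxes:6.1f}  consumption = {c:6.1f}")
```

The tax cut raises disposable income and therefore consumption, pushing aggregate demand up; the tax rise does the opposite, shrinking it.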
| <urn:uuid:b38701c5-dd41-46a1-9ca6-126892541066> | CC-MAIN-2023-06 | https://www.inbusinessworld.com/which-of-the-following-is-an-example-of-an-expansionary-fiscal-policy/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.941859 | 2,181 | 3.6875 | 4 |
What Does Concentration Gradient Mean?
In gardening, the concentration gradient refers to a graduated difference in the concentration of a solute per unit distance. While active transport of nutrients and minerals does require an energy input, it allows them to be moved throughout the plant against their concentration gradient. Nutrients are absorbed by the roots before being transported and distributed through the xylem vessels.
Maximum Yield Explains Concentration Gradient
Osmosis works across a concentration gradient, whereby liquid is transferred from a dilute mixture of sugars and salts, traveling through a semi-permeable membrane before reaching a solution with a far higher concentration. Plants usually contain higher concentrations of sugars and salts than their surroundings. As a result, the water is easily drawn inside since both the cytoplasm and cell membrane act as the plant’s semi-permeable membrane.
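The phrase “difference in concentration per unit distance” can be made quantitative with Fick’s first law, which says that diffusive flux is proportional to the concentration gradient. Fick’s law is not named in the article; it is simply the standard formalization of the idea, and the numbers below are illustrative:

```python
def fick_flux(c_high, c_low, distance, diffusivity):
    """Fick's first law, J = -D * dC/dx, along a path of the given length.
    A positive result means net movement from the high side to the low side."""
    gradient = (c_low - c_high) / distance   # concentration change per unit distance
    return -diffusivity * gradient

# Illustrative numbers, not from the article: a sugar-like solute in water
# across a 0.1 mm path, with concentrations in mol/m^3.
D = 5.2e-10   # m^2/s, the order of magnitude for sucrose in water
flux = fick_flux(c_high=100.0, c_low=20.0, distance=1e-4, diffusivity=D)
print(f"flux = {flux:.2e} mol m^-2 s^-1")   # steeper gradient -> larger flux
```

Halving the concentration difference, or doubling the distance, halves the flux, which is why a plant cell's higher internal solute concentration draws water in so readily.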
Various environmental and intrinsic factors can influence this nutrient uptake, including temperature, light, pH level, as well as the interaction between the nutrients. In Passive-Mediated Glucose transport, no energy is needed for the molecule to pass through the concentration gradient. | <urn:uuid:9080648a-a68a-41f3-b9fc-551901eec8e5> | CC-MAIN-2023-06 | https://www.maximumyield.com/definition/2140/concentration-gradient | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.92464 | 231 | 3.640625 | 4 |
Bridges are among the most common and utilitarian structures found in modern society. They are an important element in our individual and collective lives, allowing us to cross rivers, sounds, bays, creeks, railroads and highways with ease and safety.
But this was not always the case. The earliest generations of North Carolinians had few bridges, and crossing obstacles could be arduous tasks involving fords, ferries or circuitous routes.
The past 200 years of North Carolina transportation history have been characterized by a series of improvements aimed at increasing capacity, speed, directness, flexibility and regularity of service. Wagon roads, turnpikes, canals, railroads and highways — each in turn has left a direct and strong series of cumulative imprints on the landscape and environment. Bridges are an integral and highly visible part of the state's historic transportation networks.
Many highway bridges are at or near crossings chosen by that region's early inhabitants. They are the third, fourth, or even fifth-generation structures at their sites — once served perhaps by a ferry or a ford — or they are on a highway that parallels an older road or railroad. While the present bridge may be wider, stronger or realigned for modern traffic, its goal of conveying people and goods has remained much the same since its locale was initially settled and developed. Rare are bridges that were not built in response to already existing transportation demands and patterns.
To further explore the history of North Carolina's bridges and those who designed them, click on the titles on the right side of this page. | <urn:uuid:da5d6fe4-4a14-4245-9162-bc21f3390a04> | CC-MAIN-2023-06 | https://www.ncdot.gov/initiatives-policies/Transportation/bridges/historic-bridges/Pages/history-of-bridge-building.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.975413 | 321 | 3.53125 | 4 |
What Does External Bus Mean?
An external bus is a type of data bus that enables external devices and components to connect with a computer.
It enables connecting devices and carries data and other control information, but its use is restricted to connections external to the computer system.
An external bus is also known as external bus interface (EBI) and expansion bus.
Techopedia Explains External Bus
An external bus primarily enables connecting peripherals and all external devices to a computer. These devices can include storage, monitors, keyboard, mouse and more.
Typically, an external bus is composed of electrical circuits that connect and transmit data between the computer and the external device. Being external to the computer, external buses are much slower than internal buses. Moreover, an external bus can be either serial or parallel.
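The serial/parallel distinction comes down to simple bandwidth arithmetic: a parallel bus moves one multi-bit word per clock, while a serial bus moves one bit at a time at a much higher line rate. A sketch using two well-known nominal figures (32-bit PCI at 33 MHz, and USB 2.0’s 480 Mbit/s high-speed signaling rate; real-world throughput is lower once protocol overhead is counted):

```python
def parallel_throughput_mbps(clock_mhz, width_bits):
    """Raw capacity of a parallel bus: one word of width_bits per clock."""
    return clock_mhz * width_bits

def serial_throughput_mbps(line_rate_mbps, efficiency=1.0):
    """Effective capacity of a serial link after encoding/protocol overhead."""
    return line_rate_mbps * efficiency

print(parallel_throughput_mbps(33, 32))   # classic 32-bit PCI: 1056 Mbit/s raw
print(serial_throughput_mbps(480))        # USB 2.0 high-speed signaling: 480.0
```

The raw numbers favor the wide parallel bus, but serial links scale to far higher clock rates over long external cables, which is why modern external buses such as USB are serial.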
Universal Serial Bus (USB), PCI bus and IEEE 1394 are common examples of external buses. | <urn:uuid:d305a29a-1c97-434b-aac5-c0a1874bf579> | CC-MAIN-2023-06 | https://www.techopedia.com/definition/310/external-bus | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.902635 | 184 | 3.5 | 4 |
It’s time to practice reading. Say the name of the picture at the beginning of the row, then underline the word that begins with the same sound as the picture. Use this creative worksheet to build reading skills.
Blends and digraphs are two concepts in phonics that can be challenging for grade 1 and kindergarten students to understand. Blends are two or more letters pronounced together in which each letter keeps its own sound, such as “bl” in “blue” or “st” in “stop”. Digraphs, on the other hand, are two letters that together make a single new sound, such as “sh” in “ship” or “ch” in “chip”.
Worksheets on blends and digraphs can help grade 1 and kindergarten students learn and practice these concepts. These worksheets typically include a list of words with blends or digraphs, and students have to identify the blend or digraph in each word and say the word out loud. For example, a worksheet on blends might include words like “black”, “clap”, “flat”, and “slip”, and students would have to identify the blend in each word and say the word out loud.
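The identification task such a worksheet asks for is simple enough to sketch in code. The sets below are small illustrative samples built from the examples in the text, not a complete inventory of English blends and digraphs:

```python
DIGRAPHS = {"sh", "ch", "th", "wh", "ph"}   # two letters, one new sound
BLENDS = {"bl", "cl", "fl", "sl", "br", "cr", "gr", "tr", "st", "sp"}  # each letter keeps its sound

def classify_onset(word):
    """Label a word's first two letters as a digraph, a blend, or neither."""
    onset = word[:2].lower()
    if onset in DIGRAPHS:
        return onset, "digraph"
    if onset in BLENDS:
        return onset, "blend"
    return onset, "neither"

for word in ["blue", "stop", "ship", "chip", "black", "clap", "flat", "slip", "cat"]:
    onset, kind = classify_onset(word)
    print(f"{word:6s} '{onset}' -> {kind}")
```

A real phonics tool would need a fuller inventory and would also have to handle three-letter blends like “str”, but even this sketch sorts the article’s example words correctly.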
By completing these worksheets, students can develop their phonemic awareness, which is the ability to hear, identify, and manipulate the individual sounds in words. This is an important skill for early reading and spelling development. Blends and digraphs worksheets can also help students learn to read and spell new words, and can be a fun and engaging way to practice phonics skills. | <urn:uuid:93679d1c-7074-4977-bb6c-3107c865e1cb> | CC-MAIN-2023-06 | https://www.worksheetsgo.com/blends-and-digraphs-worksheets/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00000.warc.gz | en | 0.960336 | 370 | 4.5 | 4 |