Columns: text (string), meta (string), __index_level_0__ (int64)
Worried that Downpour will be too "casual." [334 posts]

jeremyjh (Missing since: 28 Dec 2010, Notes left: 136) wrote:

Tillerman wrote: This is a problem I've been having with a lot of games recently, not just Silent Hill games or even horror games (though especially horror games). It just seems like games have been getting so casual-oriented recently. What I mean by that specifically is that games are afraid to punish the player. The player must always be constantly rewarded and making progress, and heaven forbid there be any setbacks! Usually the way this problem manifests is by having checkpoints all over the place and basically letting the player respawn right after they die with little to no setback. Let me give a specific example... I tried a horror game recently called "Amnesia." I'm sure many of you, being horror fans, have played this game, as it has received a lot of praise, and for good reason. It's one of the best horror games I've played recently; in terms of atmosphere it's awesome. But it has a fatal flaw... yes, you guessed it, it is infected with the "casual gamer" virus. In Amnesia, if you die, you respawn in the same room or close to it... you lose none of the items you collected... and finally, the monster that killed you despawns, allowing you to make further progress. So in effect, the monsters in this game are absolutely no threat to the player whatsoever. Once I learned that, my fun with this game was ruined. So naturally, I'm worried that this mentality will infect Silent Hill as well. I am really hoping that this game goes back to the old-fashioned method of save points, since this is the logical method for a horror game. Checkpoints only work for linear games, and letting the player respawn after death is the worst thing a horror game can do, IMO. What about you guys... anyone else given this any thought besides me?

Dude, you have totally read my mind. I too would like to be trained and punished more than getting something for a good deed, like in the older survival horror games. Homecoming was looking that way but got way too easy from the middle through to the ending. http://www.youtube.com/user/jeremyjh13? ... BjX2Y0gNrc- Survival Horror Game Library

redrum (Brookhaven Receptionist, Last seen at: Blackpool UK) wrote:

Brian Gomez said in a YouTube interview that he wants save points to be far enough apart for the player to worry about dying, so no worries about checkpoints.

Rosewater Park Attendant (Missing since: 12 Oct 2010, Notes left: 1446, Last seen at: Chicago) wrote:

jeremyjh wrote: Well, I think the older Silent Hills were fine... they were not too hard, but there was just enough danger to maintain tension. But a lot of newer horror games (or just games in general) seem afraid to even do that... it's like there's a new theory in game design that says the player can never be faced with setbacks, ever. When I play a newer game like that, the lack of tension really makes me sad... especially if the game tried to have horror ambitions.

redrum wrote: That is excellent news, thank you! www.flipsidecomics.com

Doctor Eggnog (Subway Guard, Missing since: 22 Aug 2010) wrote:

1 and 2 were easy. 3 was hard. 4 was medium. Origins was medium. Homecoming was medium-hard. Overall I would say the newer games are more punishing. ...Except puzzle-wise.
May have already said this, but the higher the difficulty, the more save points there should be. And if you respawn in the same freaking room you die in, then Easy, Normal, and Hard had better equal SH3's Extreme 1, 5, and X. Socially Awkward Penguin is my hero.

Woodside Apartments Janitor (Missing since: 21 May 2010) wrote:

I think the newer games in general are more user-friendly, but that is only because the tech is overall better than what we had years ago. The devs are able to design more realistic and streamlined levels. The level designs were what they were back then because of the technology. Don't get me wrong, you can still have new games today with horrible level design, but the overall level designs in general are better and more realistic than they were years ago...

(Missing since: 20 Jul 2004, Last seen at: VA) wrote:

Doctor Eggnog wrote: I pretty much agree with everything, except your difficulty rating of SH4. I thought that was the hardest SH (for me), mainly due to the terrible controls. SH2 I could beat if I played with one hand. No survival horror game should ever be that simple.

KiramidHead (Historical Society Historian, Missing since: 01 Jun 2009) wrote:

Silent Hill 3 is only hard if you insist on fighting every single monster and not saving whenever you get the chance. Of course, the game forces you to kill monsters and take damage if you want to get the alternate ending, making dying that much easier. Screenplay Archaeology Podcast - THE INVISIBLE MAN EPISODE IS UP!

I would say that SH3 is the hardest of the original Silent Hills, but even so, once you learn the monsters and how to deal with them, it's not hard at all. But that's as it should be... horror games aren't action games, so they shouldn't be truly "hard." They would lose their scare power if they were. What's much more important is that it feels like there is risk involved, not that the game is hard. Which is why I think Silent Hill games need a lot more randomness.

Harrys_Girl (Missing since: 15 Jan 2005, Last seen at: Couldn't tell you even if I tried) wrote:

Ha ha... SH3 was hard? It is the only game in the entire series that I have played and beaten on Hard. I am on HardX6 or something now and it still isn't something I would consider "hard." I still go out of my way to fight any enemies I encounter and refuse to use health until I absolutely have to. None of the games were "hard" until Homecoming. Homecoming was difficult for three reasons, IMO (spoilers for SHH):

1. The blocking the enemies did. Every enemy had some form of blocking, unlike any enemy in any other SH game (save PH). All the rest of the enemies in the series just sort of stand there and take the abuse. Most enemies are either ground-bound or have very, very weak attacks. And they are slooooooow.
2. The enemies were fast(er). Unlike the slow-moving grey children or the blind mannequins, the enemies in SHH went after Alex for the kill and could unleash a string of combos that easily plummeted his health.
3. The bosses had decent AI. The Sepulcher used different attacks, relying on one when the other was being blocked. Scarlet was fast and easily hopped the fuck out of your view to attack. Asphyxia was a large enemy in a small area and easily snatched Alex up whenever she was close enough. Amnion was all over the place and one of the most challenging enemies in any SH game, IMO.

I think that if these elements were drawn upon in SHD, it would make the game a lot more challenging and therefore more interesting and entertaining.
The war has begun: Use your voice today before you no longer have one tomorrow. I'm like a circle, I'M TOO GOOD FOR CORNERS!!!

SPRINGS02 (Last seen at: i'm sick of these monkey fighting snakes on this monday to friday plane) wrote:

I would say that Silent Hill 3 seemed hard to me when I played it. It was the first one I played, though, and when I played SH2 after playing Silent Hill 3 it seemed much easier. I guess in terms of difficulty for me it goes SHH>SH3/SH4>SH2/Origins>SSM. I honestly don't care too much about challenge in Silent Hill games. I mean, I don't want it to just be a breeze, but challenge isn't the main thing I play Silent Hill games for. If I want challenge I will play games like Super Meat Boy or Dead Space. IMO challenge isn't really an important ingredient for Silent Hill games.

Mantorok wrote:

Silent Hill 3 was the hardest for me the first time through. After playing all the games multiple times it no longer seems the hardest, probably because I know where to go.

Harrys_Girl wrote: Ha ha... SH3 was hard? It is the only game in the entire series that I have played and beaten on Hard. I am on HardX6 or something now and it still isn't something I would consider "hard." I still go out of my way to fight any enemies I encounter and refuse to use health until I absolutely have to.

I agree, I don't think it's "hard" either, but I don't really think horror games should be hard. I would say that the challenge level in SH3 is perfect.

(Notes left: 19401, Last seen at: #lfk) wrote:

Of course Silent Hill 3 isn't hard on repeats when Heather is carrying an unlimited SMG, a lightsaber, a flamethrower, and the all-powerful Sexy Beam. . . . This post is the property of its author and is not to be used elsewhere without explicit permission from the author. . . . AND THAT'S THAT.

clips wrote:

As far as SH games go, SH3 was harder than most in the Team Silent series... on your first playthrough with your standard weapons the enemies are incredibly tough... you couldn't go through the game and wipe out every creature... well, in some sections you could, and do a little exploration, but for the most part you had to avoid some to conserve ammo. Even so, the challenge level in that game wasn't annoying or irritating... the only time I became annoyed in a SH game was in part 4 when I would be escorting "forgot her name" around in those subway trains... and she ended up wanting to fight everything... SH3 still had a nice balance of difficulty, and the disturbing imagery was amazing...

Yeah, I wouldn't say that SH3 is "hard"... I would say it's hard enough to not be "easy," which is just right. In terms of challenge SH3 doesn't even come close to truly hard games like Devil May Cry or Ninja Gaiden or whatever... but if horror games were that hard, frustration would start to drown out the feelings that horror games are supposed to inspire. As I've said before, I think horror games need a balanced challenge.

I thought it was easy, even on the first few playthroughs and on playthroughs in which I refused to use unlimited weapons. I didn't start using the unlimited weapons until HardX2 or 3, I think. I thought it was the easiest, after SH: SM of course. I think that some of the horror comes from the gameplay being a little challenging. One of the few things that really worried and stressed me out in SHH was the fact that I was running around with only the bullets I had in the gun, and I knew that most enemies took 4 or more rounds/shots to take down, so ammo was constantly on my mind.
I never worry about ammo in SH1 through Origins. Not even in SH4, because I only ever used the pipe/axe to fight everything. On occasion, for funsies, I'll use the gun on Walter in the final boss fight, but overall, melee. SHH forced you to use melee on some enemies and guns on others. It is almost impossible to use the knife on a Siam without dying. The Schisms were difficult too. Asphyxia and Amnion made it near impossible, or at least difficult enough that I didn't want to bother using melee until I had to. And I am really not much of a battle-based gamer, may I remind you. I don't play battle or fighting games... ever. But I've never had problems or anxiety from the battle parts of SH. I've played the games as they came out (save for SHH, which I only just got back in January of this year) and I've never died on a first playthrough. I've never died in SH1, SH4, Origins, or SH: SM in any of my playthroughs. I've died in SH2... damn PH swinging around and ending James 'cause I was tailgating him. Yeah. >>' And well, HardX6 is being difficult. Of course, I've also never played Hard on any game but SH3, so that has to be taken into consideration too. Maybe I'll do that... yeah, I'll go dust off my PS2 and play all the games on Hard, come back, and maybe I'll have a different song to sing.

I didn't say that SH3 was a hard game or hard compared to Devil May Cry or crazy sh*t like that. I meant it was hard compared to the other games. I was just comparing the games to each other, so one had to be hard. I wasn't going to say:
SH1: Incredibly Easy
SH3: A Bit Less Easy than 1 and 2 but Not Nearly as Hard as Many Games
SH4: A Bit Easier than 3, a Bit Harder than 1 and 2, but Still Overall a Pussy Game
SHO: Having a Difficulty Which Is in the Zone of Easy but Has a Difficulty Higher Than...
You get the picture. :p

I'd rate Silent Hill 3 and 4 as the hardest in the series. People talk about Homecoming because of the lack of ammo, but the game's melee system was pretty decent and the combat wasn't too hard, except for probably Scarlet. It was pretty easy to avoid getting hit. I haven't played 3 and 4 since, IIRC, my first semester in college in 2006. I remember I bought 1-4 brand new online before they went out of print. SH3 seemed harder to me than the first games because SH3 was the first one where ammo was pretty scarce and the game had a lot of pretty strong monsters. SH4 was a cakewalk in the beginning, where you could just go back to your room and heal; after the second part, where you have to escort Eileen and deal with the ghosts that just can't die, it got pretty hard. I also remember the monsters that attacked through the walls.

(Last seen at: Marioland) wrote:

What I liked about the first four games was the ten-star ranking challenges. Casual players could enjoy the games for what they were, and they could even pick easy difficulty settings, while hardcore players could aim for the ten-star ranking's special treats. I really wish that Vatra brought that kind of challenge back (with in-game features as a treat, not just an utterly useless platinum trophy).

(Last seen at: Kentucky) wrote:

I get the vibe that Downpour is going to be like Homecoming. I think it's the atmosphere and character design that make me think this. I honestly don't really know what to think at the moment. The trailers and gameplay don't really indicate the game being very disturbing, but one character's view of Silent Hill is different from another character's.
What really matters is that they can make the character and his view of Silent Hill work and keep that atmosphere throughout the game. I really didn't like the shift to the Otherworld. It looked like dissolving pixie dust, but meh. I'm rarely on the forums anymore.
{'redpajama_set_name': 'RedPajamaCommonCrawl'}
9,952
NOTE: The Azure storage blob also requires updating the Xcalar default.cfg configuration file.

- Dataset locking is now available. Locking a dataset prevents a user from accidentally deleting it.
- A new Get Info option displays information about a dataset, including the dataset name, the user who created it, and whether it is locked.
- Support added for row-delimited datasets, but not field-delimited datasets.
- You can now sort tables by worksheet from the Tables list.
- You can now sort values from multiple columns.
- Map and GroupBy operations can now output to multiple columns.
- A new TableID animation is included in a table's header, which explicitly shows when a new table is created.
- Automated skew detection is now displayed: the Xcalar Design menu bar shows the skew score and skew information of tables.
- Improvements were added for inline search-and-replace operations in the UDF editor.
- A pie chart was added to the Profile modal window.
- You can now group multiple operations as a logical operation.
- User-experience improvements were added for the parameterization of batch dataflows.
- The Xcalar administrator now has the ability to broadcast messages to Xcalar users.
- The Xcalar administrator can now configure how users log in to Xcalar. Xcalar now integrates with Microsoft Azure Active Directory (AAD), so Xcalar users can log in to their AAD accounts with their Xcalar user names. For more information, see Xcalar Help.
- Added the Metaphone extension, which clusters common misspellings, such as city names and company names.
- A shared demand-paging area is now accessible for the temporary storage of Xcalar data. With this release, the demand-paging process is activated during a batch dataflow for operations that previously would have failed due to memory space issues; it temporarily stores Xcalar data in your demand-paging area until required.
- Data-loading improvements. With this release, Xcalar Root is no longer used for primary inter-node communications. Previously, if Xcalar Root was installed on a slow Network File System (NFS), loading files on a cluster was a slow process. For more information, see Xcalar Help and the Xcalar Installation Guide.
- Fixed issues related to inefficient memory utilization during load and UDF processing. Previously, large amounts of memory could be used when UDFs were activated, which could lead to Xcalar becoming unresponsive.
{'redpajama_set_name': 'RedPajamaC4'}
9,953
[Monday, February 13, 2017]

Video: CBS All Access Debuts First Trailer for "The Good Fight"
In the new series, an enormous financial scam has destroyed the reputation of a young lawyer, Maia Rindell, while simultaneously wiping out her mentor and godmother Diane Lockhart's savings.

Video: New "Endorsement" for the Santa Clarita Diet from Timothy Olyphant ("Joel Hammond")
"Santa Clarita Diet" premieres only on Netflix, worldwide, on February 3.

Video: BBC America Releases New "Doctor Who" First-Look Trailer
See what's in store for the Doctor (Peter Capaldi) and Bill (new companion played by Pearl Mackie) in 2017!

Video: New Promo for The CW's "Riverdale"!
Check out this new promo for The CW's drama, premiering Thursday, January 26.

Video: Crackle Releases Trailer & Key Art for New Original Film, "Mad Families," Starring Charlie Sheen & Leah Remini
The project centers on three families - one Hispanic, one African American, one Caucasian - who find themselves annoyingly sharing the same campsite during a busy Fourth of July holiday weekend.

Video: Netflix Releases "The Ivory Game" Featurette with Dr. Jane Goodall
The film breathlessly follows some of the most dedicated activists and conservationists as they try to intervene before it is too late.

Video: CMT Releases First Teaser for Sexy and Soulful "Sun Records"
Set in Memphis during the tumultuous early days of the civil rights movement, "Sun Records" tells the untold story of nothing less than the birth of rock 'n' roll.

Video: CBS All Access Debuts First Teaser for "The Good Fight"
"The Good Wife" spin-off premieres Sunday, February 19.

Video: Showtime Releases the Official Trailer for Season Six of "Homeland"
Back on U.S. soil, this season focuses on the aftermath of a U.S. presidential election and the transition between election day and the inauguration for a president-elect.

Video: CMT Releases Fiery "Nashville" Full-Season Trailer
The new season officially kicks off with the full two-hour premiere on January 5 at 9:00/8:00c, only on CMT.
{'redpajama_set_name': 'RedPajamaCommonCrawl'}
9,954
The AAI-GM12 allows direct interfacing of any preamp-level audio source with a Buick factory radio, eliminating the need for sound-degrading solutions like an FM modulator. The AAI-GM12 interface accepts input from sources such as a DVD player, MP3 player, satellite radio, or PlayStation. Requirements: 2000-2008 SUVs and trucks with an RDS or Navigation radio. The SUV or truck must have a factory XM receiver, factory DVD Rear Seat Entertainment (RSE) system, or a factory/aftermarket CD changer (the factory 6-disc changer built into the radio does not qualify).
{'redpajama_set_name': 'RedPajamaC4'}
9,955
It can be hard to decorate a basement because you probably can't imagine what you could do in such a dark and gloomy place. But if you use some brighter colors and fabrics, you can turn your dark, damp, depressing basement into a place where you will want to spend time with your family.

One thing you can do to better prepare for an interior design project is to watch television shows, read magazines, or search the internet for design ideas. There are many sites that let you view rooms when they are fully furnished, or manually adjust the style to your liking.

If you decide to use an interior designer, communicate your goals and budget to them. Professional designers often have bold plans. Sometimes those plans clash with the homeowner's taste, or with their pocketbook. Don't be timid. If what the interior designer suggests doesn't suit your goals, tell them. You are the one who has to live with the designer's choices.

As you can see from the above article, it doesn't take much to add excitement and change up a room of any size. Stick with the tips you learned here and use them as a guide as you change around the style of your home. You can always come back to this article to consult the tips as you go along.

Pedestal sinks are wonderful choices in small bathrooms. These types of sinks take up less space, and they make a small bathroom look bigger. They also have an elegant appeal that is classic and works with any decor. You can find them at your local home improvement store at many different price points. A good storage solution for a small bathroom is using baskets.

Instead of placing one large picture on a wall, use a few smaller ones. You can make a photo collage on a wall. Use your own photos in affordable frames and you will have a creative and personalized wall that everyone will notice. You could also use one large photo cut into smaller pieces in small frames.

Interior design can bring much more to your home than you had probably anticipated when you originally imagined it as you bought it. The great thing is that everyone has what it takes to make their house the home of their dreams if they are willing to put in the effort. If that person is you, you should read the article that follows.

Once you've mastered the art of interior design, refreshing a room can be a breeze. Good interior design skills can even save you money. While some people might spend hundreds on costly renovations, you can figure out how to fix the room up on a budget. Hopefully, these tips will help you become a great interior designer.

A good interior design tip is to consider what's more important to you when making large purchases such as a refrigerator or other appliances. Do you prefer style, or is function the most important characteristic for you? A lot of products must sacrifice one for the other in their design.

A neutral cream is a good color for a hallway. This color is neutral and will complement colors in adjoining rooms. It is also a light color, and light colors bring brightness to the space and make it look larger. The hallway color will continue to work even if you change the colors of other rooms at a later time.

If you have decided that you are going to be an interior designer, then this article is for you. Some simple advice can be very helpful when learning what to do when designing your home's interior.

The overall lighting of any room depends on the curtains you put up. Darker colors, like black, brown, and dark red or blue, do not allow enough natural light to enter the room, causing it to be dark. Try to get lighter-colored curtains, like tan, white, peach, beige, and taupe.

If you want to create those areas in your home that truly wow, you need the right information. With a little know-how, some elbow grease, and a touch of creativity, you can turn your visions into reality. Use the advice and tips you've learned here to help you get started.

Then, come home and picture what each swatch would look like, and how it would blend with the furniture and other rooms in your home. Choose one and see how different your room looks!
{'redpajama_set_name': 'RedPajamaC4'}
9,956
Tweaking the unfitted kitchen in response to a sequence of minor changes, I ordered a high-tech version of the tinny Art Deco old-school kitchen cart on wheels. The new rig, part of a line of sturdy epoxy-coated modular wire racks, was ideal for its purposes but seemed to kill the room. I let it float for a week, and Cook picked up a gaspingly expensive silicone baking mat to ameliorate the slick metal top shelf of the cart. That was all it took to integrate the cart with the room.
{'redpajama_set_name': 'RedPajamaC4'}
9,957
While we may not use them year-round, we Florida residents still need heating systems in our homes. We have some chilly evenings in the winter months, and many systems work as both the heating and cooling system. Two options to consider are the ductless mini-split and the traditional Heating, Ventilation, and Air Conditioning (HVAC) heating system. Here's what you need to know when choosing which system is right for your home, from the experts at Ball Building Services. A ductless mini-split has two main parts: an outdoor unit and an indoor unit. The indoor unit heats or cools the room it's in, so homes often have multiple units, according to Energy.gov. It's ideal in homes that do not have ductwork, such as older homes, as well as in more modern, efficient homes. Each indoor unit has its own thermostat, so you have the ability to adjust the temperature in each room independently. The installation of a ductless mini-split is easier than that of a traditional system, since an HVAC technician won't need to install or change ductwork and ventilation. Many homes have a traditional heating and cooling system that includes a furnace to heat and an A/C unit to cool the home. The air conditioning unit sits outside and cools the air in the home. The hot and cold air travels through the home using ductwork. A furnace works well in homes in any climate and is ideal in locations with harsh winters that would lead to frozen and burst pipes if the home were not adequately heated. A huge factor in deciding which system to choose is the efficiency rating. Regardless of whether you choose a ductless mini-split system or an HVAC system, the Seasonal Energy Efficiency Ratio (SEER) will help you ensure you are paying less over time. The higher the SEER, the more efficient a system is. It's important to have your system properly maintained in order to keep heating costs low over time. Are you looking for more information on ductless mini-split vs. traditional heating systems? At Ball Building Services our knowledgeable contractors are licensed to install, repair, and maintain both types of heating systems. Contact us to learn more.
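To see what SEER means in dollars, here is a hypothetical worked example (the system size, run time, and electric rate below are illustrative assumptions, not a quote): SEER is the cooling delivered (in BTU) per watt-hour of electricity used over a season. A 36,000 BTU/h system that runs 1,000 hours per cooling season delivers 36,000,000 BTU. At SEER 14 that takes roughly 36,000,000 / 14 ≈ 2,571 kWh of electricity, while at SEER 20 it takes 36,000,000 / 20 = 1,800 kWh. At $0.12 per kWh, that is about $309 versus $216 per season, so the higher-SEER unit would save roughly $93 a year under these assumptions.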
{'redpajama_set_name': 'RedPajamaC4'}
9,958
Cheapest on Carousell 🤓. 100% authentic or your money back, guaranteed. RFS (reason for selling): parents asked me to declutter my makeup room LOL. Do make a 50% deposit if you're really keen, as I need to hand-carry these myself and might need to get extra luggage, so I need to know by today! Cheers. Brand new, including normal postage with bubble wrap.
{'redpajama_set_name': 'RedPajamaC4'}
9,959
Bo Erik "Bosse" Lindquist, född 31 december 1954 i Stockholm, är en svensk författare och dokumentärmakare för radio och film. Biografi Lindquist har regisserat dokumentärer sedan 1988 varav många har blivit internationellt uppmärksammade och prisbelönta och var chef för Sveriges Radios dokumentärredaktion till 2010. Han är knuten till Sveriges Televisions dokumentärredaktion sedan 2012. Han var också Ander Visiting Professor of Global Media Studies vid Karlstads universitet mellan 2012 och 2013. En serie dokumentärer om genetik och fosterdiagnostik, Livets mekano, vann Prix Futura (numera bytt namn till Prix Europa) i Berlin 1995, och belönades också med Ikarospriset och Sveriges Föreningen Grävande Journalisters Guldspade för 1993. Radiodokumentären Rebellerna, om en extrem och hemlig maoistisk revolutionär grupp, fick Ikarospriset 1997 och tillsammans med Förädlade svenskar, om det svenska tvångssteriliseringsprojektet, Vilhelm Moberg-stipendiet 1997. Ta judarna sist fick hedersomnämnande vid Prix Italia 1998 samt Juridiska och historiska fakulteterna vid Stockholms universitets pris 1998. Tystnaden i Phnom Penh, om Sveriges stöd till Pol Pot-regimen, vann Prix Europa i Berlin 2000, Sveriges Föreningen Grävande Journalisters Guldspade och Ikarospriset 1999. En tv-dokumentär och en radiodokumentär om adoption från Sydkorea till Sveriges fick hedersomnämnande vid Prix Europa 2002. 2009 kom filmen Geniet och pojkarna som handlar om nobelpristagaren Carleton Gajdusek som upptäckte Kuru (en prionsjukdom, lik galna kosjukan) hos ett kannibaliskt folk på Nya Guinea. Filmen tar också upp Gajduseks gigantiska familj av fostersöner och hans pedofili. The Genius and the Boys hade premiär på BBC i juni 2009 och är officiellt utvald till Internationella dokumentärfilmsfestivalen i Amsterdam 2009. Filmen är en samproduktion mellan BBC, Arte, SVT, NRK och DR. Flera dokumentärer har sänts internationellt - WikiRebels i trettio länder. Lindquists Give Us the Money sändes i sextio länder november 2012 och undersöker bland annat Bonos och Bob Geldofs politiska lobbying för att minska extrem fattigdom i Afrika. Filmen fick bland annat den amerikanska Peabody Award för 2012 och ingår i den internationella serien Why Poverty som tagits fram av BBC och SVT med flera. Trilogin Experimenten undersöker den schweiziske kirurgen Paolo Macchiarinis forskning och experiment på konstgjorda organ. Serien avslöjar forskningsfusk och extremt experimenterande på människor på Karolinska sjukhuset och Karolinska institutet. Dokumentärserien satte igång en stor kontrovers som fortfarande är oavslutad. Lindquist har också introducerat ett nytt format för översatt radio: RadioVideo. Den nya tekniken möjliggör nedladdning av radioprogram med översättning i bildfältet för dator, mp3-spelare och mobil. Produktion Publicerat Förädlade svenskar, Alfabeta 1991 Rullstol och varm korv, tillsammans med Walter Hirsch, LL-förlaget, 1991. Översatt till tyska 1995. Kärlek i vått och torrt, tillsammans med Walter Hirsch, LL-förlaget, 1991. Översatt till tyska 1995. Genguiden, tillsammans med Robert Nyberg, Alfabeta 1995 Hakkors och skinnskallar - rasism från Auschwitz till Vålberg, tillsammans med Kurdo Baksi, Robert Blombäck & Susanne Berglind, LL-förlaget 1998. "Om rasistiska brott i nyheterna", i Vita redaktioner, mörk magi, Ylva Brune ed, Carlssons 1998. 
Radio documentaries:
Vällingby i Afrika, 1988
Förädlade svenskar - rashygien och sterilisering, 1990
Bland tinnar och torn, 1992
Prinsessans fängelse, 1996
Aftonbladet och hotbilderna, 2000
The series Livets mekano, on genetics, chance and environment: En forskardröm, 1993; Blodsband, 1993; Vem ska få leva?, 1993
The series En vit fläck på kartan, about a white paradise in Africa: Pionjärer, 1995; Friherrinnans fristad, 1995; Den gränslöse kolonisatören, 1995
The series Svarta Sverige, on xenophobia in Sweden: Statslös Lucia, 1996; Invandrare, född i Vålberg, 1996
The series En studie i borgerlighet, about Swedish Maoists: Östern är röd, 1997; Elitpartiet, 1997; Rebellerna / The Rebels, 1997
The series on refugee flows from Nazi Germany to Sweden: Ta judarna sist / Bring the Jews Last, 1998; Sverige och de baltiska SS-männen, 1998
The series on Sweden and the Khmer Rouge: Tystnaden i Phnom Penh / The Silence of Phnom Penh, 1999; I revolutionens hjärta, 1999
The series on adoption: Varför är jag här?, 2002; Svensk adoption, 2002

TV documentaries:
En gång korean, 2002 (director, together with filmmaker Bo Öhlén)
Vad är det för fel på Socialen?, 2003 (producer)
I Guds namn, by Peter and Maria Rinaldo, about the church's role in the genocide in Rwanda, 2004 (producer)
Rebellerna, 2005 (director)
Feminismen och socialdemokraterna, 2006 (producer)
The Genius and the Boys, 2009 (director)
McFusk & Co, 2010 (SVT, producer)
Wikileaks - med läckan som vapen, 2010 (SVT, producer)
Experimenten, in three parts ("Stjärnkirurgen," "Varje kirurg har sin kyrkogård," and "Sanningens labyrint"), 2016, SVT Dokument inifrån (producer)

Exhibitions:
Sterilisering och rashygien, Kulturen in Lund and the Nordiska museet, 2002 (writer)
Middag med Pol Pot, Forum för levande historia, Stockholm, 2009 (writer)
{'redpajama_set_name': 'RedPajamaWikipedia'}
9,960
Triple layer vocals and several stringy thingies are vigorously put to use by Pete and the Skiffy Rivets as they weave musical tales borrowed from bluegrass, folk, old timey, swing, and wherever. They are amusing and serious, happy and sad, tales of trains, radios, chains, white horses, elbow room, altitude, flowers, paintbrushes, daughters, lovers, bosses, memories, thunder storms, neon signs, too much gravy, too much coffee, dreams of wide open spaces and hearts we never won. The group is assembled from Holly Carrington, Sue Tearne and Pete Parnham on vocals, with Pete and Steve Gerrish animating the stringy thingies. All sung, plucked and strummed purely for your listening pleasure. Check out www.skiffyrivetsmusic.com and the Elbow Room CD.
{'redpajama_set_name': 'RedPajamaC4'}
9,961
GalleryVault is a fantastic privacy protection app that easily hides and encrypts your photos, videos, and any other files that you do not want others to see. GalleryVault can hide its app icon and keep your privacy absolutely safe. You can import your private images and videos into this secure vault, and nobody will know it exists. What's more, GalleryVault has a beautiful design and provides a smooth, pleasant media browsing experience.

• Supports hiding files on the SD card and moving your encrypted files to the SD card to save device storage, including on Android 4.4 (KitKat), 5.0 (Lollipop), 6.0 (Marshmallow) and 7.0 (Nougat)+.

With GalleryVault, your privacy is well protected.

No. Your files are stored only on your device, so please make sure to back up all your hidden files before transferring to a new device or performing a factory reset.

Please find the latest mail we sent to you (by searching for the keyword thinkyeah in your mailbox) and follow the steps in the mail to reset your passcode.

1. If your icon is hidden, tap the "Manage Space" button on the System App Detail Info page of GalleryVault (System Settings -> Apps -> GalleryVault).
2. Try to unlock and fail 2 times; a Forgot button will then appear.

For more details, please visit the FAQ: http://support.thinkyeah.com/posts .

We focus on privacy protection and provide professional Hide Picture and Hide Video apps to protect your privacy! Languages: English, Russian, Spanish, French, Japanese, Korean, Indonesian, German, Vietnamese, Italian, Thai, Arabic, Hindi, Simplified Chinese, Traditional Chinese.
{'redpajama_set_name': 'RedPajamaC4'}
9,962
Site Description

Site details for Cerro Tololo and Cerro Pachón: the "El Totoral" Reserve; sky brightness; Cerro Tololo; the Sidney Wolff viewpoint; Cerro Pachón; CTIO/AURA La Serena facilities.

The "El Totoral" Reserve, Cerro Tololo and Pachón

The Cerro Tololo Inter-American Observatory is located about 500 km north of Santiago, Chile, and about 52 km east (80 km by road) of La Serena, at an altitude of 2200 meters. It lies on a 34,491 ha (85,227 ac.) site known as "Estancia El Totoral," which was purchased by AURA on the open market in 1967 for use as an astronomical observatory. When purchased, the land supported a number of subsistence farmers and goat herders. They were allowed to continue to live on the reserve after it was purchased by AURA and have gradually been leaving voluntarily for more lucrative jobs in the nearby towns. As a result of the departure of most of its human inhabitants, and a policy combining environmental protection with "benign neglect" on the part of the Observatory, the property sees little human activity except for the roads and relatively small areas on the tops of Cerro Tololo and Cerro Pachón. As a result, much of the reserve is gradually returning to its natural state. Many native species of plants and animals, long thought in danger of extinction, are now returning. The last half of the trip to Tololo is an excellent opportunity to see a reasonably intact Chilean desert ecosystem. During the first portion of the journey, to a few km beyond "El Totoral," the effect on the environment of humans, bad farming practices and the remaining goats is easily seen. This damage will take many years to heal.

Sky Brightness over Cerro Pachón and Cerro Tololo

Light pollution from nearby cities (La Serena, Coquimbo, Ovalle, Andacollo, and Vicuña) has recently become a concern due to the rapid growth in population and development which this region of Chile has undergone. AURA and CTIO have undertaken an aggressive campaign, both locally in the surrounding cities and at the Chilean congressional level, to alert the Chilean public and governing agencies to these concerns (e.g., CTIO's web page on light pollution [3]). However, it is not the current level of lighting which is worrisome. The concern is what changes the future development of the region will bring to what are presently extremely dark skies. The most recent published measurements of the sky over La Serena and Coquimbo, viewed from Cerro Tololo, are presented in Krisciunas et al., 2010, PASP, 122, 373-377, "Light pollution at high zenith angles as measured at Cerro Tololo Inter-American Observatory" (see especially figures 1 and 3), and in Krisciunas et al., 2007, PASP, 119, 687-696, "Optical sky brightness at Cerro Tololo Inter-American Observatory from 1992 to 2006". A 2004 CTIO study [4] discussed predictions for night-sky brightness at Cerro Pachón in the context of planning for the LSST, presenting the relevant numbers and several projections depending on population growth and the success of lighting controls. The study demonstrates that with successful lighting-awareness campaigns, such as the one CTIO/AURA has launched, Cerro Pachón and Cerro Tololo can continue to be prime astronomical sites far into the future.

The Top of Cerro Tololo

Roughly in the center of the property lies Cerro Tololo, on which are located five working optical astronomical telescopes: the 4m Victor M. Blanco, the 1.5m, the 0.9m, the 1m Yale, and the 0.6m Curtis-Schmidt telescopes.
Coordinates: W 70d48m52.7s, S 30d09m55.5s
Individual determinations of the geodetic positions for the observatories on Cerro Tololo and Cerro Pachón [7]
Geodetic and geocentric positions and elevations for observatories on Cerro Tololo and Cerro Pachón (September 2012) [8]
See the Google map here [9]

Telephone: 011-56-51-225415
Fax: 011-56-51-205342
Post: CTIO Support Office, CTIO/AURA Inc., 950 N. Cherry Ave., Tucson, AZ 85719
Casilla 603, La Serena, Chile

A viewpoint on the road to Cerro Tololo was named in honor of Dr. Sidney Wolff. See the news article at NOAO [10].

On the southeast side of the property lies Cerro Pachón, where the Southern Hemisphere Gemini 8m [11] and the 4.2m SOAR [12] telescopes are located. In this picture, looking up at the face of Pachón from the northwest, the Gemini dome can be seen while it was under construction. The SOAR site is behind the promontory in the top center of the picture. [13] A broader look at Cerro Pachón years later (2011), where SOAR (left) and Gemini (right) can easily be distinguished. Two bumps further to the right of the Gemini site are where rock-blasting preparation work on the site for the LSST is currently under way.

CTIO/AURA La Serena Facilities

La Serena [14] is a city with a population of over 100,000, about 490 km north of Santiago, Chile, at the mouth of the Elqui river. The CTIO facilities in La Serena support the Observatory and consist of instrument shops, engineering, operations and administration buildings, along with visitor and staff lodging.

Source URL (modified on 12/18/2012 - 10:29): http://www.ctio.noao.edu/noao/content/Site-Description
[1] http://www.ctio.noao.edu/noao/content/Site-Description
[2] http://www.ctio.noao.edu/noao/sites/default/files/totoralmap.gif
[3] http://www.ctio.noao.edu/noao/content/dark-sky-education
[4] http://www.ctio.noao.edu/noao/sites/default/files/Night%20Sky%20Brightness%20at%20Cerro%20Pachon.pdf
[5] http://www.ctio.noao.edu/noao/sites/default/files/telshil2.jpg
[6] http://www.ctio.noao.edu/noao/sites/default/files/Cerro_Tololo_from_air.gif
[7] http://www.ctio.noao.edu/noao/content/Coordinates-Observatories-Cerro-Tololo-and-Cerro-Pachon
[8] http://www.ctio.noao.edu/noao/sites/default/files/telescopes/Mamajek12_positions.pdf
[9] http://www.ctio.noao.edu/noao/content/Contact-CTIO
[10] http://www.noao.edu/news/2010/pr1001.php
[11] http://www.gemini.edu/
[12] http://www.soartelescope.org
[13] http://www.ctio.noao.edu/noao/sites/default/files/pachon.JPG
[14] http://www.ctio.noao.edu/noao/content/la-serena
[15] http://www.ctio.noao.edu/noao/sites/default/files/ctiols.gif
{'redpajama_set_name': 'RedPajamaCommonCrawl'}
9,963
Q: Rails 4 - how do I get the image path of an image in production with a hash on the end?

I want to reference an image in the public folder that has been precompiled in production. But it seems that all the images have a hash on the end (i.e., assets/image-3414fewafe313.jpg). asset_path('photo.jpg') returns assets/photo.jpg, but I need the full image path with the hash. How do I reference this image in a view in Rails? Thanks!

A: For a view, you can just reference image_path('photo.jpg'). http://guides.rubyonrails.org/asset_pipeline.html#coding-links-to-assets See also image_tag('photo.jpg'), which produces a full HTML img tag.

A: The hash at the end of the asset's URL is the result of having the config.assets.digest parameter set to true. Quoting from http://edgeguides.rubyonrails.org/asset_pipeline.html#in-production: "In the production environment Sprockets uses the fingerprinting scheme outlined above. By default Rails assumes assets have been precompiled and will be served as static assets by your web server. During the precompilation phase an MD5 is generated from the contents of the compiled files, and inserted into the filenames as they are written to disk. These fingerprinted names are used by the Rails helpers in place of the manifest name."
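As a concrete illustration, here is a minimal ERB sketch of both helpers in a view (the view file path and the photo.jpg asset name are hypothetical; the asset is assumed to live in app/assets/images):

<%# app/views/photos/show.html.erb (hypothetical view, for illustration only) %>
<%# In production, image_path returns the fingerprinted URL, e.g. /assets/photo-3414fewafe313.jpg %>
<p>Asset URL: <%= image_path('photo.jpg') %></p>

<%# image_tag emits a complete <img> element pointing at the same fingerprinted file %>
<%= image_tag('photo.jpg', alt: 'My photo') %>

Outside of views (e.g., in a rake task), the same helpers are reachable via ActionController::Base.helpers.image_path('photo.jpg').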
{'redpajama_set_name': 'RedPajamaStackExchange'}
9,964
\section{Introduction}\label{sec:Intro} In recent years, machine learning has witnessed enormous success in many areas, including image processing, natural language processing, and online recommender systems, just to name a few. From a mathematical perspective, training machine learning models amounts to solving an optimization problem. However, with rapidly increasing dataset sizes and the high dimensionality and non-convex hardness of the training problem (e.g., due to the use of deep neural networks), training large-scale machine learning models on a single centralized machine has become inefficient and unscalable. To address the efficiency and scalability challenges, an effective approach is to leverage {\em decentralized} computational resources in a computing network, which could follow a parameter server (PS)-worker architecture \cite{recht2011hogwild,zinkevich2010parallelized,dean2012large} or a fully decentralized peer-to-peer network structure \cite{nedic2009distributed,lian2017can}. Also, thanks to its robustness to single points of failure, data privacy, and implementation simplicity, decentralized learning over computing networks has attracted increasing interest recently, and has been applied in various science and engineering areas (including dictionary learning \cite{chen2014dictionary}, multi-agent systems \cite{cao2012overview,zhou2011multirobot}, multi-task learning \cite{wang2018distributed,zhang2019distributed}, information retrieval \cite{ali2004tivo}, energy allocation \cite{jiang2018consensus}, etc.). In the fast-growing literature of decentralized learning over networks, a classical approach is the so-called network consensus optimization, which traces its roots to the seminal work by Tsitsiklis in 1984~\cite{tsitsiklis1984problems}. Recently, network consensus optimization has gained a lot of renewed interest owing to the elegant decentralized subgradient descent method (DSGD) proposed by Nedic and Ozdaglar \cite{nedic2009distributed}, which has been applied in decentralized learning due to its simple algorithmic structure and good convergence performance. In network-consensus-based decentralized learning, a set of geographically distributed computing nodes collaborate to train a common learning model. Each node holds a local dataset that may be too large to be sent to a centralized location due to network communication limits, or that cannot be shared due to privacy/security risks. A distinctive feature of network-consensus-based decentralized learning is the lack of a dedicated PS. As a result, each node has to exchange information with its local neighbors to reach a consensus on a global optimal learning model. Despite its growing significance in practice, the design of high-performance network-consensus-based decentralized learning algorithms faces three fundamental {\em conflicting} complexities, namely {\em sample, communication, and memory complexities}. First, due to the high dimensionality of most deep learning models, it is impossible to leverage anything beyond first-order (stochastic) gradient information to compute the update direction in each iteration. The variability of a stochastic gradient is strongly influenced by the number of training samples in its mini-batch. However, the more training samples in a mini-batch, the higher the computational cost of the stochastic gradient.
Second, by using fewer training samples in each iteration to trade for a lower computational cost, the resulting stochastic gradient unavoidably has a larger variance, which further leads to more iterations (hence communication rounds) to reach a certain training accuracy (i.e., slower convergence). The low communication efficiency is particularly problematic in many wireless edge networks, where the communication links could be low-speed and highly unreliable. Lastly, in many mobile edge-computing environments, mobile devices could be severely limited by hardware resources (e.g., CPU/GPU, memory) and cannot afford to reserve a large memory space to run a very sophisticated decentralized learning algorithm with too many intermediate variables. Due to the above fundamental trade-off between sample, communication, and computing resource costs, the notions of sample, communication, and memory complexities (to be formally defined in Section~\ref{sec:related}) become three of the most important measures in assessing the performance of decentralized learning algorithms. However, in the literature, most existing works achieve low complexities in some of these measures, but not all (see Section~\ref{sec:related} for in-depth discussions). The limitations of these existing works motivate us to ask the following question: {\em Could we design a decentralized learning algorithm that strikes a good balance among sample, communication, and memory complexities?} In this paper, we answer the above question positively by proposing a new GT-STORM algorithm (\underline{g}radient-\underline{t}racking-based \underline{sto}chastic \underline{r}ecursive \underline{m}omentum) that achieves low sample, communication, and memory complexities. Our main results and contributions are summarized as follows: \begin{list}{\labelitemi}{\leftmargin=1em \itemindent=0.em \itemsep=.2em} \item Unlike existing approaches, our proposed GT-STORM algorithm adopts a new estimator, which is updated with a consensus mixing of the neighboring estimators from the last iteration, which helps improve the global gradient estimation. Our method achieves the nice features of previous works \cite{tran2019hybrid,cutkosky2019momentum,di2016next,lu2019gnsd} while avoiding their pitfalls. To some extent, our GT-STORM algorithm can be viewed as an indirect way of integrating the stochastic gradient method, the variance reduction method, and the gradient tracking method. \item We provide a detailed convergence analysis and complexity analysis. Under some mild assumptions and parameter conditions, our algorithm enjoys an $\tilde{O}(T^{-2/3})$ convergence rate. Note that this rate is much faster than the $O(T^{-1/2})$ rate of classic decentralized stochastic algorithms, e.g., DSGD \cite{jiang2017collaborative}, PSGD \cite{lian2017can} and GNSD \cite{lu2019gnsd}. Also, we show that to reach an $\epsilon^2$-stationary solution, the total number of sample evaluations of our algorithm is $\tilde{O}(m^{1/2}\epsilon^{-3})$ and the number of communication rounds is $\tilde{O}(m^{-1/2}\epsilon^{-3})$. \item We conduct extensive experiments to examine the performance of our algorithm, on both a non-convex logistic regression model with LibSVM datasets and convolutional neural network models with the MNIST and CIFAR-10 datasets. Our experiments show that our algorithm outperforms two state-of-the-art decentralized learning algorithms \cite{lian2017can,lu2019gnsd}. These experiments corroborate our theoretical results.
\end{list} The rest of the paper is organized as follows. In Section~\ref{sec:related}, we first provide preliminaries of network consensus optimization and discuss related works with a focus on sample, communication, and memory complexities. In Section~\ref{Section: algorithm}, we present our proposed GT-STORM algorithm, as well as its communication, sample, and memory complexity analysis. We provide numerical results in Section~\ref{Section: experiment} to verify the theoretical results of our GT-STORM algorithm. Lastly, in Section~\ref{Section: conclusion}, we provide concluding remarks. \section{Preliminaries and Related Work} \label{sec:related} To facilitate our technical discussions, in Section~\ref{sec:ncoa}, we first provide an overview of network consensus optimization and formally define the notions of sample, communication, and memory complexities of decentralized optimization algorithms for network consensus optimization. Then, in Section~\ref{sec:sfoa}, we review centralized stochastic first-order optimization algorithms for solving non-convex learning problems from a historical perspective and with a focus on sample, communication, and memory complexities; here, we introduce several acceleration techniques that motivate our GT-STORM algorithmic design. Lastly, we review the recent developments of optimization algorithms for decentralized learning and compare them with our work. \subsection{Network Consensus Optimization} \label{sec:ncoa} As mentioned in Section~\ref{sec:Intro}, in decentralized learning, a set of geographically distributed computing nodes form a network. In this paper, we represent such a network by an undirected connected graph $\mathcal{G}=(\mathcal{N},\mathcal{L})$, where $\mathcal{N}$ and $\mathcal{L}$ are the sets of nodes and edges, respectively, with $|\mathcal{N}| = m$. Each node can communicate with its neighbors via the edges in $\mathcal{L}$. The goal of decentralized learning is to use the nodes to {\em distributively} and {\em collaboratively} solve a network-wide optimization problem as follows: \begin{align}\label{Eq: general_problem} \min_{\mathbf{x} \in \mathbb{R}^p} f(\mathbf{x}) = \min_{\mathbf{x} \in \mathbb{R}^p} \frac{1}{m}\sum_{i=1}^{m} f_i(\mathbf{x}), \end{align} where each local objective function $f_i(\mathbf{x}) \triangleq \mathbb{E}_{\zeta\sim\mathcal{D}_i} f_i(\mathbf{x};\zeta)$ is only observable to node $i$ and is not necessarily convex. Here, $\mathcal{D}_i$ represents the distribution of the dataset at node $i$, and $f_{i}(\mathbf{x};\zeta)$ represents a loss function that evaluates the discrepancy between the learning model's output and the ground truth of a training sample $\zeta$. To solve Problem~\eqref{Eq: general_problem} in a decentralized fashion, a common approach is to rewrite Problem~\eqref{Eq: general_problem} in the following equivalent form: \begin{align}\label{Eq: consensus_problem} & \text{Minimize} && \hspace{-.5in} \frac{1}{m}\sum_{i=1}^{m} f_i(\mathbf{x}_i) & \\ & \text{subject to} && \hspace{-.5in} \mathbf{x}_i = \mathbf{x}_j, && \hspace{-.5in} \forall (i,j) \in \mathcal{L}, \nonumber \vspace{-.05in} \end{align} where $\mathbf{x} \triangleq [\mathbf{x}_1^\top,\cdots,\mathbf{x}_m^\top]^\top$ and $\mathbf{x}_i$ is an introduced local copy at node $i$.
In Problem~\eqref{Eq: consensus_problem}, the constraints ensure that the local copies at all nodes are equal to each other, hence the term ``consensus.'' Thus, Problems~\eqref{Eq: general_problem} and \eqref{Eq: consensus_problem} share the same solutions. The main goal of network consensus optimization is to design an algorithm to attain an $\epsilon^2$-stationary point $\mathbf{x}$ defined as follows: \begin{align}\label{Eq: FOSP_network} \underbrace{\Big\|\frac{1}{m}\sum_{i=1}^{m} \nabla f_i(\mathbf{\bar{x}}) \Big\|^2}_{\mathrm{Global \,\, gradient \,\, magnitude}} \!\!\!\! + \underbrace{\frac{1}{m}\sum_{i=1}^{m}\|\mathbf{x}_{i}- \mathbf{\bar{x}}\|^2}_{\mathrm{Consensus \,\, error}} \le \epsilon^2, \end{align} where $\mathbf{\bar{x}} \triangleq \frac{1}{m}\sum_{i=1}^{m} \mathbf{x}_{i}$ denotes the global average across all nodes. Different from the traditional $\epsilon^2$-stationary point in centralized optimization problems, the metric in Eq.~\eqref{Eq: FOSP_network} has two terms: the first term is the gradient magnitude of the (non-convex) global objective, and the second term is the average consensus error of all local copies. To date, many decentralized algorithms have been developed to compute the $\epsilon^2$-stationary point (see Section~\ref{sec:sfoa}). However, most of these algorithms suffer limitations in sample, communication, and memory complexities. In what follows, we formally state the definitions of sample, communication, and memory complexities used in the literature (see, e.g., \cite{sun2019improving}): \begin{defn}[Sample Complexity] The sample complexity is defined as the total number of incremental first-order oracle (IFO) calls required across all nodes to find an $\epsilon^2$-stationary point defined in Eq.~(\ref{Eq: FOSP_network}), where one IFO call evaluates a pair $(f_i(\mathbf{x};\zeta), \nabla f_i(\mathbf{x};\zeta))$ on a sample $\zeta \sim \mathcal{D}_i$ and parameter $\mathbf{x} \in \mathbb{R}^p$ at node $i$. \end{defn} \begin{defn}[Communication Complexity] The communication complexity is defined as the total number of rounds of communication required to find an $\epsilon^2$-stationary point defined in Eq.~(\ref{Eq: FOSP_network}), where in one communication round each node can send and receive a $p$-dimensional vector with its neighboring nodes. \end{defn} \begin{defn}[Memory Complexity] The memory complexity is defined as the total dimensionality of all intermediate variables in the algorithm run by a node to find an $\epsilon^2$-stationary point in Eq.~(\ref{Eq: FOSP_network}). \end{defn} To put these three complexity metrics in perspective, consider the standard centralized gradient descent (GD) method as an example. Note that the GD algorithm has an $O(1/T)$ convergence rate for non-convex optimization, which suggests an $O(\epsilon^{-2})$ communication complexity. Also, it takes a full gradient evaluation in each iteration, i.e., an $O(n)$ per-iteration sample complexity, where $n$ is the total number of samples. This implies an $O(n\epsilon^{-2})$ sample complexity to converge to an $\epsilon^{2}$-stationary point. Hence, the sample complexity of GD is high if the dataset size $n$ is large. In contrast, consider the classical stochastic gradient descent (SGD) algorithm that is widely used in machine learning. The basic idea of SGD is to lower the gradient evaluation cost by using only a mini-batch of samples in each iteration.
However, due to the sample randomness in mini-batches, the convergence rate of SGD for non-convex optimization is reduced to $O(1/\sqrt{T})$~\cite{ghadimi2013stochastic,bottou2018optimization,zhou2018new}. Thus, to reach an $\epsilon^2$-stationary point $\mathbf{x}$ with $\|\nabla f(\mathbf{x})\|^2 \le \epsilon^2$, SGD has $O(\epsilon^{-4})$ sample complexity, which could be either higher or lower than the $O(n\epsilon^{-2})$ sample complexity of the GD method, depending on the relationship between $n$ and $\epsilon$. Also, for $p$-dimensional problems, both GD and SGD have memory complexity $p$, since they only need a $p$-dimensional vector to store (stochastic) gradients. \subsection{Related Work} \label{sec:sfoa} {\bf 1) Centralized First-Order Methods with Low Complexities:} Now, we review several state-of-the-art low-complexity centralized stochastic first-order methods that are related to our GT-STORM algorithm. To reduce the overall sample and communication complexities of the standard GD and SGD algorithms, a natural approach is variance reduction. Earlier works following this approach include SVRG \cite{johnson2013accelerating,reddi2016stochastic}, SAGA \cite{defazio2014saga} and SCSG \cite{lei2017non}. These algorithms have an overall sample complexity of $O(n + n^{2/3}\epsilon^{-2})$. A more recent variance reduction method is the stochastic path-integrated differential estimator (SPIDER)~\cite{fang2018spider}, which is based on the SARAH gradient estimator developed by Nguyen {\em et al.} \cite{nguyen2017sarah}. SPIDER further lowers the sample complexity to $O(n + \sqrt{n}\epsilon^{-2})$, which attains the $\Omega(\sqrt{n}\epsilon^{-2})$ theoretical lower bound for finding an $\epsilon^2$-stationary point for $n = O(\epsilon^{-4})$. More recently, to improve the small step-size $O(\epsilon L^{-1})$ in SPIDER, a variant called SpiderBoost was proposed in~\cite{wang2019spiderboost}, which allows a larger constant step-size $O(L^{-1})$ while keeping the same $O(n + \sqrt{n}\epsilon^{-2})$ sample complexity. It should be noted, however, that the significantly improved sample complexity of SPIDER/SpiderBoost is due to a restrictive assumption that a universal Lipschitz smoothness constant exists for all local objectives $f(\cdot;\zeta_i)$ $\forall i$. This means that the objectives are ``similar'' and there are no ``outliers'' in the training samples. Meanwhile, to obtain the optimal communication complexity, SpiderBoost requires a (nearly) full gradient evaluation every $\sqrt{n}$ iterations and a mini-batch of stochastic gradient evaluations with batch size $\sqrt{n}$ in each iteration. To overcome the above limitations, a hybrid stochastic gradient descent (Hybrid-SGD) method was recently proposed in~\cite{tran2019hybrid}, where a convex combination of the SARAH estimator~\cite{nguyen2017sarah} and an unbiased stochastic gradient is used as the gradient estimator. The Hybrid-SGD method relaxes the universal Lipschitz constant assumption in SpiderBoost to an average Lipschitz smoothness assumption. Moreover, it only requires two samples to evaluate the gradient per iteration. As a result, Hybrid-SGD has an $O(\epsilon^{-3})$ sample complexity that is {\em independent} of the dataset size. Although Hybrid-SGD is for centralized optimization, the interesting ideas therein motivate our GT-STORM approach for {\em decentralized learning} in a similar vein.
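To illustrate the recursive structure shared by these estimators, the following minimal Python sketch implements a one-sample recursive momentum estimator in the STORM form of \cite{cutkosky2019momentum} on a synthetic least-squares instance (Hybrid-SGD instead draws a second independent sample for the unbiased term); all data and constants here are hypothetical:
\begin{verbatim}
import numpy as np

# Minimal centralized sketch of the recursive momentum estimator
# (one-sample STORM form); synthetic data, hypothetical constants.
rng = np.random.default_rng(2)
n, p = 1000, 5
Z = rng.standard_normal((n, p))
y = rng.standard_normal(n)

def grad_one(x, i):               # stochastic gradient on sample i
    return Z[i] * (Z[i] @ x - y[i])

eta, beta = 0.05, 0.9
x_prev = np.zeros(p)
v = grad_one(x_prev, rng.integers(n))  # v_0: a single stochastic gradient
x = x_prev - eta * v
for t in range(1, 200):
    i = rng.integers(n)           # one fresh sample per iteration
    # convex combination of a SARAH-style correction and plain SGD:
    # v_t = beta*(v_{t-1} + g(x_t) - g(x_{t-1})) + (1 - beta)*g(x_t)
    v = beta * (v + grad_one(x, i) - grad_one(x_prev, i)) \
        + (1 - beta) * grad_one(x, i)
    x_prev, x = x, x - eta * v
\end{verbatim}
Expanding the update shows $\v_t = \beta \v_{t-1} + \nabla f(\mathbf{x}_t;\zeta_t) - \beta \nabla f(\mathbf{x}_{t-1};\zeta_t)$, which is exactly the recursive form that our decentralized estimator will build on.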
Interestingly, we show that in decentralized settings, our GT-STORM method can further reduce the gradient evaluation cost to only {\em one sample} per iteration, while not degrading the communication complexity order. Lastly, we remark that all the algorithms above have memory complexity at least $2p$ for $p$-dimensional problems. In contrast, GT-STORM enjoys a $p$ memory complexity. \vspace{.05in} {\bf 2) Decentralized Optimization Algorithms:} In the literature, many decentralized optimization algorithms have been proposed to solve Problem~(\ref{Eq: general_problem}), e.g., first-order methods \cite{nedic2009distributed,yuan2016convergence,shi2015extra, di2016next}, primal-dual methods \cite{sun2019distributed,mota2013d}, and Newton-type methods \cite{mokhtari2016decentralized,eisen2017decentralized} (see~\cite{nedic2018network,chang2020distributed} for comprehensive surveys). In this paper, we consider decentralized first-order methods for the non-convex network consensus optimization in~\eqref{Eq: consensus_problem}. In the literature, the convergence rate of the well-known decentralized gradient descent (DGD) algorithm \cite{nedic2009distributed} was studied in~\cite{zeng2018nonconvex}, which showed that DGD with a constant step-size converges with an $O(1/T)$ rate to a step-size-dependent error ball around a stationary point. Later, a gradient tracking (GT) method was proposed in~\cite{di2016next} to find an $\epsilon^2$-stationary point with an $O(1/T)$ convergence rate under constant step-sizes. However, these methods require a full gradient evaluation per iteration, which yields an $O(n\epsilon^{-2})$ sample complexity. To reduce the per-iteration sample complexity, stochastic gradients have been adopted in decentralized optimization, e.g., DSGD \cite{jiang2017collaborative}, PSGD \cite{lian2017can}, and GNSD \cite{lu2019gnsd}. Due to the randomness in stochastic gradients, the convergence rate is reduced to $O(1/\sqrt{T})$. Thus, the sample and communication complexities of these stochastic methods are $O(\epsilon^{-4})$ and $O(m^{-1}\epsilon^{-4})$, respectively, i.e., an $O(\epsilon^{-2})$ factor worse in the $\epsilon$-dependence than their deterministic counterparts. To overcome the limitations of stochastic methods, a natural idea is to use variance reduction techniques similar to those for centralized optimization to reduce the sample and communication complexities of non-convex network consensus optimization. So far, existing works on decentralized stochastic variance reduction methods include DSA~\cite{mokhtari2016dsa}, diffusion-AVRG~\cite{yuan2018variance}, and GT-SAGA~\cite{xin2019variance}, all of which focus on convex problems. To our knowledge, the decentralized gradient estimation and tracking (D-GET) algorithm in~\cite{sun2019improving} is the only work for non-convex optimization. D-GET integrates decentralized gradient tracking~\cite{lu2019gnsd} with the SpiderBoost gradient estimator~\cite{wang2019spiderboost} to obtain an $O(mn+m\sqrt{n}\epsilon^{-2})$ {\em dataset-size-dependent} sample complexity and an $O(\epsilon^{-2})$ communication complexity. Recall that the sample and communication complexities of GT-STORM are $O(m^{1/2}\epsilon^{-3})$ and $O(m^{-1/2}\epsilon^{-3})$, respectively. Thus, if the dataset size $n=\Omega(\epsilon^{-2})$, D-GET has a higher sample complexity than GT-STORM. As an example, when $\epsilon=10^{-2}$, this threshold for $n$ is on the order of $10^4$, which is common in modern machine learning datasets. Also, the memory complexity of D-GET is $2p$, as opposed to the $p$ memory complexity of GT-STORM.
This implies a huge saving with GT-STORM if $p$ is large, e.g., $p\approx 10^6$ in many deep learning models. \section{A Gradient-Tracking Stochastic Recursive Momentum Algorithm}\label{Section: algorithm} In this section, we introduce our \underline{g}radient-\underline{t}racking-based \underline{sto}chastic \underline{r}ecursive \underline{m}omentum (GT-STORM) algorithm for solving Problem (\ref{Eq: consensus_problem}) in Section~\ref{sec:gt-storm}. Then, we state the main theoretical results and their proofs in Sections~\ref{sec:main_results} and~\ref{sec:proofs}, respectively. \subsection{The GT-STORM Algorithm} \label{sec:gt-storm} In the literature, a standard starting point for solving Problem~\eqref{Eq: consensus_problem} is to reformulate the problem as \cite{nedic2009distributed}: \vspace{-.03in} \begin{align} \label{Eq:DGD_reformulation} & \text{Minimize} && \hspace{-.5in} \frac{1}{m}\sum_{i=1}^{m} f_i(\mathbf{x}_i) & \\ & \text{subject to} && \hspace{-.5in} (\mathbf{W} \otimes \mathbf{I}_{p}) \mathbf{x} = \mathbf{x}, &&\nonumber \vspace{-.09in} \end{align} where $\mathbf{I}_{p}$ denotes the $p$-dimensional identity matrix, the operator $\otimes$ denotes the Kronecker product, and $\mathbf{W}\in \mathbb{R}^{m\times m}$ is often referred to as the consensus matrix. We let $[\mathbf{W}]_{ij}$ represent the element in the $i$-th row and the $j$-th column of $\mathbf{W}$. For Problems~\eqref{Eq:DGD_reformulation} and~\eqref{Eq: consensus_problem} to be equivalent, $\mathbf{W}$ should satisfy the following properties: \begin{enumerate}[topsep=1pt, itemsep=-.1ex, leftmargin=.25in] \item[(a)] {\em Doubly Stochastic:} $\sum_{i=1}^{m} [\mathbf{W}]_{ij}=\sum_{j=1}^{m} [\mathbf{W}]_{ij}=1$. \item[(b)] {\em Symmetric:} $[\mathbf{W}]_{ij} = [\mathbf{W}]_{ji}$, $\forall i,j \in \mathcal{N}$. \item[(c)] {\em Network-Defined Sparsity Pattern:} $[\mathbf{W}]_{ij} > 0$ if $(i,j)\in \mathcal{L};$ otherwise $[\mathbf{W}]_{ij}=0$, $\forall i,j \in \mathcal{N}$. \end{enumerate} The above properties imply that the eigenvalues of $\mathbf{W}$ are real and can be sorted as $-1 < \lambda_m \leq \cdots \leq \lambda_2 < \lambda_1 = 1$. For notational convenience, we define the second-largest eigenvalue in magnitude of $\mathbf{W}$ as $\lambda \triangleq \max\{|\lambda_2|,|\lambda_m|\}$. As will be seen later, $\lambda$ plays an important role in the step-size selection and the algorithm's convergence rate. As mentioned in Section~\ref{sec:sfoa}, our GT-STORM algorithm is inspired by the GT method \cite{di2016next,nedich2016geometrically} for reducing the consensus error and the recursive variance reduction (VR) methods \cite{fang2018spider,wang2019spiderboost} developed for centralized optimization. Specifically, in the GT method, an estimator $\mathbf{y}$ is introduced to track the global gradient: \begin{align}\label{Eq: GT} \mathbf{y}_{t} = \mathbf{W}\mathbf{y}_{t-1} + \mathbf{g}_{t} - \mathbf{g}_{t-1} , \end{align} where $\mathbf{g}_t$ is the gradient estimate in the $t$th iteration.
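As a quick numerical sanity check of the tracking recursion in Eq.~\eqref{Eq: GT}, the following minimal Python sketch (on a hypothetical five-node ring with Metropolis-style weights; all data are synthetic) verifies properties (a)--(c) and the key invariant that the average of the trackers always equals the average of the local gradients:
\begin{verbatim}
import numpy as np

# Hypothetical 5-node ring with uniform weights 1/3 on self and
# neighbors; W is doubly stochastic, symmetric, and sparse per (a)-(c).
m, p = 5, 3
W = np.zeros((m, m))
for i in range(m):
    W[i, (i - 1) % m] = W[i, i] = W[i, (i + 1) % m] = 1 / 3
assert np.allclose(W.sum(axis=0), 1) and np.allclose(W, W.T)
lam = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]  # 2nd-largest |eigenvalue|

rng = np.random.default_rng(3)
g_prev = rng.standard_normal((m, p))   # local gradients at time t-1
y = g_prev.copy()                      # initialize trackers at g_0
for t in range(50):
    g = g_prev + 0.01 * rng.standard_normal((m, p))  # drifting gradients
    y = W @ y + g - g_prev             # per-node tracking update
    g_prev = g
# invariant: the network-wide average of y equals the average gradient
assert np.allclose(y.mean(axis=0), g_prev.mean(axis=0))
\end{verbatim}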
Meanwhile, to reduce the stochastic error, a gradient estimator $\v$ in VR methods is updated recursively based on a {\em double-loop} structure as follows: \begin{align}\label{Eq: VR} \v_{t} = \v_{t-1} + \nabla f(\mathbf{x}_t; \zeta_t) - \nabla f(\mathbf{x}_{t-1}; \zeta_t), \quad \text{if } \text{mod}(t, q) \neq 0, \end{align} where $\nabla f(\mathbf{x}; \zeta)$ is the stochastic gradient dependent on the parameter $\mathbf{x}$ and a data sample $\zeta,$ and $q$ is the number of inner-loop iterations. On the other hand, if $\text{mod}(t,q) = 0$, $\v_{t}$ takes a full gradient. Note that these two estimators have a {\em similar} structure: both {\em recursively} update the previous estimate based on the difference of the gradient estimates between two consecutive iterations (i.e., momentum). This motivates us to consider the following question: {\em{Could we somehow ``integrate'' these two methods to develop a new decentralized gradient estimator that tracks the global gradient and reduces the stochastic error at the same time?}} Unfortunately, the GT and VR estimators cannot be combined straightforwardly. The major challenge lies in the structural difference in the outer-loop iterations (i.e., $\text{mod}(t,q) = 0$), where the VR estimator requires a full gradient and does not follow the recursive updating structure. Surprisingly, in this paper, we show that there exists an ``indirect'' way to achieve the salient features of both GT and VR. Our approach is to abandon the double-loop structure of VR and pursue a {\em single-loop} structure. Yet, this single-loop structure should still be able to reduce the variance and consistently track the global gradient. Specifically, we introduce a parameter $\beta_t \in [0,1]$ in the recursive update and integrate it with a consensus step as follows: \begin{align} \v_{i,t} \!=\! \beta_t & \sum\nolimits_{j \in \mathcal{N}_{i}} [\mathbf{W}]_{ij} \v_{j,t-1} \!+\! \nabla f_i(\mathbf{x}_{i,t};\zeta_{i,t}) \!-\! \beta_t \nabla f_i(\mathbf{x}_{i,t-1};\zeta_{i,t}), \!\!\! \end{align} where $\mathbf{x}_{i,t},$ $\v_{i,t}$ and $\zeta_{i,t}$ are the parameter, gradient estimator, and random sample in the $t$th iteration at node $i$, respectively. Note that the estimator reduces to the classical stochastic gradient estimator when $\beta_t = 0$. On the other hand, if we set $\beta_t = 1$, the estimator becomes the (stochastic) gradient tracking estimator based on a single sample (implying low sample complexity). The key to the success of our GT-STORM design lies in meticulously choosing the parameter $\beta_t$ to mimic the gradient estimator technique in centralized optimization~\cite{cutkosky2019momentum,tran2019hybrid}. Lastly, the local parameters can be updated by the conventional decentralized stochastic gradient descent step: \begin{align} \label{eqn:main_update} \mathbf{x}_{i,t+1} = \sum\nolimits_{j \in \mathcal{N}_{i}} [\mathbf{W}]_{ij} \mathbf{x}_{j,t} - \eta_t \v_{i,t}, \end{align} where $\eta_t$ is the step-size in iteration $t$. To summarize, we state our algorithm in Algorithm~1 as follows. \medskip \hrule \vspace{.03in} \noindent {\bf Algorithm~1:} Gradient-Tracking-based Stochastic Recursive Momentum Algorithm (GT-STORM). \vspace{.03in} \hrule \vspace{0.1in} \noindent {\bf Initialization:} \begin{enumerate} [topsep=1pt, itemsep=-.1ex, leftmargin=.2in] \item[1.] Choose $T>0$ and let $t=1$. Set $\mathbf{x}_{i,0} = \mathbf{x}^0$ at node $i$. Calculate $\v_{i,0} = \nabla f_i(\mathbf{x}_{i,0};\zeta_{i,0})$ at node $i$.
\end{enumerate} \noindent {\bf Main Loop:} \begin{enumerate} [topsep=1pt, itemsep=-.1ex, leftmargin=.2in] \item[2.] In the $t$-th iteration, each node sends its local parameter $\mathbf{x}_{i,t-1}$ and local gradient estimator $\v_{i,t-1}$ to its neighbors. Meanwhile, upon receiving all neighbors' information, each node performs the following: \begin{enumerate} [topsep=1pt, itemsep=-.1ex, leftmargin=.18in] \item[a)] Update local parameter: $\mathbf{x}_{i,t} = \sum\nolimits_{j \in \mathcal{N}_{i}} [\mathbf{W}]_{ij} \mathbf{x}_{j,t-1} - \eta_{t-1} \v_{i,t-1}$. \item[b)] Update local gradient estimator: $\v_{i,t} = \beta_{t} \sum\nolimits_{j \in \mathcal{N}_{i}} [\mathbf{W}]_{ij} \v_{j,t-1}$ $+ \nabla f_{i}(\mathbf{x}_{i,t};\zeta_{i,t}) -\beta_{t} \nabla f_{i}(\mathbf{x}_{i,t-1};\zeta_{i,t})$. \end{enumerate} \item[3.] Stop if $t>T$; otherwise, let $t \leftarrow t+1$ and go to Step 2. \end{enumerate} \smallskip \hrule \medskip Two remarks for Algorithm~1 are in order. First, thanks to the single-loop structure, GT-STORM is easier to implement compared to the low-sample-complexity D-GET~\cite{sun2019improving} method, which has a double-loop structure. Second, GT-STORM only requires $p$ memory space due to the use of only one intermediate vector $\v$ at each node. In contrast, the memory complexity of D-GET is $2p$ (cf. $\mathbf{y}$ and $\v$ in \cite{sun2019improving}). This 50\% saving is significant, particularly for deep learning models, where the number of parameters could be in the range of millions.
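To make Steps 2a)--2b) of Algorithm~1 concrete, the following Python sketch simulates the per-node updates on a hypothetical five-node ring network with synthetic quadratic local losses; the data, topology, and schedule constants are illustrative assumptions only and do not correspond to the experimental setup in Section~\ref{Section: experiment}:
\begin{verbatim}
import numpy as np

# Minimal simulation of Algorithm 1 (Steps 2a-2b); all quantities
# below are synthetic and purely for illustration.
rng = np.random.default_rng(4)
m, p, eta0, rho = 5, 3, 0.1, 100.0
W = np.zeros((m, m))                  # ring-graph consensus matrix
for i in range(m):
    W[i, (i - 1) % m] = W[i, i] = W[i, (i + 1) % m] = 1 / 3

data = [rng.standard_normal((200, p)) for _ in range(m)]
targ = [rng.standard_normal(200) for _ in range(m)]

def stoch_grad(i, x, j):      # one-sample stochastic gradient at node i
    return data[i][j] * (data[i][j] @ x - targ[i][j])

x = np.zeros((m, p))          # x_{i,0} = x^0 at every node
v = np.stack([stoch_grad(i, x[i], rng.integers(200)) for i in range(m)])
for t in range(1, 300):
    eta = eta0 / (1 + 0.1 * t) ** (1 / 3)  # O(t^{-1/3}) step-size
    beta = max(0.0, 1 - rho * eta ** 2)    # beta = 1 - rho*eta^2, clipped
    x_new = W @ x - eta * v                # Step 2a): consensus + descent
    j = rng.integers(200, size=m)          # one fresh sample per node
    g_new = np.stack([stoch_grad(i, x_new[i], j[i]) for i in range(m)])
    g_old = np.stack([stoch_grad(i, x[i], j[i]) for i in range(m)])
    v = beta * (W @ v) + g_new - beta * g_old  # Step 2b): estimator update
    x = x_new
\end{verbatim}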
\subsection{Main Theoretical Results}\label{sec:main_results} In this section, we establish the complexity properties of the proposed GT-STORM algorithm. For better readability, we state the main theorem and its corollary in this section and defer the intermediate lemmas to Section~\ref{sec:proofs}. We start with the following assumptions on the global and local objectives: \begin{assum}\label{Assump: obj} The objective function $f(\mathbf{x}) = \frac{1}{m}\sum_{i=1}^{m} f_i(\mathbf{x})$ with $f_i(\mathbf{x}) = \mathbb{E}_{\zeta\sim\mathcal{D}_i} f_i(\mathbf{x};\zeta)$ satisfies the following assumptions: \begin{enumerate}[topsep=1pt, itemsep=-.1ex, leftmargin=.25in] \item[(a)] {\em Boundedness from below:} There exists a finite lower bound $f^* = \inf_{\mathbf{x}} f(\mathbf{x}) > -\infty;$ \item[(b)] {\em $L$-average smoothness:} $f_i(\cdot;\zeta)$ is $L$-average smooth on $\mathbb{R}^p$, i.e., there exists a positive constant $L,$ such that $\mathbb{E}_{\zeta\!\sim\!\mathcal{D}_i}[\|\nabla f_i(\mathbf{x};\zeta)\!-\! \nabla f_i(\mathbf{y};\zeta)\|^2]\! \le \!L^2\|\mathbf{x}\!-\!\mathbf{y}\|^2, \forall \mathbf{x},\mathbf{y} \in \mathbb{R}^p, i \in [m]$; \item[(c)] {\em Bounded variance:} There exists a constant $\sigma\ge 0$ such that $\mathbb{E}_{\zeta \sim \mathcal{D}_i}[\|\nabla f_i(\mathbf{x};\zeta) - \nabla f_i(\mathbf{x})\|^2] \le \sigma^2, \forall \mathbf{x} \in \mathbb{R}^p, i\in[m]$; \item[(d)] {\em Bounded gradient:} There exists a constant $G\ge 0$ such that $\mathbb{E}_{\zeta \sim\mathcal{D}_i}[\|\nabla f_i(\mathbf{x};\zeta)\|^2] \le G^2, \forall \mathbf{x} \in \mathbb{R}^p, i \in [m]$. \end{enumerate} \end{assum} In the above assumptions, (a) and (c) are standard in the stochastic non-convex optimization literature; (b) is an expected Lipschitz smoothness condition over the data distribution, which implies the conventional global Lipschitz smoothness \cite{ghadimi2013stochastic} by Jensen's inequality. Note that (b) is weaker than the individual Lipschitz smoothness in \cite{fang2018spider,wang2019spiderboost,sun2019improving}: if there exists an outlier data sample, the individual objective function might have a very large smoothness parameter while the average smoothness can still be small; (d) is equivalent to the Lipschitz continuity assumption, which is also commonly used for non-convex stochastic algorithms \cite{zhou2018generalization,karimireddy2019error,koloskova2019decentralized} and is essential for analyzing the decentralized gradient descent method \cite{yuan2016convergence,zeng2018nonconvex,jiang2017collaborative}.\footnote{Note that under assumption (b), as long as the parameter $\mathbf{x}$ is bounded, (d) is satisfied.} For convenience, in the subsequent analysis, we define $\tilde{\mathbf{W}} = \mathbf{W} \otimes \mathbf{I}_p,$ $\mathbf{g}_{i,t} = \nabla f_i(\mathbf{x}_{i,t}),$ $\u_{i,t} = \nabla f_i(\mathbf{x}_{i,t};\zeta_{i,t})$, $\mathbf{w}_{i,t} = \nabla f_i(\mathbf{x}_{i,t};\zeta_{i,t}) - \nabla f_i(\mathbf{x}_{i,t-1};\zeta_{i,t})$, and $\a_{t} = [\a_{1,t}^\top,\cdots,\a_{m,t}^\top]^\top,$ $\bar{\a}_t =\frac{1}{m} \sum_{i=1}^{m}\a_{i,t},$ for $\a \in \{\mathbf{x},\u,\mathbf{w},\v,\mathbf{g}\}$. Then, the algorithm can be compactly rewritten in the following matrix-vector form: \begin{align} \mathbf{x}_t &= \tilde{\mathbf{W}}\mathbf{x}_{t-1} -\eta_{t-1} \v_{t-1},\label{Eq: alg_updating_x}\\ \v_t &= \beta_t \tilde{\mathbf{W}}\v_{t-1} + \beta_t\mathbf{w}_{t} + (1-\beta_t)\u_{t}\label{Eq: alg_updating_v}. \end{align} Furthermore, since $\mathbf{1}^\top\mathbf{W} = \mathbf{1}^\top,$ we have $\mathbf{\bar{x}}_t = \mathbf{\bar{x}}_{t-1} - \eta_{t-1} \bar{\mathbf{v}}_{t-1}$ and $\bar{\mathbf{v}}_{t} = \beta_t {\bar{\mathbf{v}}}_{t-1} +\beta_t\bar{\mathbf{w}}_t+(1-\beta_t)\bar{\u}_{t}.$ We first state the convergence result for Algorithm~1 as follows: \begin{thm}\label{Theorem: convergence1} Under Assumption~1 and with positive constants $c_0$ and $c_1$ satisfying $1-(1+c_1)\lambda^2 - \frac{1}{c_0} > 0$, if we set $\eta_t = \tau/(\omega + t)^{1/3}$ and $\beta_{t+1} = 1- \rho\eta_t^2$, with $\tau >0,$ $\omega \ge \max\{2,\tau^3/\min\{k_1^3,k_2^3,k_3^3\}\}$ and $\rho = 2/(3\tau^3) + 32 L^2$, then we have the following result for Algorithm~1: \begin{align}\label{Eq: convergence1} &\min_{t\in[T]} \mathbb{E}[\|\nabla f(\mathbf{\bar{x}}_{t})\|^2]+ \frac{1}{m}\mathbb{E}[\|\mathbf{x}_{t}-\mathbf{1}\otimes\mathbf{\bar{x}}_{t}\|^2] \notag\\ \le & \frac{2(f(\mathbf{\bar{x}}_{0}) - f(\mathbf{\bar{x}}^*))}{\tau{(T+1)}^{2/3}} + \frac{2c_0\mathbb{E}[\|\v_{0}-\mathbf{1}\otimes\bar{\mathbf{v}}_{0}\|^2]}{m\tau{(T+1)}^{2/3}} \notag\\ & + \frac{(\omega - 1)\sigma^2}{16mL^2\tau^2{(T+1)}^{2/3}} + \frac{\rho^2\sigma^2\ln(\omega+T-1)}{8mL^2(T+1)^{2/3}}\notag\\ & + \frac{12(1 + \frac{1}{c_1})c_0\tau^{1/3}G^2\rho^2}{(\omega - 1)^{1/3}(T+1)^{2/3}} + O\Big(\frac{c_3\omega}{\tau T^{5/3}}\Big), \end{align} where $c_3 = \max\{1,\omega/(m\tau^2), \tau^{4/3}/\omega^{1/3}, \tau \ln(\omega +T)/m\},$ and the constants $k_1,$ $k_2$ and $k_3$ are: \begin{align} k_1 &= 1/\Big(2L + 32(1 + \frac{1}{c_1})c_0L^2\Big),\\ k_2 &= \Big(1- (1+c_1)\lambda^2\Big) / \Big(1+\frac{1}{c_1} + \frac{1}{c_0}\Big) ,\\ k_3 &=\sqrt{\Big(1-( 1+c_1)\lambda^2 - \frac{1}{c_0}\Big)/\Big(\frac{2}{3\tau^3} + \frac{2L^2+1}{2c_0}\Big)}.
\end{align} \end{thm} In Theorem \ref{Theorem: convergence1}, $c_0$ and $c_1$ are two constants depending on the network topology, which in turn affect the step-size selection and the convergence rate. With a sparse network, $\lambda$ is close to (but not exactly) one (recall that $\lambda = \max\{|\lambda_2|,|\lambda_m|\}$). In order for $1-(1+c_1)\lambda^2 - \frac{1}{c_0} > 0$ to hold, $c_0$ needs to be large and $c_1$ needs to be close to zero, which leads to small $k_1,$ $k_2$ and $k_3.$ Note that the step-size $\eta_t$ is of the order $O(t^{-1/3}),$ which is larger than the $O(t^{-1/2})$ order for classical decentralized SGD algorithms. With this larger step-size, the convergence rate is $O(T^{-2/3})$, faster than the $O(T^{-1/2})$ rate of the decentralized SGD algorithms. Based on Theorem~\ref{Theorem: convergence1}, we have the following sample and communication complexity results for Algorithm~1: \begin{cor}\label{Cor: complexity} Under the conditions in Theorem~\ref{Theorem: convergence1}, if $\tau = O(m^{1/3})$ and $\omega = O(m^{4/3})$, then to achieve an $\epsilon^2$-stationary solution, the total number of communication rounds is on the order of $\tilde{O}(m^{-1/2}\epsilon^{-3})$ and the total number of samples evaluated across the network is on the order of $\tilde{O}(m^{1/2}\epsilon^{-3}).$ \end{cor} \subsection{Proofs of the Theoretical Results} \label{sec:proofs} Due to space limitations, we provide a proof sketch for Theorem~\ref{Theorem: convergence1} here and relegate the details to the appendices. First, we bound the error of the gradient estimator $\mathbb{E}[\|\v_t - \mathbf{g}_t \|^2 ]$ as follows: \begin{lem}[Error of Gradient Estimator]\label{Lemma: Error of v} Under Assumption \ref{Assump: obj} and with $\v_t$ defined in (\ref{Eq: alg_updating_v}), it holds that $\mathbb{E}[\|\bar{\mathbf{v}}_t - \bar{\mathbf{g}}_t \|^2 ] \le \beta_t^2 \mathbb{E}[\|\bar{\mathbf{v}}_{t-1} - \bar{\mathbf{g}}_{t-1}\|^2] + \frac{2\beta_t^2L^2}{m}\mathbb{E}[\|\mathbf{x}_{t}-\mathbf{x}_{t-1}\|^2] + \frac{2(1-\beta_t)^2\sigma^2}{m}$. \end{lem} It can be seen that the upper bound depends on the error in the previous step with a factor $\beta_t^2$. This will be helpful when we construct a potential function. Then, according to the algorithm updates (\ref{Eq: alg_updating_x})--(\ref{Eq: alg_updating_v}), we show the following descent inequality: \begin{lem}[Descent Lemma]\label{Lemma: Descend lemma} Under Assumption \ref{Assump: obj}, Algorithm~1 satisfies: $\mathbb{E}[f(\mathbf{\bar{x}}_{t+1})]-\mathbb{E}[f(\mathbf{\bar{x}}_{t})] \le - \frac{\eta_t}{2}\mathbb{E}[\|\nabla f(\mathbf{\bar{x}}_{t})\|^2] - (\frac{\eta_t}{2} - \frac{L\eta_t^2}{2}) \times$ $\mathbb{E}[\|\bar{\mathbf{v}}_t\|^2] + \eta_t\mathbb{E}[\|\bar{\mathbf{v}}_t - \bar{\mathbf{g}}_t\|^2] + \frac{L^2\eta_t}{m}\mathbb{E}[\|\mathbf{x}_t-\mathbf{1}\otimes\mathbf{\bar{x}}_{t} \|^2 ]$. \end{lem} We remark that the right-hand side (RHS) of the above inequality contains the consensus error of the local parameters $\mathbb{E}[\|\mathbf{x}_t-\mathbf{1}\otimes\mathbf{\bar{x}}_{t} \|^2 ]$, which makes the analysis more difficult than that of centralized optimization. Next, we prove the contraction of the iterates in the following lemma, which is useful in analyzing decentralized gradient tracking algorithms.
\begin{lem}[Iterates Contraction]\label{Lemma: Iterates Contraction} The following contraction properties of the iterates produced by Algorithm~1 hold: \begin{align} \|\mathbf{x}_{t+1}-\mathbf{1}\otimes\mathbf{\bar{x}}_{t+1}\|^2 \le (1+c_1)&\lambda^2\|\mathbf{x}_{t} -\mathbf{1}\otimes\mathbf{\bar{x}}_{t} \|^2 \notag\\ &+ (1+\frac{1}{c_1}) \eta_{t}^2\|\v_{t}-\mathbf{1} \otimes \bar{\mathbf{v}}_{t}\|^2, \end{align} \vspace{-.2in} \begin{align} \|\v_{t+1}-\mathbf{1}&\otimes\bar{\mathbf{v}}_{t+1}\|^2 \le (1+c_1)\beta_{t+1}^2\lambda^2\|\v_{t} - \mathbf{1} \otimes\bar{\mathbf{v}}_{t}\|^2 \notag\\ & + 2(1 + \frac{1}{c_1})\big(\beta_{t+1}^2\|\mathbf{w}_{t+1}\|^2 + (1-\beta_{t+1})^2\|\u_{t+1}\|^2\big), \end{align} where $c_1$ is a positive constant. Additionally, we have \begin{align} \|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^2 \le 8\|(\mathbf{x}_{t} &- \mathbf{1}\otimes\mathbf{\bar{x}}_{t}) \|^2 \notag\\ &+ 4\eta_t^2 \|\v_{t} - \mathbf{1}\otimes\bar{\mathbf{v}}_{t}\|^2 + 4\eta_t^2m\|\bar{\mathbf{v}}_{t}\|^2. \end{align} \end{lem} Finally, we define a potential function in (\ref{Eq: potential function}), based on which we prove the convergence bound: \begin{lem}[Convergence of Potential Function]\label{Lemma: Potential Func} Define the following potential function: \begin{align}\label{Eq: potential function} H_t = \mathbb{E}[f(\mathbf{\bar{x}}_{t})+ \frac{1}{32L^2\eta_{t-1}}\|\bar{\mathbf{g}}_{t} - \bar{\mathbf{v}}_{t}\|^2& + \frac{c_0}{m\eta_{t-1}}\|\mathbf{x}_{t}-\mathbf{1}\otimes\mathbf{\bar{x}}_{t}\|^2 \notag\\ &+ \frac{c_0}{m}\|\v_{t}-\mathbf{1}\otimes\bar{\mathbf{v}}_{t}\|^2], \end{align} where $c_0$ is a positive constant. Under Assumption \ref{Assump: obj}, if we set $\eta_t = \tau/(\omega + t)^{1/3}$ and $\beta_{t+1} = 1- \rho\eta_t^2$, where $\tau > 0,$ $\omega \ge 2,$ and $\rho = 2/(3\tau^3) + 32 L^2$ are constants, then it holds that: \begin{align} H_{t+1} -H_t \le & - \frac{\eta_t}{2}\mathbb{E}[\|\nabla f(\mathbf{\bar{x}}_{t})\|^2] + \frac{\rho^2\sigma^2\eta_t^3}{16mL^2} + 2(1 + \frac{1}{c_1})c_0G^2\rho^2\eta_t^4 \notag\\ & - \frac{c_0C_1}{m\eta_t}\mathbb{E}[\|\mathbf{x}_{t}-\mathbf{1}\otimes\mathbf{\bar{x}}_{t}\|^2] - \frac{c_0C_2}{m}\mathbb{E}[\|\v_{t} - \mathbf{1} \otimes\bar{\mathbf{v}}_{t}\|^2] \notag\\ & - \frac{C_3\eta_t}{4}\mathbb{E}[\|\bar{\mathbf{v}}_t\|^2], \end{align} where $C_1,$ $C_2,$ and $C_3$ are the following constants: $C_1 = 1-( 1+c_1)\lambda^2 - \frac{1}{2c_0} - 16(1 + \frac{1}{c_1})L^2\eta_t - \Big(\frac{2}{3\tau^3} + \frac{L^2}{c_0}\Big)\eta_t^2,$ $C_2 = 1- (1+c_1)\lambda^2 - (1+\frac{1}{c_1})\eta_t - \frac{\eta_t}{4c_0} - 8(1 + \frac{1}{c_1})L^2\eta_{t}^2$, $C_3 = 1 - 2L\eta_t - 32(1 + \frac{1}{c_1})c_0L^2\eta_t$. \end{lem} Then, by properly selecting the parameters, the constants $C_1,$ $C_2$ and $C_3$ can be made non-negative, which leads to Theorem \ref{Theorem: convergence1}. \section{Experimental Results}\label{Section: experiment} In this section, we conduct experiments on several non-convex machine learning problems to evaluate the performance of our method.
In particular, we compare our algorithm with the following state-of-the-art {\em single-loop} algorithms: \begin{list}{\labelitemi}{\leftmargin=1em \itemindent=0.em \itemsep=.2em} \item DSGD \cite{nedic2009distributed,yuan2016convergence,jiang2017collaborative}: Each node performs: $\mathbf{x}_{i,t+1} = \sum_{j \in \mathcal{N}_{i}}$ $[\mathbf{W}]_{ij}\mathbf{x}_{j,t} - \eta \nabla f_i (\mathbf{x}_{i,t}; \zeta_{i,t})$, where the stochastic gradient $\nabla f_i (\mathbf{x}_{i,t}; \zeta_{i,t})$ is evaluated on the random sample $\zeta_{i,t}$. Then, each node exchanges the local parameter $\mathbf{x}_{i,t}$ with its neighbors. \item GNSD \cite{lu2019gnsd}: Each node keeps two variables $\mathbf{x}_{i,t}$ and $\mathbf{y}_{i,t}$. The local parameter $\mathbf{x}_{i,t}$ is updated as $\mathbf{x}_{i,t+1} \!=\! \sum_{j \in \mathcal{N}_{i}} [\mathbf{W}]_{ij}\mathbf{x}_{j,t}\! -\! \eta \mathbf{y}_{i,t}$ and the tracked gradient $\mathbf{y}_{i,t}$ is updated as $\mathbf{y}_{i,t+1} \!=\! \sum_{j \in \mathcal{N}_{i}} [\mathbf{W}]_{ij}\mathbf{y}_{j,t} \! + \! \nabla f_i (\mathbf{x}_{i,t+1}; \zeta_{i,t+1})\! - \!\nabla f_i (\mathbf{x}_{i,t}; \zeta_{i,t}).$ \end{list} Here, we compare with the above two stochastic algorithms because they both employ a single-loop structure and do not require full gradient evaluations. We note that it is hard to have a direct and fair numerical comparison with D-GET~\cite{sun2019improving}, since D-GET uses full gradients and has a double-loop structure. \smallskip {\em \underline{Network Model:}} The communication graph $\mathcal{G}$ is generated as an Erd\H{o}s-R\'enyi graph with edge connectivity probability $p_c$ and $m$ nodes. We set $m=10$ and the edge connectivity probability as $p_c =0.5$. The consensus matrix is chosen as $\mathbf{W} = \mathbf{I} - \frac{2}{3\lambda_{\text{max}}(\mathbf{L})} \mathbf{L},$ where $\mathbf{L}$ is the Laplacian matrix of $\mathcal{G}$, and $\lambda_{\text{max}}(\mathbf{L})$ denotes the largest eigenvalue of $\mathbf{L}$. \smallskip {\bf 1) Non-convex logistic regression:} In our first experiment, we consider the binary logistic regression problem with a non-convex regularizer \cite{wang2018cubic,wang2019spiderboost,tran2019hybrid}: \begin{align} \min_{\mathbf{x} \in \mathbb{R}^d} -\frac{1}{mn} &\sum_{i=1}^{m}\sum_{j=1}^{n} [y_{ij}\log \big(\frac{1}{1+e^{-\mathbf{x}^\top\zeta_{ij}}} \big) + \notag\\ &(1-y_{ij}) \log \big(\frac{e^{-\mathbf{x}^\top\zeta_{ij}}}{1+e^{-\mathbf{x}^\top\zeta_{ij}}}\big)]+ \alpha \sum_{i=1}^{d} \frac{\mathbf{x}_i^2}{1+\mathbf{x}_i^2}, \end{align} where the label $y_{ij} \in \{0,1\},$ the feature $\zeta_{ij} \in \mathbb{R}^{d}$, and $\alpha =0.1$. \smallskip {\em 1-a) \underline{Datasets:}} We consider three commonly used binary classification datasets from LibSVM: $a9a$, $rcv1.binary$ and $ijcnn1$. The $a9a$ dataset has $32561$ samples and $123$ features; the $rcv1.binary$ dataset has $20242$ samples and $47236$ features; and the $ijcnn1$ dataset has $49990$ samples and $22$ features. We evenly divide each dataset into $m$ sub-datasets corresponding to the $m$ nodes. \smallskip {\em 1-b) \underline{Parameters:}} For all algorithms, we set the batch size to one, and the initial step-size $\eta_0$ is tuned by searching over the grid $\{0.01, 0.02, 0.05 , 0.1, 0.2, 0.5, 1.0\}.$ For DSGD and GNSD, the step-size is set to $\eta_t = \eta_0/\sqrt{1+0.1 t}$, which is on the order of $O(t^{-1/2})$ following the state-of-the-art theoretical result \cite{lu2019gnsd}.
For GT-STORM, the step-size is set as $\eta_t = \eta_0/\sqrt[3]{1+0.1 t}$, which is on the order of $O(t^{-1/3})$ as specified in our theoretical result. In addition, we choose the parameter $\rho$ for GT-STORM as $1/\eta_0^2$, so that $\beta_1 = 0$ in the first step. \smallskip {\em 1-c) \underline{Results:}} We first compare the convergence rates of the algorithms. We adopt the consensus loss defined on the left-hand side (LHS) of (\ref{Eq: FOSP_network}) as the criterion. After tuning, the best initial step-sizes are $0.1,$ $0.5$ and $0.2$ for $a9a$, $ijcnn1$ and $rcv1.binary,$ respectively. The results are shown in Figs.~\ref{fig_a}--\ref{fig_c}. It can be seen that our algorithm performs better: for the $a9a$ and $rcv1.binary$ datasets, all algorithms reach almost the same accuracy, but our algorithm converges faster; for the $ijcnn1$ dataset, our algorithm outperforms the other methods in both speed and accuracy. \begin{figure*}[t!] \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=1\textwidth]{simu1_convergence_a9a.pdf} \caption{Non-convex logistic regression on LibSVM: a9a.} \label{fig_a} \end{minipage}% \hspace{0.005\linewidth} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1\textwidth]{simu1_convergence_rcv.pdf} \caption{Non-convex logistic regression on LibSVM: ijcnn1.} \label{fig_b} \end{minipage}% \hspace{0.005\textwidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=1\textwidth]{simu1_convergence_ijcnn.pdf} \caption{Non-convex logistic regression on LibSVM: rcv1.}\label{fig_c} \end{minipage}% \hspace{0.005\linewidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=1\textwidth]{simu2_rho.pdf} \caption{Non-convex logistic regression: The effect of $\rho$.}\label{fig_d} \end{minipage}% \vspace{-.1in} \end{figure*} \begin{figure*}[t!] \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=1\textwidth]{simu2_p.pdf} \caption{Non-convex logistic regression: The effect of $p_c$.} \label{fig_e} \end{minipage}% \hspace{0.005\linewidth} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1\textwidth]{simu2_m.pdf} \caption{Non-convex logistic regression: The effect of $m$.} \label{fig_f} \end{minipage}% \hspace{0.005\textwidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=1\textwidth]{dnn_mnist2_accy.pdf} \caption{CNN experimental results on MNIST dataset.}\label{fig_g} \end{minipage}% \hspace{0.005\linewidth} \begin{minipage}[t]{0.24\linewidth} \includegraphics[width=1\textwidth]{dnn_cifar2_accy.pdf} \caption{CNN experimental results on CIFAR-10 dataset.}\label{fig_h} \end{minipage}% \vspace{-.15in} \end{figure*} Next, we examine the effect of the parameter $\rho$ on our algorithm. We focus on the $a9a$ dataset and fix the initial step-size as $\eta_0\!=\!0.1$. We choose $\rho$ from $\{10^{-1},10^0, 10^1,10^2\}.$ Note that $\rho = 10^2$ corresponds to the case $\rho \!=\! 1/\eta_0^2.$ The results are shown in Fig.~\ref{fig_d}. It can be seen that the case $\rho \!=\! 10^1$ has the best performance, followed by the case $\rho = 10^2.$ Also, as $\rho$ decreases, the convergence speed becomes slower (see the cases $\rho \!=\! 10^{-1}$ and $10^0$). In addition, we examine the effect of the network topology. We first fix the number of workers as $m = 10$ and change the edge connectivity probability $p_c$ from $0.35$ to $0.9.$ Note that with a smaller $p_c,$ the network becomes sparser. We set $\eta_0 = 0.1$ and $\rho = 10^2.$ The results are shown in Fig.~\ref{fig_e}.
Under different $p_c$-values, our algorithm has a similar performance in terms of convergence speed and accuracy. However, with a larger $p_c$-value, i.e., a denser network, the convergence speed slightly increases (see the zoom-in view in Fig.~\ref{fig_e}). Then, we fix the edge connectivity probability at $p_c = 0.5$ but change the number of workers $m$ from $10$ to $50.$ We show the results in Fig.~\ref{fig_f}. It can be seen that with more workers, the algorithm converges faster and reaches a better accuracy. \smallskip {\bf 2) Convolutional neural networks:}\label{Section: Experiment_CNN} We use all three algorithms to train a convolutional neural network (CNN) model for image classification on the MNIST and CIFAR-10 datasets. We adopt the same network topology as in the previous experiment. We use a non-identically distributed data partition strategy: the $i$th machine can only access the data with the $i$th label. We fix the initial step-size as $\eta_0 = 0.01$ for all three algorithms, and the remaining settings are the same as in the previous experiment. {\em 2-a) \underline{Learning Models:}} For MNIST, the adopted CNN model has two convolutional layers (first of size $1 \times 16 \times 5$ and then of size $16 \times 32 \times 5$), each of which is followed by a max-pooling layer of size $2\times 2$, and then a fully connected layer. The ReLU activation is used for the two convolutional layers, and the ``softmax'' activation is applied at the output layer. The batch size is 64 for the CNN training on MNIST. For CIFAR-10, we apply the CNN model with two convolutional layers (first of size $3 \times 6 \times 5$ and then of size $6 \times 16 \times 5$). Each of the convolutional layers is followed by a max-pooling layer of size $2\times 2$, and then three fully connected layers. The ReLU activation is used for the two convolutional layers and the first two fully connected layers, and the ``softmax'' activation is applied at the output layer. The batch size is chosen as 128 for the CNN training on CIFAR-10. {\em 2-b) \underline{Results:}} Figs.~\ref{fig_g} and~\ref{fig_h} illustrate the testing accuracy of the different algorithms versus iterations on the MNIST and CIFAR-10 datasets, respectively. It can be seen from Fig.~\ref{fig_g} that on the MNIST dataset, GNSD and GT-STORM have similar performance, but our GT-STORM maintains a faster speed and a better prediction accuracy. Compared with DSGD, our GT-STORM gains about $10\%$ more accuracy. On the CIFAR-10 dataset (see Fig.~\ref{fig_h}), the performances of DSGD and GNSD deteriorate, while GT-STORM achieves a better accuracy. Specifically, the accuracy of GT-STORM is around $15\%$ higher than that of GNSD and $25\%$ higher than that of DSGD. \section{Conclusion}\label{Section: conclusion} In this paper, we proposed a gradient-tracking-based stochastic recursive momentum (GT-STORM) algorithm for decentralized non-convex optimization, which enjoys low sample, communication, and memory complexities. Our algorithm fuses the gradient tracking estimator and the variance reduction estimator and has a simple single-loop structure. Thus, it is more practical compared to existing works (e.g., GT-SAGA/SVRG and D-GET) in the literature. We have also conducted extensive numerical studies to verify the performance of our method, including non-convex logistic regression and neural networks. The numerical results show that our method outperforms the state-of-the-art methods when training on large datasets.
Our results in this work contribute to the increasingly important field of decentralized network training. \bibliographystyle{IEEEtran}
Floaters are opacities in the vitreous cavity that occur as a result of the normal ageing process or, less commonly, due to eye disease such as uveitis, infection, intraocular haemorrhage, retinal tears, or retinal detachment (RD). Floaters are a common complaint noted in optometric practice but do not cause substantial problems for the majority of patients.1 However, for a minority of patients, the impact of floaters is significant. A 2011 survey of 266 patients with symptomatic floaters reported that the floaters were associated with a negative impact on quality of life comparable to eye conditions such as age-related macular degeneration or diabetic retinopathy, and systemic diseases including stroke and colon cancer.2 These patients were willing to trade an average 1.1 years out of every 10 years of their remaining life and take, on average, an 11 per cent risk of death and a 7 per cent risk of blindness to get rid of symptoms relating to floaters. YAG laser vitreolysis and vitrectomy surgery are becoming increasingly accepted as effective treatment options for the motivated and symptomatic patient. Cataract surgery is a major risk factor for posterior vitreous detachment (PVD), with over 50 per cent of patients developing a PVD within one year of cataract surgery. Potential mechanisms for post-operative PVD induction include a decrease of hyaluronic acid concentration leading to vitreous liquefaction, and the loss of lens volume leading to forward movement of the vitreous base. Although most floaters seen in clinical practice are degenerative floaters related to the ageing process, it is important to exclude other pathological causes of floaters (Table 1). The aims of clinical assessment of patients complaining about floaters are to exclude the presence of sight-threatening pathology (especially a retinal tear or RD), determine the cause of the floaters, and assess the impact that the floaters are having on vision and quality of life. Specific features of the floaters, including their duration (acute or chronic), shape, size and number, should be noted. The type and magnitude of refractive error should be determined. Floaters are more common and often more symptomatic in myopes. A history of refractive or cataract surgery should be sought. Patients with floaters due to PVD are at risk of vitreous haemorrhage, retinal tear and detachment. These patients may report seeing flashing lights due to vitreoretinal traction or loss of peripheral vision due to RD.18 Any history of ocular trauma should be noted. Uveitis may present with floaters caused by inflammatory cells that have gained access to the vitreous cavity via a compromised blood-retina barrier. Patients should be questioned about concurrent ocular symptoms of uveitis such as photophobia, redness or discomfort, as well as features associated with systemic disease such as lethargy, weight loss, fever, cough, joint pain or rashes.19-21 Systemic causes of uveitis that may cause floaters include a wide variety of autoimmune, infectious and idiopathic conditions. Visual acuity is typically unaffected by degenerative floaters unless they block the visual axis, but they can cause a reduction in contrast sensitivity.23,24 Patients with extensive floaters associated with vitreous haemorrhage or intraocular inflammation may have reduced visual acuity. Slit lamp examination of the anterior segment should assess for signs of uveitis (anterior chamber cells, keratic precipitates, posterior synechiae).
The anterior vitreous can be visualised behind the lens using a bright thin slit beam offset by 10 degrees from the visual axis. The presence or absence of floaters, pigment ('tobacco dust'), red blood cells or white cells behind the lens should be carefully assessed. Asking the patient to quickly look down and then straight ahead mobilises the vitreous and may bring pigment in the inferior vitreous into view behind the lens. The finding of tobacco dust (a positive Shafer's sign) indicates the likely presence of a retinal tear. If a PVD is advanced (i.e. the posterior vitreous has separated from the posterior pole and has moved anteriorly behind the lens), the posterior hyaloid membrane may be visible as a well-defined crinkly membrane behind the lens. The presence of a PVD is commonly associated with a Weiss ring, representing the remnant of the vitreopapillary attachment and the peripapillary glial tissue at the optic disc.24 This can typically be seen directly anterior to the optic disc, with patients describing a circular or semi-circular shaped floater. OCT may be helpful to visualise floaters located within the pre-macular bursa area, which is an optically empty vitreous cistern located immediately anterior to the macula (see Case 2). Specific treatment of associated eye disease is directed at the cause, e.g. retinal tears, RD, intraocular haemorrhage, intraocular infection, posterior uveitis, or intraocular malignancy, and is beyond the scope of this article. Reassurance that there is no serious underlying disease present can be sufficient to allay the fears of many patients and avoid the need for surgery. Some patients can become distressed or depressed about floater-related symptoms and may benefit from psychiatric assessment by their general practitioner or a mental health professional. YAG laser vitreolysis involves the use of a YAG laser to vaporise floaters to reduce floater-related symptoms. It is performed in a clinic setting using topical anaesthesia, a fundus contact lens and a slit-lamp-mounted YAG laser. Depending on the number and size of floaters, multiple treatment sessions may be needed. In many cases only partial, but not complete, resolution of floater symptoms is possible. A 2017 randomised clinical trial comparing a single session of YAG vitreolysis versus sham treatment (control) for symptomatic floaters in 52 eyes showed a significantly greater symptomatic improvement (54 per cent) in the treatment group than controls (9 per cent).32 Significant or complete resolution of symptoms was reported by 53 per cent of patients at postoperative month six. Several measures of quality of life also improved compared with those in the sham laser group, including general vision, peripheral vision, and independence. The YAG group reported numerous improvements six months after treatment, including in near and distance activities and mental health. No complications were noted in either group. Surgical removal of floaters is an option if symptoms significantly affect the patient's quality of life and the patient has been unresponsive to, or unsuitable for, YAG laser vitreolysis. Vitrectomy surgery typically allows removal of all symptomatic floaters and results in high levels of patient satisfaction. The surgery is usually performed under local anaesthesia with sedation as a day procedure. A pars plana vitrectomy using 23, 25 or 27 gauge probes is performed to remove the vitreous and associated floaters.
A 58-year-old man was referred for treatment of a highly symptomatic ring-shaped floater, which had developed acutely four weeks prior. He reported that the floater was "making life miserable" and making it difficult to read comfortably. He requested YAG laser vitreolysis or vitrectomy surgery after having read about the treatments on the internet. Examination demonstrated a PVD and a prominent Weiss ring floater (Figure 1), with no additional ophthalmic pathology visible. The patient was advised against surgical intervention because his symptoms had been present for only four weeks. He was counselled that there was a high chance the symptoms would improve spontaneously with time, but that if they did not resolve, there were effective treatment options available. When reviewed three months later, the floater was still present, but the patient reported that he had become accustomed to it and was no longer troubled enough to desire surgical intervention. Acute onset floaters associated with a PVD are often initially highly symptomatic, but in a majority of patients, the symptoms diminish within three months due to a combination of neuro-adaptation and the floaters gravitating away from the visual axis. Due to the high rate of symptomatic improvement, it is prudent to wait at least three to six months before considering any surgical intervention. A 26-year-old emmetropic woman complained of a six-year history of multiple dot-shaped floaters, which were constantly visible in her central visual field. She reported that the constant distraction caused by the floaters had caused her to become socially withdrawn and anxious. Examination by numerous optometrists over the previous six years had revealed no visible pathology. The patient presented requesting YAG vitreolysis after having researched treatment for floaters on the internet. Careful high-magnification visualisation using a Goldmann 3-mirror lens revealed multiple small floaters in close proximity to the macula. OCT scanning demonstrated floaters in the premacular bursa (Figure 2). The patient was advised that YAG laser vitreolysis was not an appropriate treatment option due to the proximity of the floaters to the macula. She was advised that vitrectomy surgery was possible to remove the floaters and alleviate her symptoms, but was counselled against it due to the likelihood of developing a post-vitrectomy cataract, which would require cataract surgery and lead to a loss of accommodation. Positive aspects of her situation were highlighted and reinforced (i.e. no serious sight-threatening pathology was present, and the option of surgery was available if her quality of life was severely affected by the floaters in the long term). She was referred to a general practitioner for management of anxiety. At review six months later, the patient reported a moderate reduction in symptoms and improvement in quality of life. She had started anxiolytic medication and learned cognitive behavioural coping strategies to distract her from thinking about the floaters under the guidance of a clinical psychologist. She felt her symptoms had improved to a point that she no longer desired surgical intervention. Some young patients have floaters localised to the premacular bursa which are difficult to visualise on slit lamp examination. Due to their proximity to the retina, floaters in the pre-macular bursa can cause a high level of symptoms.
It is not uncommon for patients with floaters in the pre-macular bursa to have visited numerous eye care practitioners who have been unable to identify the troublesome floaters. This can lead to a loss of confidence by the patient and prompt them to 'doctor shop'. This case highlights the benefits of a holistic approach to patient care in which mental state and feelings should be considered and managed, in this case by positive reinforcement, psychological counselling and psychoactive medication when appropriate. A 45-year-old man with an 18-month history of highly symptomatic floaters requested treatment because he felt they were interfering with his ability to drive safely. Examination demonstrated a large clump of degenerative floaters in the central vitreous cavity suspended close to the visual axis (Figure 3). No other ocular pathology was seen. After a thorough informed consent process, YAG laser vitreolysis was performed using 200 laser shots to partially vaporise the floaters. Immediately after the laser treatment, the floaters were noted to be less dense, and numerous gas bubbles generated by the laser were visible (Figure 4). A second session of laser was performed the same day, using an additional 200 laser shots, following which the floaters were noted to be barely visible (Figure 5). When reviewed one month postoperatively, the patient was symptom-free. No complications were noted. This case demonstrates the potential efficacy of YAG laser vitreolysis in reducing floater-related symptoms. Easily visualised, isolated large symptomatic floaters localised to the mid-vitreous cavity are often suitable for YAG vitreolysis. The ideal candidate for YAG laser vitreolysis is a highly symptomatic patient with a small number of easily visualised floaters located in the mid-vitreous cavity (or located in the anterior vitreous cavity in pseudophakes). This treatment is not suitable for patients with floaters that cannot be easily visualised on slit lamp examination, those with large numbers of floaters, or those with floaters located close to the retina or crystalline lens (but floaters close to an IOL can be safely treated) due to the risk of secondary damage from aberrant laser pulses. Floaters located within the pre-macular bursa are too close to the macula for safe application of laser. A 76-year-old woman complained of blurred vision persisting for six months following a YAG laser posterior capsulotomy for treatment of posterior capsule opacification. Examination revealed a free-floating capsular remnant in the anterior vitreous located in the visual axis behind an IOL (Figure 6). Additional ocular pathology was excluded. YAG laser vitreolysis was performed to ablate the centrally located floater. When reviewed three months postoperatively, the patient was symptom-free. Floaters may develop following YAG laser posterior capsulotomy, particularly if a circular pattern (or can opener) technique is employed.35 The capsular remnants are often suitable for treatment with YAG laser vitreolysis because they are few in number and easily visualised when applying YAG laser shots. Additionally, the eye's pseudophakic status removes the risk of cataract formation that can occur in phakic eyes. A 50-year-old lawyer was referred complaining of disturbing floaters, which were noted following cataract surgery with implantation of IOLs 18 months prior. The patient had been a moderately high myope (-5.00D in both eyes) prior to surgery.
The symptoms were impacting her ability to read comfortably and interfering with her work. Examination revealed numerous strand-like vitreous floaters distributed throughout the vitreous, with many localised in the visual axis (Figure 7). Additional ocular pathology was excluded. YAG laser vitreolysis was considered to be inappropriate due to the large number of floaters and their location close to the retina. After extensive discussion regarding the option of conservative management and the potential risks of surgery, vitrectomy was performed without complication. When reviewed three months postoperatively, the patient was symptom-free (Figure 8). This case illustrates floaters resulting from a PVD induced by cataract surgery. Of note, multifocal IOLs may increase floater symptoms because the multiple IOL focal planes allow multiple floaters at different locations in the vitreous to be seen more clearly. Pseudophakic patients are often ideal candidates for vitrectomy surgery because the absence of a crystalline lens and the usual presence of a PVD make surgery technically easier. This means the surgeon does not need to worry about inadvertently touching the lens with the vitrectomy instruments and does not need to induce a PVD. Additionally, the procedure is potentially safer because there is no risk of cataract formation, and the risk of postoperative RD is potentially reduced when a PVD does not need to be induced. When assessing patients complaining about floaters, it is important to exclude sight-threatening conditions such as retinal tear, RD and uveitis, amongst others. In patients with degenerative floaters, a conservative 'watch and wait' approach is appropriate for the large majority. Clinicians should be aware that floaters can have a significant impact on vision and quality of life, so patients complaining of chronic symptomatic floaters should be informed about the potential pros and cons of treatment options such as YAG laser vitreolysis or vitrectomy surgery, which may be effective in reducing or eliminating floater-related symptoms in motivated patients. Dr. Simon Chen MBBS, FRANZCO is an experienced cataract and retinal surgeon at Vision Eye Institute in Chatswood, Bondi Junction, and Drummoyne in Sydney. He is also Conjoint Senior Lecturer at UNSW. He has an interest in performing complex cataract surgery in patients with retinal disease or ocular trauma. Dr. Chen has had the privilege of performing cataract or retinal surgery on over 100 Sydney optometrists and their closest relatives and was the first surgeon in the world to perform femtosecond laser cataract surgery combined with vitrectomy surgery. Dr. Chris Hodge (PhD) is a clinical research coordinator at Vision Eye Institute. 9. Foos RY, Wheeler NC. Vitreoretinal juncture. Synchysis senilis and posterior vitreous detachment. Ophthalmology 1982; 89(12): 1502-12. 10. Chuo JY, Lee TY, Hollands H, et al. Risk factors for posterior vitreous detachment: a case-control study. Am J Ophthalmol 2006; 142(6): 931-7.
Hypothyroidism is a serious condition of the thyroid gland. Find out what its symptoms, signs, causes, and treatments are.

Hypothyroidism Definition: Hypothyroidism, also known as underactive thyroid disease, is a disorder wherein the thyroid gland does not produce enough thyroid hormones.

Radiation therapy of the neck utilizes high doses of radiation to eliminate cancer cells and destroy tumors. This procedure can lead to hypothyroidism as it destroys the thyroid cells, making it difficult for your thyroid to produce hormones.

Radioactive iodine treatment is an internal radiotherapy that uses a radioactive form of iodine to let the nutrient circulate in the bloodstream, killing cancer cells. It is commonly prescribed to patients with hyperthyroidism (an overactive gland), but it also damages the cells in the thyroid, leading to hypothyroidism.

Some medications, like interleukin-2, interferon alpha, and amiodarone, used to treat cancer, psychiatric conditions, and heart problems, can interfere with the production of the thyroid hormones.

Thyroid surgery may be prescribed by a doctor when a person suffers from thyroid cancer. The patient can either undergo thyroidectomy, where the entire thyroid gland is removed, or lobectomy, where only one of the two lobes is taken. This procedure can significantly affect the hormone supply of your body.

Pregnancy causes changes in the function and amount of hormones, which can make your thyroid underactive. A pregnant woman's thyroid gland may also swell after giving birth, a condition called postpartum thyroiditis. Women with this condition commonly experience high levels of thyroid hormones followed by a sudden drop in hormone production. In most cases, women will regain their thyroid's normal function.

Congenital hypothyroidism occurs when babies do not develop a normal thyroid gland or their glands do not function properly. This happens when a mother lacks iodine supply during pregnancy, and it can cause physical deformities, stunted growth, and impaired neurological function in the baby.

In rare cases, people may also experience problems with the pituitary gland, such as a pituitary tumor, leading to hypothyroidism. The pituitary gland makes the thyroid-stimulating hormone that assists in the production and release of the thyroid hormones.

The hypothalamus is a portion of the brain that senses low circulating levels of the thyroid hormones and responds by releasing thyrotropin-releasing hormone, affecting the release of the thyroid-stimulating hormone. When the hypothalamus does not produce enough thyrotropin-releasing hormone, hypothyroidism may develop.

The underactive thyroid treatment involves the prescription of a thyroid hormone (T4) pill on a daily basis to help boost your hormone levels. It is not actually a cure, but it can aid in controlling hypothyroidism for the rest of the patient's life. Hypothyroidism treatments and relief options have already been established by medicine but may still vary depending on the body's response. Underactive thyroid natural remedies are also available if you want to manage the condition in a natural way, but only do so with medical advice. Follow your physician's advice and switch to an underactive thyroid diet.

How do you manage hypothyroidism? Share your story in the comments below. We'd love to read from you.

Up Next: Vitamin D2 vs D3: What Are The Similarities And Differences?
Alfa Romeo at the 2018 Geneva International Motor Show
The Alfa Romeo brand stars with a display area enhancing performance and exclusivity. The Nürburgring Edition Stelvio Quadrifoglio NRING and Giulia Quadrifoglio NRING dominate the stand – two limited editions which pay tribute to the record-breaking performance of the two cars and showcase the quintessential Alfa Romeo excellence. Extensive use of carbon fibre, unique contents and extraordinary performance – this is the Giulia Veloce Ti, the special edition that sports the time-honoured "Turismo Internazionale" badge which has always distinguished the most attractive and high-tech versions. 4C will be present, with the exclusive "Competizione" Coupé and "Italia" Spider special series. The Alfa Romeo DNA combines sporty personality and meticulous attention to detail, and both are very much on view in the complete Stelvio and Giulia range. Alfa Romeo cars have been delighting motorists for 108 years with their Italian style, which evokes performance, technology and driving pleasure just waiting to be unleashed at a turn of the ignition key. This year, visitors to the Geneva International Motor Show will clearly perceive that the brand has further evolved towards more extreme performance, exclusiveness and the opportunity for owners to customise their Alfa Romeo with care and a choice of details reserved for top fashion houses. The distinctive features of a tailored garment are all present in the mechanical creations displayed in Geneva: Stelvio, Giulia, 4C Spider and 4C Coupé are sure to fascinate enthusiasts and onlookers with their new looks, some focused on performance and others on premium materials, also through exquisite special series. All eyes will be on the exciting Nürburgring Edition of Stelvio Quadrifoglio sporting the "NRING" badge, of which 108 will be made, one for each year of the history of Alfa Romeo. This is not just a car; it is the tangible expression of invention, hard work and dedication, the secret to achieving the most ambitious goals and smashing enduring records, while expressing an almost sensual pleasure in the result attained. The special Nürburgring Edition sporting the "NRING" badge celebrates the SUV capable of lapping the legendary circuit in 7 minutes 51.7 seconds, the fastest time ever in its class. Quadrifoglio: one word is enough to identify a car in a class of its own, one that defies all normal categorisation. This applies both to Stelvio and to Giulia, which are also ready to stun the public in Geneva and continue to seduce, on a daily basis, the 108 lucky owners of the special Nürburgring Edition sporting the "NRING" badge, through, for example, the 2.9 V6 Twin-Turbo 510 HP engine, the Torque Vectoring differential and the Chassis Domain Control (CDC) that characterise this model, in addition to a specific livery, a prestigious name and details that contribute to making it a supreme example of Alfa Romeo excellence.
It is no coincidence that these two outstanding cars bear the name of the famous German circuit: Alfa Romeo cars started to record victories at the Nürburgring in the 1930s, in the hands of world-class drivers such as Tazio Nuvolari, who won the German Grand Prix at the wheel of an 8C 2300 Tipo Monza in 1932 and a Tipo B-P3 in 1935. Other memorable years were 1966, when the Giulia Sprint GTA became the first GT to make it round the 'Ring in under ten minutes, and 1975, when the 33 TT 12 driven by Arturo Merzario triumphed in Germany before going on to win the World Sportscar Championship. Last but not least, on June 10, 1993, Nicola Larini's 155 V6 Ti won both heats of the most prestigious race of the "DTM" championship, held on the Nürburgring circuit. The Italian driver repeated Nuvolari's feat by winning the race with a crushing victory over the other cars, all of them German. Racing track thrills will be found in the Quadrifoglio area, where the Stelvio and Giulia Nürburgring Edition proudly sporting the "NRING" badge celebrate the records scored by the two models, and continue in the space dedicated to Giulia Veloce Ti and to Stelvio Super with Pack Sport, the new Performance Pack and the new leather dashboard and door panels. Both cars are equipped with Q4 all-wheel drive. The more elegant personality of the brand is expressed by the Luxury Pack on Stelvio and Giulia, while its sporty, bespoke spirit is embodied by the 4C range, with the "Competizione" and "Italia" special editions created for Coupé and Spider. The Nürburgring Edition Stelvio and Giulia Quadrifoglio "NRING" Stars of the show will surely be the Nürburgring Edition Alfa Romeo Stelvio Quadrifoglio "NRING" and Giulia Quadrifoglio "NRING", two limited-edition special series created to celebrate the records scored by Alfa Romeo on the legendary German race track. Stelvio holds the record for its segment, with a lap time of 7 minutes, 51.7 seconds; it is the fastest SUV, equipped with the remarkable 2.9 V6 Bi-Turbo petrol engine delivering 510 HP and 600 Nm of torque, powering a top speed of 283 km/h and acceleration from 0 to 100 km/h in just 3.8 seconds. Giulia Quadrifoglio is the record-holder for standard production four-door sedans, with a lap time of 7'32", achieved thanks to its superlative handling, top speed of 307 km/h and acceleration from 0 to 100 km/h in 3.9 seconds. Just 108 of each model will be made for collectors and the most loyal Alfa Romeo customers. The new limited editions have exclusive contents, such as the numbered badge in the carbon fibre dashboard insert, and the unique Circuito Grey livery, reserved for this limited edition. In addition to the features of excellence that characterise all Quadrifoglio cars, standard equipment on the "NRING" special series includes carbon-ceramic brakes, Sparco racing seats with red stitching and carbon shell structure, Mopar® automatic transmission knob with carbon insert, and a leather and Alcantara steering wheel, also with carbon inserts. The front badge and the rearview mirror caps are made of carbon fibre, like the side skirt inserts. Not to mention tinted windows, active cruise control, the Harman Kardon premium audio package, and the Alfa™ Connect 3D Nav infotainment system with 8.8" screen, Apple CarPlay™, Android Auto™ and DAB. Giulia premieres a new bare carbon roof. Both come with mats bearing a red logo, developed by Mopar.
Stelvio and Giulia Quadrifoglio Nürburgring Edition sporting the "NRING" badge are beyond top-range and perfectly showcase the excellence of Alfa Romeo. The Quadrifoglio models are paragons of engineering quality and superior performance in themselves. For example, on Stelvio, for the first time, the 2.9 V6 Bi-Turbo powerplant is combined with the innovative Q4 all-wheel drive system, with its guarantees of unbeatable performance, traction, driving pleasure and safety in all situations. Both cars also incorporate Alfa™ Chassis Domain Control, which coordinates all the on-board electronic systems to deliver the best performance and the utmost driving pleasure at all times. Specifically, the system manages and simultaneously assigns specific tasks to the various active systems, such as Q4 all-wheel drive (on Stelvio Quadrifoglio), the Alfa™ Active Torque Vectoring system, Alfa™ Active Suspension, ESC and the Alfa™ DNA Pro selector with Race function. The Torque Vectoring technology optimises Stelvio and Giulia's drive distribution and accentuates their sporting character. The two electronically controlled clutches in the Torque Vectoring system make it possible to control torque delivery to each wheel separately. This ensures the optimal transfer of power to the ground even when the car is pushed to its dynamic limits. So driving is safe and fun without ever running up against an invasive stability control system. The 8-speed ZF automatic transmission, supplied as standard, is specifically calibrated to shift in just 150 milliseconds in Race mode. The transmission has a lock-up clutch to give the driver a powerful, precise feeling of in-gear acceleration. Depending on the DNA mode selected, the new automatic gearbox optimises fluidity, comfort and ease of driving in all environments, including around town, and further improves fuel efficiency and cuts CO2 emissions. So the excellence on offer is not just in performance: both the Alfa Romeo sports SUV and the sedan are also incredibly efficient in terms of emissions and fuel consumption, thanks also to their electronically controlled cylinder deactivation system and the sailing function, available in Advanced Efficiency driving mode. Last but not least, to maximise the driving experience, they are both equipped with paddle shifters machined from solid aluminium which are integral with the steering column. Alfa Romeo Stelvio Super with the new Performance Pack The Geneva stand also proudly displays two Stelvio cars equipped with the 2.0 Turbo petrol powerplant, in 200 and 280 HP versions, paired with an 8-speed automatic transmission and Q4 all-wheel drive. The engine is a 4-cylinder unit built entirely in aluminium with a carbon drive shaft. Its distinctive features include the MultiAir electro-hydraulic valve actuation system and 200-bar high-pressure direct injection, which combine to deliver a particularly snappy accelerator response across the rev range in addition to first-class fuel efficiency. The Alfa Romeo Stelvio with 2.0 280 HP Turbo petrol engine (peak torque of 400 Nm at 2,250 rpm) is best in class in terms of acceleration, powering from 0 to 100 km/h in just 5.7 seconds. Its top speed is 230 km/h. Alongside it, a car with the same engine in 200 HP version: maximum torque of 330 Nm at 1,750 rpm and top speed of 215 km/h, with acceleration from 0 to 100 km/h in 7.2 seconds.
Two important new features are debuting on the two cars on show, namely the new leather dashboard and door panels and the Performance Pack, which includes Alfa Active Suspension (also available separately as optional equipment), a self-locking rear mechanical differential and paddle shifters machined from solid aluminium. Alfa Active Suspension dynamically adapts its response to driving conditions, the selected Alfa™ DNA mode and the driver's preferences. The self-locking rear differential guarantees perfect traction for a smooth driving experience, emphasising the agility and sporty nature of the car, with positive effects also on safety. The two cars on show are enhanced by two packs which underline the multifaceted character of the Alfa Romeo SUV, equally at home in sporting and luxurious interpretations. The former, with 280 HP engine and Competizione Red livery, fits a Sport Pack comprising aluminium inserts, aluminium pedals, a sporty steering wheel and red brake callipers. Other features include leather seats and dashboard with grey stitching, privacy windows and roof bars. What is more, Mopar, the Official Service Partner of Alfa Romeo for services, spare parts, genuine accessories, assistance and customer care, has developed special accessories for this car, including 20″ matte black wheels. Next to it, a 200 HP Stelvio Super with a luxurious soul, in Volcano Black with beige leather seats and dashboard and grey wood inserts. This model has 20″ alloy wheels by Mopar with black painted callipers. Alfa Romeo Giulia Veloce Ti The Alfa Romeo Giulia Veloce Ti, equipped with the 280 HP turbo petrol powerplant combined with 8-speed automatic transmission and Q4 all-wheel drive, is a fine embodiment of the technical and automotive excellence of the Giulia range. The time-honoured code "Ti", standing for "Turismo internazionale", has always been reserved for the most lavishly equipped, leading-edge, extreme versions. The Giulia Ti, presented in Racing Red livery, is the most exclusive version in the range, successfully combining the Giulia Veloce's sporty flavour with an alluring style, a rich standard outfit and the very latest technology, a near sister to the Quadrifoglio, with which it shares several stylistic features. The black roof, the leather-clad dashboard with carbon inserts and the leather and Alcantara seats with black stitching all immediately catch the eye, as well as many carbon details styled by Mopar, such as the backlit kick plates with Alfa Romeo logo, the "V" of the front cloverleaf, the gear knob insert and the rearview mirror caps. The side skirts with carbon insert and the rear spoiler are quintessential Quadrifoglio elements. The 19″ burnished alloy 5-spoke wheels are another real racing-style feature. The red brake callipers gleam enticingly between the spokes. The sporty personality of this model is reasserted by the extreme care for details and extensive use of carbon fibre. The configuration is completed by the Alfa™ Connect 3D infotainment system with Apple CarPlay™ and Android Auto™ and 8.8″ 3D navigation system with DAB, active cruise control, tinted windows and the Harman Kardon premium audio package. Alfa Romeo Giulia Super The Giulia Veloce Ti is partnered by a Giulia Super in Volcano Black with the one-of-a-kind Luxury Pack, which comprises an attractive colour combination of beige leather interior with real grey oak wood inserts.
This model has 18″ alloy wheels with black painted brake callipers and a mission of elegance, the ideal car for those wishing to enjoy the performance of Giulia in peerless comfort and luxury. Under the bonnet is the 200 HP turbocharged four-cylinder petrol engine with automatic transmission. 4C in the "Competizione" and "Italia" special series The Alfa Romeo stand will also showcase the super sporty 4C, which expresses the racing spirit that is part of the Alfa Romeo DNA: optimum performance and excellent engineering designed for ultimate driving pleasure in breathtaking style. Two cars, a Coupé and a Spider, in the "Competizione" and "Italia" special series, will be at Geneva for visitors to admire. The Coupé has a much sportier personality, with strong racing connotations, while the Spider, still capable of thrilling performances, has a more elegant, refined character. They are both powered by a feisty all-aluminium 240 HP 1750 Turbo petrol engine, with intercooler and continuous twin phase variators. Combined with the TCT automatic transmission, it delivers supercar performance: a weight-to-power ratio of less than 4 kg/HP, top speed of 258 km/h (257 km/h for the 4C Spider), 0 to 100 km/h in just 4.5 seconds, lateral acceleration of 1.1 g and maximum braking deceleration of 1.25 g. This performance is aided by the lavish use of ultralight materials, including carbon fibre for the shell, aluminium for the front and rear subframes and SMC (low density compound) for the bodywork. The two series, both limited editions, are exclusively styled. The matte Vesuvio Grey of the Coupé, combined with the exclusive livery designed especially for this configuration (which each customer can choose to have or not), underscores its uncompromisingly sporty nature. As well as the body shell, additional carbon details raise the sporty temperature: the roof, rear spoiler, mirror caps, side air vents and headlight moulding, all emphasising the car's muscular power and supreme engineering, and clearly referencing the racing world. The stylistic outfit is completed by the body-colour front bumper with air vent, the dark-finish five-spoke wheels (18″ at the front and 19″ at the back), the red brake callipers and the Akrapovič titanium central tailpipe with dual mode and carbon again in the trim. The racing mood continues inside the cockpit: the seats are leather and microfibre with red stitching, as is the racing steering wheel, and to conclude, the exclusive status of the limited series of just 108 cars is underlined by the numbered badge on the central tunnel and the aluminium "Competizione" plaque on the dashboard. The Spider "Italia" special series is identified by the Misano Blue bodywork, exclusive to the 108 cars in this collectors' version. On the outside, eye-catching features include the asymmetric five-spoke wheels, 18″ at the front and 19″ at the rear, and the yellow brake callipers that match the stitching on the seats, dashboard, steering wheel and door panels. The side proudly displays an exclusive "Spider Italia" sticker in the three colours of the Italian flag. The 4C Spider Italia's standard outfit includes Akrapovič titanium central twin tailpipes with dual mode function and carbon trim. The equipment is completed by an Alpine premium Hi-Fi system with subwoofer, a numbered badge on the central tunnel and an aluminium "Italia" dashboard plaque.
This guide will teach you how to buy TRON from square one (i.e., all you have is fiat money, no cryptos). It will also work for most other cryptocurrencies, but as I'm focusing on TRON right now, I am going to write a quick foreword about it and then we'll get right into the details of the guide. A Little About TRON — Why Buy? There is a massive subset of people who are new to the crypto-space entirely and have only heard of Bitcoin. Then they go online, search around, and figure out there are actually many, many cryptocurrencies out there with many different use cases. For TRON (ticker symbol TRX), there has been a lot of recent interest and massive upward price movement. It's no surprise that TRON was named in a recent Bloomberg article titled "Bitcoin's Smaller Cousins Are Leading the Crypto Rally", and personally I believe it's likely that simple speculation will raise the price, at least in the short term, in the wake of this news. However, I never write these guides only to use a coin as an engine for speculation. For me to be interested in a coin, it has to have great fundamentals or a new and interesting technology which solves specific problems in elegant ways. TRON ticks these boxes. The Tron Foundation was founded in Singapore by its current CEO, Justin Sun. Its goal is to create a "worldwide free content entertainment system" using smart contracts. Putting it in simpler terms, this is a technology that may end up seizing the market share currently held by various kinds of content distribution platforms like Google Play and Apple's App Store. TRON is aimed at content creators who would otherwise have to pay money for access to such platforms. With TRON, they'd be able to distribute their work for free. Here's the cool part, though: the protocol behind TRON allows these creators to make decisions about the cost (if any) of the content, how it gets distributed, etc. Now, so far everything I've said is basic information about TRON that I've discovered in the course of my research. However, what really piqued my interest about TRON is something that I actually haven't seen anyone else mention: censorship resistance. Censorship resistance is an exciting concept in cryptocurrency: the idea that it's impossible for a government or other bad actors to alter the blockchain, e.g., your coins cannot easily be confiscated, nor can your transactions be reversed after the fact. With cryptocurrency, there is no such thing as "freezing someone's bank account"; now that's censorship resistance. With TRON, I envision a far more literal form of censorship resistance: content creators will be able to say or create whatever they want without the risk of being "de-platformed." The fact is, small-minded censors in governments or powerful corporations are always looking for ways to keep people from being able to speak their minds. This is often politically motivated, but sometimes it's simply nonsensical: how long did Apple resist allowing Bitcoin wallets onto its App Store, just because it could? The reality is that companies like Google and Apple are prehistoric megafauna, dinosaurs proudly walking the Earth. At first glance they seem to be the rulers, but there's a meteor in the sky called cryptocurrency, and TRON may be the one that knocks them off their thrones. Could this be a true paradigm shift?
Before moving on to the rest of the guide, I will say a few things about the current market capitalization of TRX; this is, of course, something you should always think about when considering an investment in a coin. I've found analysis is often made easier by doing direct comparisons between market capitalizations for various cryptocurrencies. For example, Ripple is currently ranked #3 on coinmarketcap and has a market capitalization of $47,771,951,251. If TRX had Ripple's current market cap, it would be worth around 73 cents per coin. Considering that TRX is currently worth 4 cents per coin, that would be more than eighteen times today's price (roughly a 1700% increase)! The realistic potential for massive growth in cryptocurrencies is always alluring, and that's only possible when you can get in on the ground floor. With Tron, the opportunity to get in early is definitely still present, so let's learn how to buy some.
1. Create and sign in to a Binance account. I recommend Binance because they have been reliable and convenient for me and they offer many different lesser-known cryptocurrencies with trading pairs on both ETH and BTC. An added bonus is that you can withdraw up to 2 BTC/day worth of funds with no verification at all. Alternate exchange options: HitBTC also trades TRX; I have written a guide to using HitBTC which you will find useful.
2. Move your ETH to Binance. Once it has confirmed, you can easily use the ETH/TRX trading pair to buy as much or as little TRX as you want. More on trading pairs later.
3. Send your coins from Binance to a safe TRX wallet for long-term storage if you intend to hold for a while. This is not strictly necessary, but it is considered safer than keeping ANY coin on ANY exchange long term. I personally believe Coinomi to be the safest and most convenient wallet for many cryptocurrencies. Unfortunately, they don't support Tron at this time, so instead we'll talk about options later on in this guide.
Once you are up and running, there are a number of options when it comes to actually paying for coins via Coinbase. It is possible to link your bank account to Coinbase, but actually transferring coins out of Coinbase will be impossible for a few days while the funds are clearing. This is obviously not ideal if you want to move quickly, as you would now have to wait several days to move your coins to an exchange where you can trade them for TRX (or any other coin). In my opinion, if you intend to buy lesser-known coins like TRX, ETH is the best choice to buy here. Why? Well, with regards to LTC, the reason is clear: other exchanges like Binance offer direct trading pairs for BTC and ETH, but not LTC or any other currency. What this means is that you could directly exchange your ETH for TRX, or your BTC for TRX, but you'd have to perform another step if you wanted to trade your LTC for TRX (and that means more fees!). Obviously we don't want to waste even one cent if we can avoid it, so LTC is out. Now is the time for you to make your Binance account. Follow the link and create an account using a strong password (this should be different than the one you used for Coinbase!). There are other places where you may be able to buy TRX (Changelly, Kraken, EtherDelta, HitBTC, KuCoin, etc.). I cannot directly recommend most of these exchanges as I don't have much experience with them at this point; however, I can say that I have used HitBTC and KuCoin and they have worked well for me.
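To make the market-cap comparison above concrete, here is a minimal Python sketch of the implied-price arithmetic. The circulating-supply figure is an assumption back-derived from the numbers quoted above, not an authoritative value, and real figures change constantly:

    # Implied-price comparison: what would TRX cost if it had Ripple's market cap?
    # All figures below are assumptions taken from this article's own numbers.
    RIPPLE_MARKET_CAP = 47_771_951_251   # USD
    TRX_PRICE = 0.04                     # USD per TRX at the time of writing
    TRX_SUPPLY = 65_748_111_645          # assumed circulating supply of TRX

    def implied_price(target_market_cap, supply):
        """Price one coin would need so that price * supply == target_market_cap."""
        return target_market_cap / supply

    price = implied_price(RIPPLE_MARKET_CAP, TRX_SUPPLY)
    multiple = price / TRX_PRICE
    print("Implied TRX price: $%.2f" % price)               # ~ $0.73
    print("That is %.1fx today's price (a %.0f%% increase)"
          % (multiple, 100 * (multiple - 1)))               # ~ 18.2x, ~1717%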
This guide focuses on Binance because my experience there has been 100% positive: I have transferred coins in and out of their system many times with no problems. Once you click Send, you will need to wait a little while. Without getting too technical about it, exchanges want to be as secure as possible. Thus, when you make a deposit, they wait for multiple "confirmations" from the network before allowing you access to your funds. You can view the progress in your Binance account by clicking Funds and then History. Do not be alarmed if nothing shows up at first! There are many reasons there might be a slight delay. In general you should see the transaction show up within a few minutes, with the current number of "confirmations" shown next to the required number. Be patient: your TRX is nearly in hand! Once you have the required number of confirmations, it's time to trade your ETH for TRX. This is blessedly simple. On the front page of Binance, click "ETH Markets." Search for "TRX/ETH" in this list, and click it. Now you are on the trading page. In the bottom left under "Buy TRX," click "100%" below the "Amount" field. This indicates to Binance that you'd like to trade all of your ETH for a commensurate amount of TRX at no more than the price listed above. The price field is automatically filled based on the current market. If you like, you can change it to a different price, but like any market it's not guaranteed that someone will sell at the price you'd like. Your order will remain open until it's been fully filled or you cancel it. There are several options here such as Stop-Limit orders, etc., but these are outside the scope of this guide. In this case, you are simply placing a "Limit" order for some TRX. If you want to be done now, you can be, but there are more steps if you want to be security-conscious. You may want to check under the "Orders" and "Order History" tabs that the order went through; if you placed a Limit order at the default price, it probably did. Once you have your TRX in your Binance account, you can see them under "Funds" → "Deposits & Withdrawals." You can click "Hide 0 Balances" at the top to clean the screen of coins you don't own, and you can see an estimate of the overall converted BTC and USD value of your account at the top right. In Binance, go into the "Deposits & Withdrawals" tab, then click "Withdrawal" to the far right of the "TRX" row. By now it should be clear what you're looking at: fields that let you input the address to send the coins to, and how many coins to send. For your convenience, there is a "Max" button to the right of the Amount field. Note that once you click "Submit" you will need to use your two-factor authentication via Google Authenticator. I recommend picking up a low-cost Android phone for use with Google Authenticator; as a bonus, you can use such a phone for the Coinomi wallet when dealing with other cryptocurrencies. The thing about TRON is that it's an "ERC-20" token rather than a traditional cryptocurrency, which means it lives on the Ethereum blockchain. This also means that it works the same as any other ERC-20 token, in that it can be stored in similar ways. It is possible to get TRX working in Coinomi right now and the process is fairly straightforward. In my opinion this is the preferred method, especially if you're willing to buy an Android phone solely for cryptocurrency use.
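Since TRX at this stage is an ERC-20 token, any ERC-20-aware tooling can see it on the Ethereum blockchain. As a rough illustration, here is a hedged web3.py sketch that reads a token balance directly from the chain. The RPC endpoint, the token contract address, and the wallet address are all placeholders you would need to substitute yourself; in particular, look up the genuine TRX contract address, as the zero address below is deliberately not it:

    # Sketch: reading an ERC-20 token balance with web3.py (pip install web3).
    # The endpoint and addresses below are placeholders, not real values.
    from web3 import Web3

    RPC_URL = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"       # placeholder
    TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder
    WALLET_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

    # Minimal ABI: just the two standard ERC-20 functions we need.
    ERC20_ABI = [
        {"name": "balanceOf", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "owner", "type": "address"}],
         "outputs": [{"name": "balance", "type": "uint256"}]},
        {"name": "decimals", "type": "function", "stateMutability": "view",
         "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
    ]

    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    token = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC20_ABI)

    raw = token.functions.balanceOf(WALLET_ADDRESS).call()
    decimals = token.functions.decimals().call()
    print("Balance:", raw / 10 ** decimals)  # token units, not wei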
However, if you'd prefer to use a desktop wallet, I have written an easy-to-follow comprehensive guide which walks you through how to use MetaMask and MyEtherWallet to store any ERC-20 token, including TRX. If you've followed my guide this far, then you're the proud owner of some TRX. Be advised that the information here only scratches the surface on TRON and cryptocurrencies in general. I recommend you read as much as possible. Cryptocurrencies are the future, and if you're reading this guide you are already lightyears ahead of the curve.
10 Longest-Reigning English Kings and Queens by World History Edu · Published July 11, 2019 · Updated June 10, 2020 In recorded British history, there have been quite a number of men and women to wear the British crown. But have you ever wondered which of those British monarchs wore the crown the longest? And for how long did those long-serving British monarchs reign? At what age did they ascend to the British throne? In order to answer the above questions, here is a look at the top 10 longest-reigning British kings and queens in recorded history: Queen Elizabeth II holds the record of being the longest-reigning monarch in British history Official Title: Queen Elizabeth II, Queen of the United Kingdom and the Commonwealth realms Birth name: Elizabeth Alexandra Mary Date of Birth: 02:40 GMT, April 21, 1926 Place of Birth: 17 Bruton Street, Mayfair, London Parents: Duke and Duchess of York (later George VI and Queen Elizabeth) Royal House: House of Windsor Date of Ascension: February 6, 1952 Predecessor: George VI Coronation Date and Place: June 2, 1953; Westminster Abbey Spouse: Philip Mountbatten (later Prince Philip, Duke of Edinburgh) Children: Prince Charles (Prince of Wales), Princess Anne (Princess Royal), Prince Andrew (Duke of York) and Prince Edward (Earl of Wessex) When Elizabeth II was born (April 21, 1926), very few people gave her any chance of ascending to the throne, because she was third in line, behind her uncle, Prince Edward, and her father, Prince George. That does not seem very far away. However, one must realize that the Prince of Wales (Prince Edward, her uncle) was a very young man, just 32 years old at the time of Elizabeth's birth. Everyone expected him to have a long and illustrious reign filled with several children of his own. However, less than a year into his reign, Prince Edward (then Edward VIII) abdicated his throne in order to marry a divorced woman. Elizabeth's father, Prince George, was then crowned King George VI. That is how Princess Elizabeth moved from a relatively obscure position in the royal line to becoming queen, and arguably the greatest British monarch ever. She has held this position ever since she was 25 years old, longer than anyone in British history. Her reign has been nothing short of phenomenal. In addition to holding the record of the longest-reigning British monarch, the Queen is currently the world's oldest reigning monarch. Undoubtedly, it will take a very long time for another British monarch to surpass the stellar records of Queen Elizabeth II. To read more about the life, interesting facts and accomplishments of Queen Elizabeth II, please visit this link.
Queen Victoria- Portrait by Winterhalter, 1859 Official Title: Queen Victoria, Queen of the United Kingdom of Great Britain and Ireland and Empress of India Birth name: Alexandrina Victoria Date of Birth: May 24, 1819 Place of Birth: Kensington Palace, London Parents: Duke and Duchess of Kent (Prince Edward, Duke of Kent and Strathearn, and Princess Victoria of Saxe-Coburg-Saalfeld) Royal House: House of Hanover Date of Ascension: June 20, 1837 Predecessor: William IV Coronation Date and Place: June 28, 1838; Westminster Abbey Spouse: Prince Albert of Saxe-Coburg and Gotha (married in 1840) Offspring: Victoria (Princess Royal), Albert Edward (Prince of Wales, later Edward VII), Alice, Alfred (Duke of Edinburgh), Helena, Louise, Arthur, Leopold, and Beatrice Number of years on the throne: 63 years and 216 days Date of Death: January 22, 1901 at Osborne House, Isle of Wight Succeeded by: Edward VII If you thought Elizabeth II's path to the throne was surprising, wait until you hear how Victoria was crowned Queen of the United Kingdom of Great Britain and Ireland on June 28, 1838. At the time of her birth in 1819, Victoria was fifth in line to the British throne. However, and sadly enough, that number dropped to three after her father and, soon afterwards, her grandfather (George III) died within months of her birth. Victoria now stood behind only her three uncles: Prince George, the Duke of Cornwall; Prince Frederick, the Duke of York; and Prince William, the Duke of Clarence. After the unremarkable reigns of King George IV and King William IV, Victoria was crowned on June 28, 1838. Shockingly, both of her predecessors had failed to produce any legitimate children. At the time of her coronation, she was 19 years old. Over the course of 63 years, Queen Victoria was heavily involved in steering her kingdom towards stability. She chalked up many achievements in an era that is commonly referred to as the Victorian Era. Her steadfast dedication to family values was admired by many people across Europe and beyond. Along with her beloved husband, Prince Albert, Queen Victoria successfully redefined the royal family by placing it on a solid footing for the 20th century. Her reign wiped the slate clean after the poor approval ratings of some of her predecessors. Read more about other major facts and accomplishments that characterized Queen Victoria's reign. George III- Portrait by Johann Zoffany, 1771 Official Title: King George III, King of the United Kingdom of Great Britain and Ireland; Elector, later King, of Hanover Birth name: George William Frederick Date of Birth: June 4, 1738 Place of Birth: Norfolk House, St.
James Square, London Parents: Frederick, Prince of Wales, and Princess Augusta of Saxe-Gotha Date of Ascension: October 25, 1760 Predecessor: George II Coronation Date and Place: September 22, 1761; Westminster Abbey Spouse: Charlotte of Mecklenburg-Strelitz Offspring: George (Prince of Wales, later George IV), Frederick (Duke of York), William (Duke of Clarence and St Andrews, later William IV), Charlotte (Princess Royal), Edward (Duke of Kent and Strathearn), Augusta Sophia, Elizabeth, Ernest Augustus (later King of Hanover), Augustus Frederick (Duke of Sussex), Adolphus (Duke of Cambridge), Mary (Duchess of Gloucester), Sophia, Octavius, Alfred, Amelia Number of years on the throne: 59 years, 96 days Date of Death: January 29, 1820 Succeeded by: George IV On September 22, 1761, Prince George William Frederick was crowned King of Great Britain and King of Ireland. He was 23 years old at the time of the coronation. About 40 years later, he became King George III of the United Kingdom of Great Britain and Ireland. The change in title came as a result of the merger of the two countries, Great Britain and Ireland, under the Acts of Union 1800. George III's reign was most characterized by upheavals abroad. The most notable of these came from the 13 American colonies, which fought the eight-year American War of Independence against him. In the eyes of royalists, he was always regarded as the king who lost the American colonies. His reign was also filled with several wars against his European neighbors. Britain did, however, bring Napoleon's army down at the 1815 Battle of Waterloo during his reign. King George III was beset by mental health issues towards the later part of his 59-year reign. He was often described as "Mad King George". His defenders and loyalists claim it was simply a mild case of bipolar disorder. George III's nine sons and six daughters certainly dwarf the number of children Queen Victoria had. Additionally, he is one of the few British monarchs who can boast of having two of his children become kings: George IV and William IV. James VI of Scotland and James I of England and Ireland James VI of Scotland and James I of England Official Title: King James, King of Scots, England and Ireland Birth name: James Charles Stuart Date of Birth: June 19, 1566 Place of Birth: Edinburgh Castle Parents: Henry Stuart, Lord Darnley and Mary, Queen of Scots Royal House: House of Stuart Date of Ascension: July 24, 1567 (King of Scotland) and March 24, 1603 (King of England) Predecessor: Mary, Queen of Scots (for Scotland), and Elizabeth I (for the English throne) Coronation Date and Place for the Scottish throne: July 29, 1567, Church of the Holy Rude, Stirling Coronation Date and Place for the English throne: July 25, 1603, Westminster Abbey Spouse: Anne of Denmark Offspring: Henry (Prince of Wales), Elizabeth, Margaret, Charles (later Charles I), Robert (Duke of Kintyre), Mary, Sophia Number of years on the throne: 57 years, 246 days Date of Death: March 27, 1625 Succeeded by: Charles I The fourth spot on the list of longest-reigning British monarchs goes to James VI (James I, if you count his rule over England and Ireland). James's parents were Henry Stuart, Lord Darnley, and Mary, Queen of Scots. James had one of the most turbulent upbringings a child could get: at just 13 months old, he inherited the crown of Scotland. This was after a group of noblemen (predominantly Protestants) forced his mother, Mary, Queen of Scots, to abdicate on July 24, 1567.
All throughout James's minority (the period when the monarch is a minor and therefore cannot rule), Scotland saw a total of four different regents: James Stewart, Earl of Moray; Matthew Stewart, Earl of Lennox; John Erskine, Earl of Mar; and James Douglas, Earl of Morton. The death of Elizabeth I, who was childless, meant that James VI became the legitimate claimant of the English throne. He staked his claim to the English throne because he was a great-great-grandson of Henry VII, King of England and Lord of Ireland. James was crowned King James I of England on July 25, 1603 at Westminster Abbey. All in all, James VI ruled for a total of 57 years, 246 days before he was succeeded by his son, Charles I. Henry III of England Henry III- A 13th-century depiction of Henry III's coronation Official Title: King Henry III of England, Lord of Ireland and Duke of Aquitaine Birth name: Henry of Winchester Date of Birth: October 1, 1207 Place of Birth: Winchester Castle Parents: King John and Isabella of Angouleme Royal House: House of Plantagenet Predecessor: John, King of England Coronation Date and Place: October 28, 1216, Gloucester; later May 17, 1220 at Westminster Spouse: Eleanor of Provence, daughter of Raymond Berenger Offspring: Edward (later Edward I), Margaret of England (later Queen of Scots), Beatrice of England, Edmund Crouchback (Earl of Lancaster and Leicester), Katherine of England Date of Death: November 16, 1272 Succeeded by: Edward I Henry III of England comes in fifth on the list of longest-reigning British kings and queens. At the time of his coronation, Henry III was 9 years old. Commonly referred to as Henry of Winchester, the King had a taste for spectacular parties and religious gatherings. His oppression and exploitation of the Jews in England alienated him from the Jewish community. Henry III tried on two occasions to claim back portions of France he believed had once belonged to his father. The first attempt, in 1230, was a disaster. In a similar vein, the second attempt, during the Battle of Taillebourg, ended badly for the king. Henry III's 56-year reign was marked by a series of rebellions by renegade barons and a brief period as a prisoner. His militaristic tendencies, as well as the high cost that came with them, made him an unpopular king for the majority of those 56 years. Edward III Official Title: King Edward III of England and Lord of Ireland Birth name: Edward of Windsor Place of Birth: Windsor Castle Parents: Edward II of England and Isabella of France Date of Ascension: January 25, 1327 Predecessor: Edward II Coronation Date and Place: January 29, 1327, Westminster Abbey Spouse: Philippa of Hainault Offspring: Edward of Woodstock – the 'Black Prince' (Duke of Cornwall, Prince of Wales, Prince of Aquitaine), Isabella, Joan of England, Lionel of Antwerp (Duke of Clarence), John (Duke of Lancaster), Edmund (Duke of York), Mary of Waltham, Margaret of England, Thomas of Woodstock (Duke of Gloucester, Earl of Buckingham, Earl of Essex) Succeeded by: Richard II Edward III was crowned King of England after his mother, Isabella of France, deposed his father, Edward II. Isabella was aided by her lover Roger Mortimer, and the two marched right into England. At the age of 17, Edward III took full control of the kingdom after ousting Mortimer. Unlike his father, Edward III was very successful in battle. There were very few insurrections in his kingdom during his reign.
Edward III made England's military a force to be reckoned with. His reign of 50 years and 147 days makes him the second longest-reigning British monarch of the Middle Ages (the first was his great-grandfather Henry III). During his half-century reign, Edward III brought many reforms to legislation. He also ran a reasonably efficient government. Unfortunately, his reign was marred by the Black Death as well as the Hundred Years' War. William I of Scotland Official Title: William I, King of Scots Birth name: William, son of Henry Date of Birth: c. 1143 Place of Birth: Scotland Parents: Henry of Scotland and Ada de Warenne Royal House: House of Dunkeld Date of Ascension: December 9, 1165 Predecessor: Malcolm IV Coronation Date: December 24, 1165 Spouse: Ermengarde de Beaumont Offspring: Margaret of Scotland, Isabella of Scotland, Alexander (later Alexander II), Marjorie Date of Death: December 4, 1214 Succeeded by: Alexander II The seventh monarch on the list of longest-reigning British kings and queens is William I of Scotland. William I inherited the Scottish throne after his older brother Malcolm died in 1165. Unlike his predecessor, William was a very healthy young king, muscular and in top-notch physical shape. This partly explains why he earned the name Garbh, "the Rough". He spent a great deal of his 48-year reign trying to reclaim his lost Earldom of Northumbria from Henry II of England. Commonly known as William the Lion, William I of Scotland is considered the second-longest-reigning monarch in Scottish history, behind James VI of Scotland. He got the title "the Lion" from his flag, which bore a red lion rampant with a forked tail. Llywelyn of Gwynedd Llywelyn the Great's statue Official title: Prince of Gwynedd and Prince of Powys Wenwynwyn Birth name: Llywelyn ab Iorwerth Place of Birth: Dolwyddelan Parents: Iorwerth Drwyndwn and Marared ferch Madog Royal House: House of Gwynedd Date of Ascension: 1216 Predecessor: Gwenwynwyn ab Owain Spouse: Joan, Lady of Wales Offspring: Dafydd ap Llywelyn, Gruffydd ap Llywelyn, Elen ferch Llywelyn, Gwladus Ddu, Marared ferch Llywelyn, Gwenllian ferch Llywelyn, Angharad ferch Llywelyn, Susanna ferch Llywelyn Number of years on the throne: 44-46 years Succeeded by: Dafydd ap Llywelyn (Gwynedd) and Gruffydd ap Gwenwynwyn (Powys Wenwynwyn) Llywelyn of Gwynedd, also known as Llywelyn the Great, ruled Wales for about 44 years. By 27 years old, Llywelyn had become ruler of Gwynedd. Due to his relatively peaceful relationship with John of England, Llywelyn was able to marry John's daughter, Joan. However, the relationship with England grew sour when John marched into Gwynedd in 1211. After about four years of conflict, King John sealed the Magna Carta in 1215, which contained clauses favourable to Llywelyn and the Welsh. After close to half a century on the Welsh throne, Llywelyn died in 1240. His son, Dafydd ap Llywelyn, succeeded him as ruler of Gwynedd. Elizabeth I- The Pelican Portrait by Nicholas Hilliard. Official Title: Queen of England and Ireland Date of Birth: September 7, 1533 Place of Birth: Greenwich Palace Parents: Henry VIII and Anne Boleyn Royal House: House of Tudor Date of Ascension: November 17, 1558 Predecessor: Mary I of England (also known as "Bloody Mary") and Philip Spouse: None Succeeded by: James VI of Scotland Elizabeth I's reign of 44 years, 127 days over England and Ireland puts her 9th on the list of longest-reigning British monarchs. Because she steadfastly refused to marry or bear any children, Elizabeth I was commonly referred to as "the Virgin Queen".
She was also called Gloriana or Good Queen Bess. Elizabeth I's highlight came when she successfully vanquished King Philip II's Spanish Armada (a 130-ship naval fleet) in 1588. Elizabeth I's reign was also characterized by a two-decade rift with her cousin, Mary, Queen of Scots. The conflict ultimately culminated in the execution of Mary, Queen of Scots. During Elizabeth's childhood, her parents became so estranged that her father, Henry VIII, had her mother, Anne Boleyn, executed; the king accused his wife of adultery. Elizabeth I is generally considered the last monarch of the House of Tudor; her house ended because she bore no children. After her death on March 24, 1603, the English crown moved to her distant cousin, James VI of Scotland. To read more about the life, interesting facts and accomplishments of Queen Elizabeth I, please visit this link. David II of Scotland King David II of Scotland- portrait by Sylvester Harding, 1797 Official Title: David II, King of Scots Place of Birth: Dunfermline Palace, Fife Parents: Robert I of Scotland and Elizabeth de Burgh Royal House: House of Bruce Date of Ascension: June 7, 1329 Predecessor: Robert I of Scotland Coronation Date and Place: November 24, 1331 at Scone Spouse: Joan of England (1328), Margaret Drummond (1364) Date of Death: February 1371 Succeeded by: Robert II Coming in tenth on the list of longest-reigning British kings and queens is David II of Scotland. Reigning from June 7, 1329 to February 22, 1371, David II went down in history as the king who vehemently opposed English incursions into Scotland. David was only 5 years old when he was crowned King of Scots. David II's 41-year reign is believed to have done much for the kingdom: he made sure that the Scottish monarchy remained intact and free from foreign influence. David's parents were Robert I of Scotland and Elizabeth de Burgh. At the age of 3, David lost his mother. A year later, David (then 4 years old) was married off to Joan of the Tower; his bride was only 7 years old at the time. The marriage produced no children; neither did his second marriage to Margaret Drummond. Therefore, upon the death of David II, the crown passed to his nephew, Robert II.
Islamist Threat: Why Is Russia Scaring Turkmenistan? Anyone listening to Russian officials, or getting their information from Russian media lately, would think that the Central Asian states are on the brink of disaster, and that Islamic militants massing in Afghanistan are preparing to swarm over the border and establish a caliphate in the southern tier of the CIS. Turkmen Embassy In Minsk Hacked Apparently By IS Turkmenistan's embassy in the Belarusian capital, Minsk, has been hacked by people with apparent links to the Islamic State extremist group. Turkmenistan's Call, Or Rather Order, To Get Healthy Fitness-and-happiness month is a state-sponsored campaign in Turkmenistan, and it's not just some good advice from the authorities; it's an order. Does Nuclear Deal Presage A New Era For Iran-Central Asia Relations? The five Central Asian states have been independent for nearly 25 years, and throughout that time -- as talk continued about recreating the ancient Silk Road -- there was always one direction that was off limits: the route through Iran. But, judging by the encouraging news from negotiations between Tehran and world powers, that route might soon open and, if it does, it will have a dramatic effect on Central Asia. Russia Beefs Up Exam For Migrants Russia has drafted a new, tougher version of its compulsory exam on Russian history, language, and civics that foreign labor migrants will need to pass in order to receive residence permits, the Izvestia newspaper reported on April 6. Turkmen Leader To Conserve Water Turkmenistan's president has pledged to conserve scarce water resources, a major problem in a country that's considered to be among the world's top water wasters. CIS Senior Diplomats Meet In Bishkek Senior diplomats of the Russian-dominated Commonwealth of Independent States have gathered in the Kyrgyz capital, Bishkek, to discuss ways to increase cooperation. U.S. To Back Human Rights In Central Asia Deputy Secretary of State Anthony Blinken says the United States will "continue to advocate for free media and more open political systems" in Central Asia, a strategic region that lies between Russia, China, Afghanistan, and Iran. New Chinese Bank Becomes Major Headache For U.S. China's decision to establish a new China-led development bank for Asia is causing major headaches in Washington. As countries -- now including Russia -- rush to join the bank before a March 31 deadline, the United States looks increasingly isolated. U.S. Reassesses Central Asia Strategy The State Department says it has completed a review of U.S. policy toward Central Asia at a time of economic and political uncertainty in the region. Afghan Turkmens Demand Probe Into Deadly Shooting Residents of an ethnic Turkmen village in northwestern Afghanistan are urging authorities to investigate a deadly shooting by police. The Black Flag South Of The Amu-Darya The black flag of the Islamic State (IS) militant group has reportedly been raised in areas of northern Afghanistan, just south of the border with Central Asia, and Ashgabat, Tashkent, and Dushanbe have taken notice.
Fingertips: I'm Having a Heart Attack by Andrew Schultz
Episode 35 of Apollo 18 Tribute Album
- Edo, June 30, 2020
- deathbytroggles (Minneapolis, MN), March 30, 2019
- miruial, October 8, 2018
Chew on that scenery! Or not, but you can do plenty else, April 2, 2016 by Teaspoon
Surprisingly believable! It's a clichéd scenario, but the game knows and plays on this. Not only are your actions far-ranging, but the reactions of your director make him an unexpectedly deft NPC. Minor touches of weirdness (the "Reefer Madness" reference warranted a chuckle) add to matters. Here's the list of actions. (Spoiler) Before looking at the list below, try looking at the director's notepad. It'll have a suggested move on it, which may help you find more. You can look at what you've done so far with SCORE. Be aware that repeating some commands will have amusing results, so it's worth trying things multiple times. (Spoiler) --open window, wait, ask for a hint, call 911 --look at the camera, examine the magnet, read the magazine, examine backpack --request the score, tell a joke, undo, swear --make a body noise, examine the boom mike, grab chest, live --open the fridge, any unparseable command, jump, restart --examine the brochure, quit, eat apple, coughing --go in any direction, cry, sleep, lick the ashtray --think, pray, sing, eat chocolate --eat the paper, phone LifeCall, wake up, take or drink the beer --smoke, say XYZZY, examine your wallet, take the gum --die --press enter (as in, literally press the enter key on your keyboard, with no text. Heh.)
A humorous game collecting various fake deaths, February 16, 2016 by MathBrush
In this game, you are part of an anti-smoking ad, and you are supposed to die dramatically. But you keep getting distracted by everything around you. I enjoyed the variety of deaths and non-deaths, as well as having a main antagonist whom you have a sort of relationship with, rather than a romantic interest.
- Shadow Fox (Texas), April 17, 2013
- Emily Short, April 8, 2013
- Mr. Patient (Saint Paul, Minn.), April 1, 2013
- Molly (USA), September 23, 2012
Catchy and funny one-move game about acting, August 14, 2012 by Wade Clarke (Sydney, Australia) Related reviews: Inform, comedy
I hadn't played a one-move game before I played I'm Having a Heart Attack, and it turned out to be an excellent introduction to this mini-genre. The game puts you in the shoes of an actor starring in some kind of pro-health commercial, one in which your character might be about to have a heart attack in the wake of a poorly lived life. The director's nearby, the camera's rolling and there are a few domestic props and food items within reach. With your single move, you determine how the scene will be performed. The viability of any one performance is determined by the director, whose enthusiastic interpretations of your actions are highly amusing. Each viable performance scores you another point out of a possible 41, plus there are an unknown number of bonus points up for grabs for trying out more meta or 'guess the verb' type actions.
The scene loops, which makes a lot of sense in the context, giving you the opportunity to stumble around the set, fiddling with the props in a creative manner and trying on gratuitous emotions as the director eggs you on. The game is addictive and progress tends to come in waves. One successful action will often cause a rash of similar actions to pop into your head. The director's feedback is also helpful. The more of it that you read, the more you may connect with the game's mindset and work out what other angles might lead to performances. I scored more than half the available known points plus a bunch of unknown points in my first session with the game, and I intend to revisit it to try to find more.
OPEN e v+ a at The Hunt Museum
The Hunt Museum, Rutland Street, Limerick
Of the 36 artists selected by the curators Angelika Nollert & Yilmaz Dziewior, 4 artists exhibit their work in the Hunt Museum, with work including photography, external installation, text, audio and an artist book.
Nevin ALADAG (1972, Turkey/Germany) Curtain House, Limerick 9 Curtains Hung on Hunt Museum Exterior, 2009 By installing her curtains on the outside of some windows of the Hunt Museum, Nevin Aladag creates several contradictions: she uses a device mainly found in a private setting for a public space, and she places externally something which is supposed to be hung inside a room. The Hunt Museum collection was initially a personal and private one, and the suggestion of the domestic context in a public space is therefore appropriate. By installing the curtains, which are slightly oversized, Nevin Aladag creates a surreal setting which negotiates aspects of exposing and hiding.
Daniel KNORR (1968, Romania/Germany) Leabhar Ealaíontóir Artist Book, edition of 200, DVD Documentation, 2009 Daniel Knorr spent a number of weeks on residency in Limerick, working on the production of his artist's book, Leabhar Ealaíontóir. The book was published in an edition of 200 unique copies, signed and numbered by the artist. Inserted and pressed inside the pages of the book, each copy contains different pieces of garbage found and collected by the artist in public spaces. Knorr also visited Belfast and Dublin to collect material. The book contains a DVD with a 32-minute film documenting the whole production process, showing the different stages and the artist's involvement. His contribution for ev+a 2009 is part of a bigger publication project that has already included other versions in Romania and China.
Alan PHELAN (1968, Ireland) Irish Guards Enlarged Newspaper Prints, 2006 This image is from The Daily Telegraph, 12 June 2006, accompanying an article written by UK Attorney General Lord Goldsmith titled "Where there is a credible allegation of serious wrongdoing, the rule of law must apply". In the article he defends the military justice system that has prosecuted British troops fighting in Iraq. The caption for the image reads "Guardsmen Joseph McCleary, left and Martin McGing after they were found not guilty of manslaughter of a 15-year-old Iraqi".
Jochen SCHMITH (1978, 1975, 1976, Germany) There was a time Audio, 2009 In the audio work There was a time, a voice guides the listener through fictional interiors placed in luxury housing estates. Besides detailed descriptions of the ambience, the outside remains present yet distant. The basis for the descriptions was advertising for luxury properties in Limerick and other places around the world. The Right to be Lazy Text cut in grass behind Hunt Museum, 2009 A citation from the French socialist Paul Lafargue's publication of the same title (1880) was cut into the grass of the public green space behind the Hunt Museum, previously the Custom House of Limerick. In his book Lafargue argues against the theories of Marx and Engels regarding the influence of capital and champions idleness over productivity. The letters will be trimmed during the exhibition while the surrounding grass grows further. The arrangement of the letters suggests the labyrinths of historic pleasure gardens. The green space behind the Hunt Museum reflects the idea of public space.
CHICAGO (AP) — A federal judge has thrown out a jury verdict in favor of a woman who alleged an off-duty Chicago police officer attacked her during a road-rage incident. In her ruling Thursday, U.S. District Judge Sara Ellis accused the plaintiff's attorney, Dana Kurtz, of engaging in a "pervasive" pattern of misconduct at trial, including repeated misrepresentations to the court. The ruling means Nicole Tomaskovic will not receive $260,000 in damages for the July 2007 incident involving now-retired Officer William Szura. The Chicago Tribune (http://trib.in/2cROCE3) reports the judge also ended litigation over whether the city had a pattern and practice of covering up bad policing. Lawyers for the city asked the judge to take the action against Kurtz, who couldn't be reached for comment Friday. Tomaskovic's new attorney also was unavailable.
Q: Python code working for one file but not for an identically formatted longer file?

So I'm currently trying to get a function to work that reads a csv file and returns its information as a list of dictionaries. The file it is reading is formatted like this:

    3070,01:44:03,Aaron,Glue,Finished
    480,02:06:47,Aaron,Collins,Finished
    2228,01:42:06,Abigail,Swales,Finished
    1519,01:24:11,Adam,Mcarthur,Finished

...and so on. My code works fine, here it is:

    def readFile(filename):
        file = open(filename, 'r')
        data = file.read()
        a = data.split()
        dataLists = []
        for term in a:
            termList = term.split(',')
            dataLists.append(termList)
        results = []
        for list in dataLists:
            competitorInfo = {'id': list[0], 'time': list[1],
                              'firstname': list[2], 'lastname': list[3]}
            results.append(competitorInfo)
        return results

Now I have two csv files, one called 'marathon.csv' and one called 'marathon10.csv'. marathon10.csv is the exact same as marathon.csv, but contains only 10 lines of information (easier for testing). In comparison, marathon.csv contains 2738 lines of information. Whilst the program returns the expected output when run on 'marathon10.csv', when run on 'marathon.csv' I receive an error (screenshot posted on imgur); in short, it says IndexError: list index out of range, raised on the line beginning 'competitorInfo'. I'll be honest, I'm not sure where to go with this. I've emailed a few people who have told me it's probably to do with the sheer size of marathon.csv, but is that really it? I need this to work, and would like a proper explanation as to why I'm specifically getting a list index error if that is possible. I understand some people might want to actually see marathon.csv, but I'm not sure how to share that. I assure you it is the exact same as marathon10.csv, with 2738 lines instead; each line is formatted the same way with 4 commas, so I don't understand how I'm getting the error I'm receiving. Thanks so much to anyone that can help :)

A: I assure you it is the exact same as marathon10.csv, with 2738 lines instead

No, I assure you that it is not. There is most likely a missing comma in one of the lines. 2738 lines is not much. You can try something like this instead and then check your output for None:

    def splitn(s, n):
        return (s.split(',') + [None] * n)[:n]

    def readFile(filename):
        file = open(filename, 'r')
        data = file.read()
        a = data.split()
        dataLists = []
        for term in a:
            termList = splitn(term, 5)
            dataLists.append(termList)
        results = []
        for dataList in dataLists:
            competitorInfo = {'id': dataList[0], 'time': dataList[1],
                              'firstname': dataList[2], 'lastname': dataList[3]}
            results.append(competitorInfo)
        return results

Now, if there are fewer than 5 items in term, you will get the list padded with None.
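A quick way to confirm that diagnosis is to scan the file for lines that don't split into the expected number of fields. The sketch below is hypothetical (it is not from the original thread); the filename and the count of five fields are taken from the question.

    # Hypothetical diagnostic sketch: report every line of the file that does
    # not split into exactly five comma-separated fields.
    def find_bad_lines(filename, expected=5):
        with open(filename, 'r') as f:
            for lineno, line in enumerate(f, start=1):
                fields = line.rstrip('\n').split(',')
                if len(fields) != expected:
                    print('line %d has %d fields: %r' % (lineno, len(fields), line))

    find_bad_lines('marathon.csv')

For anything beyond a one-off check, the standard library's csv module (csv.reader) is a sturdier choice than a bare str.split(','), since it handles quoting and embedded commas.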
Q: Tricky computations in graph theory proof

Let $0 < p < 1$ be a constant, and set $b = 1/p$. Let $0 < \epsilon < 1/2$. Given a natural number $r \ge 2$, let $n_r$ be the maximal natural number for which
$$\binom{n_r}{r} p^{\binom{r}{2}} \le r^{-(1+\epsilon)}.$$
Also, let $n_{r}^{\prime}$ be the minimal natural number for which
$$\binom{n_{r}^{\prime}}{r} p^{\binom{r}{2}} \ge r^{1+\epsilon}.$$
It is clear that $n_r < n_{r}^{\prime}$. After some computations, one gets that
$$ n_r = \frac{r}{e} b^{(r-1)/2} + o(rb^{r/2})$$
and similarly
$$ n_{r}^{\prime} = \frac{r}{e} b^{(r-1)/2} + o(rb^{r/2}).$$
My question is, how do we get the following:
$$ n_{r}^{\prime}-n_r < \frac{5 \log{r}}{2r} n_r\,?$$
Equivalently, we want
$$ \frac{n_{r}^{\prime}}{n_r} < 1 + \log{r^{5/(2r)}}.$$
As $n_r$ is maximal, it follows that
$$\binom{n_r+1}{r} p^{\binom{r}{2}} > r^{-(1+\epsilon)}.$$
As well, as $n_{r}^{\prime}$ is minimal, we see that
$$\binom{n_{r}^{\prime} - 1}{r} p^{\binom{r}{2}} < r^{1+\epsilon}.$$
Then, the quotient of these inequalities, together with the bounds $\binom{m}{r} \ge (m/r)^r$ and $\binom{m}{r} \le m^r/r!$, gives
$$ \frac{(n_r^{\prime}-1)^r/r^r}{(n_r+1)^r/r!} \le \frac{\binom{n_{r}^{\prime} - 1}{r}}{\binom{n_r+1}{r}} < r^{2(1+\epsilon)} \le r^{3}.$$
So, by Stirling's approximation,
$$ \left(\frac{n_r^{\prime}-1}{n_r+1}\right)^r < \frac{r^r}{r!}\, r^{3} \sim \frac{e^r r^{5/2}}{\sqrt{2 \pi}} < e^r r^{5/2}. $$
Then $\frac{n_r^{\prime}-1}{n_r+1} < e\, r^{5/(2r)}$; now taking logarithms,
$$ \log{\left(\frac{n_r^{\prime}-1}{n_r+1}\right)} < 1 + \log{r^{5/(2r)}}.$$
The RHS is the one we desire; however, I don't know how to fix the LHS to get what we need: $\frac{n_r^{\prime}}{n_r}$. Thanks for your help!
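Since $n_r$ and $n_r'$ are defined only implicitly, a numeric look can make the quantities concrete. Below is an illustrative brute-force sketch, not part of the original question, with $p = 1/2$ and $\epsilon = 1/4$ chosen arbitrarily within the stated ranges; the last two columns compare $(n_r' - n_r)/n_r$ with $\frac{5\log r}{2r}$. Keep in mind the target inequality is asymptotic, so small $r$ need not satisfy it.

    from math import comb, log

    # Brute-force computation of n_r and n'_r straight from the definitions.
    # Illustrative only; p and eps are arbitrary choices within 0<p<1, 0<eps<1/2.
    # Assumes f(r) <= r^-(1+eps), which holds for the values of r used below.
    def thresholds(r, p=0.5, eps=0.25):
        def f(n):
            return comb(n, r) * p ** comb(r, 2)
        n = r
        while f(n + 1) <= r ** -(1 + eps):   # n_r: maximal n with f(n) <= r^-(1+eps)
            n += 1
        n_r = n
        while f(n) < r ** (1 + eps):         # n'_r: minimal n with f(n) >= r^(1+eps)
            n += 1
        return n_r, n

    for r in (5, 10, 15, 20):
        lo, hi = thresholds(r)
        print(r, lo, hi, round((hi - lo) / lo, 3), round(5 * log(r) / (2 * r), 3))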
This has been an insane summer. I've never had more work. I can't wait to share the illustrations when they're published. I've also done countless new pieces for my solo show.

We've been gearing up for our move to Manhattan from our lovely little Brooklyn brownstone apt too. It will be a big change, but the new space will be a great one to live and work in (another thing I can't wait to share with you). I took it upon myself to paint the whole apt. It seemed like a good idea but took a loooooonnng time. It's a big space for a girl to paint.

Then we lost my dad. My mind was swimming and I felt like crawling into bed and staying there, but instead I did everything else, and you know what? It helped keep me focused. My dad would want us to carry on and find joy in life.

Yesterday was the last day of painting. When I finished I pulled all the tape from the walls and paper from the floor. It was very rewarding to see how nice the colors looked in the space. I'm obsessed with color, as you may or may not know as a Maquette reader. I have not taken much time to relax, so I decided to get a manicure on my way back to Brooklyn. It felt really nice after that big job and I love the vibrant color on my hands. It gives me a little burst of joy when I see my hands typing, illustrating, washing dishes or picking up my son. Do you have a favorite nail polish this summer?

• Have you seen this confetti/CMYK manicure on Design Crush?

Sounds like you've been through a lot this summer, good for you for taking a little time & getting pampered! You deserve it!

Thanks Elizabeth… it really does feel like a treat to have this bright color on my hands!

Jeanine, I can't believe we're nail twins! I love the color… did you do your toes too?

Love the color! I thought you emailed to say you weren't using the post so I recycled it on tys. Glad you are busy with work.

I'm so sorry for your loss, Samantha. Hang in there.
Thanks for taking the time to visit our JustGiving page. The Tesco Team this year are braving the Lycra to raise money for a great cause, as well as remembering a Unilever colleague who is dearly missed. Mel Jaggard was a true inspiration and has left behind a legacy of never giving up. On Friday the 15th of May we will join our Unilever colleagues and start our journey from Surrey to Gilwern in Wales, which was Mel's hometown. We are riding as part of an amazing team that raised over £58k last year. Our goal this year is to exceed £75k (£1000 each), so we really do need all your help and generosity. David, Lynsey, Julia, Jo, Neil, Chris, Jack, Andrew, Hugo and Katy!!!

The Get A-Head Charitable Trust is dedicated to fighting head and neck cancer and other diseases by raising awareness, education, medical research, the purchase of vital medical equipment that the NHS cannot afford, and the provision of free complementary therapies such as acupuncture, reflexology, Reiki and Chinese massage.