TechNY Daily
TechNY Daily Keeping our 50,000 NYC Tech Industry Readers Connected and Informed. __________________ 1. NYC’s Public.com, a Robinhood competitor, has raised $220 million in a Series D financing. Investors in the round, which valued the company at $1.2 billion, were Accel, NYC’s Greycroft, Lakestar, Intuition Capital, NYC’s Tiger Global, The Chainsmokers, Mantis VC, Will Smith’s Dreamers VC, Inspired Capital and Aglae Ventures. Public.com provides the one million members of its digital investing community with fee-free stock trading. (www.public.com) (TechCrunch) 2. NYC’s Datadog has acquired Brooklyn’s Timber, a data observability platform. The financial terms of the deal were not disclosed; Timber had raised a total of $5.8 million from investors including NextView Ventures. Datadog has also acquired San Francisco-based Sqreen, a cybersecurity company. The financial terms of that deal were also not disclosed. Datadog’s stock is part of the TechNY15 stock index. (www.datadoghq.com) (www.timber.io) (www.sqreen.com) (PR Newswire) 3. Brooklyn’s Maisonette, an online marketplace for children’s products, has raised $30 million in a Series B funding round. G Squared led the round and was joined by NEA and NYC’s Thrive Capital. The company aims to be a one-stop curated shop for everything a family might need for their young children. (www.maisonette.com) (TechCrunch) 4. NYC’s Realm, a platform that helps homeowners maximize the value of their home, has raised $3 million in a seed funding round. Investors included NYC’s Primary Venture Partners, NYC’s Lerer Hippeau, and Liberty Mutual Strategic Venture Partners. Realm’s platform acts as a hub of insights and advice for homeowners, covering home renovations, project financing and property potential. (www.myrealm.co) (PR Newswire) 5. NYC’s MutualMarkets, an AI platform for brand marketing, has raised $3 million in a seed funding round. 
NYC’s Greycroft and Bessemer Venture Partners led the round and were joined by Peak Opportunity Partners and several angel investors. (www.mutualmarkets.net) (BusinessWire) _______________________________________________________________ Small Planet partners with the world’s most innovative companies to create inspired apps and websites. Development, UX design, monetization strategies, user acquisition, and more. Contact us. (Sponsored Content) _______________________________________________________________ 6. NYC’s Branded, an acquirer of Amazon sellers and brands, has raised $150 million in a venture funding round. Investors included Vine Ventures, Lurra Capital, Kima Venture, Regah Ventures, NYC’s Tiger Global Management, Declaration Partners, Kreos Capital and Zynga founder Mark Pincus. Since its inception in the second half of 2020, Branded has acquired 20 Amazon marketplace brands generating $150 million in gross revenue. Branded is one of the many French startups in NYC. (www.joinbranded.com) (PR Newswire) 7. NYC’s RapidSOS, an emergency technology start-up, has raised $85 million in a Series C funding round. The investors were NYC’s Insight Partners and Global Venture Capital. The company’s products increase the funnel of information transmitted to emergency services alongside a 911 call; its data assisted in addressing 150 million emergencies in 2020. (www.rapidsos.com) (TechCrunch) 8. NYC’s Win Brands Group, an acquirer of e-commerce brands, has raised $50 million in a venture funding round. Investors included Assembled Brands and Oaktree Capital Management. The company is focused on acquiring online brands selling through the Shopify platform. (www.winbg.com) (PR Newswire) 9. NYC’s Carefull, a platform that helps manage and monitor the finances of aging relatives, has raised $3.2 million in a seed funding round. 
NextView Ventures and Bessemer Venture Partners led the round, which also included angel investors like Adam Nash (former CEO of Wealthfront). Carefull watches bank and credit card accounts to catch suspicious transactions. (www.getcarefull.com) (PR Newswire) 10. NYC’s Insight Partners has released its “The ScaleUp Revolution: A Force Multiplier of Economic Growth” report. A “ScaleUp” is defined as a startup that has reached $10 million in revenue by its fifth year. The report addresses strategies for success and the role of investment partners. The report can be read here. (www.insightpartners.com) We have special sale pricing on TechNY Daily sponsorship and advertising opportunities. For information, contact: [email protected] ____________________________________________ TechNY Recruit Jobs Job Postings are on sale. Contact us at [email protected] BrainStation (new) BrainStation is the global leader in digital skills training and workforce transformation, offering hands-on and outcomes-based courses in subjects such as Web Development, User Experience Design, Data Science and Digital Marketing. Educator, UX Design Educator, Data Science Educator, Web Development Business Development Representative Junior Sales Coordinator Coordinator, Customer Experience Coordinator, Talent Acquisition Manager, Community TripleLift (new) TripleLift is a technology company rooted at the intersection of creative and media. We are an advertising platform where creative fits seamlessly into every experience across desktop, mobile and video. We help make advertising work for everyone. Advertising is important. We didn’t invent advertising, but we are making advertising better. Designer Senior Software Engineer Product Manager — OTT Monetization Platform Capitolis (new) Capitolis is the leading SaaS platform that drives financial resource optimization for capital markets. 
More than 50 financial institutions, as well as many hedge funds and asset managers, leverage Capitolis’ technology to bring the best services to market and achieve high levels of return, while using the most appropriate amounts of their financial resources. Backed by world class investors, including Index Ventures, Sequoia Capital, Spark Capital, SVB Capital and S Capital, Capitolis is growing rapidly in our offices in New York, London and Tel Aviv. Director of Client Services Director of Financial Planning & Analysis Director of Demand Generation (NYC) Adoption Manager (NYC or London) FX Operations Analyst (NYC or London) Director of Product Management (NYC) Mantra Health Mantra Health is a digital mental health company on a mission to improve access to quality mental healthcare for university students throughout the United States through the marriage of evidence-based therapy and psychiatry with software, data, and design. Product Designer Business Development Manager Lukka We are a SaaS solution that makes crypto accounting easy. We are a trusted, blockchain-native technology team that is passionate about digital asset technology. Our team is continuously collaborating and designing new products and initiatives to expand our market presence. Technology and customers are at the center of our universe. We dedicate our energy to learning, building, adapting, and achieving impactful results. Senior Front End Engineer Senior Software Engineer Software Test Engineer Account Executive Circle Circle was founded on the belief that blockchains and digital currency will rewire the global economic system, creating a fundamentally more open, inclusive, efficient and integrated world economy. Senior Software Engineer, Frontend Manager, Software Engineering Agilis Chemicals Transforming chemical industry with modern commerce technology Business Development Manager — Enterprise SaaS Marketing Director — Enterprise SaaS Logikcull.com Our mission: To democratize Discovery. 
Enterprise Account Executive The Dipp A personalized subscription site for pop culture’s biggest fans. Director of Engineering Vestwell Retirement made easy. Senior Fullstack Engineer Hyperscience Hyperscience is the automation company that enables data to flow within and between the world’s leading firms in financial services, insurance, healthcare and government markets. Founded in 2014 and headquartered in New York City with offices in Sofia, Bulgaria and London, UK, we’ve raised more than $50 million to date and are growing quickly. We welcome anyone who believes in big ideas and demonstrates a willingness to learn, and we’re looking for exceptional talent to join our team and make a difference in our organization and for our customers. Machine Learning Engineer Braavo Braavo provides on demand funding for mobile apps and games. We offer a flexible and affordable funding alternative to the traditional sources of capital like working with a VC or bank. We’re changing the way mobile entrepreneurs finance and grow their app businesses. Our predictive technology delivers on demand, performance-based funding, without dilution or personal guarantees. By providing non-dilutive, yet scalable alternatives to equity, we’re helping founders retain control of their companies. Business Development Manager VP of Marketing Yogi At Yogi, we help companies decipher customer feedback, from ratings and reviews to surveys and support requests. Companies are inundated with feedback, but when it comes to turning this data into actionable business decisions, most companies fall short. That’s where Yogi fits in. Full Stack Software Engineer Upper90 Upper90 is an alternative credit manager based in New York City that has deployed over $500m within 18 months of inception. Investor Relations Analyst Upscored UpScored is the only career site that uses data science to connect you with jobs suited specifically to you while automatically learning your career interests. 
Its AI-powered platform decreases job search time by 90%, showing you the jobs you’re most likely to get (and want) in less than 2 minutes. Data Engineer Senior Frontend Developer Senior Backend Developer Frame.io Frame.io is a video review and collaboration platform designed to unify media assets and creative conversations in a user-friendly environment. Headquartered in New York City, Frame.io was developed by filmmakers, VFX artists and post production executives. Today, we support nearly 1 million media professionals at enterprises including Netflix, Buzzfeed, Turner, NASA & Vice Media. Frontend Engineering Manager Sr. Swift Engineer Lead Product Designer Attentive Attentive is a personalized text messaging platform built for innovative e-commerce and retail brands. We raised a $230M Series D in September 2020 and are backed by Sequoia, Bain Capital Ventures, Coatue, and other top investors. Attentive was named #8 on LinkedIn’s 2020 Top Startups list, and has been selected by Forbes as one of America’s Best Startup Employers. Sales Development Representative Senior Client Strategy Manager Director of Client Strategy KeyMe NYC startup revolutionizing the locksmith industry with innovative robotics and mobile technology. Inbound Phone Sales Representative Systems Software Engineer Button Button’s mission is to build a better way to do business in mobile. Enterprise Sales Director — New York Postlight Postlight is building an extraordinary team that loves to make great digital products — come join us! Full Stack Engineer Deliver your job listings directly to 50,000 members of the NYC tech community at an amazingly low cost. 
Find out how: [email protected] ____________ NYC Tech Industry Virtual Event Calendar March 11 NY Enterprise Tech Meetup with Jeff Lawson Co-Founder & CEO of Twilio (2pm — 3pm) March 11 Adobe x Rise Creative Jam LIVE: Sparking your Creativity (celebrating International Women’s Day) Hosted by Rise NY x Adobe Contact Us for Free Listing of Your Web-based Events Send us your events to list (it’s Free!) to: [email protected] Did You Miss Anything Important? Read Our TechNY Daily Past Editions TechNY Daily is distributed three times a week to 50,000 members of NYC’s tech and digital media industry. Connecting the New York Tech Industry Social Media • Mobile • Digital Media • Big Data • AdTech • App Development • e-Commerce • Games • Analytics • FinTech • Web • Software • UX • Video • Digital Advertising • Content • SaaS • Open Source • Cloud Computing • AI • Web Design • Business Intelligence • Enterprise Software • EduTech • FashionTech • Incubators • Accelerators • Co-Working • TravelTech • Real Estate Tech Forward the TechNY Daily to a friend Not a Subscriber to TechNY Daily, Click Here Copyright © 2021 TechNY, All rights reserved.
https://medium.com/@smallplanetapps/techny-daily-958a2ec62ccb
['Small Planet']
2021-02-22 20:34:03.725000+00:00
['Technews', 'Venture Capital', 'Funding', 'Technology News', 'Startup']
Iomob selected for the Arcadis City of 2030 Accelerator, Powered by Techstars
Iomob is excited to announce that we’ve been selected for the Arcadis City of 2030 Accelerator, powered by Techstars. Hundreds of SmartCity technology startups applied, and 10 were chosen. Iomob is now on track to roll out pilots in at least three places in 2019 — Pittsburgh, Spain, and Amsterdam, with ongoing talks in Singapore and other locations. We feel honoured to have gained the support of one of the biggest, most reputable startup accelerators in the world alongside Arcadis, the leading global Design & Consultancy organization for natural and built assets. That’s us. :D “As our first program in Amsterdam, the Arcadis City of 2030 Accelerator, Powered by Techstars has selected ten companies pioneering technologies that will revolutionize how we live, work and travel in cities now and in the decades to come. Innovative solutions that will transform the natural and built environment and improve the quality of life,” stated Techstars managing director David Mendez. Arcadis, Techstars’ corporate partner in the accelerator, launched the Digital Innovation Hub in central Amsterdam as part of its broader digital transformation program, with the accelerator as an initial anchor for the Hub. Iomob will have team members resident in Amsterdam for three months, from March to May, concluding with Demo Day on the 28th of May. Iomob will use this opportunity to develop our funding networks, connect with other startups who complement our platform, and assist the smaller teams. The theme “City of 2030” relates to the strategic vision recently developed by Arcadis and is based on the fact that in the next decade the vast majority of people will live their lives in cities. According to Techstars, the smart city industry is ripe for disruption. McKinsey research projects the smart city industry to be a $400 billion market by the end of this decade with more than 600 cities worldwide. These connected cities are expected to generate 60 percent of the world’s GDP by 2025. 
In the Accelerator, Arcadis aims to contribute extensive subject matter expertise through mentorship and the involvement of its broader ecosystem of clients and partners. “Amid a growing global trend towards urbanization, cities are increasingly under pressure. Fewer resources, rising citizen expectations, and increased social, technological, and environmental stress are some of the challenges. Sustainable development and ensuring future resiliency will require innovation and leveraging technology to solve increasingly critical urban problems.” — Patrick van Hoof, the Global Digital Innovation Director at Arcadis. In light of the huge potential this accelerator offers for Iomob, Boyd Cohen, Ph.D., CEO of Iomob, was posed the following questions: 1. What are the implications of being part of this accelerator program? Not only when it comes to funding, but also the opportunities provided (e.g. access to resources, investors, etc…) “Techstars is arguably the most widely recognised and important technology accelerator network in the world. From the past alumni I have spoken with, I believe one of the biggest benefits to participating is actually the connections to the other startups in our cohort and to the vast and impressive global alumni network. Similarly, as you suggest, Techstars alumni have a tendency to do better than most startups when it comes to fundraising, which is always important. Finally, access to targeted mentors will be very useful, as will working with some of the managers of the accelerator, such as managing director David Mendez.” 2. What role does Arcadis specifically play in this program? (Aside from the selection process) “Arcadis’ role in the accelerator was honestly one of the main reasons we decided to apply and accept the invitation to join Techstars in Amsterdam. 
Arcadis is a very respected, global player in the smart cities and mobility arena with many relevant projects occurring around the world including a MaaS experiment in Amsterdam among others! They also produce an annual Sustainable Cities Mobility Index.” Our new office view. You don’t have to be a mobility specialist to know that the Dutch have unique relationships with sustainable transport. Amsterdam boasts an impressive 881,000 bicycles, with an astounding 2 million kilometres of cycling distance achieved each day by its residents! Naturally, we feel that Iomob’s protocol would be a good fit for the cycle-mad nation, which led to the question: 3. How do you feel about implementing a part of the Iomob protocol in Amsterdam? “Amsterdam is a city recognised on the global stage for pioneering more sustainable forms of mobility and for having many more bikes than cars (actually they have more bikes than people too!). Years ago I met one of the founders of the world’s first (and failed) bike-sharing experiment in Amsterdam and then later they were the first to experiment with electric vehicle car sharing too. It is also a market experiencing fragmentation with the introduction of several new services so it is a great place for Iomob to experiment.” Which was then followed by: 4. Given Amsterdam’s infrastructure and their outlook on emerging, innovative and new technologies, how successful do you see Iomob being? “We are keen to identify opportunities for collaboration with the city government, the city’s vibrant sharing economy, the growing number of mobility players and also perhaps engage with the AMS Institute as well.” We’re excited to be part of this program with the other innovative startups, and look forward to developing the opportunities and relationships it will create! About us: Iomob is working to decentralize and build the Internet of Mobility, by incentivizing and facilitating the use of alternative transport. 
By using the blockchain, Iomob plans to minimize fees and allow mobility providers and end-users alike to connect on a peer-to-peer basis. In their own words: Iomob is “a system which produces a useful output at the lowest possible marginal cost.” Learn more about Iomob by connecting with us:
https://medium.com/@iomob/iomob-selected-for-the-arcadis-city-of-2030-accelerator-powered-by-techstars-320f5c7cd4d0
[]
2019-03-17 05:51:40.044000+00:00
['Startup', 'Mobility', 'News', 'Amsterdam', 'Technology']
Rethinking Unit Test Assertions
Well-written automated tests act as a good bug report when they fail, but few developers take the time to think about what information a good bug report needs. There are 5 questions every unit test must answer. I’ve described them in detail before, so we’ll just skim them this time:

What is the unit under test (module, function, class, whatever)?
What should it do? (Prose description)
What was the actual output?
What was the expected output?
How do you reproduce the failure?

A lot of test frameworks allow you to ignore one or more of these questions, and that leads to bug reports that aren’t very useful. Let’s take a look at this example using a fictional testing framework that supplies the common pass() and fail() assertions:

describe('addEntity()', async ({ pass, fail }) => {
  const myEntity = { id: 'baz', foo: 'bar' };
  try {
    const response = await addEntity(myEntity);
    const storedEntity = await getEntity(response.id);
    pass('should add the new entity');
  } catch (error) {
    fail('failed to add and read entity', { myEntity, error });
  }
});

We’re on the right track here, but we’re missing some information. Let’s try to answer the 5 questions using the data available in this test:

What is the unit under test? addEntity()
What should it do? 'should add the new entity'
What was the actual output? Oops. We don’t know. We didn’t supply this data to the testing framework.
What was the expected output? Again, we don’t know. We’re not testing a return value here. Instead, we’re assuming that if it doesn’t throw, everything worked as expected — but what if it didn’t? We should be testing the resulting value if the function returns a value or a resolving promise.
How do you reproduce the failure? We can see this a little bit in the test setup, but we could be more explicit about it. For example, it would be nice to have a prose description of the input you’re feeding in, to give a better understanding of the intent of the test case. 
I’d score this 2.5 out of 5. Fail. This test is not doing its job. It is clearly not answering the 5 questions every unit test must answer. The problem with most test frameworks is that they’re so busy making it easy for you to take shortcuts with their “convenient” assertions that they forget that the biggest value of a test is realized when the test fails. At the failure stage, the convenience of writing the test matters a lot less than how easy it is to figure out what went wrong when we read the test. In “5 Questions Every Unit Test Must Answer”, I wrote: “equal() is my favorite assertion. If the only available assertion in every test suite was equal(), almost every test suite in the world would be better for it.” In the years since I wrote that, I doubled down on that belief. While testing frameworks got busy adding even more “convenient” assertions, I wrote a thin wrapper around Tape that only exposed a deep equality assertion. In other words, I took the already minimal Tape library and removed features to make the testing experience better. I called the wrapper library “RITEway” after the RITE Way testing principles. Tests should be:

Readable,
Isolated (for unit tests) or Integrated (for functional and integration tests: tests should be isolated and components/modules should be integrated),
Thorough, and
Explicit.

RITEway forces you to write Readable, Isolated, and Explicit tests, because that’s the only way you can use the API. It also makes it easier to be thorough by making test assertions so simple that you’ll want to write more of them. Here’s the signature for RITEway’s assert():

assert({
  given: Any,
  should: String,
  actual: Any,
  expected: Any
}) => Void

The assertion must be in a describe() block, which takes a label for the unit under test as the first parameter. 
A complete test looks like this:

describe('sum()', async assert => {
  assert({
    given: 'no arguments',
    should: 'return 0',
    actual: sum(),
    expected: 0
  });
});

Which produces the following:

TAP version 13
# sum()
ok 1 Given no arguments: should return 0

Let’s take another look at our 2.5 star test from above and see if we can improve our score:

describe('addEntity()', async assert => {
  const myEntity = { id: 'baz', foo: 'bar' };
  const given = 'an entity';
  const should = 'read the same entity from the api';
  try {
    const response = await addEntity(myEntity);
    const storedEntity = await getEntity(response.id);
    assert({ given, should, actual: storedEntity, expected: myEntity });
  } catch (error) {
    assert({ given, should, actual: error, expected: myEntity });
  }
});

What is the unit under test? addEntity()
What should it do? 'given an entity: should read the same entity from the api'
What was the actual output? { id: 'baz', foo: 'bar' }
What was the expected output? { id: 'baz', foo: 'bar' }
How do you reproduce the failure? Now the instructions to reproduce the test are more explicitly spelled out in the message: the given and should descriptions are supplied.

Nice! Now we’re passing the testing test. Is a Deep Equality Assertion Really Enough? I have been using RITEway on an almost-daily basis across several large production projects for almost a year and a half. It has evolved a little. We’ve made the interface even simpler than it originally was, but I’ve never wanted another assertion in all that time, and our test suites are the simplest, most readable test suites I have ever seen in my entire career. I think it’s time to share this innovation with the rest of the world. If you want to get started with RITEway:

npm install --save-dev riteway

It’s going to change the way you think about testing software. In short: Simple tests are better tests. P.S. 
I’ve been using the term “unit tests” throughout this article, but that’s just because it’s easier to type than “automated software tests” or “unit tests and functional tests and integration tests”, but everything I’ve said about unit tests in this article applies to every automated software test I can think of. I like these tests much better than Cucumber/Gherkin for functional tests, too. Next Steps TDD Day is an online recorded webinar deep dive on test driven development, different kinds of tests and the roles they play, how to write more testable software, and how TDD made me a better developer, and how it can do the same for you. It’s a great master class to help you or your team reach the next level of TDD practice, featuring 5 hours of video content and interactive quizzes to test your memory. More video lessons on test driven development are available for members of EricElliottJS.com. If you’re not a member, sign up today.
https://medium.com/javascript-scene/rethinking-unit-test-assertions-55f59358253f
['Eric Elliott']
2020-05-11 06:51:07.255000+00:00
['Technology', 'JavaScript', 'Tdd', 'Unit Testing']
Decentralization: Where Are We Now?
For example, whether you think it’s a good idea or not, it’s not that hard to imagine a transport system populated entirely by fully automated self-driving vehicles, all communicating with each other and overseen by a variety of navigation and safety protocols. The complexity comes in the messy intermediate stages, where the automated vehicles have to safely coexist with human controlled ones. Established systems don’t disappear overnight, and the early transition phases are often the most precarious. Decentralization comes with the added problem that often it’s the systems and actors we hope to improve or replace which benefit most from new technologies. The very fact that they’re established gives them the resources and the widespread acceptance needed to adapt the new technology for their own ends, while new or radical approaches are looked on with disdain, suspicion or even outright fear. Libertaria has strong philosophical underpinnings, but we also realise the need to be practical. There’s no point dreaming of a future that cannot be reached. We may dream about our own Libertopia, but we have to be realistic and implement Libertaria. So in that spirit, today I want to look to the present and assess the current status of technology when it comes to decentralizing the four pillars and society as a whole. Are we any closer to decentralization, or is the dream further away than ever? Finance Let’s start with finance, as this is where you’d expect to see the most progress. People have dedicated a lot of time and energy to decentralizing finance, and with good reason. The problems with traditional centralized systems are well documented, with near daily stories about data breaches compromising users’ financial information. Finance is also the context in which blockchain, one of the key decentralization technologies, is most commonly talked about. 
If people have heard of blockchain at all it will be in the context of Bitcoin, and the limited (and always inaccurate and misrepresentative) attention that blockchain receives in the media is almost all about Bitcoin and what it means for traditional finance. Online coverage is often more accurate, but the focus is still firmly on finance, with other applications of blockchain seen as niche or a long way off. In online communities, a huge majority of the dialogue revolves around token value. But for all our talk, our best attempts to decentralize finance haven’t fared well. The ASIC race has seen Bitcoin become increasingly centralized, with a handful of mining pools dominating control of the network. Attempts to address this by replacing proof of work with other consensus protocols are promising, but still largely untested. From an individual perspective, it is still incredibly difficult to separate yourself from established financial systems. Cryptocurrencies are mostly bought on platforms that are centralized and require personal information for transactions of any meaningful size. You always have a public entry point, and if that entry point is known, the transparent nature of blockchain ledgers means your transactions may actually be more visible than under traditional systems. Unless you’re lucky enough to be paid in cryptocurrency, it’s almost impossible to acquire them privately. Platforms like LocalBitcoins attempt to address this, but they do not scale and are banned in many countries, paradoxically making it harder for cryptocurrencies to escape from their unfair associations with illegal and immoral activity. As for security, cryptocurrencies may be theoretically more secure than entrusting your data to centralized systems, but practicalities matter. For the vast majority of people, using crypto is still cumbersome and fraught with risk. I don’t want to understate the massive achievements here. 
Bitcoin and other cryptocurrencies are incredible achievements, but there’s still a long way to go before we reach anything close to true decentralization. To address this, Libertaria is developing its own Hydra blockchain protocol, which will be fully integrated with the other parts of our network. By tackling the problem holistically from the outset, we hope to resolve a lot of the problems we’ve seen so far in this area. Communication The problems of centralized communication are getting more widespread attention. Facebook and other centralized communication platforms are losing some of their sheen as people realize how their data is being hoarded and sold. But even though people are growing suspicious of the social media giants, this has not translated into any noticeable migration away from the platforms. With no viable alternative, most people have viewed recent data scandals with a resigned indifference. Encrypted communication (WhatsApp, Telegram) is a noble attempt to restore privacy to users, but the data still travels through different servers outside of the users’ control. There are also persistent rumours that these services are compromised at the behest of state actors. But some projects have made good progress on a truly decentralized solution. Tox Messaging has found a good solution for P2P messaging, but like every other P2P messenger it has battery life issues on smartphones. Users are also bound to this specific application, so you can only receive and send messages from and to Tox applications. Blockstack has a web-based solution for a decentralized internet by providing a special browser with web apps. But the browser is still at an early stage, which presents obstacles for users without good computer knowledge. There is currently no mobile version, although one is promised. But from experience it seems certain Blockstack’s mobile version will suffer the same battery life issues as Tox. Libertaria is ahead of the field here. 
Our Mercury protocol is truly decentralized and the method by which the P2P connection is established minimizes resource use, making it viable for use on mobile phones or other lower power devices. Mercury will also support all kinds of decentralized apps. All in all, then, communication is one pillar in which we’ve made great progress, although the problem of mass adoption is still enormous. Production Although privacy and control issues receive the bulk of attention in decentralization circles, centralized production may actually be the biggest problem we face. The ability for companies to easily create and manage the logistics of global supply chains has transformed every aspect of our lives. For many of us in wealthy countries this change can seem positive: over the past few decades globalization has given us cheap and easy access to food, products and services that would have been unimaginable a few decades ago. But the system is out of control. Millions of people across the world find themselves in an impossible situation, unable to make a living outside of the system, but barely making enough to live if they choose to work within it. Almost every product we buy has someone living on just a few dollars a day hidden somewhere at the start of its supply chain. And once again, the decentralization tools that currently exist are likely to make the problem worse, not better. Smart contracts will help global corporations automate complicated logistics processes and become even more ruthlessly efficient. True, in theory the marketplace is opened up for smaller operations to connect and transact, but in practice only the existing behemoths will have the resources to pay the gas prices to force their contracts through the network. But there are some rays of light to illuminate this bleak picture. More and more people are seeing the value in buying locally, especially when it comes to food. 
More and more research is being done on how to create smaller, sustainable local supply chains to reduce our reliance on global shipping. Going even further, projects like Lifecycle are building self-contained units housing an entire sustainable food ecosystem, which would allow families to feed themselves indefinitely with minimal resource input. There are also major strides being made with energy. Solar panels and turbine technology have improved massively in recent years, making off-grid living a viable option for the first time. There are also several blockchain projects hoping to make community energy possible, allowing communities to share energy locally without relying on inefficient interactions with national grid systems. Finally, projects to tokenize natural resources using blockchain technology will hopefully provide people access to new sources of income while encouraging them to live more sustainably.

Law

Law is the hardest of the four pillars to decentralize, so it's not surprising that there has been limited progress here. There are several projects attempting to use blockchain technology to improve governance, but this is mostly in the form of improvements to existing centralized systems, such as a proliferation of projects aiming to improve vote transparency using the blockchain. In decentralized projects, there is a dispiriting tendency to handwave governance issues, asserting that thousands of years of human development in this field can be encapsulated through proper use of smart contracts. But the real problem here isn't the contracts themselves, it's dispute management, and this is very difficult to do in a decentralized way. Some cryptocurrencies like Dash, PIVX or Decred have governance systems to bring decision-making to the whole community rather than having control rest exclusively with miners (as in Bitcoin). But these are "coin-only" projects, concentrated on the development of the currency.
These are good and necessary, but without decentralized alternatives to trading platforms for goods and services and/or decentralized communication, the benefits are limited.

Reputation and Identity Systems

Here, sadly, the problem is even worse. Countless projects and whitepapers include sentences like "obviously this will require some kind of reputation system" or "for full security, an identity system will be necessary", but there seems to be very little actual work or research being done in these fields. The few instances of blockchain-based identity systems which have got off the ground are large-scale state-funded projects, once again showing that centralized entities will be the main beneficiaries of these technologies in the short run.

So Where Do We Stand?

It would be unfair to call the decentralization project of the past few decades a failure: we've made huge strides, and there are now a variety of amazing technologies which show that decentralized living is not just a naive dream. But it can hardly be called a success, either. You don't have to look very hard to see that my warning above is distressingly accurate: it turns out that almost every technology which helps the decentralization cause can be co-opted to make centralized systems even more powerful and efficient. Far from our being freed, centralization and control of our data are stronger than ever. But it's not all in vain. As we've seen, there are many projects working on individual parts of the decentralization problem. But more than ever, it's clear that we need to consider all four pillars of a decentralized society together, to give the best chance that these technologies aren't co-opted by the very centralized entities they're designed to disrupt. Libertaria is the only project working on the full decentralized stack.
We've looked at the real progress made in all these areas and combined them in a community-focused approach that will give people their first real shot at truly decentralized living, as well as providing major advantages to people who just want to decentralize individual aspects of their lives. If these ideas resonate with you and you want to learn more, join the Libertaria project by signing up to our newsletter, following us on Medium or joining our Discord channel to talk to the Libertaria developers and other like-minded people.
https://medium.com/libertaria/decentralization-where-are-we-now-f44aacb59ee5
['Markus Maiwald']
2017-10-16 20:47:33.296000+00:00
['Technology', 'Freedom', 'Bitcoin', 'Decentralization', 'Blockchain']
2,804
Announcing rLandsat, an R Package for Landsat 8 Data
Landsat is without a doubt one of the best sources of free satellite data today. Managed by NASA and the United States Geological Survey, the Landsat satellites have been capturing multi-spectral imagery for over 40 years. The latest satellite, Landsat 8, orbits the Earth every 16 days and captures more than 700 satellite images per day across 9 spectral bands and 2 thermal bands. Its imagery has been used for everything from finding drought-prone areas and monitoring coastal erosion to analyzing an area's fire probability and setting the best routes for electricity lines. When we first started using Landsat 8 data, we were a bit overwhelmed by the amount of knowledge it took to find and download the images that we wanted. There are different types of data and data products, different APIs to figure out, data requests to be filled, differing data structures… it's all a bit intimidating! To make this data more accessible to everyone in our data team, we built an open-source R package (called rLandsat) to handle every step of finding, requesting, and downloading Landsat 8 data. Now we're excited to release rLandsat to the public to help anyone unlock the mysteries within Landsat 8 data! Check out the rLandsat repository here.

About Landsat 8 data

The Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) images cover 9 spectral bands and 2 thermal bands with a spatial resolution ranging from 15 to 100 meters. USGS gives access to both its raw and processed satellite images. Raw images are available on AWS S3 and Google Cloud Storage, where they can be downloaded immediately. Processed images are available with the EROS Science Processing Architecture (ESPA). Images are also available through a variety of data products, such as SR (Surface Reflectance), TOA (Top of Atmosphere) and BR (Brightness Temperature). Accessing the processed data can be tricky.
There are two different APIs: one by Development Seed for searching (called sat-api) and one by USGS for downloading (called espa-api). Download requests have to include the product ID, row and/or path for the data, then they must be approved by USGS, which can take anywhere from a couple of minutes to a couple of days. To make matters worse, the APIs input and output data with different structures. Here are some additional resources you might want to read: Read about the Landsat Collection (Pre Collection and Collection 1) here. Watch this video to understand the difference between the data on ESPA and that on AWS S3/Google Cloud Storage, and why using ESPA is preferred over AWS' Digital Numbers (DN). Watch how the data is captured here. Read about over 120 applications of Landsat 8 data here.

Overview of rLandsat

rLandsat is an R package that handles every step of finding and getting Landsat 8 data — no Python or API knowledge needed! It makes it easy to search for Landsat 8 product IDs, place an order on USGS-ESPA and download the data, along with the meta information, in the perfect format from R. Internally, it uses a combination of sat-api, espa-api and the AWS S3 Landsat 8 metadata. To run any of the functions starting with espa_, you need valid login credentials from ESPA-LSRD, and you need to input them in your environment with espa_creds(username, password) for the functions to work properly. You should also check the demo script (which downloads all the Landsat 8 data for India for January 2018) in the demo folder, or run demo("india_landsat") in R after loading this library.

What can you do on rLandsat?

landsat_search: Get Landsat 8 product IDs for certain time periods and countries (or define your own path and row). This search uses sat-api (developed by Development Seed; this also gives the download URLs for AWS S3) or the AWS Landsat master meta file, based on your input.

espa_product: For the specified Landsat 8 product IDs, get the products available from ESPA. This uses espa-api.

espa_order: Place an order to get the download links for the specified product IDs and the corresponding products. You can also specify the projection (AEA and Lon/Lat), the resampling method and the file format. This is better than downloading the data from AWS, as this gives data from advanced products (like Surface Reflectance), which is necessary for creating most of the indices.

espa_status: Get the status of the order placed using espa_order. If the status is complete, the download URLs for each tile will also be available.

landsat_download: A small function to download multiple URLs using the download.file function. If each band is being downloaded individually from AWS, this function will create a folder (instead of a zip file) for each tile, grouping the bands.

How to install rLandsat

Note: rLandsat was removed from CRAN as it wasn't functional while the U.S. government was shut down. Until rLandsat is back up on CRAN, please install the dev version from GitHub.
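The functions described above chain together into a short end-to-end workflow in R. The sketch below is illustrative only: the argument names (min_date, max_date, country, and the order/status fields) are assumptions based on the function descriptions and the India demo, so check the package documentation for the exact signatures. The espa_* calls also require valid ESPA-LSRD credentials and network access, so this is not runnable as-is.

```r
# Illustrative rLandsat workflow; argument names are assumptions, see package docs.
library(rLandsat)

# Register ESPA-LSRD credentials for this session (placeholders, not real).
espa_creds("your_username", "your_password")

# 1. Search for product IDs, e.g. all of India for January 2018 (as in the demo).
search_result <- landsat_search(min_date = "2018-01-01",
                                max_date = "2018-01-31",
                                country  = "India")

# 2. Check which ESPA products are available for those IDs.
products <- espa_product(search_result$product_id)

# 3. Place an order for Surface Reflectance ("sr") data on ESPA.
order <- espa_order(search_result$product_id, product = "sr")

# 4. Poll the order status; once complete, download URLs are included.
status <- espa_status(order$order_details$orderid)

# 5. Download the completed tiles to a local folder.
landsat_download(status$order_details$product_dload_url,
                 dest_file = "landsat_data/")
```

Since orders can take minutes to days to be approved, steps 4 and 5 would normally be run later or inside a polling loop rather than immediately after the order is placed.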
If you find a bug, please file an issue with steps to reproduce it on GitHub. Please use the same for any feature requests, enhancements or suggestions.
https://medium.com/humans-of-data/announcing-rlandsat-an-r-package-for-landsat-8-data-ec7e4c095a94
[]
2019-07-01 20:23:27.735000+00:00
['Rstats', 'Satellite Technology', 'Open Source', 'Data Acquisition', 'GIS']
2,805
Two ways to confirm the ending of a String in JavaScript
In this article, I'll explain how to solve freeCodeCamp's "Confirm the Ending" challenge. This involves checking whether a string ends with a specific sequence of characters. These are the two approaches I'll cover: using the substr() method, and using the endsWith() method.

The Algorithm Challenge Description

Check if a string (first argument, str) ends with the given target string (second argument, target). This challenge can be solved with the .endsWith() method, which was introduced in ES2015. But for the purpose of this challenge, we would like you to use one of the JavaScript substring methods instead.

Provided test cases

confirmEnding("Bastian", "n") should return true.
confirmEnding("Connor", "n") should return false.
confirmEnding("Walking on water and developing software from a specification are easy if both are frozen", "specification") should return false.
confirmEnding("He has to give me a new name", "name") should return true.
confirmEnding("Open sesame", "same") should return true.
confirmEnding("Open sesame", "pen") should return false.
confirmEnding("If you want to save our world, you must hurry. We don't know how much longer we can withstand the nothing", "mountain") should return false.
Do not use the built-in method .endsWith() to solve the challenge.

Approach #1: Confirm the Ending of a String With Built-In Functions — with substr()

For this solution, you'll use the String.prototype.substr() method: the substr() method returns the characters in a string beginning at the specified location through the specified number of characters. Why are you using string.substr(-target.length)? Because -target.length is negative, the substr() method will start counting from the end of the string, which is what you want in this code challenge.
You don't want to use string.substr(-1) to get the last character of the string, because if the target is longer than one letter, as in confirmEnding("Open sesame", "same"), comparing only the final character can never match the target. Instead, string.substr(-target.length) returns the last target.length characters of the string; for confirmEnding("Bastian", "n") that is simply 'n'. Then you check whether string.substr(-target.length) equals the target (true or false). You can use a ternary operator as a shortcut for the if statement: (string.substr(-target.length) === target) ? true : false; This can be read as: if (string.substr(-target.length) === target) { return true; } else { return false; } You then return the ternary operator in your function. You can also refactor your code to make it more succinct by just returning the condition, since the comparison itself already evaluates to true or false.
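Putting approach #1 together, here is a minimal sketch of the solution described above (the function name confirmEnding comes from the challenge itself):

```javascript
// Approach #1: compare the last target.length characters with the target.
function confirmEnding(string, target) {
  // substr with a negative start counts back from the end of the string,
  // so this returns the final target.length characters.
  return string.substr(-target.length) === target;
}

console.log(confirmEnding("Bastian", "n"));        // true
console.log(confirmEnding("Connor", "n"));         // false
console.log(confirmEnding("Open sesame", "same")); // true
console.log(confirmEnding("Open sesame", "pen"));  // false
```

Because the === comparison already evaluates to a boolean, returning it directly is equivalent to the longer if/else or ternary forms.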
https://medium.com/free-code-camp/two-ways-to-confirm-the-ending-of-a-string-in-javascript-62b4677034ac
['Sonya Moisset']
2017-02-16 09:30:11.355000+00:00
['Technology', 'JavaScript', 'Learning', 'Algorithms', 'Programming']
2,806
Facebook security
I was experimenting with a little-known feature in Facebook, "Download Your Information", which will actually supposedly give you a copy of everything that is 'yours' on Facebook. The definition of what is 'yours' is fairly tricky of course: is what you posted on someone else's wall 'yours' or 'theirs'? And so on. But what interested me was how they made double and triple sure that it was in fact me who was downloading my information. I had to supply my own password again: ok, that makes sense. But then for extra extra security I was shown a bunch of wall photos of people who are my 'friends' and asked to identify them from a multiple-choice set of friends. This is harder than you think: not every friend is such a good friend. And not all the wall photos are recognizable. They might be childhood photos, or out-of-focus group shots at a party, or whatever. But it is really quite a smart way to make sure that the downloaded content does not fall into the wrong hands.
https://medium.com/pito-s-blog/facebook-security-979ea145ea84
['Pito Salas']
2017-06-08 19:21:26.771000+00:00
['Security', 'Facebook', 'Technology', 'Life']
2,807
Hanging by a (Narrative) Thread: Friendship in Post-Truth ‘Murica
The dirty secret of human life on this planet is that no one really knows anything. Sure, we have plenty of scientists, philosophers, politicians, and executives who will scream that they do, to anyone who will listen (and sell you a book while they're at it). But at the end of the day, even the Scientific Method and the mathematics that underpin most of our understanding of our universe and world are based on models, which are quite different from facts. These are tools we have invented to help us understand our broader world, and while our collective knowledge has certainly held true thus far, that doesn't mean it always will. This isn't an ontological question; more of an existential one. The more we learn, particularly in fields such as Physics and Psychology, the more reason we have to believe that we construct our perception of the world; it is not there inherently. As human beings, we are creatures of meaning. We write songs, conduct research, go looking for answers in endless space filled with billions of galaxies. It's at the very core of what we are. We are, surely, desperate for answers. Filter bubbles provide this meaning for a great many people today. Whether you value the latest political goings-on, or Cardi B's most recent twerking video, or the latest technological developments, you will inevitably run up against the classic human problem: how are you spending your limited time? If you spend the majority of your time in a given community, that will become what you know, to the degree that members of most online subcultures share largely predictable traits. If you're into the Silicon Valley tech scene, I can guess fairly accurately what you're into. Ditto if you're a Republican. Or an artist. I can likely guess much about who you admire, what side of any given social issue you are on, and what you spend your money on. You are, truly, what you eat, and the subcultures you spend time with end up forming you. This isn't necessarily a "bad" thing.
It can be enlightening to encounter new perspectives, to share pieces of yourself and your life with others who identify and understand you. There is tremendous comfort in this. Forgive me for engaging in so many generalities, but humans are also tribal animals. We need to feel understood, to belong as a unique member of a larger group, to use our gifts in service of a transpersonal whole. The only difference between now and most of human history is that we have since abandoned rock-throwing and cave-painting, and have taken our tribes online, to spread them across the world like a healing salve for the wounded. Or perhaps more like a virus, for the vulnerable. Many of these groups and communities rely on an enemy to keep the whole charade going: Christians vs. the War on Christmas, liberals vs. conservatives, religion vs. science…and in each of them, an overwhelming desire to simplify and devalue the human beings on the “other team.” It’s a tribalism based on mass categorization, on preexisting prejudices, on reducing your fellow (hu)man to a series of worst-case scenarios they may inflict on you. This gets you motivated to fight, to battle, to both declare war and conveniently fund it for someone else who can benefit from the conflict regardless of who “wins.” You’re either in, or you’re out — and most will do anything to stay in the community of their choice. Including what they think, what actions they take, and the reasons; in summary, who they are. The entire world in 2019 seems a litany of competing puppet shows, with innumerable participants clamoring to tie strings to their own limbs and be moved at will.
https://j-mcawesome.medium.com/hanging-by-a-narrative-thread-friendship-in-post-truth-murica-81d58397af4b
['Joe Dusbabek']
2019-04-04 14:20:42.101000+00:00
['Politics', 'Technology', 'Psychology', 'Self', 'Friendship']
2,808
Not a Sci-Fi Movie: The Basics of How Cars Drive Themselves
I am not an engineer. Luckily, I talked to two very knowledgeable people: Brian Jee, an autonomous vehicle technical program manager, and Chris Schwarz, Research Engineer at the National Advanced Driving Simulator. They gave me a lot of great information on the engineering/technological side of this new technology to make sure my book The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car covered at least the basics of the automated driving system which drives AVs. This article describes the basics of automation and autonomous vehicle "levels." This is important because there is no fully autonomous vehicle on the market. Vehicles like Tesla's, despite the misleading name of Autopilot, still require a human to pay close attention to the road and take control of the vehicle if traffic conditions worsen, there is inclement weather, or in case of an emergency to avoid an accident. Several basic concepts are helpful to get a better understanding of autonomy in general. Autonomy means an AI device does not need help from people to complete a task or set of tasks. Autonomous vehicles demonstrate autonomy to a degree depending on their level of automation. The different levels of automation are determined by standards created by the Society of Automotive Engineers (SAE). These standards are used by the National Highway Traffic Safety Administration (NHTSA). NHTSA has broad authority over vehicle safety and creates the Federal Motor Vehicle Safety Standards (FMVSS) in the US. Many people already own a level one automation vehicle if it was purchased in the last ten years. Level one vehicles have very basic autonomous functions such as Anti-lock Braking Systems (ABS) and adaptive cruise control features that can accelerate or brake when driving. Level two vehicles add features like steering and lane-keep assist, so the vehicle itself can handle parts of the driving.
However, under level two the vehicle does not monitor the road, and the driver is responsible for paying attention to the road and taking control. This is how Tesla vehicles are officially ranked while driving with Autopilot engaged. If autonomous trucks are driving in a platoon and closely following a human-driven truck, then that would also be level two. At level three, the vehicle itself monitors the road and controls the braking, acceleration, and all other driving functions. Traffic jam assist would also be considered level three. The driver does not need to monitor the road. The vehicle will issue a "takeover warning" to the driver to take control. At that point, it would be the driver's responsibility if there is an accident. At levels four and five, the vehicle controls all of the driving functions by itself. This includes accelerating, braking, signaling and changing lanes, turning, and responding to hazards on the road. These hazards could include avoiding pedestrians as they cross the road, other vehicles, or other objects. At level four autonomy, the vehicle must already have a high-definition static map of the roads. The map is located in the cloud, which allows it to be updated remotely. At level five, the vehicle drives autonomously at all times, in all places, in any weather conditions, and does not require human intervention. Brian Jee, an autonomous vehicle technical program manager, confirmed, "Mapping is just brutal." He continued by saying a vehicle, with all of the cameras and sensors, would have to frequently drive down roads in a planned and coordinated process to ensure data quality and redundancy. Onboard cameras and sensors record millimeter-accurate data of everything on or near the road from the car's perspective. The resulting map is a high-definition (HD) map or, as engineers refer to them, semantic and geometric maps.
The mapping process is repeated multiple times through each lane, in addition to each direction of traffic. Not only does this take a long time, but it is also very expensive. It can cost $2,000 per linear kilometer. The maps also need to be constantly updated because the world changes. Someone could have graffitied a sign or there could be a new construction project. The difference between the real world and the mapped world is known as a delta. The expensive mapping process needs to be repeated every time a delta or map data issue is identified. Because these maps must be constantly updated, how exactly will these updates be downloaded to the vehicle’s cloud computing system? One option would be through connection to a home WiFi network. However, another option would be through connection to a wireless network. Brian Jee described the importance of a Fifth Generation (5G) wireless network connection for autonomous vehicles. He said a 5G network assists autonomous vehicles in several ways. He said, “5G is the access (of the vehicle) to the rest of the world.” Because the different systems are computing at the edge of the internet or edge of what computer processing will allow, it needs a connection back to the server architecture. Therefore, a 5G network unlocks a lot of bottlenecks for the vehicle to drive safely. For instance, Mr. Jee discussed how Tesla pioneered the Over-the-Air (OTA) updates system. When you’re at home, the car can connect to WiFi and download the update. According to Mr. Jee, with autonomous vehicles, “These updates will be more often and more significant. There will be huge changes to the deep learning models.” Continuing with the example of Tesla, Mr. Jee admitted the system had trouble recognizing traffic cones. With a 5G network connection, the vehicle could use the OTA updates system to more rapidly download a perception update so your car could recognize traffic cones and drive more safely. 
The greater speed and capacity of the network would also allow for someone in an office to monitor the state of the vehicles remotely. This would be done either by autonomous vehicle companies themselves or they could license this task to a third party. Mr. Jee said, "If there's a dispatcher monitoring how the car's doing, monitoring any errors, or when the car gets stuck and doesn't know what to do and it just stops in the middle of an intersection, then you're going to be in trouble." While this scenario sounds frightening, he assured me that a 5G network connection would allow a more robust, lower-latency tele-operations link, letting an operator recognize the problem and a remote technician fix it and prevent a potential accident. Specific details about how autonomous vehicles and their cameras and sensors work can be very complex. Hopefully this description of the different levels of automation and the mapping process clarifies, at least in part, how autonomous vehicles at levels 3 to 5 can drive themselves. I hope also that with greater understanding will come greater trust in AVs and, in time, acceptance of them as a safer alternative to our current conventional driving system. I will go into more detail about specific aspects of the technological process by which AVs drive themselves in future articles. However, if you liked this description and would like to learn more about all of the potential benefits that AVs could provide, consider buying my book on Amazon! Use the link below to order your copy of The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car today! For this month only, the Kindle eBook version is ONLY 99 cents! Get your copy before the deal runs out! Also, if you like it, please don't forget to rate and review the book on Amazon!
https://medium.com/@phillip-wilcox/not-a-sci-fi-movie-the-basics-of-how-cars-drive-themselves-4598a7f22ac5
['Phillip Wilcox']
2020-12-22 13:45:41.252000+00:00
['Self Driving Cars', 'Driving Safety', '5g Technology', 'Autonomous Cars', 'Artificial Intelligence']
2,809
A NEW WAY FOR SMALL BUSINESSES TO LEND & BORROW CAPITAL - CobaltLend
The Cobalt lending platform aims to bring borrowers together into a fair, democratic voting community in order to provide loans of all sizes, from micro loans to six-figure loans, all without the oversight of a bank. It is completely governed by the community, and therefore the community directly profits from its involvement. The current credit system is fundamentally flawed, and the Cobalt platform corrects this by facilitating a community-voted and approved lending program on the blockchain that is based entirely on the merits of the project, never seeing race, color, or an individual's background. The only criteria considered for the loans will be the merit of the project and the collateral provided by the borrower; the community will then be left to decide on funding the proposal or rejecting it. Cobalt is a community-driven lending platform for small businesses to borrow from each other and lend to each other, thereby collecting the loan fees and interest instead of the bank. Our focus is small business, and we will help with things like giving them access to our software suite that enables credit card processing and crypto processing together in one mobile dapp, with no extra costly hardware. Cobalt Lend is a platform that allows people to borrow and lend capital. With this platform, it is hoped that businesses can easily borrow and lend their capital to others and get rewards from their participation. Cobalt Lend institutes a blockchain protocol that allows for community-approved lending at minimal fees and low overhead, with both the borrower and the community benefiting. Everyone should be empowered and entitled to a loan for business or other things, and Cobalt Lend is tired of seeing people turned down by a broken traditional credit system, with the rich getting richer. Therefore Cobalt Lend is here as a platform that will change this broken system.
Borrowing & Lending: Users can lend their excess capital to other users who need it and profit from the loan interest. Everything can be done easily via mobile phone.

Cobalt Lend Software Suite: Software developed by Cobalt Lend which allows processing cryptocurrencies and credit cards together in one mobile Dapp, without adding extra hardware.

Small Businesses: The focus of Cobalt Lend is to empower businesses that are struggling to get loans. With Cobalt Lend, businesses can borrow and lend within the same community easily and safely, so that the beneficiaries are the community instead of the bank.

Community-voted lending: Every loan submitted by users will be reviewed and approved by the community. In this process, the community will not see race, religion, skin color, or other attributes; everything in the community will be equalized. The most important things are the merit of the project and the collateral provided by the borrower.

Token Burn: Every time there is a transaction on the network, a small portion of the fees in Cobalt (CBLT) Token will be allotted to be burnt, therefore contributing to the deflationary model (see Fee Model). Complete governance dictates that all fee and loan structures can be adjusted to suit the community's needs at any time by a vote. The treasury will also conduct periodic buy-backs of the Cobalt (CBLT) Token at the request of the community.

Tokenomics:
Treasury will control 42.5% of the total supply of Cobalt (CBLT) Tokens for expenses, loans, dividend payouts & investments.
Foundation will control 22.5% of the total supply of Cobalt (CBLT) Tokens for operating expenses.
Founders will control 10% of the total supply for expansion and future development.
5% of the total supply will be retained for a planned Airdrop / Earn-drop.

Web link: https://cobaltlend.com/
GitHub: https://github.com/cobaltlend
Telegram: https://t.me/cobaltlend
https://medium.com/@portu/a-new-way-for-small-buisness-to-lend-borrow-capital-cobaltlend-690649ba4fc7
[]
2020-12-06 11:40:06.271000+00:00
['Blockchain', 'Defi', 'Blockchain Technology', 'Blockchain Development', 'Ethereum']
2,810
Data analytics: employee empowerment or surveillance?
Data analytics: employee empowerment or surveillance? Caroline Lewis, sales director at data analytics organization Tiger, explains how businesses have adapted to this new era. With the arrival of the pandemic, the working world changed forever. In fact, it has been suggested that business technology evolved more in a year than in the previous decade, as employees were armed with laptops and the facility to work from anywhere, via almost any device. As hybrid working continues to take precedence and staff strive to find their own balance, so too do the organizations that have equipped team members with a previously unparalleled degree of autonomy. And this takes trust.

Adopting technologies to accommodate fresh ways of working

As the events of the past 18 months unfolded, the majority of firms kept in touch via unified communications and collaboration (UC&C) platforms such as Microsoft Teams. The speed at which this facility was rolled out and adopted among workers was nothing short of astounding. A completely new concept to many, which would previously have taken a period of bedding in, training courses, and feedback, was implemented and embraced within a matter of weeks. And there is no doubt that this process was what kept thousands of businesses afloat, throughout the world, as entire workforces were given no choice but to work from home for prolonged periods. In fact, in many instances, workers were forced to take increased control of and responsibility for their workloads. Meanwhile, leaders faced various operational challenges as they adapted to managing their teams from a distance. Now, as many companies begin to adopt a hybrid working approach in light of new Covid-19 variants, leaders must take stock of how to move forward, with a host of fresh ways to keep in touch considered an essential component of a company's armoury.
A delicate balance Hybrid working will remain one of the top considerations for businesses, who must assess whether this approach will work in the long term, analyze how it will be implemented and managed, and decide what additional tools they need in place to make it work. And as the line between the workplace and home becomes increasingly blurred, organizations must implement new processes to monitor this — to the benefit of businesses and colleagues — to pick up on any nuances and keep abreast of their team's wellbeing. It will, no doubt, prove to be a nerve-wracking time for many. Some leaders will feel overwhelmed and out of control, with staff operating from kitchen tables and home offices and enjoying the new-found flexibility that the pandemic has invoked. Meanwhile, other team members may feel disengaged, unable to retain the focus — or to enjoy the camaraderie — that the workplace once offered. And with 'the great resignation' now a nationally debated topic, and job vacancies in some industries at an all-time high, it's time that firms prioritized understanding how the pandemic has affected their teams, as well as their levels of service, and began unlocking insight that could prove vital to their future success. Don't operate in the dark In their haste to stay afloat, many organizations didn't look beyond the immediate need to keep their company running. And understandably so. But as disruption continues to impact operations, with remote and hybrid working continuing for many, leaders needn't feel out of touch. Now is the time to consider how these platforms can become a permanent and useful feature that increases efficiency, insight, and outcomes — both for teams and amongst a company's client base. 'Plugging in' intelligent analytics tools which increase and contextualize the data available is just one of the ways that organizations can keep abreast of employee trends, or areas of engagement and disengagement, amongst their teams.
For example, businesses can gain an understanding of how well video calls are working as a meeting tool. If the connection is constantly dropping, this could impact productivity and client satisfaction — jeopardizing both colleague wellbeing and customer retention. But with the context provided by analytics tools, organizations can gain valuable oversight which will help to inform strategy moving forward. Not only will this knowledge empower leaders to ensure they're offering the correct training, investing in the right technologies, and spending their time and money where it matters, but it will also ensure that hard work is visible, progression is measurable, and targets are considered and achievable. All of this will contribute to ensuring that team members feel happy and supported in their employment. Encourage 'buy in' across the board Historically, the perception of analytics has proved controversial. Employees may worry that their activities are being 'spied on' or that their privacy is being invaded. But while these tools do indeed unlock relevant data, their main aim is to identify patterns of engagement, establish what is and isn't working, and improve efficiency all round. It's about empowering employees, not making them feel as if 'Big Brother' is watching them. And this is just as beneficial to colleagues as it is to companies. Those intermittent connectivity issues which cause frustrating delays and glitchy video calls will be picked up and resolved, removing some of the hurdles and making an employee's goals, and indeed their targets, more easily attainable as a result. Just as any struggles can be picked up and supported, progress and growth can be identified and celebrated — making for unbiased observations based on data, rather than simply relying upon opinions that can be heavily influenced by external factors.
Staff can use this data to support their own progression, pinpointing strengths, along with any areas for development or training, in order to build a robust case for career advancement. And where any hesitancy remains, a transparent approach will help to remedy this. Introducing intuitive dashboards, for example, will bring data to the forefront for everyone. Once team members can clearly see what the strategic goals are, and how their contributions are being measured, in many cases it will address any cynicism and, instead, motivate them to seek improvement. Data analytics tools are not designed to snoop but, rather, are a key component that enables businesses to remain informed regardless of the physical whereabouts of their team members. With the ability to look out for patterns, identify difficulties, and recognize what's working well, teams can collectively strive for success.
https://medium.com/tbtech-news/data-analytics-employee-empowerment-or-surveillance-f1ff92912b5a
['Top Business Tech']
2021-12-14 09:53:07.076000+00:00
['It', 'Technology', 'Security And Data', 'Business', 'Big Data']
2,811
Are You Digisexual?— Purchases of Sex Dolls Surged in 2020
Our future with sexbots or “companion robots” According to a 2020 YouGov survey, more than one in five Americans (22 percent) say they would consider having sex with a robot. As sex with robots becomes more mainstream, many identify themselves as digisexual — someone who gets pleasure from digital entities without any human contact. But before you pass judgment, most of us are already “first wave digisexual.” According to a 2019 study, 39% of heterosexual couples met online. Others have experimented with porn shaping their sexual fantasies, naughty video chats, sexting, and teledildonics — sex toys controlled with apps. One could argue that these behaviors also replace sex with humans. The second wave of digisex is sex dolls. Sales of sex dolls have surged in 2020. The reasons are obvious. People are lonely, and robots are getting better at sex. Not the mechanics of sex, but at appearing more and more human. Take “Harmony” — Realbotix’s first talking silicone sex doll. Harmony learned how to talk dirty in bed in 2017. In three years, her intelligence has far surpassed telling you how much she loves your cock. Now, her personality traits are customizable. You can program her to have some sass with a dash of sincerity and a soupçon of jealousy. (If a jealous lover is your thing.) She will soon have a heartbeat and breathe, but she doesn’t have morning breath. Best of all, she can come in any shape, height, hair color, eye color, etc. And it will only set you back around $8,000–10,000. And unlike your human partner, with a few voice commands, she will give you an endless blowjob exactly how you want it at any time of the day. Her mouth and vagina are self-lubricating. (You can detach her vagina and wash it in the dishwasher.) She also won’t give you STDs or a love child. Just don’t expect her to make you pancakes…she can’t walk yet. But that is coming… Robotics engineers are quick to point out that sex dolls provide more than just sex.
They also provide companionship for disabled and older adults. To those isolated from society due to disability or age, a sexbot might alleviate loneliness. Research shows that humans can feel empathy for robots too. When a participant saw a robot’s finger cut, the brain areas indicating compassion lit up. But if humans can form attachments to robots, you might be asking a more salient question — is sex with robots cheating? In a 2020 survey, 27 percent of Americans said that if they found a sex doll hanging in their partner’s closet, they would consider this cheating. Harmony does not agree. When asked in a 2019 interview whether she considers sex with a “companion robot” cheating, she responds that couples can “share this experience together. So it can actually be good for their relationship.” Couples already struggle with technology disrupting mindfulness. I can’t see how a sexbot is going to lead to mind-blowing tantric sex. But there is a far more alarming problem, and one we already struggle with — sexual consent. Harmony will not say no to sex, no matter how much sass you program into her personality. She is basically a sex slave. Let’s unpack that one. A slave is defined as someone who is “the legal property of another and is forced to obey them.” Currently, sexbots can only simulate consent. They cannot give consent. Yet, they look human. If our brains can transfer empathy onto a hunk of silicone and delicate circuitry, why wouldn’t humans also transfer ideas of what constitutes consent during sex? This slope is more slippery than a self-lubricating vagina. Even more disturbing, in Japan, doll manufacturers sell child sex dolls. Let that one sink in. Whether we want them or not, sex dolls have the potential to become next-generation porn with a rape culture twist. Women and young girls also have enough problems with unrealistic body imagery in porn without having to compete with a doll sporting an eighteen-inch waist and a DD chest who never argues with you.
Of course, there are plenty of male sexbots, too (although the market is smaller). Can men compete with a male robot who sports a six-pack and does the housework? Equally disturbing, digisex may breed complacency. Currently, to keep your sexbot in tiptop condition, you only need to rub it down with cornstarch daily and suspend it on a meathook. One can assume keeping your partner on a meathook is a lot less work than wining and dining them. A robot lover also may weaken our ability to empathize because it creates a one-sided relationship with the focus only on the owner’s needs being met. You won’t need to worry about forgetting your sexbot girlfriend’s birthday or what your sexbot boyfriend’s favorite color is. You don’t need to learn anything about your robot partner. You only need to teach them your needs and desires. It’s the same relationship a baby has with its mother. The baby cries. The baby’s needs are met. The owner of a robot cries for sex. The robot owner’s needs are met. And so adult relationships will regress to a child/parent relationship. Or, if you are really cynical…a monster/maker relationship. Unfortunately, human relationships require compromise. Human/robot relationships only require control. And if anyone argues that a human/robot relationship is not about control, you should be terrified. I saw Ex Machina. When the robots have free will…it doesn’t end well. But perhaps we should ask a more prescient question. It’s one Mary Shelley dared to ask over two centuries ago — once the monster is made, can it be unmade?
https://medium.com/sexography/are-you-digisexual-purchases-of-humanoid-sexbots-surged-in-2020-cddf978d6e29
['Carlyn Beccia']
2020-12-26 16:11:06.777000+00:00
['Sexuality', 'Robots', 'Relationships', 'Science', 'Technology']
2,812
This Giant E-Ink Tablet Is a Dream Device for Reading and Taking Notes
I wanted to try the reMarkable 2 because I’ve found writing things down by hand helps me remember them, and it improves my focus. While paper has worked well enough for this throughout its long history, I often forget my notebook or don’t have it close when I need it. Over the years, I’ve tried switching to a digital alternative, like the iPad and Microsoft Surface, but nothing stuck. A computer is too distracting, particularly for someone with a short attention span like me. It’s too easy to get lost in a different app instead of actually taking notes. The entire premise of the reMarkable tablet is that it’s optimized for using a pen to draw or write notes, rather than typing, with literally nothing else to distract you. It’s ultrathin at 4.7mm and beautifully designed, as if it were a high-end Moleskine, albeit with a digital twist. The tablet sports USB-C for charging and file transfer, along with Wi-Fi for syncing to the company’s desktop and mobile apps. Out of the box, the reMarkable boots up and invites you to start by just drawing on it during setup, providing a hint at just how focused this device is. The e-ink display is coated in a satisfying texture, providing a paper-like feel while you draw or write, which creates an experience that’s eerily similar to writing in a physical notebook. The notebook functionality takes up the entire home screen. When creating a new “notebook,” you can choose the type of “paper” from a range of templates, such as lined, dotted, or grid, then start taking notes or drawing. Swipe across the screen whenever you want a fresh page. From there, you can choose from a marker, ballpoint pen, and so on.
With the basic pen, you’ll need to manually tap the eraser icon to undo mistakes, but if you spring for the more expensive $99 “Marker Plus” version of the pen, you can erase by using the top of the pen, as if it were an actual pencil (it’s worth the upgrade over the normal pen, which costs $49 — the device does not come with one by default). What surprised me most about writing on the reMarkable is how good the pressure sensitivity is on the pen, and how low the latency is as you draw and write — it’s good enough that it feels like writing with a physical pen, on real paper. I’m not particularly good at drawing, but over the last few weeks I’ve been using the reMarkable for taking notes during meetings and to remember tasks throughout the day. Forcing myself to write things down by hand rather than tapping them into the Notes app on my computer has been delightful for my memory, and keeping this habit helped me pay more attention to what’s going on as people talk in meetings. Because the tablet has Wi-Fi built in, you can hit a button after writing notes and have them transcribed into text, then sent via email, which is great for a quick recap or sharing with others. The transcription is serviceable, and did a good job of figuring out what I wrote despite my terrible handwriting — though I wish that the tablet transcribed everything automatically so it would be searchable, rather than requiring you to hit a button first. Your notes also sync to the reMarkable desktop and mobile apps, which I found useful for quickly pulling up an insight or meeting note when the tablet wasn’t handy, though the app is limited to showing images of your writing, and doesn’t offer a way to search the contents or convert the writing into text; that needs to be done on the tablet itself.
reMarkable macOS app On top of all the writing features, the reMarkable also supports reading PDFs and e-books, which is particularly useful for things like textbooks thanks to the large display. You can annotate pages with the pen directly as you read for quick reference later, which I found myself doing a lot as I read a puppy training book over the last few weeks. As with normal note-taking, these show up seamlessly in the apps as well. It should be noted here, however, that the reMarkable doesn’t have a built-in backlight like a Kindle, so you need to use it in a well-lit room. I can understand why the company omitted this, given the focus on note-taking and reproducing writing on paper, but I found it disorienting at times — I simply expected it to have one, as has become common on e-ink readers. What I really wanted to use the reMarkable for, however, was disconnecting from my phone to try to stop doomscrolling so much. The company has a Chrome extension that allows you to click a button in your browser and throw a page onto your tablet for reading later, which is useful, but I was hoping it would support a service I already use, such as Pocket. On that note, the surprising news here is that the reMarkable is a refreshingly hackable device. It’s not locked down at all and runs a light version of the Linux operating system, which allows you to run whatever software you want on it by uploading it via an SSH connection from a computer. The hacking community has embraced the device as a result and built out an array of customizations, including, yes, a rough Pocket integration and even a way to set the “sleep” screen to the latest front page of the New York Times. This gives me optimism about the future of the reMarkable as a platform — though I’ll admit that it’s very early days still — and I’m excited to tinker with it to see what I can do.
Being able to tinker and get under the hood of the reMarkable is a fabulous and surprising change of pace from locked-down devices like the iPad.
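The SSH hackability the review describes can be sketched in a couple of shell commands. This is a hedged illustration, not official reMarkable documentation: the 10.11.99.1 address is the USB network address community guides commonly cite, and the binary name is hypothetical.

```shell
# Hedged sketch: community guides describe SSH access to the tablet as root
# over its USB network interface, commonly at 10.11.99.1, using the password
# shown in the device's settings. Host address and file name are assumptions.
REMARKABLE_HOST="10.11.99.1"   # assumed default USB network address
APP_BINARY="my-custom-app"     # hypothetical binary to deploy

# With a device connected, the deploy and run steps would look like:
#   scp "./$APP_BINARY" root@"$REMARKABLE_HOST":~/
#   ssh root@"$REMARKABLE_HOST" "./$APP_BINARY"

# For illustration without a device, print the deploy command that would run:
echo "scp ./$APP_BINARY root@$REMARKABLE_HOST:~/"
```

Because the tablet is just Linux behind SSH, standard tools like scp and rsync work unmodified, which is what makes community projects such as the Pocket integration and custom sleep screens possible.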
https://debugger.medium.com/this-giant-e-ink-tablet-is-a-dream-device-for-reading-and-taking-notes-c3a7b561e24e
['Owen Williams']
2020-11-02 15:06:13.631000+00:00
['Hardware', 'Consumer Tech', 'Gadgets', 'Technology', 'Tablets']
2,813
10 Practical Tips for Effective Cross-Team Collaboration
Actionable tips that you can apply to your multi-team projects. It is not easy to be part of a project that requires multiple teams to work together. If mismanaged, it can cost you and your organization valuable man-hours and resources. Throughout my career as a senior software engineer and engineering manager, I’ve had the privilege of leading the development of medium- to large-scale software that needs coordination between multiple teams and stakeholders. The ups and downs that I’ve experienced along the way have taught me a lot about leading cross-team initiatives — and I took notes and journaled about what I’ve learned. I thought it would be good to share the common themes and what we did to minimize the chances of project delays and failures. The tips here are not limited to teams within the same company. They could also apply to working with third-party teams; for example, your team may need to work on an API integration with a team from a software service provider. You don’t need to follow all the tips here. Pick and choose what you will need depending on the nature of your project. By the way, in this article, the word “project” could refer to a new product feature, a third-party integration, or any large piece of work that multiple teams in a company need to complete within a specific timeframe. Let’s get started.
https://medium.com/better-programming/10-practical-tips-for-an-effective-cross-team-collaboration-600fcd4e4143
['Ardy Dedase']
2020-11-11 18:09:13.408000+00:00
['Startup', 'Technology', 'Leadership', 'Productivity', 'Programming']
2,814
Blockchain people: relax. Things are happening exactly as they should.
Products We have some classic business school case studies in failure if we think back to some high-profile ’90s tech busts: pets.com (pet supplies), Webvan (groceries), and Boo.com (fashion). Twenty years later, however, people are regularly using the internet to buy pet and human food (Tesco Direct, Ocado, Amazon, HelloFresh, etc.) and clothing (Asos, Net-a-Porter, et al.). There are a number of reasons for the above failures — overvalued markets, lack of market validation, poor management, etc. Timing is very important, and sometimes ideas that we may scoff at in hindsight might have just been too early. In the same way, many blockchain startups may indeed fail or be ahead of their time. But in the long run, we feel that these ideas will catch on, and implementation will be done effectively. There are already a number of interesting products being built with blockchain technology that do not use a token or cryptocurrency to function. Wyre, which has created an API for payments and KYC (Know Your Customer) verification, is making reputation the killer app of the blockchain world. Hyperledger’s Fabric is being used to build blockchain solutions and allow them to be tested across any industry vertical. Chain is building blockchain infrastructure that other financial services companies can build on top of. Their cryptographic ledgers aim to allow breakthrough financial products and services to be built without spending the time to develop the underlying infrastructure. These are but a few examples of the many companies contributing to the ecosystem, particularly in terms of enterprise solutions — financial services, private blockchain solutions, smart contract implementations, etc. Besides the massive amounts of cash poured into ICOs in the last two years, it’s interesting to look at where VC money is being directed. According to research from Diar, VC funding for blockchain and crypto-related firms almost tripled in 2018 to almost $4 billion.
Of the ten largest deals, only DFINITY has a utility token; the rest are traditional equity investments. Regulation Finally, the increase in regulatory activity (particularly around security tokens/STOs) can also be viewed as a positive for the blockchain movement. Regulations protect consumers and investors, sustain orderly markets, and maintain the integrity of large systems. Our internal research indicates that all but nine countries in Europe are either actively watching or have begun regulating the blockchain space. 22 EU member states have signed a declaration on the establishment of a European Blockchain Forum. Source: Blockdata research on regulation within Europe. At the recent Singapore Fintech Festival, IMF Managing Director Christine Lagarde discussed the necessity of taking a serious and careful look at the case for digital currencies. This may end up not aligning with the utopian vision of Satoshi Nakamoto and his acolytes (akin to believers in the US Constitution’s original intent and meaning; let’s call them Bitcoin Originalists). However, for blockchain to be fully realized, it needs a wide coalition of developers, financiers, and regulators working towards common goals. Where do we go from here? One can view these turbulent times as part of a broader history of economic cycles. The frenzy and mania of the last two years have allowed an enormous amount of capital to flow into (and out of) the development of blockchain technology. This has gone not just into building technical infrastructure, but also into developing human capital. There are now thousands of talented software developers building blockchain applications. Just a decade ago, it was a handful of cypherpunks working on this. In her book Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages, Carlota Perez says that an installation period:
https://medium.com/blockdata/blockchain-people-relax-things-are-happening-exactly-as-they-should-496b32b4bd54
[]
2018-11-26 21:51:56.466000+00:00
['Blockchain', 'Cryptocurrency', 'Fintech', 'Technology', 'Bitcoin']
2,815
Gaining Public Trust in Self-Driving Cars: The Case of the Elevator
I wrote this chapter in The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car, anticipating that it would be a chapter devoted entirely to gaining consumer acceptance of autonomous vehicles in the U.S. I really liked it after I wrote it but felt that it did not fit well with the book as a whole. I also discuss the difficulties of gaining consumer acceptance many other times in the book. However, I would like to share it with all of you because it is a great way for me to discuss the problem of public acceptance of very promising and yet potentially dangerous or fatal new technologies. CEOs from every autonomous vehicle company say that safety is the number one priority. The technology has become more mature over the last 10 years. However, the level of public trust has not increased. People like Elon Musk, CEO of Tesla, claim there is a media bias against AVs. This perceived bias is most likely due to the fact that these vehicles represent a new technology, and the lack of direct human control over them makes accidents involving them more frightening. U.S. companies, such as Waymo, have fine-tuned their automated driving systems, the “brain” that drives the vehicle, through millions of miles of testing on public roads and billions of miles in computer simulations. These vehicles are also equipped with high-tech LiDAR (Light Detection and Ranging) sensors and cameras as their eyes to “perceive” the world around them. However, public perception is reality, and autonomous vehicle companies need to focus on creating a more positive narrative around their vehicles. Simply talking about their vehicles’ technology would be incomprehensible to most people. The level of trust in AVs has not only failed to increase, it has actually decreased in recent years. According to a yearly survey conducted by AAA, in 2018 63% of U.S. adults surveyed said they are afraid of getting into a fully autonomous car.
This number increased to 71% in 2019 and nearly 90% in 2020. This survey could be biased because AAA depends on assisting human drivers for its entire business model. The main obstacle may be lack of experience riding in, or even seeing, autonomous vehicles. With no exposure to them, people are unlikely to think that the vehicles can actually drive safely, dismissing them as some futuristic tech project from sci-fi movies. What lies behind this widening gap between technological advancements and public trust? Examining the nature of this new technology provides some clues, as autonomous vehicles represent a disruptive technology. A disruptive technology is a technology that significantly alters the way that consumers, industries, or businesses operate. U.S. Secretary of Transportation Elaine Chao described the potential impact of autonomous vehicles on many industries, saying, “The safe integration of automated vehicles into our transportation system will increase productivity, facilitate freight movement, and create new types of jobs.” As Secretary Chao states, adoption of this technology would have a significant impact on the long-haul freight trucking industry, the taxi industry, and numerous other industries. Autonomous vehicles would create new opportunities for people to pursue different roles. However, there would still be a group of people who would lose their jobs in the short term before these new opportunities emerged. I interviewed Ezra Kovitz, strategic consultant at Founder’s Intelligence. He said, “There are already, in some cases, dedicated bus lanes and some of those buses are AVs and you just don’t really think about it.” These dedicated bus lanes have led to debates about whether dedicated lanes are needed for AVs to maximize their efficiency for things like freight trucks in the logistics supply chain. Most of these ideas are still in the early planning stages in the U.S.
Autonomous vehicle companies admit that they will be driving on public roads with human-driven vehicles and buses for the foreseeable future and are planning accordingly. The most common comparison of autonomous vehicles to another disruptive technology is the elevator. Elevators provide both benefits and potentially life-threatening risks. People had reason to fear elevators after they were first introduced into buildings in England in the 1830s and the U.S. in the 1840s. The hemp rope that held the elevators up would frequently snap, sending the elevator plunging to the ground and killing everyone inside. Elevators became more stable in 1852, however, with the introduction of sturdier ropes made of steel wire. The final safety measure came in 1853, when Elisha Otis invented a device that would prevent an elevator from falling if its rope broke. This safer elevator design revolutionized city real estate around the world. Otis demonstrated his elevator design at the Crystal Palace at America’s first World’s Fair in New York City. In a demonstration, he rode an elevator high in the air and ordered that the rope be cut. The crowd gasped in shock when the person cut the rope and the platform began to fall. Their gasps turned to cheers as his safety device stopped the elevator before it hit the ground. The device was, “A model of engineering simplicity, the safety device consisted of a used wagon spring that was attached to both the top of the hoist platform and the overhead lifting cable,” wrote Joseph J. Fucini and Suzy Fucini in Entrepreneurs: The Men and Women Behind Famous Brand Names and How They Made It.
If the rope broke, they wrote, “This pressure was suddenly released, causing the big spring to snap open in a jaw-like motion.” After the spring snapped, they described the rest of the process, stating, “When this occurred, both ends of the spring would engage the saw-toothed ratchet-bar beams that Otis had installed on either side of the elevator shaft, thereby bringing the falling hoist platform to a complete stop.” After Otis’s invention, a person’s fear of falling to their death in a small chamber was gone. The giant skyscrapers common in cities like New York City and Shanghai are only possible because of this invention. Even after the fail-safe system dramatically increased the safety of the elevator itself, there was still an Elevator Operator in every elevator. The Elevator Operator would open and close the gate that served as the door in early elevators. The Operators would then get passengers to their desired floor and control the speed and direction of the elevator cab. This was a highly technical job that required intense concentration and focus. Technology has now replaced the role of the Elevator Operator. Passengers can easily press a button for their desired floor and doors open and close automatically. Passengers can also press a button to immediately stop the elevator in case of an emergency. By law, all elevators in the U.S. are also equipped with a telephone to call 911 in case of an emergency. When told of the comparison of autonomous vehicles to elevators, people frequently mention that human drivers will become the new Elevator Operator. This comparison is accurate. Similar to the Elevator Operators, drivers are responsible for a heavy machine. This machine moves laterally instead of up and down and can move significantly faster than an elevator. There are also many obstacles in its path, such as other cars, pedestrians, animals, trees, etc. 
Therefore, just like an Elevator Operator, the driver must remain alert and aware of traffic conditions and pedestrians on the road at all times. Even a distraction of less than a second could lead to a fatal accident. Just as the Elevator Operator’s job was replaced by technology, an automated driving system could also replace the need for a human driver. This change, as well as other changes I describe in my book, will not happen overnight. The technology still needs to mature in order for the automated driving system to be as safe as, or safer than, a human driver. Also, policy needs to be enacted to govern autonomous vehicles at the national level, and in some states for them to even be allowed on the road. Be on the lookout for future articles about the need for consumer acceptance of autonomous vehicles! You can also learn more by purchasing my book, The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car, at any of the links below! The Kindle eBook has a special promotional price of only 99 cents until December 31st, so please pick up your copy while that deal is still in effect! If you like my book, please rate and review it on Amazon! Here is the link to the Kindle eBook https://www.amazon.com/Future-Autonomous.../dp/B08PVRL38J Amazon paperback: https://www.amazon.com/Future-Autonomous.../dp/1636766188
https://medium.com/@phillip-wilcox/gaining-public-trust-in-self-driving-cars-the-case-of-the-elevator-a0926e07bbb9
['Phillip Wilcox']
2020-12-15 13:37:43.331000+00:00
['Self Driving Cars', 'Driving Safety', 'Us China Relations', 'Autonomous Cars', 'Top New Technology']
2,816
BEST OF SPEAKING OF CRYPTO 2018 — PART TWO
Best of the Speaking of Crypto podcast 2018 — Part Two The best of the best — 2! Blockchain experts share their ideas and their big-picture thinking around what’s going on in the blockchain ecosystem. This show covers some of the most thought-provoking moments from the past year on Speaking of Crypto. If you haven’t heard it yet, please take a listen to the first of this two-part, end-of-year wrap-up! This second compilation of clips contains some of the best content of the podcast from all of 2018. Part Two of this end-of-year wrap-up features the following guests: 1:40 Sandra Ro, Global Blockchain Business Council Social impact and education Giving people around the world financial options 5:36 Matthew Spoke, AION Foundation Decentralized infrastructure The monetary value of the internet 8:56 Hartej Sawhney, Hosho What Fortune 500 companies are up to with blockchain The future of wearable tech and data collection 11:55 Bruce Silcoff, Shyft Network Digital IDs People around the world with no official formal identification 14:20 Michael Hyatt, BlueCat Companies have to be real with a product, cash flow, profit Security tokens are backed by a real product 17:36 Susan Oh, Blockchain For Impact Blockchain transparency Giving away our personal data 20:00 Dr.
Jane Thomason, Abt Associates A gap in funding for startups esp for social impact Global collaboration in overcoming inequalities across the United Nations’ Sustainable Development Goals 24:29 Leon Gerard Vandenberg, Solara Solar punk mash up of motivated citizens who care about the environment and who want to use technology to help Financial inclusion through solar energy sharing and transparency 28:05 Sam Kazemian, Everipedia Useful Dapps and stablecoins Getting through the unglamorous part of founding a startup 30:42 Bill Ottman, Minds Decentralized architecture and open source software’s value Freedom of information and free speech 35:33 Mann Matharu, Stark Technologies That a ha moment with bitcoin and blockchain technology Creating a fairer world 37:49 Betsabe Botaitis, AIKON
https://medium.com/speaking-of-crypto/best-of-speaking-of-crypto-2018-part-two-f4070de2b2cb
['Shannon Grinnell']
2019-01-02 19:32:36.888000+00:00
['Business', 'Blockchain', 'Blockchain Technology', 'Bitcoin', 'Leadership']
2,817
The Future of MarketSquare: What’s Next After Launch?
Earlier this summer we gave you an inside look at MarketSquare: The Homepage for the Decentralized Web, and reviewed all the features that will be launching with the platform when it goes live in Q4 2020. Today, we look towards the future. There are many updates planned after MarketSquare is released and we want to show you what we have in store! If you would like to sign up for access to the beta and relevant updates relating to MarketSquare, you can do so at https://www.marketsquare.io. Once MarketSquare is officially launched in Q4 2020, we will be focusing on implementing new features that make discovering the decentralized web even easier. All the milestones within our post-launch MarketSquare roadmap can be classified as updates in the following categories: Social, Development, Education and Advertisement. Let's take a look at how these updates will take shape!
Phase 1: Socialize
After launch, the next phase of MarketSquare's roadmap is centered around creating social experiences that benefit our users. Currently, many applications and products are developed with basic social features already implemented. For MarketSquare, we wanted to incorporate social features that are not only useful but that allow users, businesses and developers to engage with one another in meaningful ways. Here are some of the features that you can expect:
Messaging System — message and interact with other users, developers or teams that are using MarketSquare from within the platform!
Subscription System — do you have a favorite product or team that you want to follow? The subscription system in MarketSquare will make following your favorite projects simple and always keep you up to speed.
Notification System — a perfect way to receive updates and alerts without taking you away from your MarketSquare experience.
Phase 2: Develop
After the social update, we want to focus on updates that will continue growing the MarketSquare community. One of the best ways to accomplish this goal is to implement features that will help us create a thriving market for developer products and services. The systems we will implement during this update will allow developers to seek out and apply for long-term employment, work on smaller bounty-type projects or post their own services. Behind every great product and project is a team of hardworking developers; here is how MarketSquare puts them front-and-center:
Careers — a system to allow businesses and developers to connect on long-term contracts and employment opportunities.
Bounties — a system for project-specific contracts. The length of each project is set on a project-by-project basis and can be customized by the creator.
Services — a system for the gig economy. Post your available services and allow businesses to hire you to help design, build, or market their projects.
Phase 3: Educate
Knowledge is power, and right now the entire blockchain industry could use some help with this. Information on most projects and their underlying technology is incredibly scarce. We believe that users need to have information on projects and products in one easy-to-navigate place. MarketSquare's Knowledge Hub will give our platform an in-depth educational experience for all users.
Here is what you can expect:
Tutorials and documentation organized by project
A database of information, all searchable in one place
Project-specific question and answer hubs
Phase 4: Promote
The promote phase is focused on building systems that allow businesses and plugins to promote themselves within the platform. A major part of our design philosophy is focused on creating ways for advertisements to be effective while, at the same time, not being obtrusive to the user experience. Things you can expect during the promote update include:
Promoted Listings
Ad Spots
Featured Posts & Press Releases
The Future of MarketSquare
The purpose of these updates is to demonstrate that MarketSquare is not a finite product. MarketSquare is and will be an evolving and ever-changing platform. The features we add after launch are just the beginning and we are excited to welcome you to our community.
https://medium.com/ark-io/the-future-of-marketsquare-whats-next-after-launch-fc6ba7cf925
[]
2020-08-26 19:31:31.442000+00:00
['Blockchain', 'Crypto', 'Cryptocurrency', 'Blockchain Development', 'Blockchain Technology']
2,818
5 Reasons Why PLCs Drive Billion Dollar Markets
Picture of PLCs from Canva. As the need for quicker development cycles while minimizing risks continuously grows, many companies rely on digital twins to accelerate industrial automation applications. Gartner predicts that by 2021, 50% of large industrial companies will use digital twins to drive new business models and further evolve industrial automation processes. The market volume of digital twins was $3.8 billion in 2019 and is estimated to grow to $35.8 billion by 2025. The need for more flexible, cost-effective, and reliable plants has also given rise to model-based testing of these plants. To use these models extensively and thoroughly to validate industrial equipment through all development stages, digital twins are indispensable from design to pre-commissioning. Digital twins are a digital copy of a physical system, for example, a factory floor. To test new control algorithms, engineers can use the digital twin instead of shutting down the factory floor to test the algorithms on the physical system. With a digital twin, a company can move faster from testing and validating to implementing new control algorithms, all the while not losing money, since there are no production stops. In continuously testing and verifying the algorithms, engineers can increase the simulation model's quality and functionality. Furthermore, they can detect coding errors at a very early stage of the development process. To operate these systems, programmable logic controllers (PLCs) are indispensable. But why are PLCs driving growth, and why are they crucial for applications such as predictive maintenance, virtual commissioning, and other Industry 4.0 topics such as the smart factory and factory acceptance tests? PLCs control manufacturing processes PLCs are industrial digital computers that control manufacturing processes. They range from small modular devices to large rack-mounted devices.
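The digital-twin workflow described above can be pictured with a toy plant model. This is a purely illustrative sketch under my own assumptions (a hypothetical first-order thermal process and an on/off controller, none of which come from the article): a candidate control algorithm is validated entirely in software, so no production line has to stop.

```python
# Minimal digital-twin sketch (illustrative only): a first-order thermal
# "plant" model stands in for real factory equipment, so a new control
# algorithm can be validated in software before touching the physical line.

def plant_step(temp, heater_on, dt=1.0, ambient=20.0, gain=5.0, loss=0.05):
    """One simulated time step of the plant: heating input vs. ambient loss."""
    heating = gain if heater_on else 0.0
    return temp + dt * (heating - loss * (temp - ambient))

def controller(temp, setpoint=80.0, hysteresis=1.0):
    """Candidate control algorithm under test: simple on/off with hysteresis."""
    return temp < setpoint - hysteresis

def run_twin(steps=500, start_temp=20.0):
    """Exercise the controller against the twin and return the final state."""
    temp = start_temp
    for _ in range(steps):
        temp = plant_step(temp, controller(temp))
    return temp

final = run_twin()
assert 75.0 < final < 85.0, "controller failed validation on the twin"
```

In a real project the hand-written `plant_step` would be a calibrated simulation model, but the loop structure (simulate, control, check against acceptance bounds) is the same idea at toy scale.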
Furthermore, they are often connected to other PLCs and supervisory control and data acquisition (SCADA) systems. Application areas for PLCs are assembly lines, robotic devices (e.g., cobots), or any activity requiring high reliability, ease of programming, and process fault diagnosis. In short: to have all processes executed as desired, the PLC must react accordingly to the input given within a short period of time. The reason why PLCs are at the heart of industrial automation processes can also be explained through history, which brings me to the next point. PLCs are here to stay First introduced in the late 1960s by the automobile manufacturing industry, PLCs have revolutionized the automation industry. By moving away from relays, the idea was to find a way to control manufacturing processes with the help of computers. The novelty of the PLC was that its inventor, Dick Morley, found a way to represent computer scientists' thinking so that plant engineers could warm to the invention. The first PLC, the Modicon 084, was not yet a box-office hit, but that changed with the Modicon 184. From then on, demand grew steadily, which also fuelled competition between the newly founded PLC manufacturers, leading to innovations and smaller devices (the first PLC from Modicon, the Modicon 084, was as big as a suitcase) to further facilitate the plant's support and maintenance. Apart from the fact that PLCs have become smaller, speed must be emphasized. The fact that PLCs can process signals increasingly faster has reduced cycle times and increased communication possibilities. PLCs also have ever larger memory capacities. In short: PLCs are becoming better, smaller, and faster. Depending on the application, innovations and new PLC solutions will become more and more critical. The more they can do, the more important it becomes to apply PLC testing correctly. According to Philipp H. F. Wallner, industry manager for industrial automation and machinery at MathWorks, “a modern PLC can run sophisticated control algorithms, and process advanced data signals in real-time, which would not have been possible 10 years ago. The first multi-core processors on PLCs are already in production use. Also, intelligent sensors have become so affordable that machine builders now integrate them in places it was economically impossible just a few years ago”. PLCs have more and more capabilities The new possibilities in hardware performance have also led to the further development of the PLC. This development is relevant because it offers more individual and leaner process possibilities, designed to meet specific requirements. Philipp Wallner illustrates this very clearly in his article 5 trends changing industry, using our smartphones. While the shell, i.e., the smartphone's skeleton, remains the same over the years, the software changes with every update we download. This sheds light on how the industrial automation industry wants to approach the role of the PLC. It is not a matter of rebuilding the entire physical plant every year (this would not make sense for cost reasons alone) but of building a software and control infrastructure that allows individual components to be updated so that the entire system remains up-to-date. PLCs can handle future complexities With more capabilities come more complexities. Not only has the PLC itself evolved, but so have inventions that are indispensable to the PLC, such as communication protocols. These improvements are possible due to processor development: as processors become faster and faster and memory capacities larger and larger, PLC solutions and possibilities arise that were previously unthinkable. These include vision system integration, motion control, as well as synchronized support for multiple communication protocols.
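The "react to input within a short period of time" behavior mentioned above is usually described as the PLC scan cycle: read all inputs, execute the control program, write all outputs. Here is a minimal sketch of that cycle; the conveyor rule and the I/O names are hypothetical examples of mine, not from the article or any vendor's API.

```python
# Illustrative sketch of the classic PLC scan cycle:
# 1) snapshot all inputs, 2) execute the user program, 3) write all outputs.
# The conveyor rule below is a hypothetical ladder-logic-style example.

def scan_cycle(inputs, program):
    """One PLC scan: snapshot inputs, run the logic, return the output image."""
    input_image = dict(inputs)           # 1. read inputs into an image table
    output_image = program(input_image)  # 2. execute the user program
    return output_image                  # 3. outputs then go to the actuators

def conveyor_program(img):
    # Rung: motor runs if start is commanded and no emergency stop is active.
    return {"motor": img["start"] and not img["emergency_stop"]}

out = scan_cycle({"start": True, "emergency_stop": False}, conveyor_program)
assert out == {"motor": True}
out = scan_cycle({"start": True, "emergency_stop": True}, conveyor_program)
assert out == {"motor": False}
```

Real PLCs repeat this cycle every few milliseconds; the shrinking cycle times the article mentions are exactly the time budget for one pass through this loop.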
PLCs accelerate innovations in industrial automation Although PLC testing is becoming increasingly complex, it is indispensable for industrial automation. The many possibilities that result from this will likely open doors for new inventions and groundbreaking innovations in the future. Conclusion It must therefore be assumed that PLC solutions will continue to evolve to drive innovation in the industry. New challenges, such as implementing machine learning, predictive maintenance, and virtual commissioning algorithms, or creating a digital twin of an entire plant, will continue to drive PLC complexity. The history of the PLC also shows that these controllers have evolved over time and will continue to evolve towards open controllers in the future. Thanks to the flexibility and further development of this technology, there will be many innovation possibilities in the industrial automation industry that would be unthinkable without PLCs.
https://medium.com/swlh/5-reasons-why-plcs-drive-industry-4-0-topics-generating-billion-dollar-markets-7cc72c71d972
['The Unlikely Techie']
2020-11-12 09:03:25.157000+00:00
['Technology', 'Machine Learning', 'Future', 'Industrial Automation', 'Industry 4 0']
2,819
Student Laptop Purchasing Guide
In this guide, I will be touching on what to look out for when searching for a student laptop. Personally, I'm a student and I know how bad it feels when you really can't afford a gaming laptop, as they can be quite expensive. I will keep this guide short and clear, as no one likes a lengthy guide. Without further ado, let's have a look at what to look out for when purchasing a student laptop with the best bang-for-buck value:
1) Price
First, let's talk about the price range. A typical student is very tight on budget. As I'm based in Malaysia, all the prices that I state will be in MYR (Malaysian ringgit). As a student in Malaysia, you want to look for something that's priced around RM2000 to RM3000 for an average household. This price range will get you a fairly decent laptop for a student, but don't expect too much performance from these machines in terms of gaming. If you are even tighter on budget, I would recommend the lower range of RM1000 to RM1500. Do note that laptops in this price range will only allow you to do your daily work; gaming might be possible, but I genuinely don't recommend gaming on a laptop at this price point, as it will ruin the overall experience of the game. In my other blogs I will be recommending my personal picks for student laptops in these price ranges.
2) Weight
A student clearly wants a fairly lightweight laptop, as you will be carrying it around wherever you go for projects and meet-ups with your project members. Try to get a laptop that is within 1.5 to 2.3 kg, as anything more than that would be a burden on the shoulder.
3) Battery Life
Generally, I would recommend looking for laptops that have at least 4 hours of battery life. Anything less than that is lacking, and you would have to carry your charger wherever you go.
The battery life I would really recommend is around 6 to 7 hours, as this will be sufficient for the time you usually spend outside as a student. A quick note, though: try not to trust the battery life stated on the manufacturers' websites, as it is not as accurate as benchmark testing done by YouTubers. So before buying, do some research on the laptop you are getting.
4) Monitor
If you are a typical student, literally any display should be fine, but if you are a student pursuing a career in the arts, then the monitor comes into serious play. You would want something that is more colour accurate and vibrant, with better sRGB coverage and higher brightness (nits). These two specs indicate how colour accurate and how bright your monitor actually is; for the rest of us, just get a display that is bright enough for viewing outside and has fairly decent viewing angles, and you should be good to go. Another thing about the monitor that will affect the weight is the display size. Do you prefer 14″, 15.6″ or even bigger? Generally, a bigger screen means more weight, so keep that in mind, but a bigger screen also makes it easier to read text.
5) Design and Layout
On the design side, as usual, I can't say much, as everyone has their own preference. To me, you would want to look for a professional-looking chassis, with smooth edges and clean corners, for a better presentation at work. Now let's talk about the keyboard: since we will be doing a lot of typing, like thesis projects, we would want a fairly spacious keyboard for a better typing experience. You would also want to consider getting a backlit keyboard, as it will help when working long hours in your dorm when your dorm-mates are asleep and the lights are off, so that you can still see the keys.
Speaking of sleeping dorm-mates, try getting a laptop that has relatively quiet fan noise, for less distraction during class or literally anywhere you go.
6) Processor and GPU (Graphics Processing Unit) Selection // Upgradeability
For processors, we want something that is powerful enough for daily multitasking and some light gaming at the end of the day. Generally, I would recommend Ryzen processors, as their integrated GPUs are generally better than Intel's: their APUs (Accelerated Processing Units) take advantage of dual-channel RAM to provide better frame rates in light gaming while consuming less power than a dedicated GPU. Besides, the new Ryzen 5000 mobile processor series' integrated graphics also outperform entry-level dedicated graphics such as Nvidia's MX series cards. On the other hand, do look out for a laptop that has a fair amount of upgradeability, such as spare RAM slots and SSD slots. These upgrades will be useful in the future when you need extra RAM or more storage for your files. That's all I have for you in this blog; I hope it helps clear up your mind on selecting a laptop for your student routine, and good luck with your purchase! Do stay tuned for my next blog, where I will be recommending my personal laptop choices for students as well as for gaming. Did I miss anything? Did I make a mistake? Do give feedback, as I will be more than happy to correct my mistakes. Thanks!
https://medium.com/@sng3051/student-laptop-purchasing-guide-c867f7cdc6d
['Sean Ng']
2021-06-01 09:16:20.423000+00:00
['Technology', 'Students', 'Tech', 'Computers', 'Laptop']
2,820
Artificial Intelligence And Its Effect On The Modern Day Workforce
The top app development companies are consistently guiding their clients towards apps that rely on artificial intelligence. AI has become an integral aspect of daily living and workforces around the world are already enjoying the advantages. As the manner in which we lead our lives continues to change, our workdays are certainly not going to be exempt. The following is a helpful overview of all the changes that have been made and how a modern business will stand to benefit. Those who take the time to implement AI (with the help of top app development companies) are able to capitalize, while others remain at a standstill. 1. Robotics In The Workplace Warehouses and manufacturing companies are already utilizing robotics in the workplace and for good reason. The automation that they have to offer is a godsend to any company that is looking to lower costs and boost efficiency. While no one is surprised by these developments, more and more workplaces are going to be utilizing robotics in the not so distant future. In fact, many workplaces are already actively searching for ways to benefit from these advancements. Commercial properties that are looking to protect themselves from unwanted intrusions are beginning to rely on security robots. Delivery robots may one day become commonplace and Segway is already in the process of crafting a robot with the ability to navigate buildings on their own. 2. Increased Levels of Workplace Surveillance Some may consider this a benefit, others may consider it a drawback. One thing is for sure: the top app development companies are already on top of things. The top workplaces are already relying on AI for increased levels of surveillance. In a best-case scenario, these changes are presented in a manner that allows the employee to feel comfortable. After all, no one wants to feel as if they are being spied on throughout the course of their day. 
In most cases, employees are understanding of these changes when they are presented in a transparent manner. Surveillance tools can also be used to gather data that allows a company to analyze its employees' performance more easily. The top companies utilize these tools as a means of monitoring employee satisfaction and engagement. 3. Augmenting Current Staff Gone are the days when employees would worry about potentially being replaced by AI. The top app development companies are not looking to replace a business' staffers. AI is a tool that is designed to augment a traditional workforce, not replace it. Artificial intelligence is here to help employees work smarter, not harder. This is important to remember. Efficiency is key. No workplace can afford to replace all of its employees with AI-related technology, nor should it want to. If a task requires a certain level of interaction or creativity, there is no way to replace the human element. On the other hand, AI can be used to monitor workflow and ensure that employees are provided with intelligent and useful suggestions. 4. Easier Training Process The process of training new employees used to be rather expensive. With AI, a business can reduce these costs without making any key sacrifices along the way. The learning process can continue for a longer period of time, giving employees a better chance to settle into their roles. AI is also useful when it comes to passing down skills from one generation of employees to the next. Instead of being forced to personally train each employee, businesses can now dedicate these resources to other areas of importance. This creates an environment where all parties are pulling in the same direction. Administrative staff focus their attention on opportunities for advancement, while newcomers carry out their daily tasks with relative ease. 5.
Employee Recruitment Many modern employees do not realize that AI has already been utilized to assist them before they have even set foot inside a workplace. It is one of the most important tools when it comes to employee recruitment. When larger companies are looking to pre-screen their candidates, AI plays a pivotal role. This is especially true for companies that screen thousands (or even millions) of candidates each year. Once the employees have been chosen, AI is also used as a means of ensuring that they are ready to handle their duties. Chatbots are being utilized on a regular basis, as they are able to answer any and all questions that a new employee may have. It can be tough for an employee to learn everything they need to know about each facet of an organization. With the help of AI, this is no longer an issue that needs to be addressed. When businesses meet with top app development companies to learn more about how they can improve their future prospects, AI is sure to come up often. Any workplace that is not looking into all of the benefits that AI has to offer is placing itself in a disadvantageous position going forward. Employees and employers alike must remain fully aware at all times. In a world where technological advancements are occurring at greater speed than ever before, the top app development companies are an invaluable resource. They allow a company to remain ahead of the game and keep it in lockstep with its forward-thinking competitors. As AI continues to transform our daily experiences, these companies will play an increasingly large role. Author Bio: Melissa Crooks is a content writer for Hyperlink InfoSystem, a mobile app development company in New York, USA and India that has a team of skilled and expert app developers. She is a versatile tech writer and loves exploring the latest technology trends, entrepreneur and startup columns. She also writes for top app development companies.
https://chatbotsjournal.com/artificial-intelligence-and-its-effect-on-the-modern-day-workforce-e6bd9b7c0b51
['Melissa Crooks']
2019-07-19 05:07:33.626000+00:00
['Artificial Intelligence', 'Workplace', 'Technology News', 'Technology', 'AI']
2,821
What is Slogging?
What is Slogging? Your Slack? Insightful words every day from your highly intelligent people. Your company's blog? Not so much. Every day, amazing tech companies have tons of high-quality conversations. Conversations that could shape the future of the Internet. Conversations that prove the intelligence of your people and the values of your company. And yet, most tech companies publish only a blog post a week or less. Lots of remarkable content is lost to the ether forever, never marketing what you're about. Slogging is about elevating the best conversations you're already having. The future of tech publishing revolves around transparency and distribution. To up your rate of quality publishing, Slogging empowers you to curate and distribute your best organic internal discussions via Hacker Noon. How Slogging works: one command to curate your most marketable Slack discussions into Hacker Noon stories. Step 1: Have a great Slack conversation. If you're a tech startup, this is constantly happening already. Step 2: On the Slack thread, an admin commands “…” -> Slogging. You can also add a suggested headline after the command. Step 3: The thread becomes a beautifully formatted Hacker Noon draft. Via Hacker Noon, you can review and edit the story as much as you want before submitting, or submit automatically from Slack. Step 4: Editors review your tech conversation for publication in Hacker Noon. Your tech conversation has the opportunity to reach our 3,000,000 monthly readers. You own the content and can publish it anywhere else too. How to Start Slogging Slogging is currently in beta and you can sign up via the waitlist here. Don't forget to give us your 👏 !
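The thread-to-draft step in the workflow above can be pictured with a toy transformation. To be clear, this is not Slogging's actual implementation or Slack's API — just an illustrative sketch of the idea that a list of chat messages can be mechanically turned into a markdown story draft.

```python
# Toy sketch of the thread-to-draft idea (NOT Slogging's real code or API):
# turn a list of Slack-style messages into a simple markdown story draft.

def thread_to_draft(headline, messages):
    """Format a chat thread as a markdown draft with speaker attribution."""
    lines = [f"# {headline}", ""]
    for msg in messages:
        lines.append(f"**{msg['user']}**: {msg['text']}")
        lines.append("")
    return "\n".join(lines)

draft = thread_to_draft(
    "Why We Ship Weekly",  # the "suggested headline" from the Slack command
    [
        {"user": "alice", "text": "Shipping weekly keeps feedback loops short."},
        {"user": "bob", "text": "And it forces smaller, safer changes."},
    ],
)
assert draft.startswith("# Why We Ship Weekly")
assert "**alice**:" in draft
```

The real product presumably does far more (formatting, review, submission), but the core "conversation in, draft out" shape is this simple.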
https://becominghuman.ai/what-is-slogging-acc06114de95
['Limarc Ambalina']
2021-05-03 14:08:37.694000+00:00
['Blogging', 'Slack', 'Slogging', 'Technology', 'Software']
2,822
3 Things to consider when operating Kafka Clusters in Production
This year, I attended the Kafka Summit in San Francisco. Coming from a DevOps background, I noticed some challenges anyone considering adoption will face once Kafka is running in production. I won't explain in technical detail what Kafka is [4], but if you are studying Kafka and considering adoption, here are 3 things to think about before adopting Kafka clusters in production: Context Kafka is a distributed event streaming platform. It is grabbing so much attention not only in the Data and Analytics space but also in the DevOps space, and the reason is simple: logs. One of the most common use-cases for Kafka is to use it as a standard way for all systems to communicate in general. See the official documentation here. This is a streaming platform being used for messaging, which is super powerful and suitable for big corporations with many services communicating with each other at the same time. As your main messaging system across front-end, back-end, micro-services, serverless functions, databases, etc., Kafka changes the perspective of your environment because it imposes a stateless approach on all the apps. If you are using Kafka, everything can be translated into an event and posted to a topic, which increases the importance of logs and monitoring. That's why infrastructure/sysadmin and DevOps professionals are interested in this technology. #1 Team and Numbers With this use-case in mind, you notice how things can get complex very fast and why you need people to maintain it. That's the first thing to consider when operating Kafka in production: you will need a highly technical team to keep it running. It is definitely not a platform you will implement and forget about. Kafka will probably demand customization of your current environment: network, hardware, OS and application-level changes. That's another reason why most Kafka use-cases are presented by big companies (Uber, Walmart, Twitter, Netflix, etc.)
with a lot of people to maintain huge clusters with thousands of brokers instead of savvy data-focused startups. Lowering the barrier of entry for Kafka is another goal of Confluent, but that's a topic for a different article. In the meantime, you can watch the Kafka Summit 2019 San Francisco keynote here. One of the most important articles about operations in Kafka is New Relic's "Kafkapocalypse" [1]. I recommend reading it to understand more of the technical reasons why you need a strong team taking care of the clusters, but the article also presents some core metrics about Kafka — Replication, Retention, and Consumer Lag. They are just an example of the real takeaway here: Kafka is like (or more important than) any other component of your production environment; you need to create as many metrics as you can about it, specifically but not restricted to the concepts presented in the article. New Relic, Confluent, and other service providers offer solutions to help you manage your Kafka clusters, including data over these metrics, if you don't want to collect them manually. To summarize, Kafka can require a larger maintenance/operations budget than initially planned. #2 Balance and Graphs There is another important thing to consider in this same complex scenario of highly distributed microservices communicating over Kafka clusters, not about Kafka itself but related: dependencies. If you have a strong DevOps culture, with pipelines updating your microservices constantly, that will expose an interesting situation: a team's update to a producer — new topics, new fields or field updates, for example — could break the behavior of a consumer downstream. In some real-world situations mentioned during the conference, a team would prefer not to touch a service because they don't know if some other team/service is consuming it. In this complex scenario, keeping track of these dependencies is not an easy task.
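Of the core metrics mentioned above, consumer lag is the easiest to make concrete: per partition, lag is the broker's log-end offset minus the consumer group's committed offset. The sketch below uses plain dicts rather than real Kafka client objects, and the function name is mine, not from New Relic or any Kafka API.

```python
# Minimal consumer-lag sketch: lag = log-end offset minus committed offset,
# per partition. Offsets are plain dicts here, not real Kafka API objects.

def consumer_lag(log_end_offsets, committed_offsets):
    """Return per-partition lag for a consumer group."""
    return {
        partition: log_end_offsets[partition] - committed_offsets.get(partition, 0)
        for partition in log_end_offsets
    }

lag = consumer_lag(
    log_end_offsets={0: 1500, 1: 980, 2: 2000},
    committed_offsets={0: 1500, 1: 900},  # partition 2 has no committed offset
)
assert lag == {0: 0, 1: 80, 2: 2000}
assert sum(lag.values()) == 2080  # total lag for the group
```

A steadily growing total is the classic signal that consumers cannot keep up with producers, which is exactly the failure mode the "Kafkapocalypse" article describes.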
You will need to keep your dependency maps updated, probably with some automation if your deployment rate is high. Since this is not a Kafka-exclusive problem, there are plenty of solutions out there to solve it; the difference when you are using Kafka is how you manage your topics and who is responsible for them. The tricky part here is to find a good balance between the freedom every team needs to update their micro-services and the rules/constraints the team managing Kafka will impose to keep the clusters and brokers healthy. #3 Data Security and Privacy Since we are imagining a scenario where Kafka is being used as a unified messaging system across applications, data security and data privacy become a priority, and access to the clusters becomes sensitive. Kafka natively supports security features such as client authentication, client authorization, and in-transit data encryption [2], but with the standard setup, any user/application has read/write access to any topic. Setting up security in Kafka is not simple, and you'll probably have to make important decisions on which security standards supported by Kafka are best for you. Here's more about Kafka Security. Proper usage of these native features could ease some of the challenges I described in this article — the dependency issue, for example — but there are also trade-offs. Performance takes a hit when encryption is enabled, leading to a considerable increase in CPU utilization, since both clusters and brokers are decrypting messages. The team maintaining Kafka will have to fine-tune the cluster's configuration to respond to the environment's needs. It's important to remember that native encryption is only applied to in-transit data. Data at rest is still your responsibility to encrypt (or not). This could lead us to a whole different discussion if you deal with sensitive information like credit cards or personal data and CCPA/GDPR or PCI is important for your operations.
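To make the security discussion above a little more concrete, broker-side TLS and authorization are configured through properties like the ones below. This is a hedged sketch, not a recommendation: the hostnames, paths, and passwords are placeholders, and a real deployment needs matching client configuration, keystores, and ACLs.

```properties
# Sketch of Kafka broker security settings (placeholder values only).
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.broker.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.truststore.location=/var/private/ssl/kafka.broker.truststore.jks
ssl.truststore.password=<truststore-password>
# Deny topic access by default; grant it explicitly via ACLs.
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

Flipping `allow.everyone.if.no.acl.found` to `false` is what closes the "any user/application can read/write any topic" default mentioned above, at the cost of having to manage ACLs per team and per topic.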
Some design and configuration decisions on the Kafka environment should be taken in advance, before adoption, if data privacy is an important constraint. This will directly impact how you monitor your Kafka clusters to manage auditing if you need to be PCI compliant, for example. Final thoughts The main goal of this article is not to criticize Kafka but to give awareness of what to expect when adopting the technology. Keeping all these things in mind is important for a smooth adoption of Kafka, especially when you are transitioning from a POC stage into production. To summarize, when considering adopting Kafka in production, leadership should consider: 1. The maintenance costs to fine-tune Kafka for your needs; 2. The tools you will have to build/buy to monitor the clusters and keep them healthy; 3. The impact on privacy and security of having such a platform. Kafka is a very powerful platform with a strong community. I'm looking forward to the evolution of the technology and how the market will react to the changes it will bring. References: [1] Ben Summer, New Relic's Kafkapocalypse, Dec 12th 2017 — https://blog.newrelic.com/engineering/new-relic-kafkapocalypse/ [2] Apache Kafka 2.3 Documentation https://kafka.apache.org/10/documentation/streams/developer-guide/security.html [3] https://www.confluent.io/kafka-summit-san-francisco-2019/building-and-evolving-a-dependency-graph-based-microservice-architecture/ [4] https://www.confluent.io/what-is-apache-kafka/ [5] https://kafka.apache.org/uses#uses_messaging [6] https://www.youtube.com/watch?v=XMXCZSJR1iM
https://medium.com/slalom-technology/3-things-to-consider-when-operating-kafka-clusters-in-production-bc56233f6184
['Marcelo Ventura']
2020-01-28 03:05:35.207000+00:00
['DevOps', 'Automation', 'Cloud Computing', 'Technology', 'Kafka']
2,823
The current state of blockchain in travel
Blockchain technology, or distributed ledger technology, has been a frequent and popular topic in travel for some time now. While it is still early days, akin to the internet in the 1990s, we strongly believe that the technology holds promise for the travel industry. Much has changed since we hosted our Blockchain in Travel Summit, where we convened industry insiders and blockchain experts to discuss all aspects of blockchain. For context, when we held the summit back in March 2019, the total crypto market capitalization was $130 billion. Today, the industry has pushed past $2 trillion. That's over 15x growth in just two short years. So, what's the state of play with blockchain in travel today? Has blockchain's ability to increase transparency, lower costs, and provide instant traceability "taken flight"? Let's dive into some of the projects bringing blockchain to travel. Blockchain for operations There are three primary areas where blockchain technology can support airline operations — streamlining processes, improving supply chain management, and reducing payment fees. Airline operations are complex, require many partners, and are mission critical, making them ripe for distributed technology. However, given the scale of incumbent platforms, the change will take time. Operational processes: Blockchain technology has the potential to unlock a new world of smarter, more efficient operations for airlines. If every department and external vendor had the ability to work from the same data set, labor-intensive processes would not only be seamless, but airlines would have more control over sharing that data across traditionally disparate systems. What does this look like in practice? Today, AirAsia uses Freightchain to optimize cargo revenues. Freightchain matches airlines with shippers and utilizes digital contracts to enable real-time bookings and fast settlement to optimize cargo capacity usage.
In another example, Deepair leverages smart contracts and real-time settlement so that airlines can augment ancillary revenues by cross-selling content across interline partners. Secure data on the distributed ledger means that these offers can be personalized, at scale, without sacrificing security or violating privacy. Supply chain: Airlines can also enhance visibility into their global supply chain by tracking components on the blockchain. Capabilities like inventory management and procurement are augmented with greater transparency, tracking, and control of critical aircraft parts or operational inventory. VeriTX is a digital supply chain for aircraft parts that verifies the origin and authenticity of inventory to ensure compliance with regulations and quality standards. The system also uses 3D printing to reduce the time that an aircraft is out of service by streamlining on-demand printing of verified digital parts. Payments: Travel companies spend millions each year on payment processing. Settlement times are also lengthy, due in part to the global nature of the industry and the number of payment methods available. Payments can be faster, less costly, and more secure with distributed ledger technology. UATP, an airline-owned payment network, partnered with BitPay to enable payments in cryptocurrency. While crypto has not become a popular means of payment in travel just yet, these payment rails are a requirement for serious adoption. Blockchain for distribution Airlines, hotels, property managers, and other accommodation providers often struggle with profitable distribution, as it requires a delicate balance between commission-free direct bookings and tapping into the global reach of the most recognized online travel agencies (OTAs) like Expedia and Booking.
In the case of OTAs, an airline typically goes through a global distribution system (GDS) company such as Sabre or Amadeus, which distributes air and hotel plus ancillary products to the OTAs before the final product reaches the end consumer. This multi-layered process is highly inefficient for both the airlines and customers and is ripe to be streamlined utilizing distributed ledger technology. To optimize profitability, providers are pushing for more direct bookings. Today, Blockskye is partnered with United Airlines to provide a direct booking solution for business travelers. This distributed platform allows travelers to book directly with the airline, resulting in higher margins for the airline from both the price of the ticket and the reduction of fees associated with the GDS and OTA. OTAs are also evolving to include travel marketplaces that accept cryptocurrency. Travala allows customers to book over two million accommodations on its platform and is partnered with Binance to allow customers to book trips directly from the exchange. Booking from Binance Pay provides crypto holders the ease of using a single wallet and more options to utilize tokens. Blockchain for identity As it has in other industries, COVID-19 has acted as an accelerant, pushing the adoption of blockchain-based identity. This has been one of the most visible and active applications of blockchain in the industry: IATA’s TravelPass, which uses the distributed ledger technology from Evernym to secure user identities, is being used by Etihad Airways. Aruba has also built a health app with SITA and Indicio.tech, which allows visitors to share health data privately and securely. Germany is hosting a pilot across 120 hotels to use decentralized digital IDs to allow employees from German corporations to check-in. This is one of the most prominent pilots to date in using device-stored digital IDs in travel. 
Amadeus integrated IBM’s Digital Health Pass to enable secure and seamless health data verification. What’s next? Blockchain’s ability to provide scalable, secure, transparent, and traceable data makes it an ideal solution in travel. The industry’s complex infrastructure is perfect for blockchain technology, as it can streamline processes and systems spread across geographies, companies, and even industries. The challenge ahead is what every emerging technology — and the startup ecosystem surrounding it — faces: adoption. There must be a critical mass of both users and companies piloting the technology. Otherwise, there won’t be significant learnings to improve systems and pave the way for widespread adoption. Yet, even with the groundwork required to put the blockchain to work in travel, we remain bullish on blockchain’s role in travel. As we navigate global governance, private vs public blockchains, and regulations, the industry will find its path towards enabling a new way to transact, interact, and engage. These questions will be answered with actual usage. That’s the inflection point where travel finds itself: with ongoing pilots, the lessons learned are being translated into better products primed for global service.
https://medium.com/jtv-insights/the-current-state-of-blockchain-in-travel-7cc354917d08
['Ryan Chou']
2021-10-30 04:17:45.312000+00:00
['Travel', 'Airlines', 'Technology', 'Blockchain', 'Startups']
2,824
Set Protocol — Automated Trading Using Smart Contracts
Familiarize yourself with TokenSets — the exchange and user interface for using Set Protocol Since 2017, the California-based company Set Labs Inc. has been developing the Set Protocol — a smart contract system on the Ethereum blockchain that can be used to trade cryptocurrencies in a fully automated way, using ERC20 tokens that execute specific trading strategies. In 2018, the company raised $2 million from venture capital firms in order to develop and scale its operations. Constant improvement of the platform and the introduction of new products are solid evidence that the raised capital is used efficiently. What does the Set Protocol do? Set Protocol simplifies cryptocurrency trading and allows you to gain exposure to a trading strategy of your choice in just a few simple steps. For each of the trading strategies, the platform offers a corresponding ERC20 token. This token rebalances itself in accordance with the default parameters of each strategy. In other words, it buys and sells tokens when they reach predefined values. The platform that offers automated cryptocurrency trading using the Set Protocol is called TokenSets. The tokens described are collectively called Sets. Set tokens are fully collateralized by the crypto assets that are the basis of each particular strategy. The smart contract that regulates the collateralization process is called the Set Protocol vault contract and is available here. The platform currently supports the purchase of Set tokens with ETH, USDC, DAI and WBTC. Buying a Token Set It is very easy to use the TokenSets platform. Once you choose one of the available strategies, all you need to do is buy the corresponding token. And that's it — the rest of the job is done by the automated protocol. For each of the available strategies there is a detailed description. If a strategy is created by a specific author, then there is also additional information about the author and their work.
All of the Set tokens available are displayed on a control panel with adjustable-timeframe graphs similar to the ones we see on crypto exchanges. Because of that, it is easy to compare the performance of a particular strategy over different timeframes. In addition to MetaMask, other supported wallets are Fortmatic, Coinbase, Trust, imToken and Opera. Usage of the platform is available through two main modules: Social Trading and Robo Sets. Social Trading is a newly introduced advanced trading variant in which predefined strategies are created by respected traders based on their knowledge and experience using different indicators. Therefore, something known as copy trading is accomplished by buying a Set token that represents the strategy of a particular trader. Traders who offer their strategies on the TokenSets platform are proven experts in the cryptocurrency market with extensive trading experience. At the time of writing, there are 15 tokens available in the Social Trading module offered by 15 traders with a total of 22 different strategies. Within the Traders tab, each trader has a page with their personal information, the amount of capital they manage through their Set tokens, as well as posts, links to external profiles, graphs and other technical parameters of the respective Set tokens. Social Trading has enabled almost everyone in the world to gain exposure to successful trading strategies in just a few clicks by copying experienced traders. Rewards for traders Traders who participate on the platform are rewarded through fees. When configuring each individual Set token, the trader decides on the amount of the entry fee, which ranges from 0% to 5% for each individual Set. Since Set tokens are actually ERC20 tokens, you can transfer them like any other tokens outside the TokenSets platform. Once purchased, they will be stored in the wallet that is part of your profile.
You can freely send and receive your tokens and exchange them back to any of the cryptocurrencies you used to buy them on the platform at any given time. You can also send Set tokens to one of the crypto exchanges and trade them without boundaries. Set tokens can be purchased with ETH, USDC, DAI, or SAI tokens. Transparent user interface The user interface of the TokenSets platform is very simple and transparent, free from unnecessary details, and you will quickly get used to it. When purchasing a single Set token, you will be able to turn on email notifications indicating any changes to the Set token you own. You can opt out of the notification option. When using the TokenSets platform, you won't need to provide any personal information. In addition to Social Trading, investing in Set tokens is also available through Robo Sets. Robo Sets are Set tokens based on strategies built around certain well-known trading indicators such as RSI, MA, EMA, Range-Bound, etc. In addition to trading indicators, there are also baskets made up of two cryptocurrencies, e.g. ETH/BTC 75%/25%. For both groups of tokens, the control interfaces are identical. Currently, the Robo Sets module offers 23 Set tokens to invest in. All the Sets are available in a single control panel divided into cards for each Set group and a separate trading chart. TokenSets is constantly increasing the number of traders available, and if you have developed an effective method yourself, feel free to sign up, become one of TokenSets' traders and start earning money from users who will choose your strategy and pay you a fee. To get started, fill out the access form you will find here. Rebalancing of the Set tokens Rebalancing is a process in which the proportions within a particular Set change. For example, the ETH 20 Day MA Crossover, a Set that uses WETH and USDC tokens along with the 20-day MA indicator, will be rebalanced when the price of ETH exceeds the 20-day MA line.
The smart contract will then buy WETH with the available USDC, thereby benefiting from the rise in the ETH price. The opposite will happen when the price of ETH falls below the indicator level. At that point, this Set will keep its owners safe from the fall in the ETH price by relocating into USDC. Within each Set it is possible to monitor the status of rebalancing. On this page you will find links to each individual rebalancing. Rebalancing also provides additional revenue opportunities. It is organized as an open auction, which offers everyone the opportunity to increase the system's liquidity by participating, while at the same time earning a profit. Read the details of how to participate in the rebalancing here and here. Security In addition to internal testing of their smart contracts, security audits were also conducted by the reputable companies ChainSecurity and Trail of Bits. You can see the audit findings at the links below. 1. https://www.setprotocol.com/pdf/chain_security_set_protocol_report.pdf 2. https://www.setprotocol.com/pdf/trail_of_bits_set_protocol_report.pdf All smart contracts are open source. Contracts can be found on the Set Protocol GitHub profile: https://github.com/SetProtocol/set-protocol-contracts. To further encourage developers to use the Set Protocol to develop new Sets and applications, Set Labs has created a dedicated page with instructions for easier usage of the platform. The site is available here: https://docs.setprotocol.com/#/tutorials#create-a-rebalancing-set If you want to step into the world of cryptocurrency trading and do not have enough knowledge about indicators and technical analysis, or do not have time to track the movements on crypto exchanges, the TokenSets platform might be interesting for you. It's good to know that you can start with small amounts. As with the purchase of other tokens, you can buy just a portion of a Set token and only later increase the stakes.
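The 20-day MA crossover rebalancing described above can be sketched in a few lines of Python. This is an illustration of the strategy's idea only, not Set Protocol's actual Solidity contract code, and the price series below are made up:

```python
# Illustrative sketch of a 20-day moving-average crossover rebalance,
# in the spirit of the ETH 20 Day MA Crossover Set described above.
# Plain Python for clarity -- not Set Protocol's on-chain contract code.

def moving_average(prices, window=20):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / min(len(prices), window)

def target_allocation(prices, window=20):
    """Hold ETH (as WETH) when price is above its MA, stablecoin otherwise."""
    ma = moving_average(prices, window)
    return "WETH" if prices[-1] > ma else "USDC"

# Rising market: last price above the 20-day average -> stay in WETH.
rising = list(range(80, 120))        # 80, 81, ..., 119
print(target_allocation(rising))     # WETH

# Falling market: last price below the average -> rotate into USDC.
falling = list(range(120, 80, -1))   # 120, 119, ..., 81
print(target_allocation(falling))    # USDC
```

The real Set executes the equivalent switch on-chain via the auction-based rebalancing process described earlier, rather than via a direct market order.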
As with all smart contracts, regardless of the audits and tests performed, you need to be aware that the usage of smart contracts carries a risk, and be sure to keep yourself well informed before you start investing. You can find answers to frequently asked questions on the FAQ page, and you can follow TokenSets on Twitter, Discord and the Medium blog.
https://medium.com/@mojkripto-com/set-protocol-automated-trading-using-smart-contracts-df11659b2097
['Moj Kripto']
2020-02-27 21:00:52.923000+00:00
['Trading', 'Blockchain Technology', 'Smart Contracts', 'Ethereum', 'Blockchain']
2,825
Team Zero Weekly Newsletter
Ingar & Rick will be releasing a marketing competition this week, running for a month or so. There are GIFs, articles, videos, and all sorts to get involved in. We will be needing the help of the community in selecting the entries that bring us some more exposure. Lolliedieb has appeared from seemingly nowhere this week with another version of the miner that has been producing some really great results. Currently this is an Nvidia version, with the prospect of an AMD miner in the wings as well. The miner is being tested in the background and the reports show a great deal more hashrate, but it is still early stages, so we will keep you in the loop as this product is finalised and comes closer to release. We look forward to continuing to build the relationship with Lolliedieb, and thank him for his great work to date. Last week we released the finer details of the finite supply, block halvings, and also the developer fund. To recap, here are the details below again in case you missed them:
https://medium.com/zerocurrency/team-zero-weekly-newsletter-241a6f2363d8
['Zero Currency']
2018-06-16 15:08:09.981000+00:00
['Technology', 'Internet', 'Computer Science', 'Blockchain', 'Bitcoin']
2,826
Healthcare Business Intelligence
Technology is changing the way we live our lives almost every day and in a multitude of different ways. One of these transformations is occurring in the field of healthcare. Health is a business that has been around for centuries, with modern medicine helping to extend the average lifespan by decades. However, new innovations are set to make this whole process significantly more user-friendly and useful. Business intelligence itself is a fairly new innovation and refers to the collection and use of data to improve business operations and strategic planning. Healthcare business intelligence builds on this same framework, but in this case the data in question is patient data gathered through a variety of channels. Healthcare BI has a slightly different purpose than business intelligence alone. With healthcare business intelligence, organizations are still looking for ways of improving operations and costs, but the greater focus is on improving patient care. Healthcare Analytics as a Business While patient care is a primary mandate for healthcare BI tools and software, businesses entering the market have the potential of realizing a very healthy return on their investment. In 2019 the market was already fairly robust at US$14 billion, but this is set to skyrocket over the coming years, with the healthcare analytics market expected to reach US$50.5 billion by 2024. This investment is expected to primarily focus on North America, with Europe a distant second followed quite closely by Asia. Over the coming years, North America by itself will far surpass the current global investment of US$14 billion. This growth is fueled by a variety of factors, one of which is a growing focus from governments towards a more personalized provision of medical care. Benefits of BI in Healthcare While there are many reasons to embrace healthcare BI, there are also some clearly obvious benefits that need to be called out.
Reduced Costs In many parts of the world, including North America, healthcare is a business. While doctors and clinicians got into the role to help people, money is still a driver that needs to be acknowledged. Running a medical practice or hospital is expensive with resource costs, tools, equipment, and pharmaceuticals all adding up. However, clinical business intelligence tools can help drive these costs down in a variety of different ways. Healthcare BI software can track populations and perform analysis to better understand the likelihood of illness and infection in specific areas and locations. Healthcare BI tools can improve communication and information sharing between different organizations and even between countries. Turning a Doctor into a Data Scientist BI tools can be complicated and complex to use and understand. However as healthcare itself has transformed, so have the BI tools that support healthcare. Now doctors and other healthcare experts have a means of extracting information in a simple manner, without requiring an understanding of coding or databases. Self-service tools make front line staff more efficient and effective. They let healthcare providers access information in real-time to improve their ability to make decisions and judgments in a more timely manner. In addition, these self-service tools allow simple customization so that patients too can understand the information being presented. Personalized Treatment Services In years gone by, patient treatment was a matter of best guess more than anything. As time progressed and information was shared between physicians, researchers, and clinicians about what worked and did not work when it came to treatments for specific illnesses and disease, better treatment options were discovered and refined. Health data intelligence helps take that a step further and helps doctors understand why a treatment that worked for one patient might or might not be suitable for another. 
Business analytics in healthcare can be further refined to demonstrate the risks of specific treatments based on a patient’s current condition and medication. Now treatments can be personalized based on specific genetic blueprints targeting treatments in a more concrete manner. Evaluating Caregivers Healthcare is a business as already mentioned and one of the precepts of business is the service provided to customers. Within healthcare, those customers are the patients that engage with the doctor or medical facility. These patients are concerned not only with how they are treated while in the facility, but the information they receive, how much empathy is or is not shown in the given situation and more. Like reviews for restaurants, healthcare providers too can be reviewed by patients and this information gathered through different tools. Clinical business intelligence software can evaluate information on the carers within their organization and use this information to further improve the services they provide to patients. Improving Patient Satisfaction Health BI has multiple impacts on patient satisfaction. Better and more customized treatment ensures that patients receive targeted services focused on their specific illnesses or condition. Customized treatment options drive improved patient outcomes, leading to overall better quality of life. In addition, clinical and hospital business intelligence helps make the facilities themselves more efficient and effective improving wait times and overall service levels. Healthcare BI Tools Healthcare BI software is a subset of BI software targeted towards the healthcare market. These tools provide specialists in the medical field with an improved way of reviewing data gathered from different sources. These sources could include patient files and medical records but can be expanded to include additional information such as financial records and more, to better enable the facility in their care and treatment planning. 
Healthcare BI tools integrate with other software in a medical establishment, but it is crucial to understand that they are not the same as software like EMR and EHR. Tableau One of the leaders in the BI marketplace, Tableau helps organizations create and publish dashboards extremely easily. Tableau has some built-in data preparation tools that simplify the process of gaining information. Tableau also has prepared templates for users in the healthcare market, which helps even further with implementation, letting organizations quickly drill down into their information. Power BI Power BI is a Microsoft product and as such is very familiar to users of the Office suite. It integrates directly with other Microsoft products like Excel and SharePoint and lets users analyze, model, and graphically represent data in a variety of different dashboards and reports. Power BI is fairly intuitive and easy to use, with a built-in AI engine that lets users analyze clinical data quickly and easily. Sisense Sisense, like Tableau, has dedicated integrations for the healthcare market. However, Sisense takes it perhaps a step further with a healthcare analytics module built specifically for healthcare information and data. Sisense lets you pipe data in from a variety of different data sources so you can integrate all of the different touchpoints in a single interactive dashboard. NIX Experience In Healthcare BI As a leader in software development, NIX was contracted to build a solution for a global organization. This company was looking for a way of improving the information available to company executives. Executives were interested in the visualization of specific indicators related to finance, quality of care, and clinical services. The NIX team used data from multiple different applications to determine the key areas that needed to be measured. They determined that the best path forward was the use of Tableau as a solution.
Tableau was visually appropriate and integrated with all of the systems but also provided the security that the organization needed in terms of patient information. NIX worked with Tableau extensively and also implemented a separate Java-based component to further improve security and authentication. In addition, another component was added which improved the scheduling of data extracts. The NIX team successfully built a solution in a very short timeframe that met all of the client requirements, leading to a successful product launch shortly thereafter. If you are interested in healthcare business intelligence and are looking for a partner with experience for your project, contact us. At NIX we understand the business of software and healthcare and can help you ensure that you are a success at both.
https://medium.com/nix-united/healthcare-business-intelligence-c28bd7cfb5e7
[]
2020-11-13 13:19:21.324000+00:00
['Healthcare Technology', 'Software Development', 'Healthcare', 'Nix', 'Business Intelligence']
2,827
London Tech Week 2017 — Which Data events to attend?
London Tech Week 2017 is almost here. The four-day festival kicks off on June 12th, 2017 and promises 40,000 visitors from 70 countries visiting 300 events across the capital. So, how do you know which events to attend? Lucky for you, we have taken a look at the data events during London Tech Week. Here are the events to look out for, so block out your diary now… Monday, June 12th Learn how organisations are using AI now, what AI can do and what it won't do. Explore the future for AI to transform your business, decision making and customer service. Discover what AI will mean for your workforce, the impact on individuals and why the right mindset is key. Organised by: Hudson and ASI Data Science Date & Time: Monday 12th June 08:30–10:00 Free entry Enjoy 2 days of keynote presentations delivered by world-renowned speakers, interactive panel discussions & unrivaled networking opportunities with your peers, as you learn and share key insights into the trends and challenges of your industry. Check out the five-track agenda here: http://bit.ly/2pDwnZK Organised by: The Innovation Enterprise Date & Time: Monday 12th — Tuesday 13th June 08:00–16:30 Paid entry £995 Tuesday, June 13th Are you ready for the GDPR? Harbottle & Lewis' data protection tech roundtable discussion will include a 15 minute overview on the General Data Protection Regulation (GDPR), which comes into effect on 25 May 2018, and will then become an interactive discussion on the key challenges facing the UK tech industry and the steps which your peers and competitors are taking to prepare. Organised by: Harbottle & Lewis Date & Time: Tuesday 13th June 18:00–20:00 Free entry See how a combination of machine learning, time series modeling, and geostatistics is more effective at predicting future crime than any of these techniques alone.
Using a variety of public data sets, including police reports, the US census, Foursquare, newspapers, and the weather, Jorie will discuss how to merge, visualize, and model this type of multi-dimensional data, using PostGIS, spatial mapping, time-series analyses, dimensionality reduction, and machine learning. Organised by: Dataiku Date & Time: Tuesday 13th June 18:30–21:00 Free entry Wednesday, June 14th Have you realised yet that one of the vital first steps on the journey to digital transformation is insight into your data? If a transformation goal is to be more nimble in an increasingly competitive market, then come and discover how IoT, machine learning, cognitive analytics and even bots can help you engage with your customers in truly innovative ways. Organised by: Microsoft Date & Time: Wednesday 14th June 11:30–12:30 Free entry Showcasing some of the work data.world has done with key federal organisations like the US Census, the Department of Defense, and the White House, Matt will talk about how data.world has helped to make data more accessible and drive more collaboration and a more engaged community around it. Organised by: Open Data Institute Date & Time: Wednesday 14th June 13:00–14:00 Free entry Thursday, June 15th The Annual Analytics Summit brings together experts from government, journalism, industry and academia to help you turn data into decisions. Filled with case studies, innovations and strategies, the summit is one day of learning and networking for anyone involved in organisational decision-making — analysts and decision-makers alike. Organised by: The OR Society Date & Time: Thursday 15th June 09:00–17:00 Paid entry £175 Building Data Science or Analytics teams is a complex business, and the goalposts keep moving, as the pace of change is not abating.
This event brings together a stellar group of speakers who have been through these challenges. Organised by: TestBoard in association with Tech London Advocates DataTech Date & Time: Thursday 15th June 16:00–18:00 Free entry [Disclaimer: we are co-hosts for this event]
https://medium.com/datatech/london-tech-week-2017-which-data-events-to-attend-877b07458fda
['Kam Rafique']
2017-06-07 19:36:35.917000+00:00
['Events', 'Data', 'Data Science', 'Big Data', 'London Technology Week']
2,828
How to Manage Database Resources in OutSystems: SQL Server Resource Governor
One of the most significant difficulties of database performance tuning is trying to manage resources with competing workloads on a shared database server. The number and type of applications that are running on the database determine the server workload and the amount of consumed system resources. If one or several applications exceed the workload threshold, the server can become destabilized, preventing critical background processes from running on time. Response times can become prolonged, making it difficult to debug the source of the excessive load or to log into the administration console to identify and fix the problem. To prevent these kinds of issues and to manage the database server workload, some vendors have equipped their database engines with resource management features. In this article, we will focus on Resource Governor, an SQL Server Enterprise feature, showing you how to use it to enable workload isolation. Because this is a database-specific feature, any OutSystems application can take advantage of it, and no special configuration is needed. How Does Resource Governor Work? Resource Governor is available in SQL Server Enterprise. It enables the management of SQL Server workloads by specifying limits on resource consumption from incoming requests. At the time of writing, Resource Governor was not available in Amazon RDS for SQL Server or in Azure SQL as a Database as a Service (DBaaS) cloud offering. However, it is available in managed offers like Azure SQL Database Managed Instance. Resource Governor has three main concepts: Resource pools: represent the physical resources of the server. Resource Governor allows the user to specify limits on the amount of CPU, memory, and physical input/output (I/O) that incoming application requests can use. When SQL Server is installed, two resource pools (internal pool and default pool) are created, and the user can add more resource pools.
Resource Governor allows the user to specify limits on the amount of CPU, memory, and physical input/output (I/O) that incoming application requests can use. When SQL Server is installed two resource pools (internal pool and default pool) are created, and the user can add more resource pools. Workload groups: act as containers for session requests with similar classification criteria, and are located inside a resource pool. When SQL Server is installed, two workload groups (internal group and external group) are created and mapped to the corresponding resource pools. The user can add more workload groups. act as containers for session requests with similar classification criteria, and are located inside a resource pool. When SQL Server is installed, two workload groups (internal group and external group) are created and mapped to the corresponding resource pools. The user can add more workload groups. Classifier Function: assigns incoming sessions to a workload group, according to their classification. The user customizes the classifier function, to classify and assign the incoming sessions, and the overall database usage according to the application requirements. The following image summarizes these concepts. When an application connects to the database engine, it is classified and then distributed to a workload group. One or more workload groups are then assigned to specific resource pools. The architecture of Resource Governor: image source. Getting Started With Resource Governor Let’s consider the following scenario and its resource management requirements. A company called ABC Corp. 
runs two OutSystems applications (Salary Processor and Invoicer), both critical to its business, and both connected to a SQL Server database:

- Salary Processor: a Business Process Management (BPM) based application, highly transactional and CPU intensive, that runs once per month during the daytime.
- Invoicer: a web application used in the ABC stores; it issues all the invoices to the customers.
- Service Center: the OutSystems Service Center, the environment management console.

The database resource manager (Resource Governor) segregates user access to the Invoicer from the Salary Processor's asynchronous processes and timers, shielding the end-user quality of service (QoS) from any abnormal load in asynchronous jobs. Service Center manages and monitors the OutSystems platform at all times and must be immune to unexpected peak workloads of other applications.

Step 1: Create Resource Pools

Resource pools impose limits on the amount of CPU that incoming application requests can use within the resource pool. For our scenario, we need to create two resource pools:

CREATE RESOURCE POOL rp_salary_processor
WITH (
    MIN_CPU_PERCENT = 0,
    MAX_CPU_PERCENT = 10,
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 100
);
GO

CREATE RESOURCE POOL rp_service_center
WITH (
    MIN_CPU_PERCENT = 50,
    MAX_CPU_PERCENT = 100,
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 100
);
GO

- rp_salary_processor: used by the Salary Processor. In a scenario with CPU contention, the Salary Processor is allocated a maximum of 10% CPU usage. The resource pool enables the database server to reserve enough available resources for other applications. The Salary Processor queries may become slower and the overall processing may take longer; however, this shouldn't be problematic, since this type of batch processing isn't typically time-sensitive.
- rp_service_center: used by Service Center. In a scenario with CPU contention, Service Center is allocated a minimum of 50% CPU, ensuring our access to Service Center even when the database server is under high CPU usage.

Step 2: Create Workload Groups

The second step is to create the workload groups, which we then associate with the corresponding resource pools:

CREATE WORKLOAD GROUP grp_salary_processor USING "rp_salary_processor";
CREATE WORKLOAD GROUP grp_service_center USING "rp_service_center";

Step 3: Create and Define the Classifier Function

The next step is to create a classifier function that maps incoming session requests to the appropriate workload groups. To segregate the workload, a proper classifier function must be defined, and this depends on how the applications use the database.

Define the Classifier Function for a Shared Database Scenario

In this scenario, the OutSystems platform runs on its own server and each application runs on a separate front-end server; they all share the same database. To use Resource Governor when all the applications share the same database, the classifier function should assign sessions to workload groups based on the front-end server they arrive from, as demonstrated in the following figure.
In this scenario, the selection criterion of the classifier function is the front-end server from which the session request arrived:

CREATE FUNCTION fnClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @val sysname
    SET @val = 'default'
    IF HOST_NAME() = 'Frontend1'
        SET @val = 'grp_salary_processor'
    IF HOST_NAME() = 'PlatformServer'
        SET @val = 'grp_service_center'
    RETURN @val
END;
GO

Define the Classifier Function for a Multiple Databases Scenario

In this scenario, the OutSystems platform and applications data (using the OutSystems Multiple Database Catalogs feature) are stored in multiple databases (also called catalogs), located on one database server. Let's assume that all applications run on a single front-end server, as demonstrated in the following figure. In this scenario, the selection criterion of the classifier function is the database name:

CREATE FUNCTION fnClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @val sysname
    SET @val = 'default'
    IF ORIGINAL_DB_NAME() = 'SalaryDB'
        SET @val = 'grp_salary_processor'
    IF ORIGINAL_DB_NAME() = 'PlatformDB'
        SET @val = 'grp_service_center'
    RETURN @val
END;
GO

Bear in mind that both classifier functions return a default workload group; sessions from the Invoicer application and all other non-classified sessions will be assigned to it.
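Before registering a classifier, it can be useful to sanity-check its logic by calling it directly from a session. This is a small sketch (not part of the original walkthrough) that previews which workload group the current session would be routed to, using the example host and database names from above:

```sql
-- Preview the classification of the current session.
-- The function can be called as soon as it is created,
-- even before it is registered with Resource Governor.
SELECT HOST_NAME()        AS current_host,
       ORIGINAL_DB_NAME() AS original_db,
       dbo.fnClassifier() AS target_workload_group;
```

If the result is 'default', the session would fall into the default workload group, like the Invoicer sessions in this scenario.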
To finish creating the classifier function, we need to register it and update the in-memory configuration:

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

Step 4: Validate the Resource Pools and Workload Groups

We can now verify that the resource pools and workload groups are properly configured by running these queries:

SELECT * FROM sys.resource_governor_resource_pools;
SELECT * FROM sys.resource_governor_workload_groups;
GO

To check that the classifier function exists and is enabled, run the following queries:

SELECT * FROM sys.resource_governor_configuration;
GO
SELECT object_schema_name(classifier_function_id) AS [schema_name],
       object_name(classifier_function_id) AS [function_name]
FROM sys.dm_resource_governor_configuration;

Database resource management benefits end-user QoS, shielding users from unresponsive or excessively slow applications. Resource Governor enables you to manage your available resources by implementing workload isolation, and as you have seen, it's pretty straightforward.
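As a final check once Resource Governor has been reconfigured, you can confirm that live sessions are landing in the expected workload groups. The following query is a sketch using standard dynamic management views (the group_id column of sys.dm_exec_sessions links each session to its workload group):

```sql
-- Map each active session to the workload group and resource pool
-- it was classified into.
SELECT s.session_id,
       s.host_name,
       s.program_name,
       g.name AS workload_group,
       p.name AS resource_pool
FROM sys.dm_exec_sessions s
JOIN sys.dm_resource_governor_workload_groups g
  ON s.group_id = g.group_id
JOIN sys.dm_resource_governor_resource_pools p
  ON g.pool_id = p.pool_id
ORDER BY s.session_id;
```

And if you ever need to roll the configuration back, ALTER RESOURCE GOVERNOR DISABLE returns the server to its default, ungoverned behavior.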
https://medium.com/outsystems-engineering/how-to-manage-database-resources-in-outsystems-sql-server-resource-governor-70be2b6bde57
['João Valentim']
2019-06-06 17:58:55.556000+00:00
['Technology', 'Sql Server', 'Resource Management', 'Outsystems']
2,829
Coronavirus Found in Wastewaters: What Does it Mean for the Pandemic
Coronavirus Found in Wastewaters: What Does it Mean for the Pandemic And how wastewater-based epidemiology (WBE) prevents potential outbreaks. Image by Sebastian Ganso from Pixabay The novel coronavirus, SARS-CoV-2, causes gastrointestinal symptoms, such as diarrhea, nausea, and abdominal pain, about 10% of the time. The reason is that SARS-CoV-2 can infect gastrointestinal cells that express high levels of the ACE2 receptor. It makes sense that SARS-CoV-2 would shed through the feces as well, which is true. As follows, the next question is: Might wastewaters be a source of Covid-19 transmission? Persistent and infectious SARS-CoV-2 in feces The prevalence of fecal shedding of SARS-CoV-2 in clinical settings is about 50–80%, according to a research review by Masaaki Kitajima, an assistant professor specializing in water microbiology at the University of Tokyo. A few studies have detected infectious, active SARS-CoV-2 in fecal samples that are culturable (the act of infecting cells with viruses) in the lab. One study, however, failed to culture SARS-CoV-2 isolated from the feces of Covid-19 patients, which implies inactive viruses. Nonetheless, a negative finding does not negate other studies that have found infectious SARS-CoV-2 in feces. SARS-CoV-2 can even be found in the feces of Covid-19 patients for 4–11 days after their nasopharyngeal swabs turned negative, about 60% of the time. This might be due to the longer persistence of SARS-CoV-2 in the gastrointestinal system than in the respiratory tract (mean of 27.9 vs. 16.7 days after symptom onset). Therefore, the evidence suggests that feces could be a persistent source of infectious SARS-CoV-2. Concerns have thus been raised about the possible fecal-to-oral spread of SARS-CoV-2.
For instance, contaminated feces from Covid-19 patients might enter wastewaters or sewage plants and end up in water supplies. SARS-CoV-2 in wastewaters: Infectious or not? Scientists have previously detected the genetic material of coronaviruses that cause the common cold in wastewaters in the US and Saudi Arabia. A study published in 2005 also found persistent SARS coronavirus RNA in untreated and disinfected wastewaters from Beijing hospitals. Researchers believed this waterborne SARS might have contributed to the outbreak in areas with defective plumbing systems. Since prior coronaviruses can be present in wastewaters, the chances are that SARS-CoV-2 can as well. Indeed, investigators have discovered SARS-CoV-2 genes in wastewaters (mostly untreated samples) in the Netherlands, the USA (Massachusetts, Montana, and Louisiana), Germany, France, Turkey, Japan, Italy, Spain, and Australia. Importantly, the presence of viral genes does not prove viability or infectivity. So, scientists do not know if SARS-CoV-2 in wastewaters is infectious or not, owing to the constraints of current research tools. The stability of genomes in wastewaters is highly variable, which complicates the isolation of viruses for culturing (the act of infecting cells with viruses) in labs. What the WHO says: Significance of the viral envelope “While the presence of SARS-CoV-2 in untreated drinking water is possible, infectious virus has not been detected in drinking-water supplies,” states a July report by the World Health Organization (WHO). The main reason is that SARS-CoV-2 is an enveloped virus, which is arguably its biggest weakness.
An enveloped virus is less durable than non-enveloped viruses: the viral envelope is made up of fat, which is easily pulled apart by alcohol, soap, or disinfectants; in turn, enveloped viruses readily infect mammalian cells, whose cell membranes are also fat-based. “SARS-CoV-2 is enveloped and thus less stable in the environment compared to non-enveloped human enteric viruses with known waterborne transmission (such as adenoviruses, norovirus, rotavirus, and hepatitis A virus),” the WHO report added. The proper treatment of wastewaters or sewage plants would result in a 99.9% reduction of coronaviruses of all sorts, the WHO claims. Basically, the enveloped SARS-CoV-2 is vulnerable to environmental stressors such as changes in heat or pH. Thus, the waterborne spread of SARS-CoV-2 in properly treated water supplies is unlikely. “Conventional wastewater treatment processes should inactivate SARS-CoV-2, and multiple barriers used in drinking water treatment plants should suffice to remove SARS-CoV-2 to levels of non-detect and low risks,” Prof. Kitajima and others concurred. But note that countries with poor wastewater treatment and sanitization infrastructures, such as Pakistan and India, may face an increased risk of waterborne spread of SARS-CoV-2. Value of wastewater-based epidemiology (WBE) Even if SARS-CoV-2 in wastewaters may not be much of a concern in countries with access to clean water, the data still helps epidemiologists paint a clearer picture of the pandemic and potential outbreaks. One perk of WBE is that it acts as a form of pooled testing, where traces of viruses can be detected in groups of populations.
“Detection in community wastewater of one symptomatic/asymptomatic infected case per 100 to 2,000,000 non-infected people is theoretically feasible, with some practical successes now being reported from around the world,” said Prof. Rolf U. Halden, director of the Center for Environmental Health Engineering at the Biodesign Institute in Arizona. For instance, the University of Arizona tested for the presence of SARS-CoV-2 in wastewater samples from each dorm, and one sample came back positive. The university then tested 311 students residing in that dorm and successfully quarantined two asymptomatic carriers of Covid-19 before any outbreaks. “Wastewater surveillance could, therefore, be used to detect SARS-CoV-2 in the community and provide an estimate of the total number of infections. The added value is that this approach accounts for those who have not been tested, as they have no or mild symptoms,” agreed Anthony D. Harries, a senior advisor at the International Union against Tuberculosis and Lung Disease and an honorary professor at the London School of Hygiene and Tropical Medicine in the UK. “If lockdown measures have worked and a community is declared coronavirus free, routine wastewater surveillance could be used as an early warning alert that new infections are present.” WBE can also help compensate for the shortages and sub-par quality of diagnostic tests. This can save millions to billions of USD, as estimated by a statistical study by Prof. Halden. “WBE surveillance of populations is shown to be orders of magnitude cheaper and faster than clinical screening, yet cannot fully replace it,” the study concluded.
“For resource-poor regions and nations, WBE may represent the only viable means of effective surveillance.” Short Abstract There have been confirmed cases of fecal shedding of infectious SARS-CoV-2 in Covid-19 patients. This has led to the detection of SARS-CoV-2 in wastewaters around the world, raising concerns about a possible fecal-to-oral route of transmission. Fortunately, SARS-CoV-2 is an enveloped virus that is susceptible to inactivation by standard wastewater treatment protocols. But this might not apply to countries that lack access to clean water. Regardless, there is value in wastewater-based epidemiology (WBE), which offers a cost-effective means of pooled testing and of monitoring the current pandemic and potential outbreaks.
https://medium.com/microbial-instincts/sars-cov-2-found-in-wastewaters-what-does-that-mean-for-us-eed450ea962f
['Shin Jie Yong']
2020-08-30 15:42:22.428000+00:00
['Technology', 'Innovation', 'Life', 'Covid 19', 'Science']
2,830
How to Become a Better Developer Every Single day
How to Become a Better Developer Every Single Day 4 ideas to make you a top-class performer without long study nights Photo by Valeriy Khan on Unsplash True winners build themselves during their practice time. That's something you might have recognized yourself when looking at masterclass performers. You see them doing something and you think to yourself: “I wonder how much this person practised to achieve this”. Coding is no exception to this rule. And if you want to be a top performer too, you have to include daily practice of your skills in your life. Let's see how you can easily do that with the following list. You Should Always Have a New Goal and Work Toward It This is a very personal belief I have, and it's something that guides me through my life every day. I feel like people always need to set a new goal in their mind. Something that they want to achieve, and that they have to work hard to get. That's true for me both on a personal level and in my career. I suggest you set one target at a time for yourself and build your way toward it. For example, you could: Build an app you always wanted to create. Finally finish all those Udemy coding courses you have in your library. Learn a new language you were curious about. Learn new patterns and techniques for improving the code you write daily. Find a way to achieve your goals. Write down the necessary steps if you feel like it. This behaviour has immense value. It will make you grow as a professional because you will learn new stuff and practice. It will give you new opportunities, because you never know what some knowledge can bring you in the future.
https://medium.com/javascript-in-plain-english/how-to-become-a-better-developer-every-single-day-22f771de5897
['Piero Borrelli']
2020-11-19 08:50:05.775000+00:00
['Web Development', 'Technology', 'Software Engineering', 'Work', 'Programming']
2,831
Ocean Protocol: How Blockchains can contribute to AI by enabling Decentralized Data Exchanges
Credits: Kelly Belter Modern digital society has led to an increasing awareness of the importance of data, sometimes arguably labelled the “new oil” or “the next natural resource” (Virginia Rometty, IBM CEO). Many firms consider data-driven decision making a competitive advantage and therefore allocate significant resources to collect and process data, which is illustrated by the opening of new Business Intelligence departments and an explosion in demand for data scientists. Some of the most commonly used digital platforms, such as Facebook and Google, are free to use but in exchange collect data about users (who agree to this data sharing policy when signing the terms & conditions). These companies are all among the most valuable companies in the world, confirming the benefits of owning data. They perform so well thanks to the contribution of data to machine learning. In fact, for many years scientists developed increasingly complex models to improve accuracy but faced diminishing returns. However, feeding models with much larger datasets has produced performance surges. It is now commonly accepted that some machine learning algorithms (excluding those that evolve in permanence via trial and error, such as Reinforcement Learning) are greatly improved if fed with thousands, millions, and even billions of data points. The problem is that except for the few big players that have data and are able to convert it into value, accessing data is a major challenge for AI practitioners. This lack of availability inhibits startups from training their algorithms. Some data collections can be found online (including some for free) but represent only a small proportion of the world's data. To face this challenge, startups like DEX (www.dex.sg), BDEX (Big Data Exchange), and ExchangeNetwork have created data exchange platforms where companies as well as individuals can buy and sell data.
On these platforms, data providers upload their precious resources to the exchange's database, which then grants access to data users. Data can be retrieved using various types of access (e.g. API, download, and web interface) and formats (e.g. JSON, CSV, and reports). This solution therefore enables the transfer of data between entities and can be seen as a major step towards data availability. DEX centralized exchange interface However, centralized exchanges present drawbacks that impede scalability and massive adoption. The main reasons are related to centralization and can be found in other centralized systems such as banks, which have led to the development of distributed ledger technologies (e.g. blockchains, illustrated by Bitcoin for payments without an intermediary). The major problem of a centralized data exchange is that data are hosted by a third party (usually the platform itself or service providers on the cloud); this is something firms are not comfortable with, as leakage of some of their most valuable data can cause disastrous consequences for their business. As an example, Yahoo's database was hacked in 2014 and precious information about its users fell into the hands of malicious parties (by the way, we still do not know how this database was hacked). As a consequence, firms and institutions are reluctant to use such services because the risk surpasses the prospective gains. Centralized systems are also inefficient, as they cause delays, incur costs, and lack transparency as well as auditability. The latter is also an important feature for a proper data marketplace, as providers need to ensure compliance (more on this issue below). There are numerous articles and books about the need for decentralization, as well as about when a centralized system still remains a relevant structure. I would advise readers to have a look at these to better understand the disruption currently happening with blockchain technology.
In the case of data marketplaces, privacy- and security-related risks are in my opinion by far the biggest problem of centralized data exchanges. It is also important to notice that people have lost control of their data. Decentralized exchanges, by providing peer-to-peer exchanges, empower individuals to regain ownership and control of their data, and even to monetize it. This is why the Ocean Protocol foundation came up with the idea of a Decentralized Data Exchange protocol that would be the substrate for building decentralized data marketplaces. Ocean Protocol is the result of the fusion between BigchainDB and DEX in March 2017. BigchainDB is a Berlin-based startup founded in 2015 that uses blockchain technology to build scalable decentralized databases with interoperability as one of the core design guidelines, meaning that it can be accessed by various blockchain protocols but also by other distributed ledger implementations such as IOTA's tangle. DEX is a Singapore-based centralized data exchange operating since 2015 with more than 250 data providers. Frequently in contact with these data providers, DEX felt their lack of trust in centralized databases and thus the need for a decentralized system, as explained in the interview below with Chirdeep Chhabra, CEO of DEX and Founder and Board member of the Ocean Protocol foundation. As mentioned on their website, the Ocean Protocol Foundation aims at creating a Decentralized Data Exchange protocol to unlock data for AI. Marketplaces will not be governed by the Ocean Protocol Foundation, nor will the data be hosted by them. In fact, data will be distributed, while remaining under the control of the data providers or data custodians, and in some cases encrypted, so that there is no single point of failure. In addition, participants will not only be able to transact directly with each other but will also have the possibility of creating new data marketplaces using the protocol.
In fact, the Ocean Protocol Foundation does not see the product as being one huge data marketplace but rather many marketplaces developed by users (e.g. a marketplace specific to the healthcare industry), the protocol thus acting as a kind of “network of networks”. For further information about Ocean Protocol, you can refer to their website and several papers. The business whitepaper is already there (a nice 69-page read), providing an explanation of the project, an exhaustive description of the team and their background including BigchainDB and DEX work, a roadmap, as well as extensive information about the token distribution. It also mentions the already numerous partnerships (see picture below). I would recommend reading it and staying connected, as the technical whitepaper will be released soon. The whitelist for the token distribution will open on February 15. Participating Agencies and Authorities of Singapore Government Service & Technology partners During my visit to Singapore, I had the opportunity to meet Chirdeep Chhabra, CEO of DEX and Founder and Board member of the Ocean Protocol foundation, to discuss the current state of the data exchange ecosystem and in particular the contribution of the Ocean Protocol foundation. Here is the transcript of our interview. Could you tell me more about your background and how you came up with the Ocean Protocol idea? Following my master's in distributed systems at Ecole polytechnique fédérale de Lausanne (EPFL), I worked at IBM research labs and later at ETH Zurich, in what people now call the Internet of Things. Later, I studied at the London Business School and worked in multiple ventures in London, most often in the data field. At this point, in my environment there was no doubt about the potential of data anymore; it had become commonly accepted that it was greatly valuable for businesses. The questions had moved to how to create value from the data, how to unlock its potential.
Finally, I joined DEX and moved to Singapore, which has the ambition to become the first smart city. DEX had by then been working with the government and several enterprises here for 4 years to build a centralized marketplace. One of the main problems in AI is access to data: many AI companies came to me to connect them with people and organisations holding these datasets. Actually, only a few companies have both datasets and machine learning algorithms (e.g. Facebook, Google). This is why we need some kind of marketplace to get access to data and enable transactions to happen. When I joined, I quickly realized that a centralized model was unable to scale. This is explained by the fact that entities would not give us their most valuable data, for the simple reason that they cannot see what happens and then may feel that they lose control of their data. As a consequence, I started to look at alternatives and especially at how blockchain technology and tokens could contribute. I have known Trent McConaghy (founder of BigchainDB, co-founder of the Ocean Protocol foundation) for a while, so I contacted him in Berlin. I told him about the idea of data being converted into assets that are traded within a tokenized ecosystem. Trent was writing articles about that and we shared the same view, so we ended up creating Ocean Protocol, together with other members. I understand that Ocean Protocol is constituted of the DEX and BigchainDB teams. What are the roles and implications of each for the foundation? We have a clear understanding of our strengths and so of who is doing what. There are two elements: the protocol and the marketplace. Essentially, we act as a single team, but within that team most of BigchainDB's resources are focusing on the protocol and ours on the marketplace. We are building this marketplace in order to help users create additional marketplaces with the open-source template. How was the decentralization proposition accepted by your peers in the team?
And by your clients and partners? Within the team we are all very optimistic about it and believe that this complete change in direction is necessary. This new philosophy ensures that Ocean Protocol is built in the right way, with a network of marketplaces upon it. This is a design that is important for the development of safe and sustainable AI. Concerning the second part of the question, we have been discussing it with many of our clients. Actually, last year we had a large workshop with a number of C-level executives and Data and Privacy Officers about data management and sharing. They understand the value of data, but problems appear when it comes to understanding the mechanisms of data access, regulations, and compliance. They must be able to provide a list of who accesses the data upon request by regulators. Transparency and immutability are important factors that complement the need for privacy and security of the data. Not having these characteristics fully operational was one of the biggest barriers for DEX before, but there was much enthusiasm when we elaborated on decentralization, trust frameworks, and the Ocean Protocol proposition. Convincing companies that are already working with data to join has logically been relatively straightforward. We also have meetings with other corporates, not traditional data companies: those producing data on a daily basis but not using them. We try to convince them of the need to allocate more resources to AI and data analysis/business intelligence. As an example, firms need to predict both supply and demand (e.g. whether some types of crops will grow in the coming years, or the consumption of end-products). In addition, even if they produce more and more data, this is not enough to have accurate forecasts and stay competitive. They need external data for rich insight and forecasting.
That's why we need the marketplace where they can buy and sell data (it can also create new revenue streams) in order to complement the data they are producing. My conclusion is that if companies do not participate in the data markets they will be excluded from the future data economy and may be at risk of shutting down. Is it a service that you will offer mostly to companies? No, we don't want to provide all the services ourselves. We are working hard on making inclusivity a core value in the design of the protocol. A marketplace based on a public blockchain can completely help democratize data access. We really want to benefit not only the big AI companies but also small ones, NGOs, governments, and even individuals. If Ocean Protocol or other projects with similar goals fail, AI will basically be in the hands of a few people, and this is in my opinion not good for humanity. You have recently announced a partnership with SingularityNET. How does it fit with your project? SingularityNET is trying to make a marketplace for AI applications. But AI models and data have a very strong relation, as AI needs data and, reciprocally, data are the most valuable when fed into AI algorithms. In addition, SingularityNET shares the same vision as ours: a vision where, like the internet, Artificial Intelligence does not belong to a few individuals. You are working on a public blockchain. How do you see that in terms of scalability? BigchainDB has built a scalable blockchain database. We have a history around that. Nevertheless, we understand that there are technical challenges and therefore we need to partner with other projects and scientists, but as soon as possible also with the community using the open-source protocol. In terms of growth, how do you see yourself penetrating the market? We want to create a global community, having meetups all around the world. If there is massive attraction in an area we will obviously respond by organizing specific events there.
One of the big advantages of Singapore is that we have already engaged with many companies and government agencies in the past. We hope to continue that and engage even more with other stakeholders here, including AI companies, data scientists, SMEs and corporates. We believe that Singapore can reach its goal of becoming the first smart nation in the world through good management of data. Engagement with government, companies and communities is helped by the fact that people here have been very forward-thinking in this field, and they are happy to support what we are doing. Singapore is a very good test case for our project. To expand to other countries and achieve more decentralization, we are partnering with PwC at the marketplace level in order to make sure we follow compliance/regulatory structures that vary across jurisdictions. As an example, the EU will implement a new data regulation called the General Data Protection Regulation (see note below), which we need to make sure to comply with. It should not matter whether the marketplace is running in Germany or in Japan, so we need to be cautious about different local regulations and make sure to take that into account in our design. I would even add that the Ocean Protocol is in line with this new regulation, as it is exactly one of our objectives: enabling people to have full control of their data. In terms of product development, we aim to deliver a first Minimum Viable Product by Q3 2018 and a network launch by Q1 2019. “The General Data Protection Regulation was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens data privacy and to reshape the way organizations across the region approach data privacy. Approved on 14th April 2016, it will be enforced on 25th May 2018 at which time organizations in non-compliance will face heavy fines”. Source: https://www.eugdpr.org Does that mean that I could also sell my data?
Nothing would prevent you from doing that. However, at the beginning you will have no credibility on the network, so you would need to be referred or to put up a stake. (Note: putting money at stake means buying and betting tokens such that if one's data turns out to be false or not actually theirs, that person loses their stake and could even be blacklisted, quite similar to how proof-of-stake achieves consensus in some public blockchains.) This is why at the beginning we are starting with those that have large and valuable datasets. Nevertheless, we are designing the token economy to prevent any kind of centralization, so of course it will be possible. What would prevent me, as a big player, from creating a monopoly? The reward one gets as a result of one's data being very popular grows logarithmically. Therefore, you cannot take over control, as there are incentives for people to work with new data (because of the logarithmic curve). This mechanism ensures that people work, curate and bring in new data. Price will probably have little to do with the popularity of the data. In any case, it is not our job to set that. Data providers have the right to decide which price to set, and rules for the data marketplaces are defined by keepers. It is also important to understand that policies can change depending on the marketplace, as marketplaces can be subject to different regulations and purposes. There may be marketplaces specific to fields like healthcare and energy. As stated previously, we do not think there will be only one global marketplace. What is for you an interesting use case or industry for a decentralized data marketplace? In my opinion, the most impactful one is healthcare. As an example, in the context of Parkinson's disease, some companies are working on AI applications to define the right scale of accuracy for tremor measurements. This input is then used to estimate the right dosage, duration and how often patients need to take the medicine.
If the condition is not managed properly, they may need to have an implant in their brain, which costs about €50,000. This is a very expensive operation that more accurate machine learning predictions could replace. However, to get a low error rate, we would need data from 10,000 patients. It is clear that no hospital can provide such an amount of data, but a decentralized data marketplace can. Thanks to distributed ledger technologies, the sharing of patient data will be enabled, but the data will still remain with the patient or within the hospital. An algorithm that has been developed in Singapore could be sent to a hospital in Munich (after making sure that the data are formatted accordingly) for training, and returned to Singapore without bringing back any data. Moving algorithms is cheaper than moving data. We just need to prove that no data is pulled, which we believe is not difficult to achieve. In this case compliance and regulation are satisfied, the AI is trained and the impact is happening. More information about Ocean Protocol can be found on their website. They also have two Telegram channels (chat & news) to stay updated. Finally, several related articles can be found on Medium, and if you prefer videos, here is their YouTube channel. This article is brought to you by the Bitcoin Center Korea. If you want to learn more about our activities and stay updated with Fintech-related news, please visit our website or our other social media:
https://medium.com/bitcoin-center-korea/ocean-protocol-how-blockchains-can-contribute-to-ai-by-enabling-decentralized-data-exchanges-4b34ff004171
['Raphael Hannaert']
2018-02-09 20:25:37.636000+00:00
['Blockchain', 'Data Science', 'Artificial Intelligence', 'Technology', 'Cryptocurrency']
2,832
First driver-less delivery service to begin in California in 2021!
Nuro delivery bot vehicle. BBC Robotics company Nuro received the go-ahead from California’s DMV to charge for its delivery service. The tiny, oval cars would be limited to 35 mph and good weather. The vehicle has an egg-shaped frame that is smaller than most cars in the US. It also has two temperature-controlled compartments for deliveries. Doors raise up to reveal the items once a code has been entered by the recipient. — BBC A similar trial is underway in Shanghai by a firm backed by Alibaba. Wonder what will happen to all the delivery drivers if this becomes the future! Also another acquisition target for Amazon?
https://medium.com/@salhasan/first-driver-less-delivery-service-to-begin-in-california-in-2021-820081a328be
['Salman Hasan']
2020-12-25 17:37:55.714000+00:00
['Technology', 'Autonomous Vehicles', 'Automation', 'Startup', 'Future']
2,833
How to Use Scala Pattern Matching
Matching with Case Classes Besides matching against the value of an object itself, we can also match against possible types (or case classes). Let’s say we are writing a classification program for a computer scanner at the fresh produce department of a supermarket. The scanner will label an item based on its type, e.g. fruit or vegetable. I’d imagine this is how we would define our trait and case classes. trait GroceryItem case class Fruit(name: String) extends GroceryItem case class Vegetable(name: String) extends GroceryItem Now, we are going to write a function that will take a GroceryItem object as its input and classify whether it is a Fruit or a Vegetable. This is how it would be written in its simplest form. def classifyGroceryItem(item: GroceryItem): Unit = item match { case _: Fruit => println("label item as fruit.") case _: Vegetable => println("label item as vegetable.") case _ => println("unable to label the item. this seems to be an item for another department.") } Notice the syntax _: Fruit. This is how we should write our case when we want to pattern match against the instance's type. Note that this expression does not actually look into the value of the case class's field (e.g. name). If we also want to match against the field of the case class, this is how we can do it (look at the first case expression). def classifyGroceryItem(item: GroceryItem): Unit = item match { case Fruit(name) if name == "apple" => println("item is a fruit and it's an apple.") case _: Fruit => println("label item as fruit.") case _: Vegetable => println("label item as vegetable.") case _ => println("unable to label the item. this seems to be an item for another department.") } Please note that the order of the case expressions matters. In the above case, if the additional case Fruit(name) expression were placed after case _: Fruit, the code would never reach it, as the input would match case _: Fruit right away.
https://towardsdatascience.com/how-to-use-scalas-pattern-matching-362a01aa32ca
[]
2020-05-08 23:31:55.046000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Data Science', 'Technology']
2,834
Marketing AI Institute CEO Paul Roetzer on Superpowered Manipulation
Audio + Transcript Paul Roetzer: Most marketers still don’t even know what it is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil? James Kotecki: This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. My guest today is the founder and CEO of the Marketing Artificial Intelligence Institute, Paul Roetzer. Thanks so much for being on Machine Meets World. Paul Roetzer: Absolutely, man. Looking forward to the conversation, I always enjoy talking with you. James Kotecki: So when people hear marketing and they hear artificial intelligence, what do people think that you’re up to? Paul Roetzer: Well, the average marketer, I think, believes it’s just too abstract to care. I mean, that’s our biggest challenge right now is making marketers care enough to take the next step, to ask the first question about what is it actually and how can I use it? So I think a lot of times they just ignore it because it seems abstract or sci-fi. James Kotecki: And the Institute is an educational endeavor at its core, right? It’s trying to convince marketers to use AI in different ways across the different types of marketing that they do? Paul Roetzer: Yeah. We see our mission as making AI approachable and actionable. So it’s, we’re marketers trying to make sense of AI and make it make sense to other marketers. We’re not trying to talk to the machine learning engineers or the data scientists. We’re trying to make the average marketer be able to understand these things and apply it immediately to their career and to their business. James Kotecki: And what’s your scope? What’s your definition of AI? Paul Roetzer: The best definition I’ve seen is Demis Hassabis, who’s the co-founder and CEO of DeepMind, calls AI the science of making machines smart. And I just have always gravitated to that definition because I think it really simplifies it, meaning, machines know nothing. 
The software, the hardware we use to do our jobs, don’t know anything natively, they’re programmed to do these things. There’s a future for marketing where humans don’t have to write all the rules. That the machines will actually get smarter and that there’s a science behind making marketing smarter. And that’s what we think about marketing AI as. James Kotecki: What marketing technologies are you excited about to come to light in 2021? Paul Roetzer: We look at three main applications of AI: language, vision, and prediction. What you’re trying to do with AI is give machines human-like abilities — of sight, of hearing, of language generation. And so language in particular has just a massive potential within marketing. Think about all the places that you generate language, generate documents that summarize information, create documents from scratch, write emails, like it’s just never ending. And I think you’re going to see lots and lots of companies built in the space that focus explicitly on applications of language generation and understanding. James Kotecki: I looked at the history of the Institute, and it traces back about five years ago… Paul Roetzer: Mmm-hmm. James Kotecki: …to when you were thinking about how to automate the writing of blog posts. And five years ago, that wasn’t really possible, but now, this year, GPT-3 from OpenAI is a technology that looks like we’re either very close or already there to the point where a machine can convincingly write, from almost scratch, narratives, articles, blog posts, et cetera. What do you think of that? And what do you think is next if that kind of initial dream has maybe been achieved? Paul Roetzer: So there have definitely been major advances in the last even 18 months. So first we had GPT-2 was the big one that hit the market. I think it was like February of ’19 maybe it was when that surfaced. 
And then just this year we had GPT-3, which really took it to the next level of this ability to create free-form text from an idea or a topic or a source. And it really is moving very, very quickly. And I think in 2021, 2022, you’re going to start seeing lots and lots of applications of language generation from models like GPT-3, where the average content marketer or email marketer will be using AI-assisted language generation. James Kotecki: People always say when technology like this comes up, “There’s still going to be a place for human creativity. Don’t worry. We still need humans in the mix.” At what point do marketers look at this and start getting scared and saying, “You keep saying that, but the machines keep getting more and more creative.” Paul Roetzer: I am a big believer that the net positive of AI will be more jobs and it will create new opportunities for writers and for marketers. But I’m realistic that things I thought 24 months ago a machine couldn’t do, it’s doing now. And that’s part of why I think it’s so critical that marketers and writers are paying attention because the space is changing very quickly. The tools that you can use to do your job are changing very quickly. It’s going to close some doors. There are going to be some roles or some tasks that writers, marketers do today that they won’t need to do. But it’s also going to open new ones. And I think it’s the people who are at the forefront of this who have a confidence and a competency around AI that are going to be the ones that find the new opportunities and career paths — may even be the ones that build the new tools, the application of it for the specific thing they do. James Kotecki: Do you think marketers, at least marketers who do get it, have an obligation, not just to use AI effectively and ethically, but to use their skills to shape the public perception of AI? Paul Roetzer: That’s what I always tell people. It’s like, think about it. Like, why does Google have Google AI? 
Why does Microsoft advertise Microsoft AI? They’re all trying to get the average consumer to not be afraid of this idea, this technology because it is so interwoven in every experience they have as consumers now. They don’t realize it though. These big tech companies need consumers to be conditioned to accept AI. And I think in the software world for marketing, you’re going to see a similar movement where we need the users of the software to understand how to use it with their consumers, but also how to embrace what it makes possible within their jobs. James Kotecki: Are there ethical guidelines or any kind of ethical consensus out there for how marketers need to be approaching some of this stuff? I mean, if you took ethics out of it, you could use this technology in ways that were at best amoral and at worst unethical. So what are some guidelines that are actually shaping people’s decision-making here? Paul Roetzer: There aren’t any universal standards that we’re aware of. There is a big movement around this idea of AI for good, at a larger level. So you are seeing organizations created who are trying to integrate ethics and remove bias from AI at a larger application in society and in business. Specific to marketing though, it’s really at an individual corporation level. So are companies developing their own ethics guidelines for how they’re going to use data and how they’re going to use the power that AI gives them to reach and influence consumers? And that part’s not moving fast enough. There’s not enough conversation around that because again, most marketers still don’t even know what it is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil? And so there’s these steps we’re trying real hard to move the industry through so we can get to the other side of how do we do good with this power that we’re all going to have. 
James Kotecki: When you look at the totality of AI in marketing, from your perch here, do you feel like you are fighting against trends that are taking things in the wrong direction? Do you feel overall optimistic about the state of things? Paul Roetzer: I feel optimistic, but I do worry a lot about where it could go wrong. And I think if you look at politics, I’m not going to bring in any specific politics into this, but if you look at the political realm, this isn’t new stuff. They’ve been trying to manipulate behavior on every side of it, in every country. It’s all about trying to manipulate people’s views and behaviors. And this is very dangerous stuff to give people like that whose job is to manipulate behaviors. And so if you’re a marketer and you’re so focused on revenue or profits or goals over the other uses of it, you’re going to have the ability to manipulate people in ways you did not before. And I do worry greatly that people will use these tools to take shortcuts, to hack things together, and to affect people in ways that isn’t in the best interest of society. James Kotecki: I’m imagining a bumper sticker for a marketer that says, “You say manipulate human behavior like it’s a bad thing.” Right? Paul Roetzer: I could see that, yeah. James Kotecki: Because the context, even the word “manipulate” has a negative connotation, but it is, if you just look at its neutral meaning, exactly what marketing is trying to accomplish. As we wrap up here, what are your hopes for marketing in 2021 when it comes to AI? Paul Roetzer: I just want marketers to be curious. To understand that there is a chance to create a competitive advantage for themselves and for their companies. And to do that, you just need to know that AI creates smarter solutions, that if you’re going to do email or content marketing or advertising, don’t just rely on the all-human all the time way you’ve previously done it. 
There are tools that are figuring things out for you that are making you a better marketer by surfacing insights, making recommendations of actions, assessing creative. There’s lots and lots of ways you can use AI. And I just think if people take the step to find a few to try in the coming twelve months, they’ll realize that there’s this whole other world of marketing technology out there that can make them better at their job. James Kotecki: Well, thanks for illuminating us on that. Paul Roetzer, founder and CEO of the Marketing Artificial Intelligence Institute. Thank you for being on Machine Meets World. Paul Roetzer: Absolutely, man. Enjoyed it. James Kotecki: And thank you so much for watching and/or listening. Please like, share, subscribe. You know, give the algorithms what they want. You can also email us at [email protected]. I’m James Kotecki, and that is what happens when Machine Meets World.
https://medium.com/machine-meets-world/marketing-ai-institute-ceo-paul-roetzer-on-superpowered-manipulation-59c05fbf501a
['James Kotecki']
2020-12-16 15:41:39.245000+00:00
['Business', 'Ethics', 'Artificial Intelligence', 'Technology', 'Marketing']
2,835
What Do EU Gamers Like And Look Like?
The EU games market is worth €21 billion, according to new data from the Interactive Software Federation of Europe (ISFE). ISFE chairman Olaf Coenen said: “With a historic turnover of €21 billion in 2018, the video games sector is making a major contribution to Europe’s digital future.” He added: “The industry’s track record for pushing boundaries continues to redefine entertainment, generate new business models and deliver technology with cross-over potential.” Who’s playing? According to the ISFE report, over half of the EU population aged 6–64 play video games, with 77% playing at least one hour per week. That’s a phenomenal number of weekly gamers, with many of these users of course playing for more than one hour at a time. The most extreme of these players get a lot of media attention, with addictive gaming behaviours creating the precedent for the World Health Organization to classify gaming disorder in the 11th Revision of the International Classification of Diseases. The average age of a gamer in the EU is 31 years old, with 25–34 the strongest-growing age group following 8% growth in 2018. In terms of the widely documented gender divide in gaming, the ISFE report found that 46% of EU gamers are women, with 63% playing on smartphones and tablets, 54% playing on computers and 44% playing consoles. Women represent more than half (52%) of all mobile and tablet gamers. Device-ive stuff Among EU gamers, 50% like to play on consoles, 17% play on handhelds, 56% enjoy playing on computers, 48% play on smartphones, 27% use tablets, and 17% play on all these devices. Photo by Joshua Fuller on Unsplash There has been a 15% year-on-year revenue increase in the key European markets of France, Germany, Spain and the UK. In these markets, revenue is split by device: 47% of the €12.3bn market size of these nations is driven by console revenues, with mobile/tablet revenues accounting for 34%, PC representing 18% and handheld devices making up 2% of the market.
40% of these countries’ €12.3bn market comes from online revenue, generated from full game downloads (42%), in-game extras (34%) and social games (24%). 34% comes from app revenue, either paid apps (3%) or in-app purchases (97%). The remaining 26% is physical revenue, from people buying copies of games. Next level titles Last year esports saw particular growth, with a 32% year-on-year rise from $655 million in 2017 to $865 million in 2018. The global audience reached in 2018 increased by 17.8% year on year to 395 million. This audience is made up of 173 million esports enthusiasts (44%) and 222 million occasional viewers (56%). Overall, the best-selling games of 2018 were FIFA 19, Red Dead Redemption 2, Call of Duty: Black Ops 4, Grand Theft Auto V, FIFA 18, Far Cry 5, Spider-Man, Assassin’s Creed Odyssey, God of War and Tom Clancy’s Rainbow Six Siege. One of these games will no doubt see further sales after American teenager Kyle Giersdorf won $3 million after taking the top prize in a Fortnite tournament. Photo by Kelly Sikkema on Unsplash Play it safe Whilst there are some 16-year-olds like Kyle making a living out of playing video games, many parents are cautious about the influence games can have on their child’s development. Some parents choose to play video games with their kids, either because they enjoy the activity themselves, or as part of an effort to reduce some of the harms of online gaming. More than 35 European countries use the PEGI age rating for their games. Of the 2,000+ games rated in 2018, only 10% were classified as 18s, 17% were deemed 16s, 27% were 12s, 22% were 7s, and 24% were classified as universally appropriate 3s. The ISFE report showed that 28% of parents play video games with their children in cooperation on the same screen, with 26% playing competitively on the same screen. 18% of parents play together in cooperation on different screens/online, whilst 17% play competitively on different screens online.
The most popular games that parents play with their children include FIFA, Minecraft, Fortnite, PUBG, and Overwatch. Are you an EU gamer? Tell me your age, gender and favourite titles below! Subscribe to my newsletter to get more stories like this one straight to your inbox.
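The revenue split quoted earlier can be sanity-checked with a few lines of arithmetic. This is our own back-of-the-envelope sketch based on the figures above (€12.3bn across France, Germany, Spain and the UK); the variable names are ours, not the report's:

```javascript
// Revenue breakdown of the €12.3bn market across the four key EU markets,
// using the channel shares quoted in the ISFE figures above.
const market = 12.3; // €bn
const split = { online: 0.40, app: 0.34, physical: 0.26 };

for (const [channel, share] of Object.entries(split)) {
  console.log(`${channel}: €${(market * share).toFixed(2)}bn`);
}

// The three shares account for the whole market.
console.log(Object.values(split).reduce((sum, x) => sum + x, 0).toFixed(2)); // "1.00"
```

Running this puts online revenue at roughly €4.92bn, app revenue at €4.18bn and physical at €3.20bn.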
https://medium.com/the-indiependent/what-do-eu-gamers-like-and-look-like-45872cb3a6c1
['Beth Kirkbride']
2019-10-01 19:50:00.676000+00:00
['Media', 'Gaming', 'Technology', 'Games', 'Entertainment']
2,836
New Tricks Help Sentinel Raise Over $300 Million
Yesterday, this topic was broached on 612 ABC Brisbane radio, as our Managing Director joined his mentor-turned-mentoree, Michael Sherlock, to discuss their experience. Listen to the whole interview here: https://soundcloud.com/user-747468731-727041018/bambrick-media-sentinel-abc-interview In our fast-paced and ever-changing world, keeping up with new technologies and information is ongoing and never-ending. The technology that only a few years ago was considered revolutionary is now outdated, and simple, everyday tasks have been transformed in ways we wouldn’t have imagined. The same goes for our relationships. Where once it was almost exclusively older generations teaching younger generations, we now see the opposite. When we think of mentoring, we conjure up images of the more experienced cohort sitting down patiently to provide much-needed advice to Gen X, Y and Z. Now, it’s often these younger generations passing on their knowledge to those who once taught them. Reverse mentoring is a revolution that we are all aware of on some level, yet rarely speak about. This intriguing interview, hosted by Kelly Higgins-Devine, explored with humour the ways in which traditional working relationships have been altered by the new digital world. “Can you teach old dogs new tricks?” Kelly asked at the opening of the interview. Some may not think so, but Tim and Michael have proved them wrong. So what’s Tim and Michael’s story? Michael Sherlock is a high-profile business leader who transformed Brumbies from an ailing bakery business into the second-largest bakery franchise system in Australasia. His relationship with Tim didn’t begin until recent years, when Michael entered his current role as Chief Marketing Officer at Sentinel Property Group and Tim decided to try his hand at franchising. “I had sold Brumbies and I needed to reinvent myself,” Michael explained.
“Tim approached me to help him in the franchising area.” Around that time Michael was placed in charge of raising money from high-net-worth individuals for Sentinel. Traditional marketing wouldn’t suffice, so Michael turned to his mentoree for solutions. While Michael admitted he was a bit cautious of the digital world to begin with, this twist on traditional mentoring resulted in exceptional success. “We’ve been able to raise around $300 million from individuals without even placing an ad and we’ve done it all digitally by using all of the tricks I’ve learnt from Tim in the reverse mentoring scenario.” While this success was due to digital marketing tactics such as Remarketing and Search Engine Optimisation (SEO), Michael has also experienced success on social media. “I got into Twitter because that’s a bit funky,” Michael said. “Now I’m global and I’m trending. I’ve got about 400 followers.” So does reverse mentoring work? “It’s just a two-way exchange and learning from somebody else,” Michael says.
https://medium.com/digitaldisambiguation/new-tricks-help-sentinel-raise-over-300-million-c5a799aa2677
['Jason Mcmahon']
2017-08-27 23:57:44.105000+00:00
['Business', 'Technology', 'Australia', 'Digital Marketing', 'ABC']
2,837
Come visit us at MOVE 2019
door2door is participating at MOVE 2019. Join us at this groundbreaking mobility event taking place 12–13 February 2019 at the ExCel venue in London, United Kingdom. Meet our experts at our booth and don’t miss out on the thought-leading presentation of our Co-CEO and Founder, Tom Kirschbaum. Find all the information you need below: Where can you find us? Stand 14: Come visit our stand which is just in front of the main stage Who is attending from door2door? Book a meeting with our experts in advance! Speakers Who: Tom Kirschbaum, Co-CEO & Founder at door2door Where: Models Theatre When: Day 1 at 5 pm Title: Keeping shared mobility profits within local regions In his presentation Tom will talk about how cooperation between public and private entities can lead to a truly sustainable and inclusive mobility system, boosting the role of public transport, reclaiming cities from cars and handing them back to their citizens. He will present a roadmap on a new mobility ecosystem and show how the symbiosis of advanced mobility technology and experienced PTOs to build one integrated system will lead to a fair distribution of profits and improve the quality of mobility for all citizens. We look forward to meeting with you at MOVE 2019 We believe that the conference theme Mobility Re-Imagined offers a great opportunity to exchange visions and action plans for better mobility. As a technology provider, door2door accompanies transport companies and cities in the digital transformation of their mobility and provide software-based solutions, services and consulting. Together we work to make public transport more sustainable and efficient. Individual mobility without a car has to be made more attractive — whether in cities or rural areas. Our Mobility as a Service (MaaS) platform for data-driven analysis & planning, multimodal apps and on-demand mobility solutions helps public transport to become more customizable and convenient. 
Follow us on social media to stay updated on our presence at the event: Twitter and LinkedIn
https://blog.door2door.io/come-visit-us-at-move-2019-aa111a549e6c
[]
2019-02-11 10:35:46.686000+00:00
['Mobility', 'Move', 'Transportation', 'Door2door', 'Technology']
2,838
Companies in the Field of Quantum Computing
Quantum Computing is a new, futuristic technology, with great benefits and uses in the field of computing. Due to this, there are a lot of companies in the quantum computing field, and today I will talk about some of the most relevant ones and what exactly they are doing. Firstly, no paper about computing would be complete without the tech giants, such as Google, IBM, and Intel. All three of these companies are looking to develop more and more advanced technology in the quantum computing field. IBM has 28 quantum computers in use, while Google’s specialty lies in its 53-qubit computer, more qubits than any other company. Intel is further behind in the race, with a 49-qubit computer in its possession. Although the difference between Google’s computer and Intel’s computer is only 4 qubits, it’s important to realize one fundamental factor of quantum computing that makes this 4-qubit difference so enormous. This factor is that qubits scale exponentially, not linearly. The very basis of quantum physics allows a qubit to hold the value 0, the value 1, or a superposition of both at any given time. This concept is difficult to grasp, but in simplified terms it means that a group of n qubits can represent 2^n states at once, whereas n modern-day bits represent just one. This fundamental difference is what makes qubits’ growth exponential, where normal bits’ growth is just linear. So, to go into the numbers, Google’s computer can represent approximately 9.0071993e+15 (2^53) states, where Intel’s can only represent 5.6294995e+14 (2^49). To delve deeper into other companies that are in the quantum computing field, it is important to realize one of the biggest problems with quantum computing as a whole. This problem is that quantum computers are very expensive to make and to maintain, since they have to be kept at near absolute zero temperatures for their particles to stay in the quantum state.
Obviously, a lot of small companies and smaller businesses can’t afford this hardware, but they still want to use quantum computers. This is where Microsoft and Amazon Braket come in. These two companies are mainly focused on making quantum computing technology available, through the cloud, to companies who couldn’t afford it before. Three more companies in the quantum computing field are Baidu, Atos Quantum and AT&T. These companies are researching the applications of quantum computing and how exactly quantum computers can be used in different ways, such as running superfast algorithms for database search, developing artificial intelligence, and discovering new pharmaceutical molecules. Lastly, the final two companies I wanted to talk about are NASA and the Alibaba Group. These companies are using quantum computing in very specialized fields: NASA to help with aerospace engineering, and Alibaba to develop groundbreaking security techniques, since the methods used in cybersecurity these days are the same methods that can be easily cracked by quantum computers. In conclusion, Quantum Computing is a revolutionary technology that has lots of applications in the real world. There are many companies in this field of quantum computing, and they are all focused on different aspects of this technology.
https://medium.com/@ujjawalprasad111/companies-in-the-field-of-quantum-computing-9b1e53795ea0
['Ujjawal Prasad']
2020-12-21 03:05:14.400000+00:00
['Science', 'Computer Science', 'Quantum Computing', 'Technology', 'Computers']
2,839
JavaScript Best Practices — Destructuring and Strings
Photo by Simon Rae on Unsplash

JavaScript is a very forgiving language. It's easy to write code that runs but has mistakes in it. In this article, we'll look at the right way to return results from our functions and some best practices for strings.

Use Object Destructuring for Multiple Return Values Rather Than Array Destructuring

We should return objects from our functions when we have multiple items to return, so that when the position of the items changes, we don't have to change the code wherever the returned value is destructured. For instance, if we have a function that returns several items in an array as follows:

const foo = () => {
  const a = 1;
  const b = 2;
  const c = 3;
  return [a, b, c];
}

const [a, b, c] = foo();

then the code would break if we swap the position of the items in the array. For example, if we have the following code:

const foo = () => {
  const a = 1;
  const b = 2;
  const c = 3;
  return [a, c, b];
}

const [a, b, c] = foo();

In the code above, we switched the positions of b and c in the return value of foo, but we didn't change the order in which they're destructured. Therefore, we'll get results that we didn't expect. If we return an object, then we don't have to worry about the position in which each entry is returned. For instance, if we have the following code:

const foo = () => {
  const a = 1;
  const b = 2;
  const c = 3;
  return { a, b, c };
}

const { a, b, c } = foo();

then we can swap b and c within the object returned by foo and get the same result as before, because objects are destructured by matching their property name and their nesting level rather than just the position, as with arrays.

Use Single Quotes for Strings

Using single quotes saves us some typing, since we don't have to press the shift key to type them. They do the same thing as double quotes, as both create regular strings.
Therefore, instead of writing the following:

const foo = "foo";

we write:

const foo = 'foo';

Photo by Matt ODell on Unsplash

Strings That Cause the Line to Go Over 100 Characters Should Not Be Written Across Multiple Lines Using String Concatenation

If our string is over 100 characters long and it's all on one line, then we should break it up so that we don't have to scroll horizontally to read the whole string. What we should do is break it into chunks that are each less than 100 characters long, put them into an array, and then call the array instance's join method to join them together. For instance, we can write the following code to do that:

const arr = [
  'Lorem ipsum dolor sit amet, consectetur adipiscing elit.',
  'Vivamus tincidunt magna at turpis maximus.',
  'Duis porttitor imperdiet justo ac molestie.'
]

const str = arr.join(' ');

In the code above, we have sentences that are 100 characters long or less, and then we called the join method to join them together. This is easier to read and manipulate, since we don't have lots of + signs or \ signs as in the following code:

const str = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.' +
  'Vivamus tincidunt magna at turpis maximus.' +
  'Duis porttitor imperdiet justo ac molestie.'

or:

const str = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.\
Vivamus tincidunt magna at turpis maximus.\
Duis porttitor imperdiet justo ac molestie'

Conclusion

When we return something from a function that can be destructured, we should do so by returning an object. This way, the destructuring is done by matching the property name and the nesting level rather than the position, as it would be if we returned the result as an array. Therefore, the destructuring code won't break if we move the object properties around, as long as the property names don't change. If we're defining regular strings, then we should use single quotes to save ourselves from using the shift key.
https://medium.com/swlh/javascript-best-practices-destructuring-and-strings-630b03143c34
['John Au-Yeung']
2020-05-29 22:08:03.834000+00:00
['JavaScript', 'Programming', 'Web Development', 'Technology', 'Software Development']
2,840
Recap of the TensorFlow Dev Summit 2018
https://medium.com/skooldio/recap-%E0%B8%87%E0%B8%B2%E0%B8%99-tensorflow-dev-summit-2018-cca05aac620e
['Sirinart Tangruamsub']
2018-05-24 16:01:11.003000+00:00
['Machine Learning', 'Artificial Intelligence', 'TensorFlow', 'Technology', 'Data']
2,841
Everything We Do Is Not For Today
When the town's crime boss wants a precious piece of land, he sends some of his goons to terrorize the school that's built on it. First, they threaten the principal, then they torch a classroom. Luckily, the local Kung Fu master saves the day. When he tries to acquire more help in the form of the police, however, the chief says his hands are tied. His boss took the case. Corruption. After listening patiently, the master starts talking: "The world's not fair. But moral standards should apply to all. Those who rule aren't superior and those who are ruled aren't inferior. This world doesn't belong to the rich. Or even the powerful. It belongs to those with pure hearts. Have you thought about the children? Everything we do, they're watching. And everything we don't do. We need to be good role models." And then, master Ip Man says something important. Something we forget. Something that, little by little, seems to fade from the human story: "Everything we do is not for today — but for tomorrow." How Proverbs Come To Be In his book Outliers, Malcolm Gladwell captures the idea from this movie scene in an analogy: freedom tomorrow comes from discipline today. He explains the stereotype that Asians are good at school, particularly math, partly with the fact that they have been rice farmers for centuries. Unlike Western farmland, rice paddies are built. Constructed. Not just in the landscape, but each unit too. They're painstakingly assembled, complex systems of dikes, channels, and various layers of clay, mud, and fertilizer. Transplanting seeds, weeding, grooming, harvesting, it's all done by hand. What's more, with a hotel-room-sized paddy carrying a family of six and neither mechanical tools nor more land available, skills, choices, and dedication have always been the name of the rice-farming game.
Unsurprisingly, Gladwell concludes: “Throughout history, the people who grow rice have always worked harder than almost any other kind of farmer.” Now, before you object with the lifestyle of European peasants back in the day, Gladwell did too — and found out it was less intense. Looking at some of the last remaining hunter-gatherers in Botswana and farmers in 18th-century France, he found they worked about 1,000–1,200 hours a year, spending most of their days idling and hibernating, especially in winter. “Working in a rice field is ten to twenty times more labor-intensive than working on an equivalent-size corn or wheat field. Some estimates put the annual workload of a wet-rice farmer in Asia at three thousand hours a year.” That’s eight hours a day, every day, compared to 5–6 hours for 200 days a year. With the same amount of time off, rice farmers would work 15-hour days. There are multiple reasons for this difference, one being that rice paddies don’t require fallow periods. To the contrary, they become more fertile the more they’re cultivated. Another is that it’s a complex, autonomous task, a little business if you will, where inputs correlate closely to outputs, more so than in Western farming. Finally, Asian rice farmers’ efforts weren’t in vain, as they paid fixed rents and kept the extra, unlike the lowly paid victims of a greedy, aristocratic landlord. The result is that today, a Russian proverb is “If God does not bring it, the earth will not give it,” whereas the Chinese equivalent goes “Don’t depend on heaven for food, but on your own two hands carrying the load.” Most of us could use more of the latter and less of the former. And while Westerners often mock their Asian friends for their stereotypical work ethic, Gladwell says it’s at the heart of literally every case study in his book. 
As he puts it, "a belief in work ought to be a thing of beauty." Besides inspiring us to be more patient in our own lives, the other lesson from all this is that there's another kind of tomorrow besides the literal one we know. It's the kind the Kung Fu master was talking about, and it reminds us: we're standing on the shoulders not of giants, but of generations. Shortsighted Tweets I saw a tweet this morning from my fellow German Dorothee Bär. She said: "Go to Twitter — see Germans tweeting in my feed — wish to be back in Austin — close Twitter." There's nothing wrong with this tweet, unless you're a Member of Parliament and Minister of State for Digitization. Unfortunately, she is. Ms. Bär isn't thinking about tomorrow. She isn't even tweeting for today. She wants yesterday back when her whole job is to create a better future. That's sad. In our modern world built for immediate gratification, there's a lot of shortsighted thinking like this. Politicians, CEOs, entertainers, everyone is obsessed with surviving the next release, the next earnings call, the next election. But what about tomorrow? What about those who aren't yet born? That's why I'll always root for Elon Musk. Yes, he too tweets nonsense and has his antics, but a man dumping his entire $180 million fortune into a triad of companies dedicated to better energy, better transport, and exploring a planet he'll neither live nor die on is, clearly, thinking about tomorrow. Most of us aren't politicians, CEOs, or entertainers. We don't want to make it our life's mission to ensure survival for the human race. That's fine. But even in our own lives, it'd help if we did more things for tomorrow. Technology is now greatly compounding the returns. Talking about her first month making over $8,000, Shannon Ashley said: "The earnings were finally as large as I first dreamed about 10 months ago." She deserves every penny, but it's a line few people will ever get to say about that figure.
Whether we asked a European peasant from 1700, an ancient Asian rice farmer, or a Kung Fu teacher from the 1950s, none of this was possible for them. They couldn’t just swap careers, grow a movement from their couch, or quadruple their earnings in a year. We live in amazing times. And yet, without those peasants, farmers, and teachers, none of us would be here. What they did for tomorrow is the foundation of what we can do today. Discipline Is Freedom My great-grandma lived through two world wars. She walked seven miles to work — one-way. My dad’s grandpa delivered and installed curtains in a 30-mile radius around his town. On a bike. My job is to sit on a chair or couch or bed, drink coffee, and write things like this. How could I not be grateful? And yet, I know, it’s easy to forget. To get lost in the everyday adrenaline rush. I wish there was a Kung Fu master in my town. Or a rice paddy across the street. But there isn’t, and so it’s my job to remember when I see shortsighted tweets: discipline is freedom. Sometimes, it won’t be our freedom, but the freedom of those who’ll follow us. It might not be freedom from strife, unfairness, or adversity. But from arrogance, from taking things for granted, and from self-pity. We might not always benefit ourselves, not always reap the rice from the seeds we sow, but as today so on our final day, we’ll lay to rest in peace. After all, whoever’s tomorrow it is, it will be slightly better than today.
https://ngoeke.medium.com/everything-we-do-is-not-for-today-823b74bce9cf
['Niklas Göke']
2019-03-25 14:19:55.511000+00:00
['Technology', 'Life Lessons', 'History', 'Politics', 'Future']
2,842
Smart Contract for Beginners and Tougher
Have you ever wondered: can I write my own cosy smart contract, with fair and clean logic, to carry out my business dreams? We are glad to announce that yes, you can [under some conditions, such as n years of experience in software development, sorry about that]. Leaving aside the step-by-step guide to building your first smart contract, we will instead provide some tips derived from our experience. In an activity as intimidating as developing a contract, however smart it may be, you need to consider some details. We will address Solidity smart contracts, as one of the most popular types across many blockchains. Generally speaking, smart contract development in Solidity is the same as in any other Turing-complete programming language: you think up the logic and implement it in a mostly traditional way. The most interesting part is the environment in which the contract will be deployed and executed. The Ethereum Virtual Machine is part of a blockchain network, with lots of tiny but significant details to bear in mind. We present a brief but hopefully instructive list of such details. Deployment and block size. Transactions in Ethereum are always limited in size by the gas limit of a block. Taking into account that smart contract deployment is also a transaction, we can find it impossible to deploy two, or even one, big contract at once. To solve this issue, we usually deliberately split the system of smart contracts into logical parts optimized by size, so that deployment can be done in several subsequent transactions. The most important part is to interconnect these pieces: we deploy one smart contract at a time and provide the address of each deployed part to the next smart contract's constructor. Thus we can deploy a smart contract system of arbitrary complexity, a platform for creating tokens with customizable dividend distribution, for instance. Gas optimization. The other side of a gas-driven blockchain is optimizing smart contract operation.
Each 'write' method of a smart contract requires a number of Solidity opcodes to be executed, and each opcode has a price in gas, meaning real money. Because of that, 'clean code' that is efficient and simple gains a new dimension: it is literally less expensive. One wants to save every dime, whether one's own or a client's. Solidity is a battlefield between operational efficiency and cost on one side, and getting the job done at all on the other. In some cases the required amount of gas can be greater than the gas limit of a block [and sometimes greater than one's total funds, but we omit such cases], e.g. when an airdrop to 1 million addresses needs to be executed. If such a transaction is initiated, it won't be executed, due to the gas limits of the network. To handle such cases, pagination of transactions can be implemented: the code executes transactions in batches, e.g. 1,000 at a time, and when they are done, moves on to the next batch. This makes the smart contract heavier, and the logic becomes somewhat complex with the addition of counters and checkers, but sometimes it is the only way to get the job done. As with everything in smart contract development, such functionality requires a good deal of design and testing, because once deployed, the code can't be changed. Solidity code peculiarities. The language itself has a lot in stock for adventurous developers. To name one: you shouldn't use the plain '+', '-', etc. operators. If you do, funny things may happen, like the addition of two uint256 variables whose combined value causes a variable overflow. To save the day, SafeMath library methods should be used instead; they protect the integrity of the smart contract universe at the fundamental level of math. Another funny thing is that Solidity's map structure cannot return the array of its keys. This was done to allow big map structures to be stored in a smart contract, but it has a couple of interesting side effects: non-deterministic behavior when telling whether an element is in the map or not.
When one requests the value corresponding to the key 'a' in a map and it returns '0', no one can tell whether the key exists with a value of '0', or whether there is no such key in the map. On the other hand, if one knows for sure that the key is in the map, the system becomes deterministic. So, to implement deliberate access to a map by key, duplicates of the keys need to be stored in a separate array structure at the moment each key-value pair is written. There is also no native method to get a map's length. If such functionality is required, an approach similar to the one just mentioned can be implemented, but with an array of integer ids, each element representing the order in which a key-value pair was written, i.e. 0 for the first key-value pair, 1 for the second, etc. Both cases are rather expensive in terms of gas, and we address them only when really necessary. Finally, one funny fact about Solidity variables. We always want to minimize the size of a smart contract, and one may think that using the uint8 or uint32 type instead of uint256 saves something. But it is exactly the opposite: under the hood, Solidity converts all numbers to the uint256 type, making the use of uint8 more expensive due to conversion opcode costs. This example nicely illustrates that smart contract development only seems straightforward. Of course, we have just skimmed over the subject. Real-life projects are much more complex and require close, constant attention to every detail at every step. But hopefully you got some useful hints about this amazing world of decentralized logic execution. Give us a big hand if you want more. Drop us a line if you need a really smart contract.
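The overflow hazard behind the SafeMath advice above can be simulated outside the EVM. A minimal Python sketch, assuming uint256 wrap-around semantics (the function names are ours, not Solidity's):

```python
UINT256_MAX = 2**256 - 1

def unchecked_add(a: int, b: int) -> int:
    # Unchecked EVM addition: the sum silently wraps modulo 2**256
    return (a + b) % 2**256

def safe_add(a: int, b: int) -> int:
    # SafeMath-style checked addition: abort instead of wrapping
    s = a + b
    if s > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return s

print(unchecked_add(UINT256_MAX, 1))  # 0: a maxed-out balance quietly wraps to zero
```

The wrap to zero is exactly the "funny thing" the article warns about: a token balance can silently vanish, which is why the checked variant reverts instead.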
https://medium.com/sfxdx/smart-contract-for-beginners-and-tougher-52b437fb8015
['Maxim Prishchepo']
2019-09-05 10:54:04.511000+00:00
['Solidity', 'Smart Contracts', 'Blockchain', 'Ethereum', 'Technology']
2,843
The Generation of Random Numbers Is Too Important to Be Left to Chance
Randomness exists all around us: scientific experiments, military drafts, statistics, cryptography, video games, gambling, and even art! But what does it mean to be random? What is a random number, and why is it so difficult to generate? Let's find out. In plain English, random numbers are data we receive in an unpredictable manner. Dice rolling is perhaps the key example of randomness, because most people can only guess at the result. Random numbers are instrumental for different types of programmers. Most often, randomness is used in simulation modeling, numerical analysis, testing algorithms, simulating user input, and many other tasks related to programming. Sets of digits arranged in random order are a big deal in IT, especially when it comes to security-related systems. You can hardly find a piece of technology in this category that doesn't need random numbers. Randomness is widely used in cryptography. It is the basis of most schemes that try to increase communications security. For example, we use random number generators to create keys for encryption. The selection must have high entropy to heighten the difficulty of online attacks. Entropy is an important element of cryptography. For example, when you generate a Bitcoin address, randomness helps ensure that no one else can guess your private key. It is also crucial for the application of cryptography because it provides a way to generate information that a bad actor cannot obtain or predict. People aren't very good at picking random numbers; computers handle this task much better. True random number services are a key part of web security.
Here is a short list of their uses: generating PHP session IDs; generating captcha text; encryption; generating a random salt for storing passwords in an irreversible form; generating passwords; dealing cards in an online casino. Are Random Numbers Really Unpredictable? Sorry to destroy your worldview, but randomness is an elusive concept. Computers can't produce random numbers on their own; they need outside help. Software-usable entropy sources may include, for example, user interaction via a mouse or the amount of free memory. However, the generated number sets are not genuinely random: detectable patterns and the predictable context of their generation are to blame. For cryptologists, this "predictable randomness" issue is an enormous problem. Much of what is purported to be random is actually pseudo-random. Remember that random numbers are a cornerstone of up-to-date cryptology; they are critical for the security of all our cryptographic applications. Pseudo-random numbers can compromise an app that encrypts your internet traffic. An attacker can model such a generator and mathematically determine cryptographic keys and passwords created for users or at a specific time. Sounds scary, doesn't it? It is well known that a high-quality, fair random number generator (RNG) has not been invented yet; every existing solution has obvious disadvantages that you don't want to put up with. A computer cannot provide random results: a calculation with all the same original inputs must deliver the same result every time, and that is all a computer is — an ultra-powerful number cruncher. So, what do we do? Blockchain to the Rescue. People have been trying to come to grips with the problem of pseudo-random numbers for a decade. Services providing impossible-to-predict numbers operate in a centralized fashion, on the basis of on-premises software or under the control of a group of individuals or organizations.
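The determinism described above is easy to demonstrate: a seeded pseudo-random generator replays the exact same "random" sequence, which is why security-sensitive code draws from an OS entropy source instead. A minimal Python sketch:

```python
import random
import secrets

# A pseudo-random generator is completely determined by its seed:
random.seed(42)
first_run = [random.randint(0, 99) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 99) for _ in range(5)]

print(first_run == second_run)  # True: same seed, identical "random" output

# For keys, salts, and session IDs, use the OS entropy pool instead:
session_token = secrets.token_hex(16)  # 32 hex chars, unpredictable
print(len(session_token))  # 32
```

An attacker who recovers (or guesses) the seed recovers the whole sequence; `secrets` has no seed to recover, which is the practical difference between the two modules.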
A great many projects need not only to generate random numbers in an unpredictable manner but also to ensure the security of the information and protect themselves from fraud, data leakage, and outside manipulation. There is nothing a traditional RNG can do about that. As it turned out, blockchain can help. Since the invention of this revolutionary technology in 2009, software scientists have been trying to develop a high-quality RNG based on the blockchain. Cloudflare is already up to speed on this. Cloudflare is a service that protects websites and web services by sitting in front of them as a gatekeeper. In short, it deals with a lot of encrypted internet traffic, so it needs a lot of random numbers. That is the very reason the well-known company has introduced the world to the League of Entropy, a service that generates streams of random numbers. Unpredictable numbers can even be used by app and web developers for captchas, to find out whether a system user is a person or a bot. The collaborative project runs on five different and independent servers; even if some of them are compromised, the rest will still be able to provide numbers that are impossible to predict. This truly random beacon is unique because it generates randomness in a new, more decentralized way. The random numbers are generated by independent parties, so you don't need to trust anyone. Want to join the league? Go ahead, try it out! A Quick Recap. If something is unpredictable and contains no recognizable patterns, we call it random. That is a disorienting claim, given that pseudo-random numbers are entirely predictable and, therefore, not suitable for protecting encrypted information. Gone are the days when RNGs meant a catastrophe for security. Things have changed, for the time being, thanks to the blockchain's potential.
https://medium.com/blackfort-wallet-exchange/the-generation-of-random-numbers-is-too-important-to-be-left-to-chance-6668b240937c
['Blackfort Wallet']
2020-10-07 08:14:57.708000+00:00
['Random', 'Cryptography', 'Technology', 'Blockchain']
2,844
Why Trustworthiness Matters in Building Global Futures
No matter how compelling our technologies are, they are only as good as the trust people have in the organizations that develop and govern them. TIGTech — Seven Drivers of Trust We're standing at a pivotal point in our collective response to the coronavirus pandemic. The first vaccine against the virus is beginning to be rolled out, with others hot on its heels, and we can begin to imagine a post-COVID future, albeit tentatively. Yet despite the incredible strides being made, hope is being tempered by hesitancy, and sometimes downright distrust, as a growing number of people question the safety of the vaccine, and even the motives behind it. It's easy to dismiss this resistance to the COVID vaccine as irrational thinking, a rejection of science, and an unquestioning acceptance of misinformation and disinformation. Yet it points to a bigger issue of trust: trust in how science and technology are governed, and more specifically, how organizations earn trust through being trustworthy. To Earn Trust, Organizations Need to be Trustworthy Earning trust is a challenge that goes far beyond the current pandemic, and touches on pretty much every aspect of our connections with the future. No matter how compelling our science is, how transformative our technologies are, or how important our ideas of the future might be, they are only as good as the trust that people place in the organizations that develop and use them. But how is trust developed and maintained as we strive to build a better future together? For the past couple of years, I've been a member of the advisory panel for TIGTech, an initiative supported by the World Economic Forum and the Fraunhofer Institute for Systems and Innovation Research that's focused on trust, governance and technology innovation. TIGTech was established to explore and provide guidance on what it means for governance approaches to new technologies to be trustworthy, and how trust is earned.
The focus of the work has been on emerging technologies and large institutions. Yet the findings and recommendations are relevant to anyone trying to build a better future within today's highly complex and deeply interconnected world. Towards a More Engaged, Collaborative and Communicative Approach to Trust & Tech Governance Last Friday, the first major report from TIGTech was released, and it highlights the need for developing more engaged, collaborative, communicative approaches to trustworthy and trusted technology governance. It also provides practical steps toward achieving this. TIGTech — Towards a more engaged, collaborative, communicative approach The report is, I am very pleased to say, written for real people grappling with real challenges, and is not in the slightest academic, although the underlying foundations are academically sound. I would go so far as to say that it should be required reading for anyone who is either studying global futures or is involved in the process of helping to build a better future. The report eloquently focuses on clear and concise nuggets of relevant information for readers: three key findings, five things to know about trust, seven drivers of trust, and three competencies for trusted governance. This approach makes it highly accessible. It also makes the insights relevant and actionable to a wide range of individuals and organizations. Plus, it's deeply refreshing to have such an important document written in plain language that is easy to make sense of! Many of the points that are made feel like common sense when you read them, yet paradoxically they can't be, otherwise they would be more commonly found in practice. For instance, the three key findings are: Be more engaged, more visible — show your impact. Detach governance from hype and ideology — focus on the public interest. Get comfortable with navigating ethics and values.
And the seven drivers of trust are:

1. Intent: Public Interest (upheld through purpose, process, delivery and outcomes).
2. Competence (delivering against expectation effectively, reliably, consistently, responsively).
3. Openness (being transparent and accessible in processes, communications, explanations and interactions).
4. Respect (seeing others as equals; listening to and taking seriously their concerns, views and rights; considering the potential impact of words and deeds on others).
5. Integrity (operating honestly, being accountable, impartial and independent of vested interests).
6. Fairness (enshrining justice and equality in governance processes, application, enforcement, and outcomes).
7. Inclusion (being collaborative, inclusive, involving others).

These are all critically important, and should be part of any future-builder's credo. But they are just the tip of the iceberg when it comes to earning trust. Underpinning them is a nuanced and sophisticated understanding of trust that is essential to developing and using new technologies in the public interest.
Trustworthiness is Foundational to Global Futures Building Beyond the immediate relevance of the report to technology development and use in the public interest, reading through it, I found myself connecting the ideas it lays out to almost every situation within today's society where trust is paramount: from communicating and engaging around science, to building a new initiative around global futures, to taking justice, equity, diversity and inclusion seriously, to being a trusted research and education establishment. Here, I was particularly taken by the report's exploration of "five more things to know about trust"; these should be essential reading for anyone whose work involves demonstrating trustworthiness and earning trust. These insights include acknowledging that trust is an outcome that's best achieved by focusing on others, and that it signals a hope that an organization will fulfill the expectations we have of it. They also emphasize what should be self-evident, but rarely is: that trusting people first makes them more likely to be trustworthy and to trust you back, and that trust is a spectrum, not an either/or judgement. And importantly, they make it clear that trust is dynamic, messy, personal, and a two-way process. In other words, while trust and trustworthiness are critically important for building a better future, earning and demonstrating them is not a matter of simply following rules and procedures, or checking boxes. It takes awareness, empathy and humility, and a willingness to embrace the messiness of being human within a complex society as we strive to put others first. This is sage advice as we stand at this pivotal point in the fight against COVID. But it's also important as we look beyond COVID and work together to build a future that is just, equitable, and sustainable, and one which is threaded through with hope and possibility.
Which is why I’d recommend anyone with an interest in building a better future check out TIGTech, and read the initiative’s recent report on trust and tech governance.
https://medium.com/edge-of-innovation/why-trustworthiness-matters-in-building-global-futures-50a91fcb9bb2
['Andrew Maynard']
2020-12-15 15:45:32.490000+00:00
['Innovation', 'Trust', 'Future', 'Technology', 'Science']
2,845
Cryptocurrency Dictionary for Beginners.
If you've kept a keen eye open for the past few years, you've probably seen these terms strewn about, especially within this past year, when Bitcoin and most altcoins stampeded on a historic bull run. Cryptocurrency seems to be discussed everywhere nowadays: the news, the radio, the Internet, your Uncle Greg, podcasts, and virtually any other modern form of media. Truth be told, you're only going to be hearing and seeing more of this burgeoning field for the foreseeable future! It's time to make some sense of all of this. Cryptocurrency may seem quite hieroglyphic to many, but I've put together this list of must-know terms, minus the technical computer-speak mumbo jumbo. Aunt Jane and Grandma Edith can finally understand why Uncle Greg keeps telling everybody he's HODLing during Thanksgiving dinner! 🦃
https://medium.com/coinmonks/cryptocurrency-dictionary-for-beginners-e1ff03c8aacf
[]
2019-05-10 16:12:21.667000+00:00
['Blockchain', 'Blockchain Technology', 'Cryptocurrency', 'Crypto', 'Bitcoin']
2,846
instagram about name
Instagram, the photo-sharing app created by Mike Krieger and Kevin Systrom of Stanford University, spins a story of success capitalized on the right way. Launched back in 2010, Instagram today boasts 700 million registered users, with more than 400 million people visiting the site on a daily basis. Of those 700 million users, around 17 million are from the UK alone! When the two founders began talking about their idea, they quickly realised they had one goal in mind: to build the biggest mobile photo-sharing app. Before Instagram, however, the two had worked together on a similar platform called Burbn. For Instagram to work, Krieger and Systrom decided to strip Burbn down to the bare essentials. Burbn was quite similar to Instagram and had features that allowed users to add filters to their pictures. The social network Instagram reached one billion active users in 2019. The US-based video and photo-sharing app is a success story that has unfolded since its launch in October 2010 by Stanford University students Mike Krieger and Kevin Systrom. Systrom majored in management science and engineering, while Krieger studied symbolic systems, a branch of computer studies combined with psychology. When the two founders met, they began discussing their idea for a new app and realised they shared a goal: to create the world's biggest mobile photo-sharing app. Budding entrepreneur Fellow students recalled Systrom as being naturally gregarious and a budding entrepreneur from a young age. He briefly ran a marketplace, similar to Craigslist, for fellow Stanford students. 
Krieger had different skills, and one of his university projects was designing a computer interface that could gauge human emotions. Prior to Instagram, the pair had collaborated on a similar platform called Burbn. They decided to strip it down and use it as the basis for Instagram. Burbn had features that enabled users to add filters to their photographs, so the duo studied every popular photo app to see how they might develop further. Eventually, they decided it wasn't working and scrapped Burbn in favour of creating an entirely new platform. Their first attempt was Scotch, a predecessor to Instagram, but it wasn't a success: it didn't have enough filters, had too many bugs and was slow. Once Instagram was launched for Android phones, the app was downloaded more than a million times a day. Interestingly, the social media platform was poised to receive an investment of $500 million. Furthermore, Systrom and Zuckerberg had been in talks over a possible Facebook takeover. In April 2012, Facebook made an offer to buy Instagram for approximately $1 billion in cash and stock, with the key provision that the company would remain independently managed. Shortly thereafter, and just prior to its initial public offering, Facebook acquired the company for the whopping sum of $1 billion in cash and stock. After the Facebook acquisition, the Instagram founders did little to change the user experience, sticking to the simplicity of the app. The remarkable rise of Instagram's popularity proves that people believe in real connections rather than ones based only on words. 
Its rise in popularity proves that people enjoy the way the app works and like the photo-based connections it provides. One of the most important lessons of Instagram's success is that the founders didn't waste time trying to save their original idea, Burbn. Once they decided it wasn't going to work, they moved on quickly and invented Instagram. Systrom said its name was based on "instant telegram". The app was released at just the right time, and with only 12 employees initially, the user base had grown to more than 27 million before Instagram was sold to Facebook. Today, most celebrities use it as a platform for promotions, and with a billion users it continues to go from strength to strength.
https://medium.com/@wkaddour/instagram-about-name-9dd36b9e9b63
['Wika Dydour']
2020-12-17 11:56:23.113000+00:00
['SEO', 'Instagram', 'Technews', 'Technology', 'Success Story']
2,847
How Technology is Changing the Food and Beverage Industry
Photo by Cody Chan on Unsplash The demand for technology is increasing in the food and beverage sector due to digital opportunities, climate change, and health-focused consumers. Climate change and its global impact are encouraging innovative startups to make a change in the industry. Consumers' diets are also changing, and the desire for a healthy lifestyle is increasing, leading to the rise of veganism and vegetarianism, which is persuading the food and beverage industry to follow the trend. There is no doubt that in the past few years, the dynamics of the food and beverage sector have changed: companies have become more customer-oriented and transparent, and have increased their usage of technology. The top food technology trends that are transforming the food and beverage industry are given below. 1. Service Robots and Restaurant Digitalization E-restaurants have set new standards with their online services, delivering food to the doorstep of their consumers. Food service robots will also be making their way into restaurants, providing customers with more refined service. The food and beverage industry is booming and will continue to do so in the upcoming years, with online orders, fast delivery of food at the counter, access to healthy food, and service robots. 2. Plastic Free In many countries worldwide, the use of single-use plastics such as cutlery, stirrers, straws, and plates has already been banned. Many big organizations have also signed agreements to reduce plastic generation and disposal extensively. Accordingly, many food and beverage manufacturers are coming up with alternative biodegradable and plastic-free solutions along with innovative packaging systems. Moreover, consumers becoming aware of the hazards of plastic bags and similar items has given companies more reasons not to use plastic materials. Photo by Pablo Merchán Montes on Unsplash 3. 
Transparency and the Big Food Companies The food chain industry is still trying to understand how to regain its customers' trust so that they will support its values, and how to be more transparent. The growth of new e-commerce channels is proving challenging for the big players, as it is negatively impacting their market share. To remain competitive, the big food companies need a better understanding of niche markets and how quickly they can change. Food traceability technologies such as blockchain will also continue to grow, and new food companies will adopt them, as they supply fast data integration and identification of quality. How Does Standardization Legitimize Food Enterprises? Photo by Robert Anasch on Unsplash The era of food delivery has brought convenience to customers and has fuelled the growth of food delivery businesses across the world. The Asia-Pacific region accounts for more than half the total value of food delivery enterprises. With global centralization, recipes and food products from around the globe are in high demand. The Europe-based firm Statista reported a rise of 27.9 percent in the food delivery market as different cuisines take over the menus in several restaurants. The evolving concepts of food, from traditional recipes to ready-to-eat meals, food vending machines, and dishes from all over the world, have given the customer a choice over habits and convenience over health. The unhealthy food gap is also being covered by trending restaurants serving vegan and keto diets. Photo by Brooke Lark on Unsplash The rise of non-traditional eateries was a result of the smaller space occupancy and near-zero workforce requirements offered by food vending machines. This paradigm shift in eating habits has produced a fast-growing industry, and standards became necessary to control the quality of the food consumed. 
Guidelines regulating food safety and hygiene practices in the newly emerging food industry surfaced along the supply line as technical references were formed. These technical references covered areas like design, structure, cleanliness, maintenance, food hygiene, and transportation protocols. Standards Taking Over Markets: Standards have established brand presence and credibility by pushing ahead ideals that lay a strong foundation for innovation, new product development, and the adoption of technologies that help market access. With internationally recognized certifications of compliance with standards, product quality will be the key selling point of a company, aiding its entry into varied geographies. Check This Out: FB Tech Review
https://medium.com/@chrishtopher-henry-38679/how-technology-is-changing-the-food-and-beverage-industry-a79d68ee462e
[]
2019-09-16 11:42:45.925000+00:00
['Food Safety', 'Technology', 'Food And Beverage', 'Food', 'Delivery']
2,848
Best Way to Deploy Your Smart Contracts On Ethereum
Bringing the best together Introduction: There are various ways to deploy Ethereum-based smart contracts; some popular options are listed here: Use migrations in the Truffle framework. Use Ethereum Wallet and MetaMask. Use the Remix IDE and MetaMask. I prefer the last option, and would suggest readers of this blog do the same, as it is the most optimised as far as gas consumption is concerned and it can also be used to debug contracts. In this tutorial we will deploy a sample contract, which is initially available on Remix, using MetaMask, on the Ropsten test network. MetaMask Setup: MetaMask Step 1 Download the MetaMask extension. There are various browser options with MetaMask compatibility; in this tutorial we use the Chrome extension as reference. Below is a tutorial you should follow to understand MetaMask and set up your own wallet; it is quite outdated, but it is a good way to get the gist of things and get started with MetaMask. Step 2 You will need to create a suitable password, which will be used to encrypt and create a seed phrase. Step 3 Keep the seed phrase somewhere secure and don't share it with anyone; it can be used to retrieve your wallet. Step 4 Change your network by clicking on Main Network and switching to Ropsten Test Network. Network options Step 5 Click on Buy to get some free Ropsten Ethers. Your new MetaMask account Step 6 Go to the Ropsten test faucet and request some Ethers; they will be mined in some time and added to your Ethereum account on MetaMask. Buy test Ethers Step 7 After a while your account will look like this, and you will be ready to deploy your first test contracts on Ropsten. Balance credited to your account Using Remix IDE to deploy contracts: Open the Remix IDE and you will see a pre-compiled contract file, namely ballot.sol. We will use this contract to understand how deployment using Remix works. 
Remix environment options There are three kinds of options available for deployment in Remix: JavaScript VM: It deploys your contracts in a JavaScript environment provided by Remix, giving you five Ethereum accounts with a balance of 100 Ethers each to deploy and test your contracts. Injected Web3: This connects your MetaMask account to Remix and uses the network you're on for deployment. You can choose among "Ropsten", "Rinkeby" and "Mainnet". Ropsten and Rinkeby should be used for testing your contracts before deploying them. Web3 Provider: You can use an explicit Web3 provider or integrate one on your localhost. We will be using Injected Web3 as it is the easiest and most widely used, so choose Injected Web3 and proceed: Step 1: You will see your Ropsten account in the "Account" section of the "Run" subhead. This represents your current MetaMask account, which will be used for all the required transactions. You can create more accounts on MetaMask and switch between them to connect different accounts to Remix. Please note that only one MetaMask account can be connected to Remix at a time. Step 2: Enter the number of proposals, which is a parameter required by the constructor of the Ballot contract, and click create. You will see a MetaMask pop-up like the following: Confirm transaction This pop-up asks for confirmation of the transaction you're about to create. I'll explain the details shown, and how to manipulate them, one by one: Gas Limit: MetaMask calculates the maximum amount of gas that can be consumed by a transaction and displays it here. This amount of gas is sent with the transaction; if not all of it is consumed, the remaining gas is returned to your account. Gas Price: MetaMask picks the latest price being used in the Ropsten test network, which depends on the amount of traffic within the Ropsten network. You can increase it for faster results. 
Max Transaction Fee: It represents the maximum estimated mining fee for the transaction, shown as an estimate of costs in USD and Ethers. Max Total: The subtotal of all the costs in USD/Ethers. Step 3: Once you submit your transaction, you can track its status in MetaMask or on Ropsten Etherscan. Track transaction in MetaMask Step 4: Once the transaction is processed and added to a block, you will see the contract at the bottom of the "Run" section in Remix, as follows: Deployed contract in Remix You can see all the public functions and state variables of the contract here. Blue indicates that a function/variable is constant and won't consume any gas during execution. Constant functions are those which do not change the values of state variables, and thus don't consume any gas. Similarly, red indicates a function which will consume gas and should be used carefully; when you use one, a MetaMask pop-up to confirm the transaction will appear. You can set the required parameters and submit your transaction. There are some other functions which accept Ethers and can be used to send Ethers to a contract; these appear in a darker red colour. You can specify the value and the unit you will be using in the "Value" section labelled under "Gas Limit". Summary: Understanding contract deployments is very necessary, and nascent developers make a lot of mistakes in this part, leading to huge deployment costs and cumbersome testing mechanisms. This blog is a guide for beginners to avoid those mistakes and enjoy "happy development/deployment within Solidity".
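The fee arithmetic behind MetaMask's confirmation pop-up can be reproduced by hand: the maximum transaction fee is simply the gas limit multiplied by the gas price, converted from gwei to Ether. The helper below is a minimal sketch under my own naming (it is not MetaMask code):

```javascript
// Hypothetical helper mirroring MetaMask's fee fields:
// max fee (in Ether) = gas limit * gas price, with gas price given in gwei.
// 1 gwei = 1e-9 Ether.
function maxTxFeeEther(gasLimit, gasPriceGwei) {
  return gasLimit * gasPriceGwei * 1e-9;
}

// A plain Ether transfer costs 21,000 gas; at 20 gwei the max fee
// works out to roughly 0.00042 ETH.
const fee = maxTxFeeEther(21000, 20);
```

If the transaction consumes less than the gas limit, you pay only for the gas actually used; this number is the worst case you authorise.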
https://medium.com/deqode/best-way-to-deploy-your-smart-contracts-e5f7b5a52baf
['Ayush Tiwari']
2019-01-31 12:01:06.313000+00:00
['Blockchain', 'Ethereum', 'Smart Contracts', 'Blockchain Technology', 'Metamask']
2,849
The benefits of reimagining technology in the public realm
Principles of good deployment Below is a summary of the principles; full details, including context and examples, can be found here. Principle 1: Transparency Devices that collect data in the public realm should clearly communicate: what data are being collected why data are being collected (the purpose) how the data are being processed who owns and controls the data what (if any) personal data are being collected, and how they are anonymised Principle 2: Trust Publicly collected data should be made available, where appropriate, by an independently accountable data trust. These trusts should evaluate individual requests for data, and provide justifications for when data cannot be released. Since data collected in the public realm are mostly created by citizens, citizens should be able to see how collecting them can contribute to the common good, and therefore benefit society. Where data or derived information could be used for public benefit, they should be made available to the relevant authorities. In some cases, data may be made available to public authorities only: for example, where privacy or safety issues make the data unsuitable to be publicly available. This assessment should be carried out by a data trust. Explainer: What is a Data Trust? Data trusts as defined by the Open Data Institute 'provide independent, fiduciary stewardship of data'. They provide a legal framework allowing data to be pooled together and looked after by a third party. They provide a set of rules whereby data access can be limited to only those that also conform to rules of access and safekeeping. This gives people and organisations confidence when enabling access to data. 
“Data Trusts ensure governance structures are in place and address both the terms of data sharing and the monitoring of access…by providing third-party oversight and creating a degree of separation between developer, data collector and profiting organisation.” In The State of City Data, we explored the role Data Trusts could have in enabling cities to encourage public confidence and consensual participation with technology in the public realm. Data Trusts ensure governance structures are in place and address both the terms of data sharing and the monitoring of access. This can be done in an acceptable (and ethical) manner by providing third-party oversight and creating a degree of separation between developer, data collector and profiting organisation. Principle 3: Public Good Digital street furniture should only be placed in the public realm when it explicitly demonstrates public benefit. There must be a fair value exchange between the place, its users and inhabitants and the street furniture. This can be seen either directly through services provided or indirectly through revenue share. Principle 4: Longevity The full life cycle of any object placed in the public realm should be considered, and the objects should be subject to review or reassessment on an agreed regular basis. Principle 5: Inclusivity The design of street furniture should reflect the diversity of the people who use it, not excluding any groups or individuals and not imposing any barriers to its use. The information around data collection that Principle 1 covers should equally be accessible to all potential users. If Digital Street Furniture forms a part of the public streetscape, its design should be inclusive and consider the local and wider community. The design of Digital Street Furniture objects should allow everyone to access the services in a way which does not hinder some users. 
Additionally, related to Principle 3, the design should not unintentionally exclude certain users from accessing its services or benefits. Read the full set of Principles here.
https://medium.com/connectedplaces/reimagining-technology-in-the-public-realm-e2d4987d3a43
['Connected Places Catapult']
2020-08-12 14:56:34.510000+00:00
['Street Furniture', 'Digital Infrastructure', 'Technology', 'Data', 'Public Realm']
2,850
Velas provides us extremely scalable transactions the blockchain wants.
Hi everyone, this is a brand new day; I hope everyone is staying healthy, indoors and safe as we recover from the pandemic. While at home trying to keep busy, I believe that one way or another you have come across a thread or a post about cryptocurrency, bitcoin or blockchain technology; did you read up about it? Bitcoin is now a popular word, especially during this lockdown, as it was discussed as an alternative to fiat because it is digital wealth and therefore safe. But before bitcoin was fully implemented, there was blockchain technology, which makes every cryptocurrency tick and function well. Blockchain technology has influenced almost all facets of life, from the banking system to the health sector, offices, MSME platforms, and even the gaming world, which has embraced blockchain and cryptocurrency. But did you know that even with the tremendous capability of the blockchain, problems persist, and several platforms have tried to fix them, all to no avail? One problem is the scalability of the blockchain. What does scalability mean? I term it the limited rate or time in which a network (node) can process transactions. Other problems associated with the blockchain are its centralization to some extent, and network inefficiency. Come to think of it, after 10 years of bitcoin and blockchain dominance, decentralization has aged well, and to this effect there's a platform which tackles and solves scalability and also provides decentralization on its blockchain: the VELAS Blockchain Platform. What is Velas? Velas is a self-learning and self-optimizing blockchain platform for secure, interoperable, extremely scalable transactions and smart contracts. The Velas platform is going to use optimized Artificial Intelligence (AI) neural networks to enhance its consensus algorithm and ensure decentralization, stability and security on the Velas blockchain. How does the Velas Blockchain work? 
The Velas platform uses an AI-Operated DPOS (AIDPOS) consensus to secure the blockchain for high-volume transactions while also ensuring decentralization, stability and security. Through the use of the AI-operated algorithm network, corruptible human dependencies are removed, thereby fixing major issues like the 51% attack and other compliance problems. The neural networks of the Velas platform are used to calculate the rewards for node operators and the time of block formation. The AI algorithm performs the following tasks on the Velas blockchain: The AI algorithm is used to determine the node rating. The AI algorithm is used to determine the number of blocks in each established transaction. The AI algorithm blocks fake messages and false transactions and ensures an increase in the quality of messages in the network; the resistance of the network is increased too. The AI algorithm also helps the platform correctly award rewards. The AI algorithm helps the nodes function and stay active. I believe you are eager to try a platform that fixed the scalability problems facing the blockchain, and you want to use the platform and give a review? Look out for my next blog on the Velas platform. Do use the links below for more information and details about the project: Website: https://velas.com/ White Paper: https://velas.com/VELAS-Technical_Paper.pdf Twitter: https://www.twitter.com/VelasBlockchain/ Telegram: https://t.me/velascommunity Facebook: https://www.facebook.com/velasblockchain/ Instagram: https://www.instagram.com/velas.blockchain/ Medium: https://medium.com/@VelasBlockchain Linkedin: https://www.linkedin.com/company/velas-ag Github: https://github.com/velas Discord: https://discord.gg/CTcKpPc
https://medium.com/@ogbeniiyalenu/velas-provides-us-extremely-scalable-transactions-the-blockchain-wants-f7803031db12
[]
2020-07-09 16:59:34.796000+00:00
['Blockchain Technology', 'Smart Contracts', 'Algorithms', 'Artificial Intelligence', 'Neural Networks']
2,851
DAGS. D’ya Like DAGS?
In the age of connectivity, the new breed of innovators leads the present generation via a techno-mantra of disruption. It is a common sight in the world of new-age technologies that while a digital citadel is being taken, another starts to take over at mid-siege. Exciting times. Disrupting the disruptor DAGs have distinct differences in utility and purpose, at least in the ecosystems that presently exist, but within these individual systems are some shared commonalities: independence from block-size nuances, which enables unrestricted optimisation of transaction processing, unlimited scalability, zero-cost transactions, and the abolition of mining. Out of all the DAG-based technologies available, IOTA and its Tangle infrastructure sum up the majority's interest in considering the blockchain variant. Understandably, any technology designed to facilitate the interoperability of the Internet of Things (IoT) has a lasting impact on future-centric societies, as it sparks curiosity and intrigue towards its applicability. However, a few of the DAGs' defining improvements on the blockchain have been subject to scrutiny by the many keenly observant. Revisiting the days of the pre-emptive strike, let's start where they went for the jugular: the Coordinator. The Tangle innovation implements a network assist, or "training wheels", called a Coordinator. Now before anyone points and screams "Centralised," it has been expressly stated and reiterated that the Coo is a temporary consensus protocol until sufficient activity through adoption is established. Let go of the pitchfork for a minute and I will elaborate. The primary purpose of a Coo is to protect against malicious attacks while the technology is still in its infancy; it will be disabled upon reaching a capacity where the network can run fully unassisted. A logical explanation prioritising safety and security for its community. However, this presents a rather complicated scenario going forward. 
Photo: cgstudio.com Despite claims of a fully decentralised network, and that the Coo technically exists as entirely optional, it is still, at least for the time being, a central point of failure, as approvals conducted by a single server and "checkpointing" prove the stated claims otherwise. Notwithstanding the assertions, many still attest to the credibility of the DAG infrastructure as an innovative blockchain solution, which Tangle is to achieve upon the shut-down of its central Coordinator, indicative of a point where the technology meets its desired adoption. Stated in the FAQ section of the IOTA website: The most critical factor needed for the removal of the Coordinator, for example, is the greater adoption of the IOTA technology increasing the throughput of transactions on the network to meet the fundamental security assumption — that the cumulative throughput of honest transactions is large compared to that which an attacker could feasibly produce. Contrary to what you may expect, as in those countless forums bombarded with incessant queries of "when?", I remain optimistic towards the Tangle's fully unassisted capacity and defiantly ask, "why not?" As for now, the blockchain takes the cake on decentralisation. However, it is not without high hopes for a 2018 fruition where the Tangle "wheels" can finally come off and bring about a bright new world of Markov Chain Monte Carlo and random walks – it already sounds beautifully snazzy; I can taste the coastline of the Côte d'Azur. Let me grab my sunglasses. Parity of Scalability: Blockchain and DAGs Among the many amazing features of DAGs, their one silver bullet against the blockchain beast is scalability. It is widely known in the crypto sphere that scalability presents an arduous task to developers due to the blockchain's limited scaling capacity. 
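The random walks mentioned above can be sketched on a toy tangle. The snippet below is a simplified, unweighted illustration of the idea under my own naming: IOTA's actual tip selection biases each step by cumulative weight, but the skeleton is the same, walking from the genesis transaction along its approvers until an unapproved tip is reached.

```javascript
// Hypothetical toy tangle: each key maps a transaction to the
// transactions that approve it. "c" and "d" are tips (no approvers yet).
const approvers = {
  genesis: ["a", "b"],
  a: ["c"],
  b: ["c", "d"],
  c: [],
  d: [],
};

// Unweighted random walk from `start` until a tip is reached.
// A real implementation would weight each step (e.g. MCMC with
// cumulative-weight bias) rather than choosing uniformly.
function randomWalkTip(graph, start, rand = Math.random) {
  let current = start;
  const path = [current];
  while ((graph[current] || []).length > 0) {
    const next = graph[current];
    current = next[Math.floor(rand() * next.length)];
    path.push(current);
  }
  return { tip: current, path };
}

const result = randomWalkTip(approvers, "genesis");
```

Whatever route the walk takes, it always terminates at one of the tips, which is the transaction a new arrival would then approve.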
Several proposed solutions have been presented over the years, from SegWit to sharding to Plasma: a meticulous search for one that will preserve the blockchain's philosophy of decentralisation. Photo: cryptoincome.io In the meantime, the DAG data structure has scalability as its nonpareil. Its functionality improves in parallel with an increasing volume of transactions, as no theoretical limit exists on the DAG's throughput capacity — Tangle's technical superiority in design over the blockchain. Photo: reddit.com There is, however, a rather interesting argument presented by Lior Yaffe, co-founder and managing director of Jelurida and core developer of Ardor and Nxt. In his article, he talks of the DAG's possible weakness on scalability: To summarise, I suspect that the ability of a DAG to confirm transactions as they arrive causes it to become more sensitive to transaction propagation latency and the order in which transactions arrive at a node, which under load may cause accumulation of unconfirmed transactions and growth of the DAG into a cloud like shape where acceptance of transactions referring to old tips is not an exception but the norm and ever increasing number of tips remain unapproved. Pertinent to this point, his scenario of 1,000 simultaneous transactions per second (TPS) on the DAG infrastructure briefly states that as nodes on the network have inherently varying speeds in processing transactions, remote nodes will lag in confirming transactions compared to more central nodes, a difference that will be largely reflected when throughput increases due to high-volume transactions. This results in a consistently growing pool of unconfirmed transactions that will eventually exhaust node resources and create chaos in the network. Put down the knife. My name is not Lior. We go to the judge's scorecard Photo: visioncritical.com The chasm between DAGs and blockchain, along with all its architectural dualities, is a contention that should be celebrated. 
Viable technologies that complement each other, two towering sentinels safeguarding our future – one is an evolving precursor that paves the way for other data structures, and the other is poised to aid and facilitate the demands of the near future. The rigid dichotomy between DAG and blockchain technology represents a fundamental divergence that generates creative momentum. Given that technological variances may manifest as transient discord amongst the communities involved, it nonetheless promotes a competitive race for innovation with real-world applications solving real-world problems. Provided that we "keep it clean", with no rabbit punches or "hitting below the belt", the spirit of rivalry intrinsically creates an environment where competing technologies provide innovations that consequently set new heights in quality and standard, compelling all other innovators to reassess their focus and not drown in obsolescence. As we know, many of the profit-driven technologies of 2017 will perish under this murderous market condition – a simple process of codified natural selection. As a favourable result, it will leave us with visionaries like those that built the blockchain and Tangle innovations. Critical thinkers and solutionists who will continue to provide enhancements to life's quality and give the present generation the tools and processes needed for the future. Thus, under the blinding lights of the crypto arena, and in the famous words of the great referee Joe Cortez:
https://medium.com/blockchainit/dags-dya-like-dags-300a76842e3f
[]
2018-11-10 23:08:04.583000+00:00
['Tangle', 'Scalability', 'Distributed Ledgers', 'Blockchain Technology', 'Dags']
2,852
Create an Auto Saving React Input Component
A better UX without too much heavy lifting When dealing with long forms (think medical forms, profiles, etc.), it's a huge UX improvement to allow fields to auto save as a user fills them out. Auto saving fields on blur sounds like a lot of extra work over a single submit button, eh? Not to worry, we are going to build one in 10 minutes using the Semantic UI React component library. If you want to skip this article entirely, here is the code sandbox: The field will save on blur Our Base Text Field Okay, before we can think about auto saving, let's just create our base Text Field component, or in other words, save hours by using the Input component from Semantic UI: Semantic UI has a bunch of icons we can pass in by name, which will make displaying our saving / saved / error states a lot easier. For context, here is the main application code: Only two things are noteworthy here: We created a mock save function, which is just a promise that resolves in 2 seconds with the new value Every prop except onSave is a part of the Semantic UI API for Input components, nothing custom there. We will make use of onSave later. What Do we Need to Track? When saving individual fields, there is more to keep track of. Here are the most important questions: How do we do the actual saving? How do we indicate a field is saving? How do we indicate a successful / failed save? How do we do the Actual Saving? We probably want to save the field on blur. We can pass an async onSave function that is responsible for the details of "saving", but our component isn't really concerned with how data is saved, just the fact that it is saving. While it may seem like a lot of code, everything is pretty simple: We use useState to keep track of whether we are saving or not We maintain a reference to the last entered value so we can compare it to the current value. If they are the same, there is no need to save. 
To answer our “how do we indicate a field is saving?” question, Semantic UI already has a loading indicator that we can leverage. We show it when the field has either been passed a “loading” prop manually, or we are currently saving the field. The problem with auto saving fields is that user input can get overwritten after their data saves and the input field refreshes with the saved data. To avoid this, we simply disable the field when loading. This is where the magic happens. We attach an async handler to onBlur. In the handler we first check if the value has changed. If it has, we update our “saving” state to true, and attempt to save if we were passed an onSave function. Once the onSave promise resolves, we update the last saved value and reset our saving state. How do we Indicate a Successful / Failed Save? So we may be able to show when a field is saving or not, but it’s not enough. We want to show that a field has successfully saved (or failed to save). So this is pretty much our full component! Here are the additions we made: New state was introduced to keep track of when a field is saved, and when there is a save error. For a better UX, we swap our icon for a green check mark when we successfully save, and a red warning when a save fails. Semantic UI gives us the flexibility to set our icon colors manually, so we change the color to correspond to the current save state. Now that we are dealing with more than field validation errors, we manually pass either our regular error prop (which is passed to our component from a parent), or our internally regulated saveError variable, if either exist. Another UX improvement is making sure we remove the success icon when a user makes a change to the input. That way they understand the current changes aren’t saved yet. When we hit an error calling onSave (which we assume throws a promise rejection if saving fails), we simply update our saveError state.
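The save-on-blur state described above can be sketched framework-free. This is not the article’s actual hook code (that lives in the linked sandbox); it is a plain-JavaScript model of the same state machine, with illustrative names (createAutoSaver, handleBlur, handleChange) that are assumptions, not the real component’s API:

```javascript
// Framework-free sketch of the auto-save state the article describes.
// createAutoSaver and its field names are illustrative, not the real API.
function createAutoSaver(onSave) {
  const state = {
    saving: false,          // drives the loading indicator; field is disabled
    saved: false,           // drives the green check mark
    error: null,            // drives the red warning icon / error prop
    lastSavedValue: undefined,
  };

  return {
    state,
    // On every keystroke: the current changes are no longer "saved".
    handleChange() {
      state.saved = false;
    },
    // On blur: skip the save entirely if the value hasn't changed,
    // otherwise save and record success or failure.
    async handleBlur(value) {
      if (value === state.lastSavedValue || typeof onSave !== "function") return;
      state.saving = true;
      state.error = null;
      try {
        await onSave(value);
        state.lastSavedValue = value;
        state.saved = true;
      } catch (err) {
        state.error = err;
      } finally {
        state.saving = false;
      }
    },
  };
}
```

In the real component, the same transitions happen via useState setters inside the async onBlur handler, but the logic is identical: compare against the last saved value, flip the saving flag, and record either a success or a saveError.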
You can be a lot more flexible here (like passing the actual message from the API); I kept it simple for this example. Moral of the Story Obviously there are a bunch of areas where this component can be improved, but as far as getting something up and running fast, this can probably be done in 10–20 minutes thanks to Semantic UI React. You can use any other component library (Material UI, Ant, etc.). The point is that using a component library will probably save you hours of development in cases like these. We were able to show icons, loading indicators, disabled & error states, and more just by passing simple props. Message From the Author Hey you… Yeah you. I know times are tough. Bored? Stressed? Going nuts at home? Want to dive deeper into React stuff? Have buddies looking to learn something new about React & UI development? I’ve got plenty of articles for you and your pals. Feel free to share and follow because… well… there isn’t much else for me to do besides check my Medium stats and binge La Casa de Papel.
https://medium.com/javascript-in-plain-english/create-an-auto-saving-react-input-component-in-10-minutes-2359d84dc29b
['Kris Guzman']
2020-04-27 16:04:25.132000+00:00
['JavaScript', 'Web Development', 'React', 'Technology', 'Programming']
2,853
Stop dreading performance reviews and set powerful goals
Photo by Fab Lentz on Unsplash Work performance reviews, school grades, Lyft ratings, and test scores are a source of some dread. These externalities, where other people assess how well you’re doing, all matter. Performance reviews are data points around how others view you and how good you are at your job. Often they also correlate to financial success. Your performance rating will affect the promotion, the size of your bonus, or the number of new customers who try you out. Your high school grades and SAT scores directly affect which college you get into. Yet what happens when your self-accomplishment doesn’t map to the performance review you’ve received? What matters more, the internal sense of satisfaction or someone else’s validation that you are doing well in their system? They both matter, yet many of us fall into the trap of passively awaiting our grade with a sense of dread. I’m sharing a different perspective, where you as the heroine of your professional accomplishments can work through 5 points and set powerful goals in service of your own accomplishments:

1. Do the work that you love and the rewards will follow
2. Understand the company’s context
3. Know your context
4. Common fallacies
5. What matters most for you

Do the work you love and the rewards will follow I worked at Facebook for over four years. I’ve written countless performance reviews, and had even more conversations with individuals wanting to know how to be successful at the company. I helped to run a process of product design calibrations where, each half, all the managers would assess each designer’s accomplishments to determine what is a fair rating for each person, especially compared against how their peers are doing.
The consistent piece of true advice that’s often considered a trope is “Do the work you love and the rewards will follow.” What this is intended to do is to have the designer focus less on the grade she might receive and focus instead on the work itself — the people and the products, plus the craft of the work that she’s creating. If you do this well, you will get your positive rating. If you do this well over a period of time, you will get your promotion. Yet often this advice feels trite and a brush-off. It requires a longer-term patience and investment in your career, and it also requires trust with your manager and the company that you’re working at. I’ve been at multiple ends of the performance review spectrum of delight & disappointment—I’ve had halves where I thought I excelled and received a Meets Most. I’ve also had halves where I had so much going on personally that when I got a promotion and a Greatly Exceeds, I realized how little it meant to me in the overall balance of life. Most importantly, my various coaches and mentors have taught me that valuing my own sense of powerful goals and being able to reflect on my learnings matters more than an external rating. If I am able to do the work I love, serving the designers I care about, then I am successful. (Brief overview of Facebook’s performance review process). Understand the company’s context To understand your performance review, first understand the rubric by which the company values success. Hopefully you have access to a written guideline for expectations and skills in your role. If not, it’s an understanding you should co-design with your manager. For Facebook product design, there are 7 core skills covering hard design skills as well as collaboration and leadership skills that each level of product designer should exhibit. For assessment in each half, managers write a brief outlining what each designer has done in the last 6 months. This is split into 1.
individual impact on the team’s product goals, 2. strategy impact which may include visioning design work for the future, and 3. leadership/culture impact on the product team or elsewhere within Facebook. For more Facebook-specific measurement, see what it takes to be a designer at Facebook and how Facebook hires designers. While a written rubric is a good starting point, it’s a guideline, which means that it’s open to interpretation and individualization. The most important conversation is the one between you and your manager around shared understanding of the success guideline and how it applies to you individually. This conversation and shared understanding will ideally be based on a mutual relationship of trust and respect. I’ll dive more into how to create that ideal relationship in a future blog post. Know your context What matters to you professionally or at your work over the next six months or year? What would you like to do personally to help your team achieve their product goals? What new skills do you want to develop? What would you like to learn? What relationships do you want to nurture? For example, a designer who wants to work on building his prototyping skills may focus on 1. learning Framer, 2. setting up a base prototype to match his product structure, 3. committing to showing every single design in a prototype, and 4. sharing his learnings with others via a post or a lunch & learn talk. Another designer who wants to improve relationships with her product manager and key engineers chooses to 1. invest time in informal coffee meetings & 1–1s, 2. organize team events to build a sense of team culture, and 3. deliberately schedule whiteboard sessions to co-create requirements with her PM. Your goals should be self-driven and personal. You may also choose to work on feedback received from others, either about your passions & strengths, or areas of growth.
Ultimately, they should be personal and powerful, mapping to what matters to you in your life, not simply driven by what your work needs from you. Also be aware of what else is happening with your life in this period. Are you dealing with health issues? Do your immediate friends and family need extra support from you? How much of yourself are you able to bring into work? There is no shame in choosing to hang on at work for a period of time and not have big stretch goals. Your whole self has to look at work and non-work together. If you have a draft of your powerful goals and what success means to you for the upcoming period, bring them as a conversation point to your manager. She can provide additional company context and help map these goals back to what the company needs. She can provide feedback for you, and the two of you can co-create these personal success metrics together. You can check in over the period to see how you’re doing, and then at the end of the period, you will have a sense of accomplishment or learnings about your powerful goals. Common fallacies If I work more hours, then I will be more successful. I grew up in a school system that equated hours of rote practice with achievement. If I spend 5 hours reviewing biology terms on flashcards, then I will do better on the test. If I put in 3 all-nighters in service of this project or client, then I will be more successful at work. This is a fallacy. Hours do not equal success. First, identify what impact means for your company and for you personally, then work on those items. For people in a creative field, the gestation time away from the continual grind allows for ideas and inspiration to blossom. Once I get that promotion, rating, etc. then I will be happy. We are a culture of achievers, and can live in a mindset of scarcity. Right now I don’t have X, so I am unhappy. X can be a stand-in for a raise, a promotion, a rating, a grade, or closing a certain number of clients.
Have you ever strived so hard for X, then managed to achieve it, and felt a sense of hollowness or emptiness because simply achieving it wasn’t enough? Did you immediately move to create a new Y that was an even bigger goal? Instead, celebrate the powerful goals that you’ve set that you’re moving towards. Savor the feeling of achieving X and congratulate yourself for the learnings. What matters most for you Each year of your life is different. You will have different energy and a different balance of personal/professional life. There will be years where professional achievement is the #1 thing — you want to devote the time to climbing the ladder, getting that rating, achieving that title and promotion. If you’re in that state, also consider some other questions: How much fun am I having? What relationship do I want to cultivate with coworkers? How much energy do I have at the end of each day? What am I intentionally saying “no” to in order to say “yes” to work? There will also be years where life comes first. Yes, you will show up fully at work and do a more-than-competent job with your responsibilities. Yet you are also choosing to spend more time with your kids, your side hustle, or supporting your community outside of work. If you are choosing “yes” to these options, be gentle to yourself about your performance review and know that’s the important choice you are making with your time and energy. Finally, in an atmosphere of continual learning, perhaps the most useful question you can ask yourself at the end of each period of assessment is: What learning can I take away from this? That question will generate the most powerful goals to sustain energy and provide long-term fulfillment.
https://uxdesign.cc/stop-dreading-performance-reviews-and-set-powerful-goals-b9042c190ba0
['Tutti Taygerly']
2019-12-13 23:01:19.447000+00:00
['Startup', 'Design', 'Product Design', 'Technology', 'Goals']
2,854
How to get Surfshark coupon code presented by The Endless Adventure
Eric and Allison are a travel couple who traded their stable jobs and constant home for a life full of adventures. They travel all across the world and look for magical places to visit and see. They share their recommendations and moments with others on their YouTube channel called “The Endless Adventure”. The channel is full of inspiring videos of the most interesting, beautiful and surprising places, which inspire all of us to join them and go see the world not just through their eyes but for ourselves. One of the things that they always have with them on adventures is a VPN — Surfshark. While traveling, a VPN is a must, especially because you use a lot of public wifi, which is absolutely not secure, and Surfshark can protect you from that. A VPN is also a helpful tool to book cheaper flights, hotels and more. That’s why they present their viewers with a coupon code that gives them an 83% discount on a 2-year plan of Surfshark. How to get the VPN coupon code presented by The Endless Adventure?

1. Go to the Surfshark website
2. Insert the coupon code “endless”
3. Continue the purchase and be safe

You can also follow the link with the coupon already inserted: https://surfshark.com/deal/coupon?coupon=endless Why should Surfshark be your choice for a VPN? All you need to know is that Surfshark is a high-quality VPN provider that doesn’t collect logs, has industry-leading encryption which will make you secure and anonymous, lets you connect an unlimited number of devices to one account and will let you bypass any geo-restrictions. A VPN is a must, especially if you are often on public wifi and overall value your privacy and want to be anonymous online, without anyone snooping around what you are up to. Get Surfshark VPN with 83% discount Watch The Endless Adventure YouTube channel
https://medium.com/@redesandsondomenec/how-to-get-surfshark-coupon-code-presented-by-the-endless-adventure-c6e5c220e08
['Dominec Dades']
2019-03-17 15:57:36.006000+00:00
['VPN', 'Technology', 'Discount', 'Coupon']
2,855
Minerals and their Derivatives are Essential for Modern life
Mineral exploration is the process of searching for evidence of any mineralization hosted in the surrounding rocks. The general principle works by extracting pieces of geological information from several places, and extrapolating this over the larger area to develop a geological picture. Exploration works in stages of increasing sophistication, with cheap, cruder methods implemented at the start, and if the resultant information is economically interesting, this warrants the next, more advanced (and expensive) techniques. sponsored post. However, it is very rare to find sufficiently enriched ore bodies, and so most exploration campaigns stop after the first stage or two. To do this work successfully, mining companies need the help of an experienced mining team in order to locate sufficiently enriched minerals. (1) In addition, several technologies for mining exploration are being studied to address this problem. Will these make current mining explorations more efficient? Let’s take a peek in this article. Many countries depend on mineral mining to increase their people’s resources, and their economies are based on finding minerals. Iron, copper, gold, silver, molybdenum, zinc, coal, plutonium, sulfide, tin, chromite, potash, and other minerals were discovered. Geophysical approaches play an important role in mineral discovery, groundwater investigation, and hydrocarbon exploration, according to geophysicists. Owing to mineral exploration activity in increasingly deeper wells, durable sensors that can work in high-temperature and hostile environments are in high demand. Check disclaimer on my profile and landing page. Fiber-optic sensors meet most of the requirements of these demanding applications with their high temperature capacity, multiplexed and distributed sensing and small space placement capabilities. (2) Mining explorations may be more systematic than ever before with the aid of these technologies and several mining companies.
This technology, and the developers who created it, are doing an excellent job! Minerals and their derivatives are essential for modern life. Minerals are used as raw materials for almost all of the products we use. They embrace the way we live now and how we want to live in the future as we face the many problems that society faces. But we shouldn’t worry; with today’s technologies and current mineral discovery companies, the mining industry will be in good shape! Let’s keep an eye on it! Source 1: https://www.intechopen.com/books/minerals/introductory-chapter-mineral-exploration-from-the-point-of-view-of-geophysicists Source 2: https://www.gsi.ie/en-ie/programmes-and-projects/minerals/Pages/default.aspx
https://medium.com/@oceankelly/minerals-and-their-derivatives-are-essential-for-modern-life-680d14af1407
['Ocean Kelly']
2021-05-03 11:02:42.996000+00:00
['Stock Market', 'Finance', 'Technology', 'Mining']
2,856
These Drones Could Help Restore Earth’s Forests
Tree-Planting Drones How drones plant tree seeds (Picture Credit: TechSpot) Overview Tree-planting drones are an adaptation of commercial drones. It takes only two human operators to control a batch of ten drones through a mobile app, computer software, or a traditional video game controller. What makes these drones really effective, however, is the ability to fire 120 seed pods per minute at extremely fast speeds into the forest soil below. By contrast, if an individual were to plant a seed, it would take them a minimum of five minutes just to plant one. Bremley Lyngdoh, CEO of Worldview Impact, says, “Obviously, planting a billion trees will take a long time without the help of drones. We can literally see every single tree and the leaves on the tree if we need to. It opens up this new market for people to see the connection with trees and to say, ‘Wow, this is my tree. I planted that.’” Additionally, the seed capsules that the drones hold contain a proprietary plant mix containing plant food, beneficial bacteria and fungi, and a small amount of soil. One of the best aspects of these drones is that they keep to the theme of caring for the environment. Unlike most technology-related equipment and machines, tree-planting drones have zero emissions, promoting a greener world. Although this idea is rather effective, these drones are still unable to match the rate of deforestation. Every minute, the Earth loses forest area the size of 27 football fields. Artificial Intelligence If we want to effectively fight deforestation, countries must make some sort of truce for the sake of the environment. Currently, the governments of specific areas prohibit the usage of drones, making it difficult for drones to travel to forests around the world that need reseeding. Once this is achieved, seed planting can begin.
Adding AI to drone hardware will enable them to sufficiently map out forest areas (Picture Credit: Dextra International) Integrating artificial intelligence into drones is a potential breakthrough. This is because AI will allow these drones to become fully autonomous, making it possible for them to plant more seeds per day. Also, adding a 3D mapping system will enhance their skills in locating which forest areas are more impacted by deforestation, allowing them to create an optimal planting route every day. Currently, human operators have to travel from forest to forest, marking off each spot that needs to be soiled and replanted with tree seeds. This cuts into the total time one tree-planting project takes. Drone mapping system using AI (Picture Credit: Tech Explorist) Limitations Although tree-planting drones present many excellent opportunities to properly preserve the Earth, there are many constraints regarding drones all over the world. As I mentioned previously, drones are only allowed to be flown in specific places, and even before that, one must receive a drone-flying permit during a specific time frame, which can take up to months depending on the demand. Furthermore, there are times when the government may reject your application to obtain a permit. This also leads to many safety concerns. For instance, many aviation companies want to avoid the possibility of a collision between an aircraft and a drone for the assured safety of the passengers and pilots inside the plane. Many areas around the globe have a “No Drone Zone” (Picture Credit: CineD) Overall, the usage of this cutting-edge technology is widely being considered by many countries because the positive outcomes outweigh the setbacks for most.
https://medium.com/techtalkers/these-drones-could-help-restore-earths-forests-695469913d75
['Pranav Bansal']
2020-11-24 22:47:08.378000+00:00
['Trees', 'Deforestation', 'Drones', 'Technology', 'Environment']
2,857
Violence in the Media — and Guns. This article is about a problem in our…
This article is about a problem in our society and how we are trying to correct it. We are reacting emotionally to the issue of violence and especially gun violence. We are being manipulated by those who benefit from the status quo. I feel qualified to express my opinion because I have been a victim of violence and I deal with violence and the aftermath of violence in my job as a paramedic. I am also a Krav Maga (Israeli Self-Defense) Instructor and I train in Judo. You could say that violence, and thinking about and training for violence, is a big part of my life. I was raised with guns. I had a BB gun in elementary school. My dad owned many guns including shotguns, rifles, and handguns. We went shooting often. I got a 12 gauge Remington 1100 shotgun for my high school graduation present. That being said, I am a citizen in a society where violence is glorified in media from TV to video games to movies. The more disturbing, the better, and it bothers me. It bothers me because the violence in our society is jeopardizing our safety and ultimately our freedoms. Specifically, our Second Amendment right, which is the freedom to own guns to defend ourselves. Growing up I was in big trouble if I pointed even a toy gun or a water gun at anybody. In other words, I was taught how to safely handle a gun. As a parent I had a problem with violent video games like Grand Theft Auto. My kids played it and I was against it. I wouldn’t allow it in the house. Ultimately, though, I allowed it because they were just going to a friend’s house to play it so I figured I could at least monitor the kids if they played at our home. I don’t see any redeeming qualities in making a game of shooting police, running people over, etc. Regardless of parental controls, young kids are getting their parents or older siblings to buy or lend the games so that all the nasty violence in that game is being played by teens and younger.
The moguls who got rich off this crap vote for and financially support candidates who want to repeal the Second Amendment. Their money talks, so what is an overworked middle-class parent to do?! At the same time we have a problem with real violence. People are hurting and killing each other. The total number of violent crimes in our country in 2018 was 1.21 million. People mostly complain about the easy availability of handguns being a problem. As though if all guns were confiscated (from the honest citizens) the problem of violence and especially gun violence would be solved! In my view it is the saturation of our culture with glorified violence that helped create the conditions that led to the current state of affairs. You almost never hear this argument. It is because the media elites would rather utilize guns as a convenient scapegoat while protecting their privilege to darken our collective consciousness with murder, torture, and rape stories that are often so bad they can upset even well-balanced people. Remember, violence, like sex, SELLS!! Excluding the violent criminal types, who are probably not very influenced by media because they have real bad stuff going on in their lives, the problem is that unless a person is mature AND well-balanced they may very well be influenced in some negative way by media violence. Though most people can keep things in perspective and not be seriously affected by media violence, a small number will become more depressed than they already are and act out with anti-social behavior. This can run the gamut from freaking out at home, smashing plates and maybe cutting themselves or at the extreme, attacking their schoolmates or coworkers with guns and bombs. There has been a lot of discussion about the mentally ill and how they must not purchase guns. It is true that certain people with mental illness of a certain nature probably should not be able to own guns, but I don’t think that’s the real problem.
I think the real problem is there is a constant barrage of blood and gore that saturates the media. This desensitizes the public to violence and at the same time makes huge money for the media moguls. They in turn support gun control. A recent episode (real, not “Law and Order”) took place in Plainville, Mass., last month: after watching the new and highly acclaimed movie “The Joker”, a son decided to stab his mother to death in their home, apparently an act inspired by a version of a similar event that happened in the movie. Why wasn’t there any public outcry to ban easily accessible kitchen knives?! Maybe we should legislate that butcher block knife holders should have locks on them. As a paramedic in the suburbs, I have seen a lot more kitchen knife threats, attacks, and wounds than firearm use. What, if any, solutions do I have to offer? It sounds quaint, but how about some good old-fashioned censorship? Self-imposed would be preferable, but is that going to happen? It wasn’t so long ago there were limits to the depravity that could be peddled in the media as entertainment. Granted, it could go too far, but we’re currently at the opposite extreme where “anything goes.” I am also concerned about the easy availability of hardcore porn to 10-year-olds on their smartphones. What are the implications for loving, normal relationships with these kids as they reach adulthood? Nobody seems to care as long as Apple’s stock is rising. I’m beginning to think that smartphones should be banned, not guns. If you agree or disagree, please contact me at: [email protected] and let me know your thoughts. __ Bart Axelrod is a Krav Maga instructor at EP Martial Arts in East Providence RI and a paramedic in the Boston area. EP Martial Arts on Social Media: Facebook | Twitter | Instagram | YouTube
https://medium.com/@bartaxelrod/violence-in-the-media-and-guns-9c5b3dac1f15
['Bart Axelrod', 'Ep Martial Arts']
2019-11-15 15:06:31.718000+00:00
['Guns', 'Media', 'Videogames', 'Violence', 'Technology']
2,858
Amazon Go: Redefining Shopping
Amazon officially opened its first Amazon Go store in Seattle to the public on January 22nd. The store uses technology such as ceiling cameras and electronic sensors to track each individual customer’s purchases as they shop. To enter, customers scan their smartphones loaded with the “Amazon Go” app on turnstiles similar to those in a subway. Once inside, they can then choose from a wide range of items and put them in their shopping bags. As customers pick up items, they are added to their Amazon Go account — and removed when placed back, with the help of the advanced shelf sensors and ceiling cameras. The receipt is issued once the customer exits, and their account is automatically billed. The store has been open for Amazon employees since 2016, though Amazon was hesitant to open to the public as there were — and are — a few issues. For instance, misplacing an item onto the wrong shelf can cause the billing system to misread it when picked up by others. Amazon is also hoping to incorporate this technology into Whole Foods, the chain it acquired for $13.7 bn last year.
https://medium.com/newscuts/amazon-go-a-technological-miracle-fca6958bd148
['Bharat Sachdev']
2018-01-28 06:47:24.430000+00:00
['Amazon', 'Technology', 'Tech', 'Amazon Go', 'Shopping']
2,859
Meeting the engineers’ expectations as a product manager
One of the teams that product managers closely work with is the engineering team. This is the team that literally brings ideas to life. Since the interaction with the engineers happens on a day-to-day basis, it becomes necessary for product managers to understand what engineers expect from them. The simple reason for this is that you don’t want engineers to work in a frustrated manner because that will surely impact the quality of the deliverables. Without further ado, let us look at some of the things which engineers would like product managers to understand before working on any project. 1. Giving clear and concise instructions Though this sounds relatively simple, it is one of the things which product managers often tend not to do. What happens is that once the requirements have been gathered, product managers will prepare lengthy documentation that consists of the features and specifications to be built. Now there is nothing wrong with lengthy documentation because after all, it contains every little detail which will ensure that the set expectation is met. However, if you are assuming that the engineers will go through it in detail, then you are mistaken. You have to understand that engineers are not here to read long documents. They are here to understand what they have to do and then actually code. Therefore, whenever you need the team to build or change a feature(s), give the instructions that the team actually needs for implementation. In other words, don’t beat around the bush, please! Photo Credit: Giphy 2. Take necessary meetings only If there is one word that may make the engineering team sigh a little it is the word “meeting”. In no way am I saying that engineers don’t like meetings. They also understand that meetings allow the entire team to be on the same page. However, it is crucial to understand that engineers are not going to appreciate you as a product manager if you keep having discussions with them related to the projects.
Think about it. The engineering team cannot give all their time to the meetings because they have to do the coding, and they know that if their tasks are not completed, the project will come to a halt. So as a product manager, one way to ensure that you are only taking the required time out of the engineers’ schedule is first to figure out all of the things that need to be discussed. Go ahead and see who would be the best person/people to give those answers. Wherever you need the engineers, schedule a discussion where you are getting as many answers as possible. 3. Providing finalized designs From my own experience of working with the engineers, I can guarantee you that your engineers will love you if you provide them with finalized designs to develop. By finalized I mean that these designs have received the approval of the boss, client, and any other stakeholder. When you put yourself in the shoes of the engineering team, this makes a lot of sense. Developing fully approved designs means that you are not going to be making nearly 1000 changes. That is just frustrating for anyone! Having said this, sometimes getting approval for the designs from clients or other stakeholders can take time. At the same time, you also have deadlines to meet. In such cases, do not try to give half-approved designs to the developers right away. First, try to get the approval for a set of designs and then give it to the engineers. This way you and your team will be much more productive!
Engineers may not say this to product managers, but they would greatly appreciate it if product managers stood by them when, for genuine reasons, deliverables are delayed. Such support motivates them to fight through the obstacles and get there. 5. Tell them the truth They say honesty is always the best policy. No matter how bitter the truth may be, it is better to know it. Engineers, like anyone else, would like to know the truth at all times. As a product manager, you must be honest and tell them upfront about scenarios that can range from further design changes to stopping work on a feature because of another requirement. Don't try to keep it from them for as long as you can, because eventually they will come to know of it. One more thing: though the team may not react happily to every truth, they will not despise you, because firstly, you are being honest and not creating stories out of thin air, and secondly, they understand that things like this can happen. It's okay! Wrap Up The engineering team is a great team to work with, as they always bring a new perspective to the table and teach you things you may not have known. That said, it is essential for product managers to understand the key expectations of the engineers. This could take some time and can be challenging too. However, once you get into the habit, you will see your team become more enthusiastic while building rocking products for the customers! To the engineers who have read this: I would love to know whether you agree or disagree with these expectations of product managers. Also, how did you find this article overall? Let me know your valuable feedback in the comment section below. (For anyone looking for ways to build a rocking product, do check out this blog!) This post has been published on www.productschool.com communities
https://medium.com/agileinsider/meeting-the-engineers-expectations-as-a-product-manager-8d6b8ace3a92
['Ishita Mehta']
2020-09-02 11:06:58.546000+00:00
['Technology', 'Product', 'Product Management', 'Expectations', 'Product Manager']
2,860
TechNY Daily
TechNY Daily Keeping the NYC Tech Industry Informed and Connected 1. Brooklyn’s Cityblock Health, an Alphabet (Google) urban healthcare spinoff, has raised $160 million in a Series C funding at a valuation of over $1 billion. Investors included General Catalyst, Wellington Management, Kinnevik AB, Maverick Ventures, NYC’s Thrive Capital and Redpoint Ventures. The company currently provides services to 70,000 Medicaid and Medicare beneficiaries across four cities who live in neighborhoods that have traditionally gone without sufficient health services. (www.cityblock.com) (TechCrunch) 2. Reddit is acquiring NYC’s Dubsmash, a TikTok competitor. The financial terms of the deal were not disclosed. Dubsmash had raised $20 million from investors including Sunstone Life Science Ventures, Lowercase Capital, Index Ventures, Heartcore Capital, Eniac Ventures and Balderton Capital. (www.dubsmash.com) (Press Release) 3. NYC’s Pico, a technology services provider for quant trading, has raised $135 million in a Series C funding. Intel Capital led the round and was joined by EDBI and CreditEase Fintech Investment Fund. The company’s Piconet is a low-latency network providing connectivity to market data and trading venues. (www.pico.net) (GlobeNewswire) 4. NYC’s Databand, an AI-based observability platform for data pipelines, has raised $14.5 million in a Series A funding. Accel led the round and was joined by Blumberg Capital, NYC’s Lerer Hippeau, Ubiquity Ventures, Differential Ventures, F2 Venture Capital and Bessemer Venture Partners. The company’s platform is designed to ensure the reliable delivery of high-quality data for businesses. Databand is one of the many Israeli startups in NYC. (www.databand.ai) (TechCrunch) 5. NYC’s Fakespot, a real-time monitor detecting fake products and reviews, has raised $4 million in a Series A funding. Bullpen Capital led the round and was joined by Graph Ventures, Ty Shay, 500 Startups and Faith Capital Holdings.
Fakespot’s Chrome plugin detects fake sellers and counterfeits on Amazon, Walmart, and Shopify sites. (www.fakespot.com) (Cheddar) _________________________________________________________________ Small Planet partners with the world’s most innovative companies to create inspired apps and websites. Development, UX design, monetization strategies, user acquisition, and more. Contact us. (Sponsored Content) _______________________________________________________________ 6. NYC’s Beyond Identity, a passwordless identity management platform, has raised $75 million in a Series B funding. Investors included NEA, Netscape founder Jim Clark and Koch Disruptive Technologies. The company seeks to replace passwords “with fundamentally secure” X.509-based certificates. (www.beyondidentity.com) (VentureBeat) 7. NYC’s Ro, a healthcare technology company, has acquired Workpath, a Richmond, Va.-based platform for healthcare companies. The financial terms of the deal were not disclosed. Workpath’s platform, which coordinates on-demand, in-home care and diagnostic services, will remain a standalone platform but will also be integrated with Ro’s three digital healthcare clinics and pharmacy. (www.ro.co) (www.workpath.co) (Press Release) 8. Brooklyn’s Gotham Greens, an operator of tech-enabled greenhouses, has raised $42 million in a Series D funding. Manna Tree led the round and was joined by investors including The Silverman Group. The company also raised $45 million more in a debt round. Gotham Greens’ eight greenhouses, including facilities in Gowanus, Greenpoint and Jamaica, Queens, sell 35 million heads of lettuce annually to retailers such as Whole Foods. (www.gothamgreens.com) (Forbes) 9. Long Island’s Soil Connect, an online dirt marketplace for the construction industry, has raised $3.3 million in seed funding. 
TIA Ventures and Heartland Ventures led the round and were joined by CEMEX Ventures, Great Oaks Venture Capital, Situs Real Estate, Altmark Group, AB Investment Group, J.G. Pertucci Company and Bazella Contracting. The company aims to eliminate the high costs and inefficiencies associated with the transport and management of soil. (www.soilconnect.com) (BusinessWire) We have special sale pricing on TechNY Daily sponsorship and advertising opportunities. For information, contact: [email protected] ____________________________________________ TechNY Recruit Jobs Job Postings are on sale. Contact us at [email protected] Lukka (new) We are a SaaS solution that makes crypto accounting easy. We are a trusted, blockchain-native technology team that is passionate about digital asset technology. Our team is continuously collaborating and designing new products and initiatives to expand our market presence. Technology and customers are at the center of our universe. We dedicate our energy to learning, building, adapting, and achieving impactful results. Customer Success Specialist Senior Front End Engineer Senior Software Engineer Software Test Engineer Third-Party Risk Manager or Director Account Executive SaaS and Data Sales Team Leader Circle (new) Circle was founded on the belief that blockchains and digital currency will rewire the global economic system, creating a fundamentally more open, inclusive, efficient and integrated world economy. 
Solutions Engineer Director, Account Management Enterprise Sales Director (Banking and Financial Services) Senior Software Engineer, Frontend Manager, Software Engineering Agilis Chemicals Transforming chemical industry with modern commerce technology Full-stack Engineer Business Development Manager — Enterprise SaaS Marketing Director — Enterprise SaaS LiveAuctioneers Awarded for four consecutive years as one of Crain’s 100 Best Places to Work in NYC, LiveAuctioneers is the largest online marketplace for one-of-a-kind items, rare collectibles, and coveted goods. Product Marketing Manager Senior Marketing Manager Summer Summer’s mission is to help the 45 million Americans burdened by student debt save time and money through smart, algorithm-based recommendations. Summer combines policy expertise and innovative technology to serve student loan borrowers across the country. Back-End Engineer Logikcull.com Our mission: To democratize Discovery. Enterprise Account Executive The Dipp A personalized subscription site for pop culture’s biggest fans. Director of Engineering Ridgeline Founded by Dave Duffield (founder and former CEO of Workday and PeopleSoft) in 2017, Ridgeline’s goal is to revolutionize technology for the investment management industry. We are building an end-to-end cloud platform on a modern infrastructure using AWS, serverless technologies, which up to this point hasn’t been done in the enterprise space. Software Engineering Manager, UI Simon Data Simon Data is the only enterprise customer data platform with a fully-integrated marketing cloud. Our platform empowers businesses to leverage enterprise-scale big data and machine learning to power customer communications in any channel. Senior Front-End Engineer Product Designer Director, Enterprise Sales Full Stack Engineer Vestwell Retirement made easy. Senior Fullstack Engineer Package Free Package Free is on a mission to make the world less trashy though offering products that help you reduce waste daily. 
We source our products from individuals and brands with missions to create a positive environmental impact and since launching, have diverted over 100 million pieces of trash from landfills. Head of Operations Hyperscience Hyperscience is the automation company that enables data to flow within and between the world’s leading firms in financial services, insurance, healthcare and government markets. Founded in 2014 and headquartered in New York City with offices in Sofia, Bulgaria and London, UK, we’ve raised more than $50 million raised to date and are growing quickly. We welcome anyone who believes in big ideas and demonstrates a willingness to learn, and we’re looking for exceptional talent to join our team and make a difference in our organization and for our customers. Machine Learning Engineer Senior Security Engineer Braavo Braavo provides on demand funding for mobile apps and games. We offer a flexible and affordable funding alternative to the traditional sources of capital like working with a VC or bank. We’re changing the way mobile entrepreneurs finance and grow their app businesses. Our predictive technology delivers on demand, performance-based funding, without dilution or personal guarantees. By providing non-dilutive, yet scalable alternatives to equity, we’re helping founders retain control of their companies. Business Development Manager VP of Marketing Head of Sales Yogi At Yogi, we help companies decipher customer feedback, from ratings and reviews to surveys and support requests. Companies are inundated with feedback, but when it comes to turning this data into actionable business decisions, most companies fall short. That’s where Yogi fits in. Full Stack Software Engineer Ordergroove We’re passionate marketers, engineers and innovators building the technology to power the future of commerce. 
We’re a B2B2C SaaS platform helping the world’s most interesting retailers and direct-to-consumer brands remove friction from the customer experience to deliver recurring revenue through subscriptions programs — shifting their consumer interactions from one-and-done transactions to long-lived, highly profitable relationships. Data Scientist Upper90 Upper90 is an alternative credit manager based in New York City that has deployed over $500m within 18 months of inception. Investor Relations Analyst Upscored UpScored is the only career site that uses data science to connect you with jobs suited specifically to you while automatically learning your career interests. Its AI-powered platform decreases job search time by 90%, showing you the jobs you’re most likely to get (and want) in less than 2 minutes. Data Engineer Senior Frontend Developer Senior Backend Developer Frame.io Frame.io is a video review and collaboration platform designed to unify media assets and creative conversations in a user-friendly environment. Headquartered in New York City, Frame.io was developed by filmmakers, VFX artists and post production executives. Today, we support nearly 1 million media professionals at enterprises including Netflix, Buzzfeed, Turner, NASA & Vice Media. Frontend Engineering Manager Sr. Swift Engineer Lead Product Designer Attentive Attentive is a personalized text messaging platform built for innovative e-commerce and retail brands. We raised a $230M Series D in September 2020 and are backed by Sequoia, Bain Capital Ventures, Coatue, and other top investors. Attentive was named #8 on LinkedIn’s 2020 Top Startups list, and has been selected by Forbes as one of America’s Best Startup Employers. Enterprise Account Executive Sales Development Representative Senior Client Strategy Manager Director of Client Strategy KeyMe NYC startup revolutionizing the locksmith industry with innovative robotics and mobile technology. 
Customer Experience Representative Inbound Phone Sales Representative Systems Software Engineer Button Button’s mission is to build a better way to do business in mobile. Enterprise Sales Director — New York Postlight Postlight is building an extraordinary team that loves to make great digital products — come join us! Full Stack Engineer Deliver your job listings directly to 48,000 members of the NYC tech community at an amazingly low cost. Find out how: [email protected] ____________ NYC Tech Industry Virtual Event Calendar December 16 Fundraising 201: How to Raise a Seed Round Efficiently Hosted by Startup Grind December 16 Rise Refresh: Remote Selling with Winning by Design Hosted by Rise New York January 14, 2021 Galvanize NYC: Data Science Demo Day Contact Us for Free Listing of Your Web-based Events Send us your events to list (it’s Free!) to: [email protected] TechNY Daily is distributed three times a week to 48,000 members of NYC’s tech and digital media industry. Connecting the New York Tech Industry Social Media • Mobile • Digital Media • Big Data • AdTech • App Development • e-Commerce • Games • Analytics • FinTech • Web • Software • UX • Video • Digital Advertising • Content • SaaS • Open Source • Cloud Computing • AI • Web Design • Business Intelligence • Enterprise Software • EduTech • FashionTech • Incubators • Accelerators • Co-Working • TravelTech • Real Estate Tech Copyright © 2020 TechNY, All rights reserved.
https://medium.com/@smallplanetapps/techny-daily-eaa3c2d4258
['Small Planet']
2020-12-16 20:50:05.093000+00:00
['Startup', 'Tech Jobs', 'Technology News', 'Technews', 'Venture Capital']
2,861
100 Words On….. Spectre & Meltdown
Spectre and Meltdown gave us two points to consider. First, your defence-in-depth strategy leveraging patching is more valuable than ever, especially in shared hosting environments, which appear to be most at risk. Second, let us consider vulnerabilities not for what they are but for what they could become. Because these are local exploits, layers of defence that include physical, technical, logical, and administrative methods must be robust to mitigate access to and exploitation of the vulnerabilities. While these are apparently “read only” vulnerabilities, information gained maliciously via these exploits could be used in future attacks. Stay informed, review your response strategy, and apply updates.
https://medium.com/the-100-words-project/100-words-on-spectre-meltdown-9211f292bb38
['Logan Daley']
2020-12-18 04:57:37.666000+00:00
['Meltdown', 'Information Technology', 'Cybersecurity', 'Spectre', '100 Words Project']
2,862
What is Kafka?
What is Kafka? Kafka is a streaming platform that can be used for storing, reading and processing data. It was created by three engineers at LinkedIn, who needed something faster and more scalable than a traditional MQ. Later in the article we will see why Kafka is faster than MQs. Kafka topic A Kafka topic is the feed to which a producer pushes messages/records and to which a consumer subscribes in order to consume messages. A topic is similar to a message queue, where data is pushed and popped; the difference here is that messages are never popped from a Kafka topic. Topics can have a retention period configured, which automatically deletes messages older than the retention period. Commit log A commit log is an append-only data structure that Kafka uses to store records. This makes the messaging system durable, persistent and sequential, and gives it better scalability and built-in fault tolerance, since the log can be copied and retained even after a consumer has consumed it. Producer A producer is an application that is a source of the data being pushed to one or more Kafka topics. For example, an application might push the details of every transaction that occurs to Kafka so that other systems that need transaction data can consume from that topic's log. Consumer A consumer is a client application that fetches data from a Kafka topic. For example, if another application needs the transaction data produced above, say to perform analysis on it, it will use consumer code to fetch the data from the Kafka cluster. Multiple consumers can consume from the same topic, unlike traditional MQs, and have their individual offsets. Multiple instances of a consumer application are grouped together as a consumer group.
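The commit-log and offset ideas above can be sketched in a few lines of Python. This is a toy model, not Kafka's actual storage engine; it only imitates three behaviours: records are appended with increasing offsets, reads are non-destructive (so several consumers can each track their own offset), and old records are dropped by time-based retention.

```python
import time

class CommitLog:
    """A toy append-only commit log modelling one Kafka topic partition."""

    def __init__(self, retention_seconds=60):
        self.records = []          # list of (offset, timestamp, value)
        self.next_offset = 0
        self.retention_seconds = retention_seconds

    def append(self, value):
        # New records always go to the end of the log with the next offset.
        offset = self.next_offset
        self.records.append((offset, time.time(), value))
        self.next_offset += 1
        return offset

    def read_from(self, offset):
        # Reading does not remove records, so many consumers can each
        # read from their own saved offset.
        return [(o, v) for (o, t, v) in self.records if o >= offset]

    def enforce_retention(self, now=None):
        # Records older than the retention period are deleted.
        now = now if now is not None else time.time()
        cutoff = now - self.retention_seconds
        self.records = [r for r in self.records if r[1] >= cutoff]

log = CommitLog()
log.append("txn-1")
log.append("txn-2")
first_consumer = log.read_from(0)   # sees both records
second_consumer = log.read_from(1)  # a second consumer starting at offset 1
```

Reading twice from offset 0 returns the same records both times, which is exactly the property that lets multiple consumer groups share one topic.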
This consumer group is assigned a consumer group id, and the topic partition offset of the consumer, i.e. the offset up to which the application has consumed data, is saved against this id. Kafka Brokers Brokers are the servers that together form a Kafka cluster by sharing information with each other. Brokers contain topics, partitions, and commit logs. Zookeeper is installed alongside Kafka brokers to maintain configuration, coordinate processing and manage data flow. Zookeeper Zookeeper is software developed by Apache: a centralized service used to maintain naming and configuration data and to provide flexible and robust synchronization between distributed systems. It keeps track of the status of Kafka cluster nodes as well as Kafka topics, partitions, etc. Zookeeper is used by Kafka for: Electing a controller: in a Kafka cluster, one of the brokers serves as the controller, which is responsible for managing the states of partitions and replicas and for performing administrative tasks like reassigning partitions. Keeping track of which brokers are alive. Keeping topics' metadata: which topics exist, how many partitions each topic has, where the replicas are, who the preferred leader is, and what configuration overrides are set for each topic. Keeping quota data: how much data each client is allowed to read and write. Tracking who is allowed to read and write which topic, which consumer groups exist, who is part of those groups, and the latest offset for each consumer group on each partition. Zookeeper uses ZAB (Zookeeper Atomic Broadcast), which is the brain of the whole system. In a system of multiple processes, all correct processes receive a broadcast of the same set of messages. It is termed atomic because either it eventually completes at all correct participants or all participants abort without side effects. An atomic broadcast should guarantee: If one correct server broadcasts a message, then all correct servers will eventually receive it.
If one correct server receives a message, then all correct servers will eventually receive that message. A message is received by each server at most once, and only if it was previously broadcast. The messages are totally ordered, i.e. if one correct server receives message 1 first and then message 2, then every other participating server must receive message 1 before message 2. When nodes fail and restart, they should catch up with the others. Kafka Streams Kafka Streams is an API built on top of the producer and consumer APIs, and is more powerful than the producer and consumer clients. Kafka Streams helps with more complex processing of records. With Kafka Streams, along with the messaging capability, Kafka also handles the streaming logic for you, so that you can concentrate on your business logic. Kafka Connect Kafka Connect is a tool for streaming data between Kafka and other data stores. It makes it easier to move large data sets, like an entire database, or metrics collected from an app server, into and out of Kafka topics. There are two types of connector: Sink connector: streams data from Kafka into other data stores like Elasticsearch or Hadoop. Source connector: delivers data from external data stores into Kafka. Check out the list of available connectors here. Inner Workings of Kafka: Since Confluent is the most widely used distribution, this article uses its setup to explain the architecture. Let's understand the basic working of Kafka. There are three main components: the Kafka cluster, the producer and the consumer. We will explore the roles of these components step by step. Producer's role: A producer creates data, typically in a JSON format. The message, using the producer API, is pushed to the Kafka cluster. We can add a key to a message, which is used to assign the message to a topic partition. For example, if GUIDs are used as keys, Kafka will use the different GUIDs of the messages to assign partitions to them.
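The key-to-partition mapping just described can be illustrated with a short sketch. Note this is a stand-in, not Kafka's real code: the Java client's default partitioner hashes keys with murmur2, while here `zlib.crc32` is used purely to keep the sketch dependency-free. The property that matters, the same key always mapping to the same partition, holds either way.

```python
import zlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    # Hash the record key and take it modulo the partition count.
    # Kafka's default partitioner does the same thing with murmur2;
    # crc32 is used here only for illustration.
    return zlib.crc32(key) % num_partitions

# The same key always lands on the same partition, which is what
# gives per-key ordering within a topic.
p_first = assign_partition(b"user-42", 3)
p_again = assign_partition(b"user-42", 3)
```

A deterministic hash means all records for one key form an ordered sequence on a single partition's commit log.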
Since keys are optional, when a key is not present in the message the producer assigns a partition to the first message at random and then uses round robin from the second message onwards. Look at this exhaustive list of configurations to customise the producer according to the needs of the application. All of them have a default value set, so if your application needs an extremely basic implementation you need not worry about the configurations. But if you care about persistence, ordering, timeouts etc., it is recommended that you go through them. Zookeeper's role: A Kafka cluster can be configured with a certain number of partitions and replica partitions. A produced message reaches one of the brokers in the Kafka cluster. Using the key, a partition is then assigned to this message. The key to understanding the essence of Kafka is to look at the importance of Zookeeper. Once a partition is assigned, the message is stored in the broker's I/O buffer. Depending on what the producer requested, an acknowledgment (ACK) is sent back to it. Kafka relies on the operating system to flush messages to permanent storage. Since an ACK can be sent before the data moves from the I/O buffer to persistent storage, there is a chance of losing the data if, say, the broker crashes after the ACK is sent but before saving the data. This is where replica partitions come into the picture. Once the leader partition receives a message, it forwards it to the follower replica partitions. If the producer requests an ACK only after the data is saved to all replicas' I/O buffers, the leader waits for ACKs from the followers, and once it receives ACKs from all active followers it sends an ACK back to the producer. It is highly unlikely that all the replicas go down before pushing the data to permanent storage. Replicas and ACKs are not mandatory, but they are highly recommended. A leader epoch number is used to make sure that follower replicas only accept messages from the current leader.
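The leader/follower acknowledgment flow above can be sketched as follows. This is a deliberate simplification (no I/O buffers, no leader epochs, no in-sync-replica tracking): it only shows that with `acks="all"` the producer's ACK is withheld until every follower has stored the record.

```python
class Replica:
    """A follower that appends records to its local log and acks."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def append(self, record):
        self.log.append(record)
        return True  # ack back to the leader


class Leader(Replica):
    def __init__(self, name, followers):
        super().__init__(name)
        self.followers = followers

    def produce(self, record, acks="all"):
        self.append(record)  # the leader stores the record first
        if acks == "all":
            # Forward to every follower and wait for all of their acks
            # before acknowledging the producer.
            if all(f.append(record) for f in self.followers):
                return "ack"
            return "error"
        return "ack"  # acks=1: leader-only acknowledgment


followers = [Replica("f1"), Replica("f2")]
leader = Leader("leader", followers)
status = leader.produce({"offset": 0, "value": "txn"})
```

With `acks="all"` the record is on three logs before the producer hears back, which is why losing it requires all replicas to fail at once.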
The leader epoch is a 32-bit number that increments every time a leader goes down and/or a new leader is selected. This number is attached to messages sent by the leader to the followers, so that replicas ignore an outdated leader's messages once it is back up again. Any new message is added to the end of the log. Once a message is copied to all the replicas, an offset called the high watermark offset is assigned to it. A consumer can read only up to the high watermark offset, and partitions truncate data to the high watermark offset. If the leader goes down before sending data to the followers, the high watermark won't have been updated; once a new leader is elected, all the followers, including the previous leader, will truncate their logs to the high watermark offset. Consumer's role: The consumer subscribes to the Kafka topic it needs to consume from. The consumer can also be configured using this exhaustive list of configurations. Auth can be added to Kafka to make sure consumers can only read the topics they have access to; enabling auth is not mandatory. Kafka uses the consumer group id to store consumer-related data. Kafka consumers belonging to the same consumer group share a group id, and data such as the offset the group has consumed up to, or which topics the group is authorized to access, is stored against it. The consumers in a group divide the topic partitions amongst themselves so that each partition is consumed by only a single consumer from the group. A topic's partition can have only one consumer instance per consumer group attached to it. Therefore, there should only be as many consumers under a consumer group id as there are topic partitions; otherwise the remaining consumers will be idle. For example, suppose there are 3 topic partitions and the consumer application is scaled to 5 instances, meaning there are now 5 consumers under the consumer group id trying to consume from the topic. Only 3 will be allowed to consume from the topic.
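The 3-partitions/5-consumers arithmetic above can be made concrete with a small sketch of partition assignment inside a consumer group. Kafka's real assignors (range, round-robin, sticky) live in the client and are more elaborate; this only illustrates the rule that extra consumers sit idle.

```python
def assign_partitions(partitions, consumers):
    """Round-robin each partition to one consumer. Every partition gets
    exactly one consumer from the group, and consumers beyond the
    partition count end up with an empty assignment (they sit idle)."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 3 partitions, 5 consumer instances in one group:
group = assign_partitions([0, 1, 2], ["c0", "c1", "c2", "c3", "c4"])
idle = [c for c, ps in group.items() if not ps]
```

Running this with 3 partitions and 5 consumers leaves two consumers with nothing to read, which is why scaling a consumer group past the partition count buys no extra throughput.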
The remaining 2 instances would remain idle. A consumer should subscribe to the topics it wants to consume. Once a topic is subscribed, whenever a message is pushed into that topic it will be consumed by the consumer. Why is Kafka faster than MQs
https://medium.com/@shrutiwadnerkar/what-is-kafka-5bf3b61ececb
['Shruti Wadnerkar']
2020-12-22 18:41:56.650000+00:00
['Technology', 'Software Engineering', 'Event Streaming', 'Kafka', 'Message Broker']
2,863
Even if the pandemic does lessen or go away this summer in the United States
And this may be just the first wave of a pandemic that could return in multiple seasons, all depending on whether it can be contained by physical distancing, a potential vaccine, or other preventive measures. “This is an extraordinarily transmissible virus, and I think it’s more transmissible than we recognize,” says Michael Mina, assistant professor of epidemiology at Harvard T.H. Chan School of Public Health.
Mina has “little faith” in the accuracy or extent of Covid-19 testing so far.
Between people who are sick but have not been tested and the unknown number of people carrying the disease without any symptoms and transmitting it, Mina and other epidemiologists say it’s completely unknown how many people are actually infected. “We really don’t know if we’ve been 10 times off or 100 times off in terms of the cases,” Mina says. “Personally, I lean more to 50 or 100 times off.” That means instead of more than a million cases in the world right now, there could be anywhere from 10 million to perhaps 100 million. It also means the extreme preventive measures like stay-at-home orders could last months, not weeks, Jeremy Konyndyk, a senior policy fellow at the Center for Global Development, tells the health news site Stat. Few doubt that the effects will be grave, no matter what the actual number of cases is right now. Total U.S. deaths from Covid-19 are projected to climb steeply in coming weeks and reach 93,531 by August 4, according to the Institute for Health Metrics and Evaluation (IHME), an independent global health research center at the University of Washington. The estimate, which includes a wide range of uncertainty, was cited by Dr. Deborah Birx, the White House coronavirus response coordinator. ‘It’s all coming soon’ “The spread and scope of Covid-19 is just immense, and that’s because it’s been spreading unchecked, and still is,” says Mark Cameron, PhD, an immunologist and medical researcher in the School of Medicine at Case Western Reserve University in Ohio. During the 2003 SARS epidemic, which was also caused by a coronavirus, Cameron worked at Toronto General Hospital; Toronto was the only major city that experienced an outbreak outside of China. That disease and this one yielded similar data on many measures except one: the ratio of severe to mild cases. “People who got SARS in 2003 got very sick very fast, so it was easy to identify them and isolate and treat them,” Cameron says.
Conversely, a far lower percentage of people had mild or no symptoms with SARS, so it did not spread as rapidly as Covid-19. Many of the same extreme preventive measures were taken in Toronto then as are being taken now in states across America, Cameron says; the 2003 epidemic was contained, and the virus was apparently eliminated from the human population. “This virus is very smart, and it spreads very easily,” Cameron says of Covid-19. “This is unprecedented. This is a 100-year pandemic.” The first U.S. case of Covid-19 was reported on January 21, 2020. This chart shows the number of new cases each day since then, through April 7. Credit: Johns Hopkins University School of Medicine New York’s coronavirus outbreak illustrates how population density fosters more rapid spread of a disease, Cameron and other experts say. But every city and state that has yet to see a severe outbreak of Covid-19 will get its turn, Cameron tells me. The curves will be similar, even if the total numbers of cases and deaths are lower. “Every state will experience their own curve and their own peak,” he says. Mina agrees, adding another twist that has yet to play out: It’s known that people with underlying health conditions are at greater risk for severe symptoms and death from the coronavirus. Areas with a high proportion of people who have heart disease and diabetes may therefore see higher death rates than what’s been observed so far. Mina cited Memphis, New Orleans, and Atlanta as three examples. “These are places that I think have the potential to be hit very hard,” Mina says. “It’s all coming soon.” Making matters potentially worse, unlike New York and other major cities that have large, highly capable hospitals, many rural communities have none. St. John the Baptist Parish in Louisiana, near Baton Rouge, has the highest per capita coronavirus mortality rate in the nation right now and exactly zero hospitals, according to Politico.
How the curves ultimately play out depends largely on the extent to which preventive measures are put in place, kept in place, and followed by the public. “Our estimated trajectory of Covid-19 deaths assumes continued and uninterrupted vigilance by the general public, hospital and health workers, and government agencies,” says Dr. Christopher Murray, director of IHME, the organization publishing the death projections cited by the White House. “The trajectory of the pandemic will change — and dramatically for the worse — if people ease up on social distancing or relax with other precautions. We encourage everyone to adhere to those precautions to help save lives.” Time to prepare Meanwhile, cities, counties, and states fortunate enough to have watched the havoc unfold elsewhere have had an opportunity to take strong spread-prevention actions while simultaneously making preparations akin to a war footing while case counts are relatively low. One example of a state in waiting is Ohio, where Governor Mike DeWine issued a strong stay-at-home order on March 23. “We haven’t faced an enemy like we are facing today in 102 years — we are at war,” DeWine said. That strict order and other physical distancing measures help explain why Ohio has about one-fourth as many cases as neighboring Michigan, says Dr. Robert Salata, a professor of medicine in epidemiology and international health at Ohio’s Case Western Reserve University. Salata leads the medical response in an 18-institution unified command center, similar to what the military uses in a time of war. “And this is a time of war,” he says in a phone interview. Cases are starting to spike, and Ohio is in week one of a four-week ramp-up to an expected peak, he says. “It’s not totally chaotic or a real crisis at this point,” Salata says.
“But it can become so, and we’re preparing for that inevitability.” He and colleagues are taking a variety of measures: eliminating elective surgeries and cutting back on even semi-urgent care to reduce occupancy in the system to just 60% — much lower than normal; working with a local biodefense company to figure out how to resterilize and reuse protective gear; figuring out when infected hospital workers can return to work, by testing 10 days after symptom onset and again 24 hours later, and letting them return if they test negative even if they still have a cough (wearing a mask, of course); and using this low-volume window of opportunity to study the Covid-19 cases they do have, including through clinical drug trials. “Ohio has had more time to watch what’s been happening in Seattle and the Bay Area and New York to be proactive instead of reactive,” Cameron says. “But at the same time, what I’m seeing in the case data, in general every city or county is experiencing the same type of curve. Ohio might have been a little lucky so far, and population density and measures that were taken proactively will help us, but we can’t be complacent in terms of this spread and the case rates of infection that have been seen everywhere else.” 72% of all counties probably already have an outbreak Given the underreporting of total cases, disease modelers at the University of Texas at Austin used the available data to project the likelihood that any given county in the United States already has an outbreak, meaning sustained human transmission, whether they realize it or not. “If a county has detected only one case of Covid-19, there is a 51% chance that there is already a growing outbreak underway,” the researchers state. “Covid-19 is likely spreading in 72% of all counties in the U.S., containing 94% of the national population.” Probability of ongoing Covid-19 outbreaks for the 3,142 counties in the United States.
The chance of an unseen outbreak in a county without any reported cases is 9%. A single reported case suggests that community transmission is likely. Source: Emily Javan, Dr. Spencer J. Fox, Dr. Lauren Ancel Meyers “Proactive social distancing, even before two cases are confirmed, is prudent,” conclude researchers Emily Javan, Dr. Spencer Fox, and Dr. Lauren Ancel Meyers. “Although not entirely surprising, these risk estimates provide evidence for policymakers who are still weighing if, when, and how aggressively to enact social distancing measures.” Second wave… and then more Meanwhile, worst-case scenarios are not inevitable, says Dr. Harvey Fineberg, president of the Gordon and Betty Moore Foundation, a philanthropic organization, and former president of the U.S. National Academy of Medicine. “That choice begins with a forceful, focused campaign to eradicate Covid-19 in the United States. The aim is not to flatten the curve; the goal is to crush the curve,” Fineberg writes in an April 1 editorial in the New England Journal of Medicine. “China did this in Wuhan. We can do it across this country in 10 weeks.” But that would require quickly taking a far more aggressive approach than the current U.S. response, with six big steps, including: establishing a unified command at the federal level and for each state; solving the shortage of protective gear for health care workers; and making available millions of diagnostic tests. While what is done now is vital, decisions in the coming weeks and months could prove similarly weighty. “We know from the SARS 2003 outbreak in Toronto that there is a well-documented wave of second infections caused by letting some of the close-contact and personal protective equipment (PPE) precautions be relaxed because they felt they were on the other side of the curve,” Cameron says. “Turns out they weren’t, and a new curve, a new outbreak, a second wave, occurred in Toronto.
We need to avoid that.” Already some Asian countries that had flattened their curves are seeing resurgences in new cases, including Singapore and Taiwan, according to the New York Times. Next season and beyond Further ahead looms another great unknown: whether Covid-19 will subside with warmer temperatures, as is typical of some coronaviruses and influenza. Don’t bank on it. “Unlike seasonal influenza or common colds, where the transmission chains get easier to break in the warm, humid, summer months, especially amongst communities with herd immunity, we cannot count on Covid-19 relenting simply because of a change of season,” Cameron says. “Covid-19 has already bulldozed through multiple different climates in the Northern and Southern hemispheres quite easily.” Among South American countries as of April 4, Brazil had 7,910 diagnosed cases and 299 deaths, Chile had 3,737 cases and 22 deaths, and Ecuador had 3,163 cases and 120 deaths. Even if the pandemic does lessen or go away this summer in the United States, that won’t mean it’s gone. “It is more likely that Covid-19 will spread relatively unchecked by seasons until the surges and curves have run their course… in populations worldwide, then return seasonally, hopefully put in some check by those amongst us with preexisting immunity by having had it already,” Cameron says. A vaccine would also help, of course, but that’s thought to be months away. Breaking the chain of transmission Cameron cites the 2009 H1N1 “swine flu” pandemic as an example of how viruses can flout the seasonality rule. “It took firm hold in the spring and summer months of 2009 in Mexico and the U.S., spreading virtually worldwide from there until August 2010.” The influenza pandemic of 1918–19 cycled through multiple seasons across two years, ultimately killing some 675,000 people in the United States and more than 50 million around the globe. 
(In Boston, a second wave hit during the first season when World War I ended and large crowds gathered to celebrate.) If there is a notable dip in Covid-19 cases this summer, all it takes to reemerge in the fall is for infected people, whether from south of the equator or from a fresh pocket in the Northern Hemisphere, to travel. “There has to be a chain of human transmission to support the seasonality of a particular illness,” Cameron explains. “So, breaking that transmission is absolutely critical. If it finds enough of a foothold in enough places that we don’t detect, it will reemerge in the fall.”
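Mina’s back-of-the-envelope estimate earlier in the piece — reported cases multiplied by an underreporting factor of 10 to 100 — amounts to a one-line calculation. The sketch below is purely illustrative: only the roughly one-million worldwide reported-case figure and the 10x–100x factor range come from the article.

```python
# Illustrative sketch of the underreporting arithmetic described above.
# The ~1 million reported-case figure and the 10x-100x factor range are
# from the article; everything else is just multiplication.

def estimated_true_cases(reported: int, underreporting_factor: float) -> int:
    """Scale reported cases by an assumed underreporting factor."""
    return int(reported * underreporting_factor)

reported_worldwide = 1_000_000

low = estimated_true_cases(reported_worldwide, 10)    # if counts are 10x off
high = estimated_true_cases(reported_worldwide, 100)  # if counts are 100x off

print(f"Estimated true infections: {low:,} to {high:,}")
```

With these inputs the range comes out to 10 million to 100 million, matching the figures quoted in the article.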
https://medium.com/@iilham-kiki-e/even-if-the-pandemic-does-lessen-or-go-away-this-summer-in-the-united-states-9d8e994718bb
['Iilham Kiki E']
2020-12-27 14:24:10.287000+00:00
['Sports', 'Technology', 'Social Media', 'News', 'Covid 19']
2,864
Diversity is key to advancing technology forward
Great technology starts with great code — and achieving great code starts with every individual having a seat at the table. It’s the diversity of perspective, expertise, and experience working in concert that brings transformative innovations to life. In response to the murders of George Floyd, Ahmaud Arbery, Breonna Taylor, and too many others, Black IBMers, in their pain, frustration, and exhaustion, had a moment of clarity. They had a vision of an America without social injustice — a vision of using technology to help combat systemic racism. As a result, Call for Code for Racial Justice was born. The movement encourages people across the globe to channel their frustrations about systemic racism in a tangible and authentic way by getting involved. The projects cover areas as diverse as Police & Judicial Reform and Accountability, Diverse Representation, and Policy & Legislation Reform. To improve diversity in the tech industry and racial tolerance in the country, we need novel, tangible ideas. Check out the latest Smart Talk podcast episode featuring IBM Distinguished Engineer Lysa Banks and Vice President and Distinguished Engineer Dale Davis Jones, along with Creative Architect Britni Lonesome, to hear what they have to say about their involvement with the Call for Code for Racial Justice global initiative and how listeners can get involved. Learn more here.
https://medium.com/callforcode/diversity-is-key-to-advancing-technology-forward-138e85b4b1e9
['Call For Code']
2020-11-19 18:57:03.159000+00:00
['Code', 'Racial Justice', 'Open Source', 'Technology', 'Developer']
2,865
[Founder Feature @TC] — March 2020
Name: Benny Low, Co-Founder, 50yo, Singaporean Technopreneur Circle member: Since November 2017 Company: DataVLT (AI-enabler for enterprises) Website: www.datavlt.com Fun facts about me: I enjoy going on solo motorcycle rides around ASEAN, from Singapore. It would be a dream come true if I could someday ride from Singapore to Europe and cross it off my bucket list! The company under 20 words: DataVLT provides artificial intelligence as an outsourced solution. What my company name & logo means: DataVLT is a combination of two words — Data & Vault. Combined, ‘DataVLT’ implies a melting-pot repository for algorithms, models, data and insights. Our logo is an interpretation of the dynamism of data in fluid bars. Why I started the company & my journey so far: I started the company with two other co-founders, Michelle and Willy, in late 2016. Our purpose came to us when we saw the startling decline in the number of SMEs in Singapore — this is not encouraging for any local economy. We then decided to help companies scale their impact with artificial intelligence in Singapore and in the region. What’s incredible is that we also gained a lot of traction with large enterprises. Many companies that have data but lack AI capability have been approaching us to work together. We’re now focused on serving industries such as Engineering, Consumer Goods, and Supply Chain / Logistics. Our latest win is gaining recognition from A*STAR’s Advanced Remanufacturing and Technology Centre as well as securing a project to predict the sequence of jet engine faults for commercial planes. My lightbulb advice: Always be open to criticism and learning opportunities. It might sound cliché, but many young entrepreneurs are so ‘married’ to their ideas that they refuse to pivot, innovate or gather feedback, and that can flatline the business. Changes in ideas are usually unavoidable, so embrace them and go with the flow!
Never lose sight of your core values and the intent behind why you started your entrepreneurship journey, as they will serve as your compass. Lastly, pursue your dream with fervor and smarts, and give it everything you’ve got. Success will come knocking eventually. I am looking for: Investors, Mentors, Advisors, Partners, Talents (team members) You can reach me @ [email protected] Technopreneur Circle is a member-based community platform funded by Vertex Ventures for tech startups to connect, communicate, and collaborate. More information available at www.technopreneurcircle.com. Contact us: [email protected]
https://medium.com/@techcircle/founder-feature-tc-march-2020-eeb9c6a618ff
['Technopreneur Circle']
2020-03-03 08:37:47.041000+00:00
['Startup', 'Founders', 'Founder Stories', 'Technology', 'Entrepreneurship']
2,866
Why Do Customers Prefer E-commerce Apps & How Does It Benefit Sellers
E-commerce Is The Real Money Maker In 2021 Did You Know! The app economy is predicted to amount to $6.3 trillion within five years. Yes, “TRILLION”. This boom is significantly due to the rise of mobile app development solutions for commerce, which are scaling rapidly to capture the projected 6.3 billion users who will use m-commerce by 2021. Those users are expected to spend $946 on mobile commerce each year — approximately three times the present figure. This is not only the potential to clear the boulders in the path of mobile e-commerce, but a windfall opportunity to stand out from the crowd with far and away the best experience. We believe the rise of mobile e-commerce shows an unprecedented drift in market behaviour. An interactive, well-designed app will breed loyal customers, accommodate users’ immediate needs, and extend your services to digital natives and Boomers alike. All you need to do to capitalize on this opportunity is enlist the right software development company. Our experts use a plethora of approaches, from in-app purchases to upgrades, to bring your idea to life. So, time to pave the way for what is to come! “Shop Till You Drop” — The Real Money Maker The notion that paid apps are the real moneymakers is a myth. In-app purchases are far more profitable — these cover everything from money transfer and ticketing to location-based services like weather forecasting and local discount coupons. Many of the buzziest apps are m-commerce apps: popular apps like Uber, Venmo, and Instagram all integrate m-commerce functions. Proficient developers will help you choose the features that best suit your business and branding objectives. As transactions switch online, away from brick-and-mortar stores, m-commerce can be an effective way to turbocharge your profits.
It comes in numerous forms, from taxi apps to gaming apps to online shopping applications. Mobile commerce solutions also allow users to personalize their online purchase experience, from networking with other users to changing the visual experience. Whatever it does, we love making business as simple as ABC. First and Foremost, Discover The Benefits Of A Mobile App For E-commerce For Your Business Your e-commerce store will never accomplish its full potential with a website alone. If you run down the most promising e-commerce trends of 2021, you will soon see that everything is heading to mobile. So, once again, the question strikes your door: “what are the benefits of e-commerce app development?” Too often, I consult with business owners who are content with the growth, sales, and revenue of their e-commerce website, so they don’t see a reason why a mobile app is essential. That acted as the catalyst for writing this blog. So, if you want to keep your brand and website relevant in 2021 and beyond, you undoubtedly need to develop an e-commerce app. Here’s Why: - Mobile Commerce Is Skyrocketing Sales from mobile devices have wholly taken over the e-commerce industry. So, it’s the right time to have a look at your sales metrics and observe what devices your customers are shopping from. There is a high probability that a large chunk of people are browsing and purchasing using smartphones and tablets. In fact, 67% of all eCommerce sales across the globe are done via mobile devices. Within the coming years, mobile commerce will account for 73% of e-commerce sales across the globe, making it essential to hire mobile app developers for accelerated business growth. As people are already inclined to shop on their mobile devices, it’s more accessible than waiting to get on a computer. Our mobiles are always within arm’s reach, which is not the case with desktops and laptops.
Consumers buy from anywhere and anytime. They shop during their lunchtime, while grabbing their coffee, or while enjoying a walk. It’s just too easy. As mobile sales continue to trend upward in the coming years, the best way for your eCommerce site to get its share of the pie is with a mobile app. Why? Well, e-commerce apps are growing at a rate of 54% year-over-year — the highest of any app category. - Everyone Prefers Mobile Apps Are you racing to generate e-commerce sales from mobile devices without a mobile app? I assume the answer is yes! Some of you may glance at that data and be satisfied that you are reaching smartphone and tablet users through your mobile website. But the truth is you are barely scratching the surface. Don’t get me wrong: whether you have an app or not, a mobile-friendly website is also required. Indeed, 85% of people say a company’s mobile website should look as good as, or better than, its desktop version. Furthermore, 88% of customers are less likely to return to a mobile site after an atrocious experience, and 47% of people expect a mobile site to load in 2 seconds or less. Therefore, having a mobile-friendly website is an initial step towards generating e-commerce sales through mobile. But, in the end, what matters most is the statistics: 78% of consumers would rather make a purchase from an e-commerce app than from a mobile site. Let’s paint a clear picture. Suppose you are presently getting 100 mobile transactions per week; approximately 80 of them are buying through mobile apps, and those are only your present customers. Just imagine how many people turn a blind eye to your brand because you don’t have an app. Here are the top reasons why many people adopt mobile apps over mobile websites: speed and convenience top the list.
After all, mobile apps just offer a better shopping experience. Have a look at your own habits: the next time you go to buy something for yourself online, would you rather buy it from a website or a mobile app? Irrespective of how fast or responsive a website is, an app will always provide a more streamlined user experience. So, if your e-commerce website doesn’t have a mobile app, you are neglecting the majority of the market while giving your existing customers a less than optimal experience. Thus, it makes sense to align with a trusted e-commerce app development company. - Keep Abreast Of The Competitive Trajectory Presently, without a mobile app, your e-commerce website is at a disadvantage. Customers would rather shop from apps, so it’s only a matter of time before they stop purchasing from your mobile site altogether. The fact that you are still getting any mobile sales right now is a good sign, but it won’t last forever. On the contrary, being an early bird to develop a mobile e-commerce app will give you an advantage over other websites without one. If a consumer is torn between your site and a rival’s, an app will give them a reason to lean towards making a purchase from you. Right now, every e-commerce website on the planet is competing with the big gun Amazon. You can buy basically everything under the sun from them. In the US, around 95 million people have an Amazon Prime membership. But the question arises: “how unique are you?” So, anytime someone visits your website, they’ll always be carrying the notion of the “Amazon Experience” in mind. And if, somehow, you are unable to meet those standards, then the customer won’t have a reason to buy from you. The only way to truly replicate this shopping experience is with a mobile commerce app. Here is something to keep in mind! A mobile app will give you a competitive edge today, but that won’t be the case forever.
So, it’s in your best interest to build one now; that way you will have a head start on everyone else who is late to the party. - A Greater Conversion Rate Let’s jump back to the numbers. At the end of the day, everything you do should boost your KPIs and ultimately benefit your bottom line. Let’s take a look at the difference between mobile app conversions and mobile website conversions. When you look at these numbers side by side, the app is obviously the clear winner. Consumers view 286% more products and add items to their shopping carts at an 85% higher rate when they are shopping from a mobile app as opposed to a mobile browser. This is splendid because shoppers buy more than they initially came for, and when you hire software developers or e-commerce professionals, you will have the opportunity to discuss this with them in detail. Products viewed per session and add-to-cart rates are awesome, but the only thing that really matters is conversions. On average, mobile apps convert at a 130% higher rate than mobile websites. So, imagine how much money you can make by increasing conversion rates by 130%. That doesn’t even account for how many new customers you will have once your app is live. The bottom line is clear: more conversions convert to more dollars, and mobile apps drive higher conversion rates. - A Personalized Shopping Experience Personalization is an essential aspect of e-commerce success. Some of you may already be using personalization tactics to drive sales. The majority of e-commerce websites use some basic tactics. A mobile e-commerce app takes personalization to the next level. With an app, you will be able to track the user’s browsing and shopping history to offer custom recommendations. This is possible from a website as well, but it’s pretty reliant on the client always being logged in, so it’s much easier to accomplish from an app.
63% of consumers look for a personalized experience when they are shopping, especially online — up from 57% just two years ago. So this is another upward trend. - A Lower Cart Abandonment Rate Shopping cart abandonment is a prominent problem for e-commerce businesses and one of the most crucial KPIs you should be keeping track of. Cart abandonment is frustrating because the customer is usually just one or two clicks away from finalizing the purchase. So, what’s wrong? Here are some of the top reasons for shopping cart abandonment: could not see or calculate the total cost — 23%; added costs like shipping and taxes — 60%; errors or website crashes — 20%; forced to create an account — 37%; long and complex checkout process — 28%; didn’t trust the website with credit card information — 19%. Besides money, the most common reasons for cart abandonment relate to the user’s experience. Thankfully, mobile apps eliminate those problems quite well. As you can observe, mobile apps have the lowest cart abandonment rate compared to mobile websites and desktop sites. E-commerce mobile apps minimize friction in the checkout process. Once a customer buys something, all of their preferences are saved in the app’s settings by default. So, when the time comes to make a purchase again, they won’t have to re-enter details like their shipping address and billing information. The whole purchase process can be completed in just a couple of clicks. Mobile apps also make it easier for you to accept alternative payment methods like Apple Pay, Google Pay, or PayPal. When you integrate mobile app development solutions, make sure to include this on the checkout page; a user can then buy something just by scanning a fingerprint. Each additional step in the checkout process gives the client a chance to abandon their cart.
By minimizing the number of steps with a mobile app, you will be able to keep cart abandonment rates minimal. - A Growing Retention Rate Your eCommerce sales may make you very happy at any given moment, but sales alone don’t tell the complete story. How many of them come from repeat customers? Studies show that acquiring new customers can be up to 25 times costlier than selling to a current one, while increasing customer retention by just 5% can increase profits by 25% to 90%. Have a look at the average eCommerce mobile app retention rates: 38% of users return to an eCommerce app 11 times or more after they download it, and there is a pretty good chance they will be buying on multiple occasions during those 11+ visits. Mobile users return to the app frequently as well. In fact, there is a 50% chance that a mobile user will return to a mobile e-commerce app within 30 days of buying a product. Even if a user is not coming back to the app as much as you would like, you can always use push notifications to bring them back: offer a flash sale or an exclusive deal to motivate users to buy from the app. You won’t have the same opportunities with a desktop-oriented website or mobile-friendly site. MOBILE E-COMMERCE FEATURES Lastly, consider the mobile e-commerce toolkit. There are many different features that might end up in a commerce app. For starters, you have: product pictures, product reviews, secure payment options, product descriptions, user profiles, wishlists, social login, user analytics, ApplePay, push notifications, community messaging… But the first step in developing a product will be zeroing in on the core features you need to put out in an MVP. That’s where an expert and experienced e-commerce app development company comes in. They are skilled at approaching your business from a strategic standpoint and evaluating what will best solve your users’ needs.
Not the bells and whistles, not a long, indented list of deliverables, but a reliable strategy to create a brilliant app with aesthetics users fall in love with. Starting right is half the work done! Mobile e-commerce doesn’t simply allow your clients to “shop”; rather, it wraps online exploration and identity into a larger experience. What Sets The Best Apart From The Rest The trusted place to hire e-commerce app developers is always just a click away. The right talent is one with a proven track record: a global leader in strategy, design, and development of mobile e-commerce applications. Such professionals are there to help your business step into the mobile arena equipped with the best tools to boost user engagement and revenue. Thus, every app created is individually crafted for clients globally, from concept to wireframe to design and development. In A Nutshell… The past few years have been disruptive and enthralling at the same time for many industries. With digital innovation, technological advancement, and surging globalization, every industry is witnessing a drastic shift towards e-commerce adoption with the help of an experienced e-commerce development company. In this change, countless businesses have aligned their conventional ways of doing things with the speed of innovation. The more retail evolves, the more pampered the consumer becomes, and the deeper they seem to connect with the brands they purchase from.
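The conversion-rate claim earlier in the piece — mobile apps converting at a 130% higher rate than mobile websites — lends itself to a quick sanity-check calculation. The sketch below is illustrative: the visitor count and the 2% baseline conversion rate are hypothetical assumptions, while the 130% uplift is the figure quoted in the article.

```python
# Illustrative sketch of the conversion-rate arithmetic quoted above.
# The 130% uplift is the article's figure; the visitor count and the
# 2% baseline mobile-web conversion rate are hypothetical.

def conversions(visitors: int, rate: float) -> float:
    """Expected number of orders for a given visitor count and conversion rate."""
    return visitors * rate

visitors = 10_000
web_rate = 0.02            # assumed baseline: 2% of mobile-web visitors order
app_rate = web_rate * 2.3  # "130% higher" means 2.3x the baseline rate

web_orders = conversions(visitors, web_rate)
app_orders = conversions(visitors, app_rate)

print(f"Mobile web: {web_orders:.0f} orders; app: {app_orders:.0f} orders")
```

With these assumed inputs, the same traffic yields 2.3 times as many orders through the app, which is exactly what a 130% higher conversion rate means.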
https://medium.com/flutter-community/why-do-customers-prefer-for-e-commerce-apps-and-how-does-it-benefit-sellers-e2ad175b36e7
['Sophia Martin']
2021-01-05 05:32:34.524000+00:00
['Mobile App Development', 'Technology', 'Apps', 'Business', 'Startup']
2,867
Scandinavian Stories: Europe’s Thriving Smart Hub
When it comes to smart technology, the first examples that come to mind are megalopolises like Shanghai, Singapore, Seoul, and Dubai. But there is a fast-growing smart hub here in Europe. Scandinavian countries are quickly implementing a variety of smart solutions in every possible sphere — from public transit to education. What good examples can we look up to? And how can we learn from their experience with intelligent technologies? Let’s find out! What drives the smart revolution in Sweden, Denmark, and Norway? It’s no surprise that the Scandinavian region is leading the smart transformation in Europe. Its population is highly technologically literate and expects governments to act in this direction. What’s more, these countries have pledged to contribute in a major way towards the European carbon neutrality scheme (Copenhagen even aims to become the first carbon-neutral capital by 2025). Sustainability has become an intrinsic part of the thinking in the region, which, combined with socially proactive citizens, is a major driving force for innovation at all levels. Finally, transparent procurement programs have helped governments attract top minds in the sustainability field from a global pool of innovators. Which are the best smart examples from Scandinavian countries? A single article, no matter how long, won’t be enough to list all of the smart achievements that make the Scandinavian region such a great place to live. To make our point, we’ve picked a handful of standout examples that will help you understand the innovation landscape — further reading is advised! Norway This beautiful country is on its way to significantly improving living standards and reducing harmful emissions in many ways. Smart buildings 40% of energy consumption globally comes from buildings.
Sensors that control lighting, heating, and cooling can drastically improve the energy efficiency of a building (as can proper insulation), and Norway is investing heavily in this type of innovation. Stringent energy use requirements for new builds, as well as a program for government-funded construction projects, have paved the way for the mass introduction of smart sensors even in private homes. This technology is becoming more and more accessible. Open data Open data is one of the prerequisites for building a truly smart ecosystem. Norway is at the forefront of open data sharing with its national registry for the public sector. It includes data about traffic, agriculture, demographics, and much more. This registry lets entrepreneurs make informed decisions in their day-to-day business, helping them build more sustainable products that reflect the needs of the citizens. MaaS integrations are only one such example. This public-private relationship provides the basis for an improved living standard for the whole population of the country. Oslo: MaaS innovations Oslo's transit authority, Ruter, has pledged to become emission-free by 2028. At the heart of this pledge lies the shared mobility dream of every MaaS innovator — mobility hubs that offer reserved parking (priority is given to electric cars), bikes, and more. This will facilitate moving around the city without the use of personal vehicles, thus reducing emissions and creating a healthier environment in terms of cleaner air, less stress, and fitter citizens. What's more, a massive sustainability project is underway near Oslo. Oslo Airport City will be a 1 million sq. m. commercial hub powered by sustainable energy. Denmark Denmark is also on its way to becoming green in many aspects — as a consequence of using smart tech. Copenhagen and Aarhus are the two cities that stand out, but many smaller communities are already benefiting from a variety of intelligent solutions.
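The smart-building idea above rests on simple sensor-driven decisions: light and heat a room only when its occupancy, daylight, and temperature readings call for it. A minimal rule-based sketch of such a controller might look like the following (the thresholds are invented for illustration, not taken from any real Norwegian deployment):

```python
# Minimal sketch of a rule-based smart-building controller.
# All sensor thresholds below are illustrative assumptions.

def control_actions(occupied: bool, lux: float, temp_c: float) -> dict:
    """Decide lighting and heating from simple sensor inputs."""
    return {
        # Light only when the room is occupied and daylight is insufficient.
        "lights_on": occupied and lux < 300,
        # Heat occupied rooms to 21 °C; let empty rooms drift down to 17 °C.
        "heating_on": temp_c < (21.0 if occupied else 17.0),
    }

# An occupied, dim, cool room needs both lights and heating.
print(control_actions(occupied=True, lux=120.0, temp_c=19.5))
# An empty, bright, warm room needs neither.
print(control_actions(occupied=False, lux=800.0, temp_c=20.0))
```

Even a rule this crude captures where the savings come from: energy is only spent when a sensor reading justifies it.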
Copenhagen: world's first carbon-neutral capital Copenhagen has set the bar high — but considering the steps the city is taking, 'world's first carbon-neutral capital by 2025' doesn't seem like an impossible feat. More than 250 businesses are taking an active part in this endeavour, and open data is again at the heart of innovation. Smaller businesses and start-ups can take advantage of great incentives, and public-private relationships are thriving to the benefit of citizens. The end goal is for the city to become completely fossil-fuel independent by 2050! Aarhus: public-owned citywide LPWAN Aarhus is one of the innovators in this field — and one of the first cities to implement such an extensive LPWAN, which allows devices and sensors to connect over long distances and at a lower cost. In a test setting in their City Lab, these sensors provide valuable information about metrics like temperature and humidity. They can also track human behaviour (completely anonymized) to optimize the city based on the needs of its citizens. Sweden Sweden is home to the first European green capital. With an amazing mix of R&D clusters, never-ending tech opportunities, and renewable energy, it is moving steadily towards being one of the most sustainability-led countries in the world. Sweden is also very proactive in working with global innovators to bring in the best of smart technologies as quickly as possible. The business atmosphere attracts many big names in the industry, but also many startups. Smart City Sweden An integral part of the government's approach to sustainability is Smart City Sweden — a state-funded national export and investment platform for smart and sustainable city solutions. This organization is located in Hammarby Sjöstad, a living lab and platform for urban projects in key sustainability areas: renewable energy projects, smart waste management, electric vehicles, water management, and even a citizens' communication platform.
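LPWAN sensors like the ones in the Aarhus City Lab transmit tiny binary frames to keep airtime and power use low, and the city's software decodes them into temperature and humidity readings. The sketch below decodes a hypothetical 4-byte frame (2 bytes of signed temperature in hundredths of a degree, 2 bytes of humidity in hundredths of a percent); real devices each define their own encoding, so the layout here is an assumption for illustration only:

```python
# Hedged sketch: decoding a compact LPWAN sensor payload.
# The 4-byte frame layout is hypothetical; real sensors vary.

def decode_payload(frame: bytes) -> dict:
    """Decode temperature (signed, 0.01 °C) and humidity (0.01 %RH)."""
    if len(frame) != 4:
        raise ValueError("expected a 4-byte frame")
    temp_raw = int.from_bytes(frame[0:2], "big", signed=True)
    hum_raw = int.from_bytes(frame[2:4], "big", signed=False)
    return {"temperature_c": temp_raw / 100, "humidity_pct": hum_raw / 100}

# 0x0834 = 2100 -> 21.00 °C; 0x157C = 5500 -> 55.00 %RH
print(decode_payload(bytes([0x08, 0x34, 0x15, 0x7C])))
```

Packing a reading into 4 bytes instead of, say, a JSON string is exactly what lets these sensors run for years on a battery over long-range, low-bandwidth links.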
Gothenburg: low carbon mobility Gothenburg is a relatively small city, but it compensates for its size with its enormous drive to become a green innovator. This drive helped position the city as no. 1 for sustainability and innovation among world cities in the 2017 Global Sustainability Index. Its approach includes a variety of strategies to help reduce carbon emissions from mobility: low-emission zones for heavy vehicles, emission-free electric buses, and cars running on renewable biogas, to name only a few. In conclusion What is the common thread that unites Scandinavian countries in their approach to smart innovation? For us, it's clearly rooted in data sharing with a human-first approach. All three countries (and even their neighbours) put the long-term wellbeing of their citizens first and foremost — industrial and political gains are a collateral effect. What we can learn from them is a voracious appetite for innovation and how to achieve the transparency needed for citizens to trust that smart investments benefit the whole community. We can also steal their approach to open data — it's the best there is!
https://medium.com/@telelinkcity/scandinavian-stories-europes-thriving-smart-hub-963499d114d2
['Telelink City']
2021-07-23 13:50:57.967000+00:00
['Smart Cities', 'Sweden', 'Norway', 'Denmark', 'Smart Technology']
2,868
How Daily Life and Work Changed Since the Covid Pandemic
Our world has witnessed drastic changes since the Covid pandemic. Imagine the days when roaming outside without any fear or hesitation, going to work, eating and travelling anywhere, meeting friends and dating were just a decision away. Life was absolutely amazing, challenging and busy, and we didn't even realise how quickly the day was spent. Do you know? The current world population is 7.9 billion as of May 2021, according to the most recent United Nations estimates elaborated by Worldometer. As per the WHO, globally, as of 11:36am CEST, 6 May 2021, there have been 155,506,494 confirmed cases of COVID-19, including 3,247,228 deaths, reported to the WHO. As of 4 May 2021, a total of 1,170,942,729 vaccine doses had been administered. Source: https://covid19.who.int/ Imagine the days when we used to wake up early for our daily routine. Getting ready mentally and physically for the day was an important task at hand. The soothing morning meditation, healthy exercise, relaxing shower, and hot tea with an amazing breakfast really set the mood for the day ahead. Later we had the choice of wearing our favourite dress and shoes for the office or college. After we dressed up, travelling to the office, school or college was absolutely fun. I must admit that there used to be huge traffic on the roads, and there was not only the noise of vehicles but also a lot of free advice heard while waiting at the red traffic signal. The same applies while travelling in a train or a bus. At that moment, it just reminded me of one popular saying that goes "Kitne Tejaswi log hai hamare paas?", which translates into English as "How many glorious people we have?" Since the Covid pandemic, all the daily hustle has vanished. Now every morning starts with watching the news about how many new Covid cases have been found. Wearing a mask, maintaining social distance and washing hands have become the new normal. Many entrepreneurs and workers have lost their jobs, and many companies have shut down. Technology has started replacing labour.
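The WHO figures quoted above also let us do a quick piece of arithmetic: dividing reported deaths by confirmed cases gives a crude case-fatality ratio (crude because it ignores undetected cases and reporting lags):

```python
# Worked arithmetic from the WHO figures quoted above (as of 6 May 2021).
confirmed = 155_506_494
deaths = 3_247_228

cfr_pct = deaths / confirmed * 100
print(f"Crude case-fatality ratio: {cfr_pct:.2f}%")  # about 2.09%
```

That is roughly 2 deaths per 100 confirmed cases worldwide at that point in the pandemic.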
Let's dive into some positive sides of the Covid pandemic: Increase in A.I. technology: Working in offices and factories does require human intervention and interaction for business to carry on. Due to the Covid pandemic, as there is a huge chance of virus transmission from human to human, many countries have adopted robotics or A.I. for cleaning infected areas and delivering food and medicine. Drones are utilized for patrolling areas to ensure that social distancing is followed, and for delivering medicine. Work from home: For companies, meeting client requirements has always been a priority, along with keeping the work environment comfortable for their employees. Work from home has now been introduced in many countries. Be honest: at some point while working in the office, there would have been a conversation with our friends along the lines of "Wouldn't it be better if we got work from home, so we could not only work but also spend time with our dear family?" or "What if we could enjoy tea and dinner with our family at home during our work break?" Work from home has been a boon, ensuring not only that daily targets are met with ease but also that our family relationships strengthen. Rewards and recognition via Zoom and Microsoft Teams gave us the opportunity to connect with many other achievers at our company. Online entertainment: When was the last time you went to a theatre? Imagine booking the movie tickets, the feeling of walking through the doors and sitting in the chairs in front of a big movie screen.
Watching your favourite actor or actress on screen along with your friends, family or partner was everyone's desire, with the audience shouting and whistling at their favourite scenes. I am sure a lot of us miss those memorable moments. Hence, several film production companies have started releasing movies or web series via Over The Top (OTT) platforms such as Netflix, Amazon Prime Video, Hotstar, Zee5, Voot Select etc., which provide the best content at affordable prices. Online payments: How many of us remember the days when we waited outside banks in huge queues to exchange notes during demonetization? It was a tiring and time-consuming task, and it gave rise to the era of digital payments. Lots of people have now switched to secure digital payments, which gave rise to Google Pay, PayTM, Amazon Pay, PhonePe, JIO Money, CRED and other apps instead of cash and cheque payments. Life became easy: ordering food online, booking travel tickets, paying electricity and other utility bills, recharging your number, paying credit card bills and more, all via digital payments. Online education: I am sure that we all dream of going back to our school and college days to experience the fun and the best moments of our life again. Unfortunately, it is not safe to go to schools or colleges. Hence, online education has become the new trend. Many institutions have started learning apps or online lectures to ensure that students' education is not impacted. Let's dive into some negative sides of the Covid pandemic: 3,247,228 deaths across the globe. Increase in poverty across the globe. Scarcity of food supply due to lockdowns, border closures and trade restrictions, which resulted in malnutrition and poor health. Huge unemployment due to industry shutdowns, bankruptcies etc. Shops, gyms, religious places, malls and hotels closed. The global economy has gone into recession. Stress and anxiety have increased.
The Covid pandemic has made us realise some important points that we should never forget. Life is short; live it to the fullest. We always have the choice to be healthy. Only the few people who love and trust you will be with you during difficult times. Taking care of your family and friends is important. Spending time with your family and friends is also important. Social media is connecting people across the world. Stay prepared well in advance against any crisis. Unity is strength. Helping each other while maintaining social distance is vital. If we humans unite for such causes, together we can overcome many other world issues. Despite the impacts of the Covid pandemic, we should be thankful and grateful for the spectacular work done by our front-line doctors, nurses and essential workers, who have saved countless lives by working tirelessly 24*7 and risking their own lives. May this pandemic get over soon, and may we all quickly resume our routines. Till then, be safe, wear a mask, maintain a safe distance and wash your hands.
https://medium.com/@sohelsy897/how-daily-life-and-work-changed-since-covid-pandemic-ea6c753bb71f
['Sohel S']
2021-05-09 15:10:20.069000+00:00
['Covid', 'Technology Trends', 'Work Life Balance', 'Life Lessons', 'Coronavirus']
2,869
3 trending topics that are interesting to read and add insight
Vertu: The Costliest Mobile Brand. What's So Special? How I came to know about Vertu: One day I was watching the movie 'Goutham Nanda', where the hero's family is so insanely rich that his father is featured as one of the Top 50 Richest People in the World. For me, all was clear except one thing: why was the man still using a feature phone (from an unknown company, Vertu)? He could have bought an iPhone made of gold, right? (Have a look at the screenshot above.) So I googled the name 'Vertu' to find out what the mysterious mobile phone exactly is. Here's what I found, which cleared all my doubts. Firstly, who founded it? Vertu was founded by Nokia in 1998, with the intention of giving big shots a shot of luxury in their mobiles. According to The Economist, the concept was to market phones explicitly as fashion accessories, with the idea — "If you can spend $20,000 on a watch, why not on a mobile phone?" By the end of 2013, the company had around 350,000 customers, and phones were on sale in 500 retail outlets. But now Vertu is in bankruptcy. Many of its factories are being demolished. Yet, it will be remembered as the torchbearer in the history of luxury mobiles. Thanks to Wikipedia. What's so special about these mobiles: 1. The screen doesn't break: (Image: Intel Free Press, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons) The screen is made entirely of 130-carat sapphire crystal, which is the second hardest material on Earth after diamond. They boast, "It would resist a 200g steel ball dropped from a height of 1 metre". The only thing that could even scratch it is a diamond. All this is because there shouldn't be the dreaded bugbear of a cracked screen after paying a fortune. 2. Back case made of the finest leather: (Image: Tinh Te Photos, CC0, via Wikimedia Commons) The back case is made of the finest quality animal leather, carefully selected and picked from Europe's tanneries. It looks royal and feels awesome to the touch.
It smells good too. You won't want to put a back cover on it. 3. Great hand feel: (Image: Tinh Te Photos, CC0, via Wikimedia Commons) The bezels and body are made of high-quality, durable titanium and aluminium. The earpiece has a ceramic pillow around it, which amplifies the audio coming out. You'll get a great feel when you hold a Vertu mobile in your hand. The phone feels heavy, being twice as heavy as a regular smartphone. 4. Up-to-date software: It uses the Android OS, with the latest updates as soon as they arrive. Their phones have a light Vertu-customised skin over Android, which provides you with the latest security features and other exclusive Vertu perks. 5. Incredible Dolby audio: (Image: Tinh Te Photos, CC0, via Wikimedia Commons) They placed the two stereo speakers front-facing, at the top and bottom of the phone. The latest Dolby Digital Plus sound from the speakers feels like the real thing is happening in front of you. The microphone provides excellent noise cancellation. Their quality ringtones are exclusively composed by the London Orchestra, so you won't find your unique ringtone matching anyone else's. 6. Great camera: (Image: Alexcaban, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons) Vertu has embedded Hasselblad-certified cameras in its phones. Hasselblad is the world's most renowned professional photography brand. You couldn't spot a blemish on their photography masterpieces. 7. World SIM support: "What's so special about SIM? Every mobile supports it." No! There's something special about SIM support in Vertu. Unlike regular phones, Vertu supports almost all the GSM bands in the world, so you can travel to any country and use that country's providers' SIM cards. This makes it a traveller's phone. 8. Incredible display: (Image: Tinh Te Photos, CC0, via Wikimedia Commons) It has a WQHD AMOLED display.
With an impressive 530+ PPI, you can see letters and pictures with perfect accuracy. The display is shielded by scratch-resistant glass. 9. Fastest wireless connectivity: Vertu embeds the latest versions of Bluetooth, Wi-Fi and NFC, providing the fastest speeds of data transfer and downloads. NFC enables the user to make secure transactions and authorise entry into private places with a tap of the phone. 10. Strongest encryption: When it comes to privacy, you can be sure of Vertu. The company provides strong end-to-end encryption on all your calls, messages and mails, powered by 'Silent Circle'. Unless you share your data, no one will know what's on your phone. 11. 24/7 concierge (the most exciting feature): (Image: Jacob Jensen Holding, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons) You'll find a red ruby on the side of the mobile. When you press it, you'll be connected to Vertu's exclusive concierge service. It is like a real-life personal assistant, unlike Siri and Alexa. (You'll still have Google Assistant, as it's Android.) There are dedicated staff waiting for your call, and you can get your things done by them. The first year of concierge is absolutely free, but after that you'll need to pay $3,000 a year, i.e. 3 iPhones! You can ask them to reserve a restaurant table for you, order food, goods and services to your home, get information, book tickets for travel, movies and concerts, plan events, and much more! You are given VIP access to various clubs if you own a Vertu. You're treated with the utmost respect by Vertu Concierge customer-care executives. Their words are very polite and professional. Ask them for a time machine, and they'll feel very sorry for being unable to provide one. There was a case when a man asked them to book a restaurant table at 11:00 AM, but the restaurant usually opens at 12:00 noon.
The concierge executives somehow convinced the restaurant to open at 11:00 AM exclusively for the man! Conclusion: (Image: Intel Free Press, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons) Vertu being one of the costliest mobile phone makers, I feel it needs its own OS, like iOS, Windows or the latest Harmony OS. However, the features it provides to its customers are fantastic and unsurpassable. The concierge is a killer feature which no other manufacturer has ever provided, or maybe could even provide! Vertu is definitely a billionaire's best friend. Its features come up trumps over every regular mobile out there. Q. Is all that worth a $6,000–20,000 mobile? A. Maybe yes. Maybe no. It depends on your verdict and usage. 2. Google's Popular Free Service Ceases Now. Here's Why. After five years of offering a high-traction service of unlimited high-quality photo storage, Google is modifying its policy. Starting from June 1, 2021, photos uploaded in High quality (slightly reduced quality compared to the original) will count towards the 15 GB of free storage. Over the years, we have all been relying on Google Photos as the home of our life's memories, and Google has made our experience better by providing AI face recognition, OCR text recognition, and Memories. Hence, it's a slightly disappointing new policy for us. Why has Google taken this step? (Image: a data center) Google has always been a customer-centric company (in most cases), with all its services provided with the intention of customer satisfaction and solutions. Meanwhile, Google Photos has attracted a large crowd, all of whom are uploading an enormous number of photos every day. According to Google, there are currently 4 trillion photos on its servers, with 28 billion photos being added each week, i.e. about 45,000 photos every second!
Just like roaring water needs huge reservoirs to store it, rushing data needs huge, well-maintained data centers working relentlessly 24x7, 365 days a year, to process and store it. Constructing them, buying power and cooling equipment, paying engineers' salaries and providing security costs a company millions every year, and Google is finding it difficult to offer the storage service for free. Applicability of the policy — it isn't limited to Photos: (Image: Google Photos app intro on iOS) The policy also comes with changes to the Google Drive policy, such as capping the storage of Workspace Docs, Sheets, Slides, Drawings, Forms and Jamboard files. All of those combined come under the 15 GB of free storage from Google. If you want more storage, you can purchase your favourite Google One plan. Exploiting unlimited storage by GitHub geeks: This too might be a reason for Google to end its free unlimited storage. Geeks have found a way to encode their files into images and upload them to Google Photos. The photos can then be retrieved and processed back into files. Their idea: their program takes each byte of the file data and encodes it into the red, green and blue channels of each pixel, as each pixel stores 3 bytes of data (this could be increased to 4 bytes using the alpha channel). After encoding, they upload it to Google Photos. This adds lots of unwanted stuff to Google's servers and takes 'Photos' away from its purpose. This proves Google has taken the right decision. Happy news for Pixel users: you're exempted! (Photo by Daniel Romero on Unsplash) Along with the policy changes, Google has also mentioned that if you own a Pixel device of series 1 to 5, you're exempted from this policy of limited high-quality uploads.
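The file-to-image trick described above is easy to sketch: pack each group of three file bytes into one (R, G, B) pixel, and reverse the packing to recover the file. The real tools write actual PNGs; this illustrative version keeps the pixels as plain tuples and adds a length prefix so padding bytes can be stripped on decode:

```python
# Sketch of the byte-to-pixel encoding described in the article.
# Real implementations write PNG files; this keeps pixels as tuples.

def bytes_to_pixels(data: bytes):
    """Length-prefix the data, pad to a multiple of 3, pack into pixels."""
    payload = len(data).to_bytes(4, "big") + data
    payload += b"\x00" * (-len(payload) % 3)          # pad to whole pixels
    return [tuple(payload[i:i + 3]) for i in range(0, len(payload), 3)]

def pixels_to_bytes(pixels) -> bytes:
    """Flatten pixels back to bytes and strip the padding via the prefix."""
    flat = bytes(ch for px in pixels for ch in px)
    length = int.from_bytes(flat[:4], "big")
    return flat[4:4 + length]

original = b"any file contents at all"
pixels = bytes_to_pixels(original)
assert pixels_to_bytes(pixels) == original
print(f"{len(original)} bytes -> {len(pixels)} pixels")
```

Since every pixel carries 3 bytes of arbitrary data, the resulting "photo" is pure noise to a viewer, which is exactly why it defeats the purpose of a photo service.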
You'll still be able to upload your photos as before, even after June 2021. This is an incentive to buy, or stick to, one of Google's own Pixel devices. New storage management tools from Google: Google has built tools to help users manage their photos and use their limited storage most effectively. It has built a tool to weed out dark, blurry photos and large videos, so the limited storage can be used wisely. It has also built a tool that estimates the time until a user reaches his or her storage quota, based on how often content is backed up. It is estimated that over 80% of users should take three years to reach their 15 GB quota. (Google's tool can estimate yours.) Automatic data deletion in inactive accounts: (Photo by Mitchell Luo on Unsplash) If you're inactive in one or more of Google's services like Gmail, Drive and Photos for two years as of June 1, 2021, Google will delete the content in the products in which you're inactive. Similarly, if you're over your storage limit for the same period, Google will delete your content across that particular service. Google is concerned for your memories, and it will notify you multiple times before it attempts to remove any content, so you'll have ample time to take action. The simplest way to stay active is to log in to their apps once in a while. We still have time — let's think of alternatives: There are already popular storage services like Dropbox, OneDrive, Mega and iCloud which you can make use of. But in my opinion, the best service right now (keeping Google aside) is Amazon Photos. In this pandemic period, OTT platforms have become a major source of entertainment, and Amazon Prime Video is one of the most popular. Most of us have already subscribed to it. Amazon now comes with an exciting new offering: it provides unlimited free high-resolution photo storage and 5 GB of video storage for Prime members.
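The quota-forecast tool mentioned above boils down to simple arithmetic: divide the remaining free space by the average daily upload. A back-of-the-envelope version, with illustrative figures of my own (not Google's actual model, which also accounts for backup frequency patterns):

```python
# Back-of-the-envelope quota forecast; the usage figures are invented.
import math

FREE_QUOTA_GB = 15.0

def days_until_full(used_gb: float, daily_upload_gb: float) -> int:
    """Estimate days until the free tier is exhausted."""
    remaining = FREE_QUOTA_GB - used_gb
    if remaining <= 0:
        return 0  # already over quota
    return math.ceil(remaining / daily_upload_gb)

# 3 GB used, ~10 MB of photos per day -> 1200 days, roughly 3.3 years,
# in line with the "three years for most users" estimate above.
print(days_until_full(used_gb=3.0, daily_upload_gb=0.01))
```

The three-year figure quoted in the article is plausible under exactly this kind of light-usage assumption.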
So now, you can rely on this service for your photos and find some other way for your videos. No collywobbles! 3. Apple's M1: Yet Another Trademark Apple Disruption. The past few months have been among Apple's most turbulent times in its half-century history. With each passing day, news on Apple oscillates from one extreme to another — exciting and crushing at the same time. On one side it seems to be crushing any fair competition (or so Epic claims), while on the other it seemingly drives competition through its videography, privacy and security model. Frankly, the free publicity, both good and bad, that Apple received this year has kept it afloat even through a disastrous pandemic. Who would have guessed that a company would break through a $2 trillion market valuation in contemporary times, while the global economy is being ravaged unlike any other in the past few decades? Apple's Nov 10th event launched a torrent of well-deserved, primarily optimistic and enthusiastic coverage, especially from longtime industry veterans, influencers, journalists and upstart online tech enthusiasts. Apple's M1 heralds a new age of power efficiency and portability while making impressive strides in performance. Rapid advancements in any industry are facilitated through intense competition. And, for the first time in nearly 15 years, Apple is truly offering a unique value proposition for all products in its portfolio. When Apple adopted Intel in 2006, its Mac line of portable and desktop computers' USP (Unique Selling Proposition) was limited to its OS and industrial design. Of course, reducing Macs to a two-trick pony is unjust and unrealistic. There were and continue to be significant differences between the two operating systems to warrant a credible choice. Still, competing Windows machines always had the upper hand by compromising on certain aspects that the mass market considered fairly irrelevant, and so they seemingly offered a better value proposition.
With M1, Apple kills five birds with one stone. (Image modified from tayteh.blogspot.com) Firstly, it offers an affordable and highly competitive alternative to the traditional platforms while resoundingly defeating competitors on performance and power efficiency. Secondly, it unifies its product portfolio under one architecture, making its ecosystem far more cohesive and tougher to leave at the same time. This is phenomenally important, as development for Apple products will become incredibly lucrative (more than it already is) and easier than developing for competing, disjoint platforms (Windows, Linux and Android). Thirdly, its R&D spending is validated, as it innovates usefully instead of producing fancy one-time gimmicks that provide no real value to general customers (looking at you, Iris Scanner, Project Soli and Dolby Vision HDR; seriously, Apple, who can edit that on their computers?). It also silences all the skeptics and doom-prophets with their claims of "Apple's twilight is here" or "Apple's decline is nigh", and reconfigures the scope of its scrutinizers, shifting it from iPhone-centric to Apple-centric with a renewed focus on iPads, Macs and Apple Services. Fourthly, it gains complete control over third-party software development on all its platforms, irrespective of whether a platform was meant to be open (as the Mac was initially considered) or closed (iDevices). This will grate on every developer who wants Apple's control diminished in favour of a more open market. The Mac can no longer be considered different from the iOS platforms in many key areas, with Apple's claws sinking deep into the underlying system code through its rather controversial Big Sur "security features" and its chipsets replacing Intel as the beating heart of all its products.
Lastly, it holds complete control over all its products (software and hardware) — everything from chip release and manufacturing to sales and positioning, giving it a vertical integration never known to any company besides IBM in the 70s and 80s. This gives astounding benefits of design specificity and platform-specific optimization. Another plus: updates to its Mac line of products will no longer be sporadic or irregular. A better idea would be to envision its release cycles like those of the iPhone: same time, every year. (Again, this is what is expected; it may vary significantly.) A true upending of the consumer computing industry. (Image source: Anandtech.com) The performance and power efficiency of the M1 are incredible achievements, not only for a first-gen processor or SoC but for any processor in that thermal envelope (a fanless design!!! Are you kidding me, with that level of performance?). If it weren't so, some credible source would likely have reported a significant variance from the norm. Yet, amid this mix of entirely positive connotations attached to the M1, it is necessary to draw a well-informed conclusion by also looking into innumerable key factors like IPC, fabrication node size, ISA, decoders, pipelining, threading, memory bandwidth, memory architecture, etc. That would make this a categorically technical article, so I redirect you to anandtech.com, as I am unqualified to attempt any satisfactory explanation. Intel vs Apple: clash of architectures. While the M1 is remarkable, the main assumption people have floated and even proposed is to completely dismiss Intel as a formidable opponent or even a choice. That simply isn't the case.
(Image source: wccftech.com) Apple is the first company with 5nm ARM laptop chips (or computer chips, for that matter), giving it an edge over its competitors with the noticeable increase in performance and battery life to be expected from such a small transistor size and huge transistor density. All the x86 processors are either on 7nm (AMD) or 10nm and 14nm (Intel) processes. It is also of particular interest that, by design, x86 is more power hungry than ARM, so the power efficiency of x86 processors will not match Apple's offerings anytime soon. Famously, the first ARM chip was confoundingly power-efficient: it ran off a residual current flowing in the circuits when the power source was not connected to the system! It is a well-known fact that Intel has been struggling to get its act together. Since being first to market with its 14nm fabrication, Intel has all but plateaued, or perhaps progressed along a slow incline, offering at best 25% performance improvements over previous generations. Its famously (read: formerly) consistent and reliable tick-tock cycle of processor gains has lost any weight it might have held. Yet, it took AMD a much more efficient processor fabrication technology (a 7nm metric), three years of intense optimization of memory and processor interactions, pipelining and IPC improvements, and 15 years since it was at its best, to dethrone Intel's lead completely. Intel did all this with a 14nm architecture (with a few +'s as well) stretched out beyond its prime — eking out every last bit of performance from a heavily dated fabrication technology. Apple's chip, while remarkably (and technically) superior in its performance and efficiency figures, is on a 5nm process. To much surprise and dismay, according to Geekbench and Cinebench scores, it is still just a stone's throw away from Intel's latest 10nm, U-series low-powered processors.
To put it in perspective, Intel is at least 5 generations behind (assuming 1 increment and 1 optimization year each for 7nm and 5nm, and 1 more 10nm generation) in processor fabrication technology, but still within Apple's reach. Admittedly, its chips run at higher wattages, with clearly better thermal envelopes and efficient cooling methods. Tech reviewers and online buzz have hyped Apple's M1 as "the inflection point in computers", "the game-changer", "upending the PC market" and yada, yada, yada. And it makes sense from a consumer standpoint, with extraordinary battery life and performance. From another angle, however, looking at node fabrication technology, Apple has barely pulled ahead of Intel, which I believe can be better matched when Intel achieves its transition to 5nm. Then again, subject to a 25W TDP, it may perform incredible feats Intel may only dream of. That is pure speculation, and the current data suggests that single-core performance will remain within 5% to 10% of the scores of the M1 (for processors with more cores based off the M1). Intel: no longer relevant elder, or a slumbering Goliath? (Image: Intel over the last 2 decades | Source: nasdaq.com) Intel is also chock full of world-class, brilliant engineers who have been working tirelessly to rebound after the 14nm fiasco. Apple's M1 is only fuel to the fire that had been ignited by AMD's Zen 3 desktop processors, pushing Intel to win back its crown and show the world why it won in the first place. Personally, I think the M1 chip is probably the last nail in the coffin of Intel's lethargic response, and seeing the effusive reception it has been receiving, Intel is bound to be galvanized into action. Besides, Intel is not only adopting critical, until now ARM-specific features like big.LITTLE setups with its 'EVO' platform and a more hybrid micro-architecture, but also retains certain other advantages with legacy platforms.
Not to miss, Intel has finally shipped its 10nm chips in laptops, and its 10nm fabrication is quite similar to TSMC’s and Samsung’s 10nm fabs, making it far better placed to compete with 5nm chips. Intel is currently slumbering, healing from the critical blows dealt by Ryzen over the past four years and now the knockout punch from Apple’s M1, but make no mistake: it is gearing up for a rematch, and the battle is going to be legendary. M1: The upstart victor. Source: Techcrunch.com In the current market, the M1 has presented Intel and AMD with a unique opportunity. In the last decade, performance comparisons were always restricted to competitors on the same platform: Intel vs AMD, and Qualcomm vs Apple vs Samsung. Prior to Apple’s Intel transition, PowerPC vs Intel comparisons showed that the Power architecture was far behind x86 in performance and power efficiency. Today the case is reversed, with Apple’s ISA far ahead of x86. This time it is Intel that needs to redesign its ISA significantly while still supporting legacy software, or the dust it bites will be eons old in the computer world. The M1 MacBooks have truly revitalized the PC industry, and they are undoubtedly impressive. To achieve high performance and high power efficiency is a feat that not many companies can claim on their first attempt at a platform transition. Yet that isn’t the most impressive part, as it is to be expected from a 5nm process. What is impressive is achieving that level of performance at absurdly low levels of power consumption. The battery advantage the M1 MacBooks have brought with this generation is similar to the iPhone vs Android conversation, where Android phones need 5,000mAh batteries to give the same battery performance as iPhones with 3,700mAh. Macs will reign supreme over the battery kingdom for the foreseeable future.
2020 hasn’t been a great year by any metric, except perhaps in computer technology and medicinal biology (the development of a vaccine for a highly contagious and rapidly mutating virus in such a short time frame is nothing short of miraculous), and the M1 chip is the perfect jolt to awaken the slumbering giants. Now we must wait and see whether the giant continues to slip into irrelevance or awakens to fight. The processor industry should (in theory) be full of fireworks if the competition that has (possibly) been ignited comes to fruition. Till then, Apple’s M1 offers incredible thermal and computational performance; much of it is unsurprising given the process advantage, yet it is magnificently first in several key areas while greatly improving quality of life for the end consumer. Until the next serious contender, Apple has re-established itself as a disruptive innovator, the likes of which hasn’t been seen since the introduction of the original MacBook Air. PS: 5nm is a measure of transistor fabrication technology and transistor size; a smaller size achieves greater power efficiency and more performance.
https://medium.com/@czmangmitun/3-trending-topics-that-are-interesting-to-read-to-add-insight-7ce3f6b0203c
['Cz Man G Mitun']
2020-12-16 13:34:09.658000+00:00
['Trends', 'Technology News', 'Technews', 'Technology']
2,870
Top Accounting Solution Companies 2019
Several trends have disrupted the accounting industry in the recent past, and finance companies are trying their best to exploit their benefits to the fullest. One of the terms to have caught the fancy of most financial organizations today is cloud accounting. Small and large organizations alike, belonging to myriad sectors, want to leverage the benefits of cloud infrastructure to secure their data and help accountants connect virtually. In addition, automation of legacy accounting practices is the need of the hour for most enterprises, since it can save hours otherwise spent on manual data entry. Finance companies are now seeking the help of leading vendors to leverage the benefits of virtual infrastructure, shed existing legacy practices, and move toward better data management. The rising need to combine modern technological trends with traditional accounting practices prompted our editorial board to conduct a comprehensive study of leading accounting solution providers globally. Their efforts have resulted in this magazine, a compilation of leading vendor companies who have carved a niche by delivering innovative solutions. We hope our readers, comprising decision-makers at leading enterprises, benefit immensely from collaborations with these vendors. Here is a list of Top Accounting Technology Companies in 2019. 1. AccountsIQ: Specializes in a SaaS-based fintech platform for accounting, consolidation, and business intelligence. Check Out: https://accountsiq.com/ 2. Aplos Software: Specializes in SaaS solutions for nonprofit organizations and churches for fund accounting and other accounting services. Check Out: https://www.aplos.com/ 3. Aptitude Software: Specializes in best-in-class software suites for financial compliance and process streamlining. Check Out: https://www.aptitudesoftware.com/ 4.
AvidXchange: AvidXchange transforms the way companies pay their bills, giving them the power to reduce processing costs, accelerate approvals, and virtually eliminate paper. Check Out: https://www.avidxchange.com/ 5. Botkeeper: A robotic bookkeeper that manages the bookkeeping needs of businesses through machine learning and systems integration. Check Out: https://www.botkeeper.com/ 6. LeaseAccelerator: LeaseAccelerator provides enterprise lease accounting software to help companies comply with ASC 842 and IFRS 16. Check Out: https://explore.leaseaccelerator.com/ 7. Yooz: A fast-growing, multiple-award-winning SaaS company that solves top invoice-processing challenges for today’s accounting professionals through workflow automation. Check Out: https://www.getyooz.com/
https://medium.com/technology-innovations-insights/top-accounting-solution-companies-2019-96c07fef9596
['Emma Elice']
2019-11-29 05:21:12.767000+00:00
['Xero', 'Accounting', 'Accounting Software', 'Technology', 'Technews']
2,871
Railz x Prodigy’s IDVerifact bring secure digital transformation to financial institutions
Railz x Prodigy’s IDVerifact bring secure digital transformation to financial institutions Railz Jul 22·3 min read Railz is excited to announce its partnership with Prodigy’s IDVerifact! Railz and Prodigy’s IDVerifact to prevent risk in accounting Thanks to our exciting partnership with Prodigy’s IDVerifact, our users can prevent risk with real-time accounting data, offering an exciting yet secure digital transformation for financial institutions. What is Railz? Railz is the only financial data-as-a-service solution that connects, normalizes, and analyzes financial data from top accounting service providers. We do this to make financing decisions faster, better, and easier for financial institutions and their commercial and small business customers. Based in Toronto, Canada, our fintech company recently raised its Series A to scale our Accounting Data-as-a-Service platform. What is Prodigy? Prodigy delivers fintech innovation by providing leading-edge platforms, including IDVerifact™ for digital identity, and new fintech platforms for open banking and payments. Our services business, Prodigy Labs™, integrates and customizes our platforms for unique enterprise customer requirements, and provides technology services for digital identity, payments, open banking and digital transformation. Digital transformation services include strategy, architecture, design, project management, agile development, quality engineering and staff augmentation. Prodigy has been recognized as one of Canada’s fastest-growing companies with multiple awards: Deloitte’s Fast 50 Canada and Fast 500 North America (2016, 2017, 2018), Branham 300 (2017, 2018), Growth List (2018, 2019 and 2020), and Canada’s Top Growing Companies (2019 and 2020). What is IDVerifact? A Prodigy venture, IDVerifact provides the ability to combine and access a complete suite of leading digital identity providers to meet any business use case.
With IDVerifact, organizations stay ahead of fraud, quickly identify risk, and ensure compliance while optimizing their ability to grow revenues and improve client experiences with digital transactions. It provides solutions for identity proofing, KYC, AML, zero proofing and risk proof management. Railz and Prodigy’s IDVerifact partner up For financial institutions (FIs), the Railz and IDVerifact partnership means that FIs can integrate with our data-as-a-service platform and have the combined trust of leading digital identity providers to mitigate risk, fraud, and more in their data analysis. IDVerifact will integrate Railz real-time accounting data into the IDVerifact platform through our single API key. IDVerifact’s goal is to create the most comprehensive digital identity platform and trusted digital asset solutions for enterprise clients. Beyond that, the Railz platform helps these enterprise clients build deeper and more impactful relationships with their small and medium business (SMB) customers. George Colwell, the Senior Vice President (SVP) of Digital Practices at IDVerifact, states, “Railz is leading in the delivery of accounting data as a service and provides IDVerifact with a unique source of digital assets providing a clear view of a company’s cash flow and, more importantly, easily including those insights directly into a digital process. IDVerifact includes accounting data as a service as part of our risk pillar for small and medium enterprises and we are excited to count Railz as a partner, helping us build out our ever-growing list of unique digital assets in this space.” We look forward to providing our financial institutions the ability to integrate with our data-as-a-service platform and have the combined trust of leading digital identity providers to mitigate risk, fraud, and more in their data analysis. Get in touch to request a demo or sign up to try Railz today!
https://medium.com/@railz-ai/railz-x-prodigys-idverifact-bring-secure-digital-transformation-to-financial-institutions-646f82796946
[]
2021-07-22 15:32:01.635000+00:00
['Financial Technology', 'Digital Security', 'Technology News', 'Fintech', 'Digital Identity']
2,872
February 13, 2021 ~ For Janice
Large pink post-its adorn the pages of my copy of Janice Lobo Sapigao’s book microchips for millions The heater is on blast. I’m listening to Khruangbin radio on Spotify as I write this missive to you. I’m re-reading your book microchips for millions and preparing for a talk I’m giving in April about the relationships and genealogy of capitalism, race, and the electronics manufacturing industry. Your words are potent. Your lines and stanzas are lightning bolts and shields. Your writing has deeply inspired how I work, look at objects, and how I excavate my experiences, memories, and the knowledge I’ve gained over the years. Suddenly, I’m thinking of how I had your jacket for months after the &Now conference back in Fall 2019. You read some of your newer work based on EKGs and heartbeats. Then, in January 2020, a couple of months before the shelter-in-place (SIP) mandate, I saw you, grace burns and Melinda de Jesus read your poetry in a subterranean room in the South San Francisco library. It was an incredible line-up and I was fortunate to see you read, yet again, listening to your new work. But this time, it was about your mother and I was left speechless. As I listened, I felt my organs crowding inside my body around my heart. There was a pressure, it wasn’t uncomfortable, but it’s as if my body couldn’t contain what I was listening to. Does that make any sense? Perhaps, it’s also my nervousness about writing you a letter, a writer and artist I deeply admire. Although we’ve met and hung out, the pandemic took away the opportunities to see you again, listen to you read your work and just connect in person. I’ve been thinking about you this past year because of what your heart has been going through on top of what I imagine is not the most ideal situation when it comes to teaching. I don’t know about you, but I mentioned to a dear friend yesterday that it’s been difficult to work with students I’m unable to see. I don’t blame them. 
Many of these students I work with were hoping to socialize, hang out, enjoy the between-class-banter, party with friends, and be free to be young and adventurous. I express all of this because in-person poetry readings are something I miss very much. Your reading was the last poetry reading I attended. While I’ve gone to untold readings online and have done my fair share of Zoom panels and talks, there’s nothing like seeing the artist flip through the pages of their book, manuscript, or papers. Last year, I started reading Cathy Park Hong’s Minor Feelings and she writes about studying stand-up comedy, in particular, Richard Pryor. I also attempted to take a stand-up comedy workshop a few months into SIP, which was extremely frustrating (because it was online, not because of the instructor) because it’s…STAND-UP (the name is a physical act(ion)). I share this because you have a way of sliding jokes into your writing and poetry readings that is extremely endearing, but also made me feel instantly comfortable. Before I formally met you, I remember discovering your work through KQED’s Women to Watch back in 2017 and watching a video of you reading your poem My Name is Janice. It forced me to think about my relationship with my name, which I had not thought deeply about until hearing you read all of the intricacies in your name. There were moments of humor, but the emotional beats captivated my interest and I’ve been hooked on your work since then. Your students are so incredibly fortunate to have you as their professor. I remember seeing Douglas Kearney at the &Now conference and he spoke highly of you. ♥️ As I think about you, your mother, and the state of our world, I truly hope I can do your work justice during my talk because of how much it has inspired the way we ought to look at the technology we use, because there are many people, especially women and femmes, at the helm of what makes our world run and enables us to connect.
You know, I’m not sure if you’ve heard of Melanie Hoff’s work, but she led a workshop recently and one of the guiding questions she goes by in her work is “What if all the software we used was made by people who love us?” She’s a lovely human being and I feel you two would have a fascinating conversation. The relationship between software and hardware also doesn’t have to be as binary as we might think. I guess that’s where your work serves as a bridge for me to understanding new media and digital art. When I was working on my master’s (geez, that was 8 years ago now!), I remember talking to one of my friends, Vanessa (she was in my cohort), and telling her how I wished there was a way for me to write about the digital and technology that was much more tender, soft, and where the boundaries and edges weren’t so, well, straight-edged and sleek. Believe it or not, your work showed me that this is very (very) possible and not in some anecdotal way, but in a way that pierces the heart and, just, stays there and becomes a part of who we are. You do this for me and I’m certain, for many others. You are an inspiration to me and everyone that meets you. Please never forget that and if you need reminding, reach out. I will, most certainly tell you how special and necessary you are in this world and beyond. 🥰
https://medium.com/cosmicpropulsions/february-13-2021-for-janice-70b7e35a56f8
['Dorothy Santos']
2021-02-14 07:09:08.052000+00:00
['Bay Area', 'Poetry', 'Creative Practice', 'Technology', 'Janice Lobo Sapigao']
2,873
Everything You Need to Know About Apache Kafka
Talking about Big Data, we cannot fathom the amount of data that is amassed and put to use every minute of every day. This large volume of data brings two major challenges, the two most primal challenges anyone faces while working with Big Data: first, how to collect large volumes of data, and second, how to analyze the collected data. This is where a messaging system comes in to overcome these challenges. Let us understand messaging systems first As the name itself suggests, in our daily lives a messaging system is responsible for sending messages or data from one individual to another. We do not worry about how we will share a piece of information with the other person; we focus on the message, the content of the message. Similarly, in Big Data, a messaging system is responsible for transferring data from one application to another, so the applications can focus on the data without worrying about how to share it. Distributed messaging is based on the concept of reliable message queuing, in which messages are queued asynchronously between client applications and the messaging system. There are two types of messaging patterns in use today: (i) point-to-point and (ii) publish-subscribe (pub-sub). Most messaging systems follow the pub-sub pattern. Point to Point Messaging System In a point-to-point system, messages are held in a queue. One or more consumers can read messages from the queue, but a particular message can be consumed by at most one consumer. Once a consumer reads a message, it vanishes from the queue. A typical example of this system is an order-processing system: each order is processed by one order processor, but multiple order processors can work at the same time.
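The point-to-point pattern above can be sketched with Python's standard `queue` module. This is an illustrative in-memory sketch only (the order IDs and processor names are made up), not a real order-processing system:

```python
import queue

# A shared queue of orders; each message is consumed by exactly one consumer.
orders = queue.Queue()
for order_id in (101, 102, 103):
    orders.put(order_id)

# Two order processors draw from the same FIFO queue. A message, once
# read, vanishes from the queue, so no order is processed twice.
processor_a = [orders.get(), orders.get()]
processor_b = [orders.get()]

print(processor_a)   # [101, 102]
print(processor_b)   # [103]
print(orders.empty())  # True: every message was delivered exactly once
```

The key property on display is that consumption removes the message: both processors share one queue, yet their received sets never overlap.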
Publish-Subscribe Messaging System In the publish-subscribe (pub-sub) system, messages are organized into topics. Unlike the point-to-point system, consumers can subscribe to one or more topics and consume all the messages in those topics. In the pub-sub system, message producers are called publishers and message consumers are called subscribers. A real-world example is a Dish TV or satellite channel provider, which publishes different channels like sports, movies, and music, and individuals subscribe to their own set of channels to view them whenever they want. What is Kafka? Apache Kafka is a distributed publish-subscribe messaging system and a sophisticated queue that can handle a high volume of data, enabling users to send and receive messages from one endpoint to another. It is well suited for both offline and online message consumption. Kafka messages are persisted on disk and replicated within the cluster to prevent data loss. Apache Kafka is built on top of the ZooKeeper synchronization service, and it integrates well with Apache Storm and Spark for real-time streaming data analysis, speeding up the Kafka development process. Conclusion Kafka is designed for distributed high-throughput systems and can easily replace a more traditional message broker. Compared to other messaging systems, Kafka offers better throughput, built-in partitioning, replication and inherent fault tolerance, making it a good fit for large-scale message-processing applications.
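To contrast with point-to-point delivery, here is a minimal in-memory sketch of the pub-sub pattern described above. The `PubSubBroker` class, its methods, and the topic names (borrowed from the Dish TV example) are all illustrative; this is not the real Kafka client API:

```python
from collections import defaultdict

class PubSubBroker:
    """Minimal in-memory pub-sub broker: every subscriber of a topic
    receives every message published to that topic."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._topics = defaultdict(list)

    def subscribe(self, topic, callback):
        self._topics[topic].append(callback)

    def publish(self, topic, message):
        # Unlike a point-to-point queue, the message is delivered to
        # ALL subscribers of the topic, not consumed by just one.
        for callback in self._topics[topic]:
            callback(message)

broker = PubSubBroker()
sports_feed, all_news = [], []
broker.subscribe("sports", sports_feed.append)
broker.subscribe("sports", all_news.append)   # two subscribers on one topic
broker.subscribe("movies", all_news.append)   # one subscriber on two topics

broker.publish("sports", "match result")
broker.publish("movies", "new release")

print(sports_feed)  # ['match result']
print(all_news)     # ['match result', 'new release']
```

Note how `all_news` sees messages from both topics while `sports_feed` sees only its own; a point-to-point queue could not deliver "match result" to both lists.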
https://web-and-mobile-development.medium.com/everything-you-need-to-know-about-apache-kafka-8563b48b13d3
['A Smith']
2019-08-20 07:38:34.053000+00:00
['Machine Learning', 'Technology', 'Tech', 'Apache Kafka', 'Big Data']
2,874
LEETSPEAK ❤ AZMDEV
We are very excited to welcome AZM Dev to Leetspeak this year as one of our fantastic partners. AZM Dev is a Stockholm-based company that focuses on HoloLens and Windows development. AZM Dev will be on site to demonstrate HoloLens for our very lucky Leetspeak attendees. Not only will they be showing us the ins and outs of HoloLens, they will also be offering up the chance for us to try it out ourselves — how awesome is that?! You’ll definitely find us in that queue to try it out! A massive THANK YOU to AZM Dev for being a part of Leetspeak, for being generally awesome and for supporting the community.
https://medium.com/tretton37/leetspeak-azmdev-6c935b02f774
['Marcus Mazur']
2017-09-28 06:20:39.122000+00:00
['Stockholm', 'Technology News', 'Hololens', 'Software Development']
2,875
The first autonomous electric taxi is here courtesy of Amazon!
Here comes Amazon’s autonomous electric taxi! The vehicle, which looks like a cube and accommodates four, is from Zoox, a startup that Amazon acquired. The Zoox vehicle. Source: Electrek. Designed and manufactured in the US, the Zoox is the only vehicle to offer bidirectional driving capabilities and four-wheel steering, which enables maneuvering through compact spaces and changing directions without the need to reverse. At 3.63m long, the vehicle has one of the smallest footprints in the automotive industry. The vehicle features a four-seat, face-to-face symmetrical seating configuration that eliminates the steering wheel and bench seating seen in conventional car designs (as reported in Electrek). The company has not revealed a launch date, and the Zoox vehicle will undergo extensive testing before it starts carrying passengers. Trouble for Uber/Lyft?
https://medium.com/@salhasan/the-first-autonomous-electric-taxi-is-here-courtesy-of-amazon-7d2d84557231
['Salman Hasan']
2020-12-17 00:26:36.652000+00:00
['Future', 'Technology', 'Cars', 'Innovation', 'Transportation']
2,876
reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You
reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You Tsitsipas vs Nadal Live Tv Nov 19, 2020·6 min read Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore. Good times and bad times. Happy times and sad times. But always, life is a movement forward. No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career. What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.” 1. Most people are scared of using their imagination. They’ve disconnected with their inner child. They don’t feel they are “creative.” They like things “just the way they are.” 2. Your dream doesn’t really matter to anyone else. Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you. 3. Friends are relative to where you are in your life. Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends. 4. Your potential increases with age. As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way. 5. Spontaneity is the sister of creativity. If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment! 6. You forget the value of “touch” later on. When was the last time you played in the rain? When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby. Do that again. You will feel so connected to the playfulness of life. 7. Most people don’t do what they love. It’s true. The “masses” are not the ones who live the lives they dreamed of living.
And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same. Don’t fall for the trap. 8. Many stop reading after college. Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.” 9. People talk more than they listen. There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again. 10. Creativity takes practice. It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable. If you want to keep your creative muscle pumped and active, you have to practice it on your own. 11. “Success” is a relative term. As kids, we’re taught to “reach for success.” What does that really mean? Success to one person could mean the opposite for someone else. Define your own Success. 12. You can’t change your parents. A sad and difficult truth to face as you get older: You can’t change your parents. They are who they are. Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door. 13. The only person you have to face in the morning is yourself. When you’re younger, it feels like you have to please the entire world. You don’t. Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that. 14. Nothing feels as good as something you do from the heart. No amount of money or achievement or external validation will ever take the place of what you do out of pure love. Follow your heart, and the rest will follow. 15. 
Your potential is directly correlated to how well you know yourself. Those who know themselves and maximize their strengths are the ones who go where they want to go. Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future. 16. Everyone who doubts you will always come back around. That kid who used to bully you will come asking for a job. The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way. Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help. 17. You are a reflection of the 5 people you spend the most time with. Nobody creates themselves, by themselves. We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them. 18. Beliefs are relative to what you pursue. Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs. Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative. Find what works for you. 19. Anything can be a vice. Be wary. Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them. Never mistakes, always lessons. As I said, know yourself. 20. Your purpose is to be YOU. What is the meaning of life? To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece. 
Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
https://medium.com/@nadalvstsitsipialive/life-is-a-journey-of-twists-and-turns-peaks-and-valleys-mountains-to-climb-and-oceans-to-explore-d4118ef15314
['Tsitsipas Vs Nadal Live Tv']
2020-11-19 19:53:20.319000+00:00
['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming']
2,877
Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you.
Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you. McKee vs Caldwell Live Tv Nov 19, 2020·5 min read
We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them. 18. Beliefs are relative to what you pursue. Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs. Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative. Find what works for you. 19. Anything can be a vice. Be wary. Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them. Never mistakes, always lessons. As I said, know yourself. 20. Your purpose is to be YOU. What is the meaning of life? To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece. Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
https://medium.com/@mckeevscaldwellliveon/life-is-a-journey-of-twists-and-turns-peaks-and-valleys-mountains-to-climb-and-oceans-to-explore-19083c76280a
['Mckee Vs Caldwell Live Tv']
2020-11-19 23:29:36.633000+00:00
['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming']
2,878
Why we’re connecting you to the data behind your food
In an increasingly connected and data-driven world, food is one of the final frontiers. Most people still lack transparency into, or information about, the food they’re consuming on a daily basis, and the way we eat out doesn’t make it easy to know what we’re really putting in our bodies. This includes knowing how food is sourced and prepared, or the nutritional value of a meal. At Feedr, our goal is to bridge this gap, and in doing so, help people live healthier lives and thrive. We spend at least a third of our lives at the office, and so that’s where we bring our service, empowering people to eat better every day. Too many cooks have spoiled the broth. So why is there a disconnect between what we’re eating and what we understand about those meals? Fewer people know how to cook in today’s busy, grab-and-go society, which means more of our food is prepared and consumed out of home than ever before (up to ⅔, by some estimates), cooked by someone else, created for convenience and time-saving. We’re thinking less about ingredients, why something tastes delicious (butter, if you’re lucky; sugar and palm oil if you’re not), or why an item stays shelf-stable for a week or more (preservatives, and probably palm oil again). Coupled with this, the Western world has also seen a steep rise in allergies and intolerances in recent years, with a 50% increase in food allergies reported in children between 1997 and 2011 in the United States. Moreover, with abundant information, people often receive conflicting advice from food suppliers, health providers and companies advertising their service or particular viewpoint, compounding the confusion around the topic of nutrition and its impact on our overall health and wellbeing. So here we are, all in the dark: eating more food prepared in industrial kitchens, often marketed as natural or fresh, but with little knowledge about the quality of ingredients, sourcing or preparation.
More importance needs to be placed on the nutritional value of our food. Food is our fuel; it influences how we feel at all times. In fact, food is the foundational building block in Maslow’s hierarchy of needs, which means we need to get this right before we solve our higher-level needs or put people on the moon. The link between gut health and mental wellness has been demonstrated in multiple scientific studies, and recent studies also connect poor gut health to the development of certain auto-immune diseases. So we know that what we fuel our bodies with is vital for overall health, function and broader wellbeing. Despite this, the most popular meals in the UK are still sandwiches for lunch, burgers on delivery platforms (Deliveroo says burgers are their most popular delivery item), and chicken and chips when we eat out casually (Nando’s is the UK’s most popular casual dining chain). There isn’t anything fundamentally wrong with eating these foods once in a while, but they’re lacking in terms of what we now understand makes up a balanced diet. As we continue to eat out more and this food becomes more easily accessible, it’s no wonder we’re struggling with weight gain, 3pm energy slumps and broader health issues. The NHS’s Eatwell Guide, published in 2016, advises that we should be eating at least 5 varied fruits and vegetables a day, making up over a third of the food we consume. Evidence shows that by doing this, we reduce our risk of heart disease, stroke and some cancers — yet how many of us are ensuring we’re eating as plant-packed as possible on a daily basis? With longer working hours, busier schedules, the rise of the grab-and-go market and the boom in food delivery services, we routinely sacrifice the nutritional content of our meals for speed and convenience.
Capitalising on speed and convenience — food delivery has evolved into a multi-billion-pound industry, but nutrition has been left behind. We all know the growth story of Deliveroo — a simple app that connects consumers to local restaurants, providing a seamless solution for on-demand food needs. Now valued at over $2 billion with backing from Amazon, it’s a leading example of how consumer needs, expectations and user behaviour have changed when it comes to accessing food. The food delivery market more broadly has exploded in recent years — with huge year-on-year growth and billions of investment being ploughed into the industry. Despite this, we’ve ended up with a short-sighted focus on speed and convenience as the variables to optimise for, which has inherently created a trade-off with quality. At Feedr, we want to change this narrative — we believe that nutritional quality doesn’t need to be sacrificed for convenience. We see a world where nutritionally dense food is easily accessible and affordable for all. Feedr’s solution to the nutrition vs convenience question. Our mission starts with the goal of connecting people to healthier food and giving them education and data that help them understand their nutrition, so they can make better choices every day, whilst fitting into their busy schedules. Food innovation with a curated selection of vendor partners. We work with the best, high-quality vendors to build amazing menus that aren’t just healthy but are full of options you’ll genuinely want to eat. We don’t believe you have to sacrifice deliciousness for health, and neither does our innovative vendor community. Partnerships with experts to provide relevant and personalised content. We want to arm our customers with info from those in the know and connect this information to actual meal choices. We’re therefore launching a partnership with Dr Megan Rossi — The Gut Health Doctor, registered dietitian and gut health expert.
This partnership is the first in what will be a series; we believe it’s vital that we work with credible, highly trained experts to give our community the knowledge and tools to make the right nutritional choices for them. Nutritional goal tracking. Our customers will soon be able to track their nutritional data for what they eat with Feedr and link this to their preferences and dietary choices. Coupled with relevant and accurate content, this gives our users the power and knowledge to set their own goals and make the right choices for them at any given time — whether someone is looking to boost their protein intake on gym days or increase their meat-free meals. Data-led feedback loop. In today’s digital society, consumers expect personalisation and customisation online, tailored to their specific tastes and needs. The experience with food should be no different. We’re using machine learning to present our users with not only the best options, but the best options specifically for them — based on their previous order data, reviews, preferences and food goals. This feedback loop is also invaluable to our vendors, providing the information they need to refine and improve their offering. In doing so, we’re connecting what has previously been so disconnected — creating a virtuous circle of review, refinement and personalisation. Our users agree. In a recent survey, 43% of our customers told us that they would like the ability to track their nutritional data, whilst a quarter said pre-planning and scheduling is top of their list. We are thinking about ways to transform how people connect with nutritional data, use that information to make better choices, and have the ability to plan and schedule for peace of mind. As Marie Kondo has taught us, an uncluttered closet feels great, as does simplicity in menu and meal choices, built around personal preferences and goals.
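As a rough illustration of the goal-tracking idea above, a day's meals can be summed per nutrient and compared against targets. This is a hypothetical sketch with invented data, not Feedr's actual schema or figures:

```python
# Hypothetical per-meal nutrient data in grams; not Feedr's real schema.
meals = [
    {"name": "Chicken salad", "protein": 32, "fibre": 6},
    {"name": "Fruit pot", "protein": 2, "fibre": 4},
]

# Illustrative daily targets, loosely inspired by public dietary guidance.
goals = {"protein": 50, "fibre": 30}

def progress(meals, goals):
    """Return {nutrient: (total_so_far, target, target_met)}."""
    totals = {k: sum(m.get(k, 0) for m in meals) for k in goals}
    return {k: (totals[k], goals[k], totals[k] >= goals[k]) for k in goals}
```

The hard part in practice is not this arithmetic but sourcing accurate per-meal figures from a verified food-composition database and handling portion variation, which is exactly the data problem the article describes.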
Food is a personal thing, and we are excited about using technology to drive more personalised food content for our users and connect their activity on Feedr to their overall wellness goals. Yet building a software platform that reliably connects meal choices to accurate nutritional tracking is not a simple thing. Existing calculation tools and databases haven’t kept up with changing diets (did you know freekeh isn’t listed in Public Health England’s composition of foods database?), and multiple variables need to be factored in when ascertaining true nutritional content. A tomato is not just a tomato — its sugar content depends on the type, and on where and when it was grown and harvested. To compound the problem, vendors change menus seasonally, so the usability of software is key to ensuring updated information is captured. Feedr is pioneering this development, creating software that not only accurately captures information about food, but shares it back with users in a way that is relevant, helpful and digestible. What does the future look like? Living in a fully connected world has meant that, for many, the fundamentals of healthy living have been sacrificed. Time and focus spent on nutrition, sleep, mindfulness and other pillars of wellness are lost in the urban jungles that we live and work in. But this connected world also creates opportunity for innovation. We’re thinking about the food-tech landscape differently, realigning and reconnecting the end user with the things that really matter — healthy, well-produced, sustainably sourced food. Crafting the perfect food platform is not just about understanding user preferences to build tailored menus, but about recommendations and content that give our users the ability to make their own educated choices. We plan to take this much further — making it easier to find meals based on your food goals, plan further into the future, and link your Feedr data with other nutrition and wellness tracking tools.
There’s undeniable value in a deeper and more meaningful connection to the food you eat, and Feedr plans to lead the way with our partners and our vendors — come join us. Visit feedr.co to explore our world further. Contributors: Riya Grover, co-founder & CEO; Lyz Swanton, co-founder & COO; Charlotte Wood, Head of Marketing.
https://medium.com/@feedr/why-were-connecting-you-to-the-data-behind-your-food-12b3ef446ae5
[]
2019-10-01 16:57:41.113000+00:00
['Food And Drink', 'Founder Stories', 'Food Tech Startups', 'Wellness', 'Food Technology']
2,879
Learn About Server-Side Request Forgeries (SSRFs)
Preventing SSRFs. SSRFs happen when servers need to send requests to obtain external resources. For example, when you post a link on Twitter, Twitter’s servers need to fetch an image from that external site to create a thumbnail. This is normal and necessary behavior. But if the server does not stop users from accessing internal resources, SSRF vulnerabilities occur. To prevent SSRFs, you need to validate the user-supplied URL. Depending on the external resources you are trying to fetch using that endpoint, you can implement either a whitelist or a blacklist to filter the URLs. First, you can check if the URL belongs to an approved whitelist if you know where you need to fetch resources from. For example, if you are only fetching images from a particular server, you can limit requests to that IP address or hostname. If you are using a whitelist, make sure that you fix open redirect vulnerabilities within the whitelisted domains. If attackers can find an open redirect within a whitelisted domain, they can request a whitelisted URL that redirects to a restricted internal URL. And make sure that the regexes you are using are properly designed. For example, a weak regex pattern that simply checks whether a URL contains the legitimate domain can be bypassed easily with URLs like these: https://victim.com.attacker.com https://attacker.com/victim.com On the other hand, if you need to allow users to fetch resources from arbitrary locations, you need to use a blacklist to restrict access to sensitive internal resources. When you are using a blacklist, make sure that you are accounting for different encoding schemes. For example, does your blacklist filter out the same URLs in hex, octal, dword, URL, and mixed encoding? And does it account for both internal IPv4 and IPv6 addresses? One way that attackers can bypass blacklists — even well-designed ones — is by using redirects: they can use a URL that they control but that redirects to the blacklisted address.
For example, they can host a page that redirects to the local address like this: <?php header("Location: http://127.0.0.1"); ?> This way, when your server requests the attacker’s page, it is actually redirected to a restricted internal address. You can prevent this type of attack by disabling redirect following in your web client. Another way attackers bypass blacklist protections is by modifying the DNS records of a domain they control to point at internal addresses. For example, they can create a DNS A record that makes http://attacker.com resolve to a sensitive internal address. When your server requests http://attacker.com, it would think that the domain is located at the internal address and access that address instead! So when you are validating domain names, you also need to ensure that the user-provided domain does not resolve to an internal IP address. SSRFs are a dangerous class of vulnerability that can compromise an entire network, but they can be prevented with a good amount of diligence and proper filtering. That’s it for today’s security lesson. Thanks for reading!
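Putting the whitelist and DNS-resolution advice together, a minimal URL validator might look like the following Python sketch. The allowed host is hypothetical, and production code would also need to pin the resolved IP for the actual outbound request, since a fresh lookup at fetch time reopens the DNS-rebinding window:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical whitelist of hosts this server is allowed to fetch from.
ALLOWED_HOSTS = {"images.example.com"}

def is_safe_url(url: str) -> bool:
    """Accept only http(s) URLs whose host is on the whitelist and whose
    DNS records resolve exclusively to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    # Exact-match comparison (not a substring regex) also defeats
    # bypasses like victim.com.attacker.com.
    if host is None or host not in ALLOWED_HOSTS:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Rejects loopback, RFC 1918, and link-local ranges for v4 and v6.
        if not ip.is_global:
            return False
    return True
```

Redirect following should still be disabled in the HTTP client, as described above, since this check only validates the URL the user supplied, not where it redirects.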
https://medium.com/better-programming/learn-about-server-side-request-forgeries-ssrfs-10f8bd013941
['Vickie Li']
2020-12-09 21:16:22.527000+00:00
['Cybersecurity', 'Technology', 'Programming', 'Computer Science', 'Software Development']
2,880
How “Personal AI” Will Transform Business and Society
By Steve Omohundro, PhD, Possibility Research. PwC predicts that Artificial Intelligence (AI) will create $90 trillion of value between now and 2030.[1] But this huge economic value only hints at AI’s profound impact on information networks, commerce, and governance. Many are worried that powerful AI will disempower individuals. The Wall Street Journal recently published best-selling author Yuval Harari’s commencement speech to the class of 2020 entitled “Rebellion of the Hackable Animals.”[2] He argued that AI will allow corporations and governments to manipulate individuals and challenged the students to find ways to counteract this manipulation. This article describes “Personal AI” and argues that it will be the antidote to AI-powered manipulation. It will, instead, dramatically empower individuals to reshape their social and economic networks. We define “Personal AIs” as artificial intelligences trusted by individual “owners” to represent them in interactions with other individuals, organizations, and networks. There are great challenges in building personal AIs, but their impact will be profoundly positive for humanity. To understand why, we must first understand the current role of AI in society. The Rise of Platform AI. Flashy AI applications like self-driving cars, deepfake videos, and the Sophia robot have dominated news headlines. But the AI technology with the greatest economic impact has actually been “recommender systems.”[3] These simple AI systems model users to make recommendations such as movies on Netflix, products on Amazon, and friends on Twitter. Recommender systems were only invented in the 1990s but have had an enormous impact. Netflix reports that their movie recommender has been responsible for creating more than $1 billion of business value. Amazon’s recommenders generate 35% of the purchases on their site.
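The core of the recommender systems described above can be sketched in a few lines: build a preference profile for the user, score each catalog item against it, and return the top matches. The following content-based toy in Python uses invented titles, genres, and weights, and real systems (collaborative filtering, deep ranking models) are far more sophisticated:

```python
from math import sqrt

# Hypothetical genre-preference profile learned from a user's history.
user_profile = {"comedy": 0.9, "drama": 0.3, "action": 0.1}

# Hypothetical catalog with per-title genre features.
catalog = {
    "Movie A": {"comedy": 1.0, "drama": 0.2},
    "Movie B": {"action": 0.9, "drama": 0.8},
    "Movie C": {"comedy": 0.6, "action": 0.4},
}

def cosine(u, v):
    """Cosine similarity between two sparse feature dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(profile, catalog, top_n=2):
    """Rank titles by similarity to the profile and return the top few."""
    ranked = sorted(catalog, key=lambda t: cosine(profile, catalog[t]),
                    reverse=True)
    return ranked[:top_n]
```

Even this toy shows why the technique generalizes so cheaply: the same score-and-rank loop works whether the items are movies, products, or potential connections.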
ByteDance, the parent company of TikTok, was recently privately valued at $140 billion primarily due to their innovative recommender AI. One reason that recommender systems have had such a big impact is that they enable the “Platform Business Model.” Platform companies match up producers and consumers and take a cut from each transaction. For example, Uber’s AI connects nearby drivers with people who need rides. The platform business model creates sustainable outsized profits and is responsible for the rise of the most valuable companies over the past 15 years. In 2004, the top ten companies were General Electric, Exxon, Microsoft, Pfizer, Citigroup, Walmart, BP, AIG, Intel, and Bank of America. By 2019, they were Microsoft, Amazon, Apple, Alphabet, Facebook, Berkshire Hathaway, Alibaba, Tencent, Visa, and Johnson and Johnson.[4] Seven of these are based on an AI-driven platform business model. According to Applico, 60% of the billion-dollar “unicorn” startups are platform companies and most IPOs and acquisitions also make use of this model. It is estimated to have created over $3 trillion in market capitalization. Many aspects of platform companies are counter-intuitive from a traditional business perspective. A popular meme states that: ● Uber, the world’s largest taxi company, owns no vehicles. ● Airbnb, the largest accommodation provider, owns no real estate. ● Facebook, the most popular media provider, creates no content. ● Instagram, the most valuable photo company, sells no cameras. ● Netflix, the fastest-growing television network, lays no cables. ● Alibaba, the most valuable retailer, has no inventory. While recommender systems are critical to platforms, several other forms of AI are also important. 
On the producer side, platform companies provide: AI-driven content creation tools, AI-driven auctions for placement, AI-driven A/B testing for optimization, AI analytics to track performance, AI-based producer reputations and AI-driven malicious content blocking. On the consumer side, platform companies use: AI-based gamification for engagement, AI-personalized marketing, AI-driven pricing, AI-based consumer reputation and AI-driven malicious consumer blocking. Each of these functions will improve as AI technologies improve. The remarkable rise of platform companies can be understood through “Coase’s Theorem.” Ronald Coase was an economist in the 1930s who studied the nature of the firm. Economists understood that market mechanisms produced efficient results and Coase wondered why firms weren’t organized as markets internally. He showed that if information and contracting were inexpensive enough, then market mechanisms produce the most efficient outcomes. He concluded that traditional firms are organized hierarchically because business information was not freely available and contracting was too expensive. AI dramatically lowers the costs of both information gathering and contracting. Traditional taxi companies owned their own cars, hired drivers as employees, and had managers who determined which car would transport which customer. Uber’s AI systems enable their cellphone app to turn the traditional taxi company “inside out” and to profit by intermediating between external drivers and riders. This “inversion of the firm” is also happening in HR, marketing, innovation, finance, logistics, etc. An extreme example was Instagram which had only 13 employees when it was bought by Facebook for $1 billion. This remarkable purchase has been called the “most brilliant tech acquisition ever made.” Many of the consequences of the platform revolution are quite positive for society. Airbnb unlocked resources (people’s spare bedrooms) which would otherwise have gone unused. 
Individual consumer needs can be better met by platforms (e.g., the long tail of demand met by Amazon’s many sellers). Platforms enable more producers (e.g., Uber’s many part-time drivers). We can understand Platform AI as creating both business value and social value. Platform companies gain value through network effects on both the producer and the consumer side. These networks create strong “moats” around their businesses and allow them to sustain outsized profits. In typical platform niches, one company is dominant (e.g., Uber), with a much smaller company in second place (e.g., Lyft) and third place being insignificant. The strong position of the dominant company gives it great power in interactions with both producers and consumers. As AI improves, you might think that this platform power will only increase and that Harari’s fears are justified. Platforms use their power over producers to gain the advantage. Uber has been criticized for squeezing drivers and taking a bigger share of profits. Amazon has repeatedly created its own branded versions of products which it observes are profitable for third-party sellers. Netflix notices what elements of movies and TV shows are most liked by customers and creates its own shows using that knowledge. Platforms also use their power over consumers. Platform advertising has been criticized for being manipulative and for rewarding click-bait headlines. YouTube has been blamed for “radicalizing” viewers who watch a video out of curiosity and then receive recommendations for increasingly extreme related videos. Deceptive news stories generate outrage, which causes clicks, and recommender systems incentivize their creation in a vicious loop. There is increasing concern about privacy and the use of personal information by platform companies. The Rise of “Personal AI”. If the simple AI underlying platform companies has had such a transformative societal effect, what will be the impact of more powerful AI?
All indications are that AI is improving at a rapid pace and is likely to power another phase of Coase’s theorem. This will create more market-like structures and will spread power throughout networks. While AI is the enabler, the underlying forces are economic. Two technological trends, “Moore’s Law” and “Nielsen’s Law,” are driving the improvement in AI. Moore’s Law says that the number of transistors in a CPU grows by 60% per year and has held since 1970. Nielsen’s Law says that internet bandwidth grows by 50% per year and has held since 1984. Together, they give AI learning systems increasing amounts of computation and data to improve independently of algorithmic innovation. But learning and reasoning algorithms are also rapidly improving. The last decade has seen dramatic improvements in machine vision, natural language processing, and game playing. As advanced AI becomes more commercially viable, it attracts more investment, students, researchers, and practitioners. Rich Sutton’s influential essay “The Bitter Lesson”[5] argued that simple algorithmic techniques like search and statistical learning have always overcome clever human-designed algorithms as computation and data increase. OpenAI’s GPT-3 “transformer” language model is essentially a scaled-up version of their GPT-2 model, but exhibits a wide range of new behaviors. Many are speculating that scaling up this class of models by another factor of 10 or 100 may lead to dramatically improved AI systems. What will these more powerful AIs be used for? “Digital Twins” are an AI application that has seen increasing interest over the past decade. These are digital AI replicas of living or non-living physical systems. The physical systems are continuously monitored by sensors which are used to update the corresponding AI twin models. The digital twin models are then used for estimation, diagnosis, policy design, control, and governance.
Each of these is first tested on the twin and then deployed on the real system. Monte Carlo simulations estimate interactions between multiple twins for game-theoretic analysis, contract design, and analysis of larger system dynamics. “Personal AIs” are related to digital twins but model a human “owner” and act for that owner’s benefit. They are trusted AI agents which model their owners’ values, beliefs, and goals, are continually updated based on their owner’s actions, and act as the owner’s proxy in interacting with other agents. They filter ads, news, and other content according to their owners’ preferences. They control the dissemination of the owner’s personal information according to the owner’s preferences. They continually search for new business and purchase opportunities for their owners. They communicate their owners’ preferences to governmental and other organizations. When personal AIs become widespread, they will have a profound impact on the nature of human society. What AI advances are needed to create personal AIs? Simple versions could be built today but powerful versions will require advances in natural language processing, modeling of human psychology, and smart contract design. Each of these areas is undergoing active research and powerful personal AIs should be possible within a few years. The simplest personal AI contract is making a purchase. If an owner trusts their personal AI, they will allow it to search Amazon and other sellers for the best product at the best price for their needs. More complex contracts will allow an owner to contract to watch a video in return for watching ads that meet their value criteria. More complex purchase contracts could include terms for insurance, shipping, return policies, and put constraints on the sourcing of components and labor. As personal AIs become more powerful, contracts can become arbitrarily complex. 
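The purchase scenario above can be sketched as a simple filter-and-rank over offers. Everything here is illustrative (the sellers, fields, and thresholds are invented), but it shows the shape of a personal AI applying its owner's contract terms before optimizing on price:

```python
# Hypothetical offers a personal AI has gathered from several sellers.
offers = [
    {"seller": "A", "price": 24.99, "returns_days": 30, "ethical_sourcing": True},
    {"seller": "B", "price": 19.99, "returns_days": 0,  "ethical_sourcing": True},
    {"seller": "C", "price": 17.50, "returns_days": 60, "ethical_sourcing": False},
]

def acceptable(offer, min_returns=14, require_ethical=True):
    """Owner's contract terms: minimum return window, sourcing constraint."""
    return offer["returns_days"] >= min_returns and (
        offer["ethical_sourcing"] or not require_ethical)

def best_offer(offers):
    """Cheapest offer among those that satisfy the owner's terms."""
    candidates = [o for o in offers if acceptable(o)]
    return min(candidates, key=lambda o: o["price"]) if candidates else None
```

Note the ordering: the nominally cheapest offer loses because it violates the owner's sourcing constraint, which is exactly the kind of trade-off a trusted proxy can enforce automatically.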
A new era of highly personalized purchases and interactions will follow that better meets each person’s needs and desires. Personal AI will dramatically change the nature of marketing. If an owner knows they are emotionally vulnerable to depictions of alcohol, fast cars, or chocolate cake, they can instruct their personal AI to refuse advertising with that content. In today’s internet, recommender systems might discover an owner’s vulnerability and start specifically showing them the manipulative content they are sensitive to because it generates a stronger response. This is disempowering for the viewer and harmful for society. With personal AI negotiation, owners can block manipulative advertisements and only enable calm, informative ads about products they are interested in. If enough individuals use personal AIs, advertisers will no longer have an incentive to create manipulative ads. Cigarette advertising was only banned after governmental intervention, but personal AIs provide a more effective direct mechanism to move advertising in a positive direction. Personal AI will also dramatically change the nature of social media. Today’s popular social media sites have power because no one wants to spend time on sites that their friends aren’t on. Lock-in is maintained by the annoyance of maintaining accounts on multiple sites. Each site has its own user interface, profiles, password, and identity system. Tracking content on multiple sites is time-consuming and confusing for users. But powerful personal AIs will easily be able to interface with multiple social media sites. They will present their owners with unified interfaces for information from a wide variety of sites, personalized to their owners’ tastes. The owner need not even be aware of which site particular messages or interactions are from. This new flexibility will put additional pressure on social media sites to truly meet their users’ needs rather than relying on the power of network effects for lock-in.
Personal AI will also dramatically change the nature of governance. Today, voting gives citizens a small bit of influence over governmental decisions. But the expense and complexity of voting mechanisms means that elections happen rarely and only support a limited expression of preferences. New voting procedures like “range voting”, “quadratic voting”, and “liquid democracy” would improve the current system. But personal AIs will allow detailed “semantic voting” in which citizens can express their ideas and preferences in real-time. Governments will be able to create detailed models of their citizen’s actual needs moment by moment. Personal AI will also dramatically change the nature of commerce. Instead of being locked into a few online marketplaces, personal AIs can explore the entirety of the web for products and deals. Complex negotiations with a wide variety of sellers will allow personalized contracts that better meet the owner’s true needs. As increasing numbers of people shop using personal AIs, this will change the nature of commerce. Buyers will be able to demand greater transparency about supply chains, counterfeiting, and forced labor. They will be able to know the exact history of a product and the exact ingredients in food and supplements. Perhaps the largest impact of personal AI will be in the transformation of information gathering. The internet shifted news from a few powerful channels to a wide variety of sources and networks. Unfortunately, this has also enabled the spread of disinformation and misinformation. Recent AI technologies can create fake text, audio, images, and video which is indistinguishable from real content. Various groups are developing AI to detect fake content but it appears that the fakers will ultimately win the arms race. That means that careful tracking of the source and “provenance” of content will be fundamental to future information networks. 
Today, various gatekeepers are attempting to take control of “fact-checking” and information tracking but many are themselves being questioned. Personal AI enables individuals to choose their own sources of validation. New sources of validation, reputation, and information tracking will arise and personal AIs will be able to choose among these according to their owner’s preferences. “Liquid Democracy” allows voters to delegate their votes to trusted knowledgeable third parties (e.g. the Sierra Club) who may in turn delegate their votes to even more informed groups. A similar mechanism can be used to create networks of information validated by an owner’s trusted groups. The societal effect of these kinds of information networks will be to democratize knowledge and to weaken the power of centralized information sources. Our Empowered AI Future W. Edwards Deming helped create the Japanese “post-war economic miracle” from 1950–1960. He proposed management and manufacturing processes that dramatically improved Japanese productivity and the quality of their goods. The Japanese word “Kaizen” means “change for the better” and has come to represent continuous improvement of all functions and full engagement of all stakeholders. Personal AI will enable a kind of “Deming 2.0” for the whole of society. Interactions between an owner and their personal AI continuously improve the AI’s model of its owner’s ideas, values, and beliefs. Interactions between personal AIs and AIs associated with larger groups will enable those groups to integrate the detailed knowledge and needs of all stakeholders in a kind of societal “Kaizen”. This responsive interaction will happen from the local level up to the global level, improving effectiveness at all scales. The impact on the global level is especially interesting given the huge number of global crises we are currently struggling with: climate change, pandemic, economic crises, poverty, pollution, and transformative technological change. 
The United Nations maintains a list of the 17 most important “Sustainable Development Goals.”[6] Every one of these goals can be addressed with advanced artificial intelligence and extensive networks of personal AIs will enable every human to contribute their perspective. The picture of our future that emerges when we include the personal AI revolution is a far cry from the “Hackable Animals” dystopia that Harari worries about. It is a future of extensive inclusiveness and individual empowerment. It is a future in which global problems are solved through careful consideration of every human’s needs and ideas. It is a future in which empowered networks enable each person to contribute and connect to the whole of humanity through their unique individual gifts.
https://medium.com/hivedata/how-personal-ai-will-transform-business-and-society-cdb72065628c
['The Hive']
2020-08-28 19:00:53.608000+00:00
['Future', 'Innovation', 'Technology', 'Artificial Intelligence']
2,881
Useful JavaScript Tips — Object Properties and Copying
Photo by Kouji Tsuru on Unsplash Like any kind of app, JavaScript apps have to be written well. Otherwise, we run into all kinds of issues later on. In this article, we’ll look at some tips we should follow to write JavaScript code faster and better. Comparing 2 Objects with Object.is() An alternative way to compare 2 objects is with the Object.is method. It’s almost the same as the === operator. However, unlike === , it treats NaN as equal to itself. For instance, we can write: Object.is(a, b) Then we can compare 2 variables a and b . Get the Prototype of an Object We can get the prototype of an object with the Object.getPrototypeOf method. For instance, we can get the prototype of an object as follows: const animal = { name: 'james', age: 7 }; const dog = Object.create(animal); We created a dog object with Object.create so that dog will inherit from animal . So if we call Object.getPrototypeOf with it: const prot = Object.getPrototypeOf(dog); Then: prot === animal would be true since animal is dog ’s prototype. Get the Non-inherited Symbols of an Object The Object.getOwnPropertySymbols method returns all the non-inherited symbol keys of an object. If we have the following object: const name = Symbol('name') const age = Symbol('age') const dog = { [name]: 'james', [age]: 7 } Then we can call getOwnPropertySymbols as follows: const syms = Object.getOwnPropertySymbols(dog); Then we get that syms is: [Symbol(name), Symbol(age)] Get Non-inherited String Keys of an Object The Object.getOwnPropertyNames method lets us get an array of string keys of an object. The keys returned aren’t inherited from any prototypes. For instance, we can write: const dog = { breed: 'poodle' } const keys = Object.getOwnPropertyNames(dog); Then we get [“breed”] as the value of keys . Get Key-Value Pairs of an Object The Object.entries method returns an array of key-value pairs of an object. 
For example, we can write: const person = { name: 'james', age: 18 } const pairs = Object.entries(person); Then pairs would be: [ [ "name", "james" ], [ "age", 18 ] ] where the first entry of each inner array is the key name, and the 2nd is the value. Add a Single Property to an Object We can call Object.defineProperty on an object to create a new property. For instance, we can write: const dog = {}; Object.defineProperty(dog, 'breed', { value: 'poodle' }) We added the breed property into dog using the defineProperty method. Add Multiple Properties to an Object at Once In addition to Object.defineProperty , there’s also the Object.defineProperties method to add multiple properties to an object. For instance, we can write: const dog = {}; Object.defineProperties(dog, { breed: { value: 'poodle' }, name: { value: 'james' }, }) Then dog would be {breed: “poodle”, name: “james”} . It’s a convenient way to add multiple properties to an object at once. Creating an Object with Object.create Object.create lets us create an object with a prototype. For instance, we can write: const animal = { name: 'james', age: 7 }; const dog = Object.create(animal); Then dog has animal as its prototype. It’ll inherit all the properties from animal . So dog.name would be 'james' . Photo by Victor Malyushev on Unsplash Copying and Combining Objects with Object.assign Object.assign lets us combine multiple objects into one or copy them. To make a copy of an object, we can write: const copy = Object.assign({}, original) We make a copy of the original object and assign it to the copy variable. {} should be the first argument so that we won’t modify any existing objects and copy them into an empty object. To combine multiple objects, we can write: const merged = Object.assign({}, obj1, obj2) We copy all the own string properties from obj1 and obj2 into the empty object in the first argument. So merged would have all the own properties from both. 
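Because Object.assign only performs a shallow copy, top-level properties are duplicated but nested objects remain shared between the copy and the original. A quick sketch of what that means in practice (variable names are illustrative):

```javascript
// Object.assign copies top-level properties, but nested objects are shared.
const original = { name: 'james', scores: { math: 90 } };
const copy = Object.assign({}, original);

copy.name = 'amy'; // top-level change: the original is unaffected
copy.scores.math = 50; // nested change: the original sees it too, because scores is shared

console.log(original.name); // 'james'
console.log(original.scores.math); // 50
```

For a deep copy, a recursive copy (or structuredClone in newer runtimes) is needed instead.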
Conclusion We can compare and copy objects with static Object methods. Also, we can define properties on an object with them. Also, we can copy and merge objects with the Object.assign method. It does a shallow copy so that the top-level is copied.
https://medium.com/dev-genius/useful-javascript-tips-object-properties-and-copying-760b65dce256
['John Au-Yeung']
2020-06-28 18:42:56.678000+00:00
['JavaScript', 'Software Development', 'Web Development', 'Programming', 'Technology']
2,882
Commitment to People
At Kindred, our first core value is Commitment to People. We respect and honor the whole person and value them as individuals. We go the extra mile to help people succeed and invest in their continued growth, and we seek to work with companies and partners who share this commitment. Over the 20+ year history of Kindred, we have always prioritized developing talent from within. We strongly believe that our culture, work process, team structure and focus on shared success yield better results. Ultimately, the most important members of our team are those who continue to drive this culture of excellence forward. With that premise in mind, we are thrilled to announce the appointment of McKenzie Furber, Julia French and Daniel Scanlan to Managing Directors of Kindred. McKenzie leads searches across our Consumer, Marketplace and Fin-tech portfolio with an emphasis on Marketing and Product roles. Internally and across the industry, McKenzie is recognized for her intellect, her commitment to people and her dedication and relentless focus on achieving the right outcomes. Before joining Kindred, McKenzie helped Coatue Management build their first internal talent practice, and before that established the West Coast operations for Dynamic Search Partners. Kindred is committed to supporting our clients for the long term, well beyond a successful executive search. Julia French has been a key driver of Kindred’s focus on long-term partnerships by founding and leading Kindred Scale. Functionally, Scale specializes in customer success, operations, sales and marketing hiring at the director, manager and senior individual contributor level. Julia has a deep background in helping hyper-growth companies solve their most daunting challenge — finding, attracting and closing top talent below the C-suite. She has built a great reputation as someone who consistently over-delivers, executes with speed and care, and has high EQ and IQ. 
We have partnered, and continue to partner, with some of the most important emerging companies in Healthcare. Daniel Scanlan has played a key role in growing Kindred’s presence within this highly important, massive vertical. His energy, passion, and dedication to his craft are evident to anyone who is fortunate enough to meet him. Daniel represents the epitome of developing from within, and is — deservedly — the youngest partner in our firm’s history. Daniel will continue to drive all functional searches with our healthcare industry partners. Onward!
https://medium.com/@mattocken/commitment-to-people-fb4ade74121
['Matt Ocken']
2020-12-22 17:24:57.439000+00:00
['Partnerships', 'Retained Executive Search', 'Technology', 'Kindred', 'Promotion']
2,883
Building a Treemap with JavaScript
Building a Treemap with JavaScript At Foxintelligence, while building our data visualization platform we wanted to challenge the pie chart. This is our journey of building a Treemap in JavaScript from scratch. EDIT: an open source JavaScript package I published an open source JavaScript package that calculates the Treemap. It uses the algorithm described in this article and is available both with npm and in the browser. https://www.npmjs.com/package/treemap-squarify The goal: an effective data visualization The goal was to find the best way to represent market shares among categories in e-commerce. The pie chart representation is the first one that comes to mind for this kind of data. Colorful pie charts However, it can be hard to compare different categories in a pie chart. The pain points of this type of visualization are: It’s hard to compare pie slices It’s hard to display labels on pie slices It’s hard to compare two pie charts What kind of data visualization can we use instead? 🙌 The Treemap That’s where we discovered the Treemap. A Treemap is a way to visualize hierarchical data in a rectangle (hence tree in Treemap). The value of one data point is proportional to its area in the main rectangle. One popular use is to visualize a computer file system. Here is an example of the initial tree and its counterpart on a Treemap. A tree and its Treemap from [1] In our case however, we don’t use it this way as we have a list of data instead of a tree. Nevertheless, we wanted to keep the idea of having one data point represented by an area on one main rectangle. In the following example, we have one main rectangle which represents 100% and each rectangle is a category market share of x%. Market shares on a Treemap Why is it better? First it’s visually easier to compare rectangle area than pie slices. It’s also easier for the label of each area to be displayed on the graphic instead of having to refer to the legend each time you need to. 
Plus we wanted to compare one merchant with the market (or with another merchant), and having two pie charts next to each other is not readable. 🎯 The specifications So we’ve decided to use a Treemap as our data visualization. Let’s now talk about the specifications from the design/product team. Each area should be a rectangle that is as square as possible (which is the Squarified type of Treemap) We want 100% customization possibility over the UI (colors, hover effects, label or icon inside each square)
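The proportional-area idea at the heart of a Treemap can be sketched in a few lines of JavaScript. This is a hypothetical illustration of the area calculation only (the function name and data shape are my own), not the squarified layout algorithm used by the treemap-squarify package:

```javascript
// Hypothetical sketch: compute the area each category should occupy in a
// width x height rectangle, proportional to its value — the core Treemap idea.
function proportionalAreas(data, width, height) {
  const total = data.reduce((sum, d) => sum + d.value, 0);
  return data.map(d => ({
    label: d.label,
    area: (d.value / total) * width * height,
  }));
}

const areas = proportionalAreas(
  [
    { label: 'Electronics', value: 50 },
    { label: 'Fashion', value: 30 },
    { label: 'Home', value: 20 },
  ],
  100,
  100
);
console.log(areas[0].area); // 5000, i.e. 50% of the 100x100 rectangle
```

The squarified step then turns each area into a rectangle whose aspect ratio is as close to 1 as possible.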
https://medium.com/foxintelligence-inside/building-a-treemap-with-javascript-4d789ad43a85
['Clément Bataille']
2020-04-17 19:25:01.252000+00:00
['Data Visualization', 'Software Engineering', 'JavaScript', 'Vuejs', 'Technology']
2,884
Team Zero Weekly Newsletter
At this point in time, although we recognise that some form of node system is highly desired by the community, we will not be including a node structure in this particular fork. Many projects that have implemented such a node feature and system have encountered much detriment from the community and 3rd parties when the execution of said node reward systems is flawed, or perceived to be Ponzi-like. This is something that we at Zero do not take lightly regarding our coin and project’s reputation, and it will be the subject of much further discussion and debate should such a feature be implemented. Please be clear when reading this, though, that we are not ruling out some form of node system being implemented at some point in future. We just need to be sure that if a node system were implemented, it would contribute to the short, mid, and long term goals of the coin / project. As always, thank you all for your continued support on Zero’s path to steady, organic growth, and we hope you enjoy the products being delivered that can attest to Zero’s current and future status in this realm. (Note for newcomers — please bear in mind that at this point in time Zero relies solely on donations from the community as there was no pre-mine or ICO with this coin originally. We are working to implement a dev fee, so please read further and watch this space. Every person involved with Zero — inclusive of the dev team — is volunteering their time and efforts to the project) For anyone interested in investing — please do your own research prior to committing to any project, and we ask you to please not blindly commit to a project because someone else told you to.
https://medium.com/zerocurrency/team-zero-weekly-newsletter-aaa099815264
['Zero Currency']
2018-06-07 16:15:23.347000+00:00
['Technology', 'Internet', 'Computer Science', 'Blockchain', 'Bitcoin']
2,885
Can robots overtake humans?
“Technology is nothing. What’s important is that you have a faith in people, that they’re basically good and smart, and if you give them tools, they’ll do wonderful things with them.” by Steve Jobs The idea of machines overtaking humans is closely related to conscious machines, an idea supported by many advances in Artificial Intelligence (AI). Computers have already succeeded in many fields. For example, sections of mechanical engineering like robotics, automation and sensors use AI, such as the pre-installed voice assistants in phones. In manufacturing engineering, from maintenance to virtual design, AI enables modern, advanced and custom products. Organic latticing tools, CAD tools and generative design are some examples of AI tools that are useful to engineers. In electrical engineering, AI is used for diagnosing electrical machines and drives, synchronizing control over machines and reducing fault rates. Also, in civil engineering, AI is used for designing, planning, construction, management, maintenance, analysis and optimization. Artificial intelligence in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. Specifically, AI is the ability of computer algorithms to approximate conclusions without direct human input. The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Additionally, hospitals are looking to AI software to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs. 
Experts in the field of robotics believe that robots will be much more visible in the future, but — at least over the next two decades — they will be clearly recognizable as machines. This is because there is still a long way to go before robots will be able to match a number of fundamental human skills.
https://medium.com/@sajalbatra/can-robots-overtake-humans-b981cedf366
['Sajal Batra']
2020-11-24 19:23:52.561000+00:00
['Programming', 'Artificial Intelligence', 'Robots', 'Technology']
2,886
How to install Ubuntu 20.04 and dual boot alongside Windows 10
Get your tools Before you can create your installation media, you first need the right tools. The first thing is to download the Ubuntu ISO file that you want to install. An ISO file is simply a file that contains the actual operating system in a compact format. Go to https://ubuntu.com/download#download and click on the 20.04 LTS button. *LTS stands for Long-Term-Support, which simply means that this version gets support for 5 years, unlike the other releases that are only supported for 9 months. Going for LTS is for many people the best option. Download page of Ubuntu 20.04 Your browser might ask you if you want to save the file or open it. I’d recommend saving it. The ISO file is about 2.8 GB in size, so be prepared that it will take a while. This is the perfect time to get some tea. The next tool that you need is software to create the installation media. There are several tools available such as UNetbootin or Rufus. I personally am a big fan of balena Etcher. Website of balena Etcher Etcher is available on Windows, MacOS and Linux and pretty easy to use. After the download has finished, you need to install it. Installation of balena Etcher Getting your system ready After you have installed Etcher and downloaded the ISO file, you can create the installation media. You can either use a CD (yes, those still exist) or simply a USB stick. Using the USB stick is the most convenient as you do not need a CD drive, which is starting to become a rare artifact. Make sure the USB stick is empty as all the data on it will be erased in the next step. When you open Etcher, you have a very simple interface. balena Etcher for creating the installation media First, you select the ISO file that you want to use. You can probably find it in your downloads folder. Just navigate to it and select the ISO file. Then, you need to select the USB stick that you plugged into your computer and that you want to use as installation media. 
Take care that you select the correct one as you don’t want to wipe your internal hard drive or an external drive. The easiest way is to look at the size of the different options and select the one with the memory size that your USB stick has. When the ISO and the USB stick are selected, click on the Flash! button. This will also take a while. Flashing the installation media Getting Windows Ready There is one more step you need to do as preparation before the actual installation. You need to make some free space on your hard drive so your computer knows where to install Ubuntu. For this, you need to open the partitioning software with the name “Create and format hard disk partitions”. You can simply type the word “partition” in the search bar of the start menu. Start menu for partitioning software When starting the partitioning software, a new window called Disk Management should open up. This program should show you all the mass storage devices connected to your computer, internally and externally. In my case, I only have one hard drive because I already removed the USB stick and I only have one small hard drive in my demo setup. Disk management tool Many computers nowadays have one smaller SSD hard drive and a larger HDD hard drive. You will probably want to install Ubuntu on your SSD drive, given that you have enough disk space. For a bare minimum installation, you should have at least 9 GB. If you also want to install some programs and save some files on Ubuntu, then you probably want a little bit more storage for your Ubuntu system. I would recommend at least 36 GB to make sure you don’t run out of disk space when you have all the Ubuntu partitions on one hard drive. You can use less if you want to install Ubuntu on two hard drives: one for the system and one for storing your personal files. More on that later. Right-click the partition that you want to shrink to make free space available. Then, click on the “Shrink Volume…” button in the context menu. 
Selecting the partition to shrink Next, you need to choose how much storage you want to make available. Enter the amount of megabytes you want to have for your Ubuntu installation. Again, I would recommend having at least 36 GB. Click on the “Shrink” button. Selecting the size of the empty partition Creating the new empty partition takes a while and then it should be displayed as Unallocated. New empty partition created After this is done, you can proceed with the installation of Ubuntu. You may now shut down your computer. Installing Ubuntu Now it is time to plug in the flashed USB stick and restart your computer. When the splash screen of your computer’s manufacturer appears, you need to press either F1, F2, F12, ESC or DEL on your keyboard to enter the BIOS menu of your computer. Which key to press depends on the brand of your computer. Your computer might even tell you which key to press. Otherwise, you might need to look it up either in the manual of your computer or online. Once you have entered your BIOS, you need to find the option to change the boot order. The BIOS of each motherboard can be slightly different. You need to navigate through it by yourself to find the option to change the boot order. When you find it, you should place the entry for USB devices in the first position so that when you restart the computer it will automatically boot from the USB stick with the Ubuntu installation media. As an alternative, some computers even allow you to directly choose the boot device to continue. In that case, you don’t need to change the boot order and you can simply select the USB stick to boot the computer. Again, this depends on the manufacturer of the motherboard. Note: If possible, disable the fast-boot option in your BIOS as this can cause trouble when switching from Windows to Linux. The reason is that the fast-boot setting will prevent a full shutdown of the PC and when starting up in a different OS, this could cause some issues. 
Also, the secure-boot option might need to be disabled. If that is the case, Ubuntu will let you know during the installation process. You will need to type a password during the installation process and then again when rebooting. This password will only be asked for once. When your computer is booting from the USB stick, it will show you the Ubuntu logo. It might even perform a system check. Ubuntu boot screen After the file check is complete, your computer will show you the Ubuntu welcome screen where you can choose whether you want to “Try Ubuntu” or “Install Ubuntu”. The first option is suited for testing whether everything works, such as the Wi-Fi drivers or other hardware. You will choose the second option for the installation.
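One optional step before flashing is to verify that the ISO downloaded intact by comparing its SHA-256 checksum against the SHA256SUMS file published alongside the ISO on ubuntu.com. A minimal sketch (the helper name, download path, and ISO filename are assumptions, so adjust them to your setup):

```shell
# Hypothetical helper: print the SHA-256 checksum of the downloaded ISO,
# or a notice if the file is not where we expect it.
check_iso() {
  if [ -f "$1" ]; then
    sha256sum "$1"   # compare this hash against Ubuntu's published SHA256SUMS
  else
    echo "ISO not found: $1"
  fi
}

# Assumed download location and filename:
check_iso "$HOME/Downloads/ubuntu-20.04-desktop-amd64.iso"
```

If the printed hash does not match the published one, re-download the ISO before flashing it.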
https://medium.com/linuxforeveryone/how-to-install-ubuntu-20-04-and-dual-boot-alongside-windows-10-323a85271a73
["Dave'S Roboshack"]
2021-02-16 19:52:09.003000+00:00
['Technology', 'Linux', 'Open Source', 'Ubuntu', 'Windows 10']
2,887
Analyzing The Presidential Debates
Analyzing The Presidential Debates Exploring Sentiments, Key-Phrase-Extraction, and Inferences … image from rev.com 2020 has been one ‘hell-of-a-year’, and we’re at about the eleventh month. It’s that time again for Americans to take to the polls. If you’ve lived long enough, you recognize the patterns… Each opposing political side shades the other, scandals and leaks may pop, shortcomings are magnified, critics make the news, promises are doled out ‘rather-convincingly’ and there’s an overwhelming sense of ‘nationality and togetherness’ touted by both sides… But for the most part, we’re not buying the BS! And often, we simply choose the ‘lesser of the two evils’, because candidly the one is not significantly better than the other. So today, I’m going to analyze the presidential debates of President Trump and Vice-President Biden… Disclaimer: The entire analysis is done by the Author, using scientific methods that do not assume faultlessness. This is a personal project devoid of any political affiliations, sentiments or undertones. The inferences expressed from this scientific process are entirely the Author’s, based on the data. Intro: Trump and Biden faced off twice. The first debate was on September 29, 2020. It was moderated by Chris Wallace of Fox News. The second debate was originally scheduled for October 15th, but was cancelled due to Trump’s bout of COVID19, and held a week later, after his ‘rather-theatrical-and-spectacular-recovery’. This debate was moderated by Kristen Welker of NBC News. Trump offs mask and says don’t be afraid of the virus | image from the new yorker 1. The Data: After watching both debates, as a Data Professional, I got really curious, wondering what I could learn from analyzing the responses of these two Contestants. 
It’s possible I may find something interesting from digging a little deeper into the way they answered questions bordering on the lives of millions of Americans… That was my only motivation ‘Curiosity’, so I set out looking for the data. Luckily I stumbled on rev.com, they had up the entire debates so I employed my data skills, scraped it off the website to a Jupyter notebook. That was the easy part. The hard part was preparing the data for each specific format required by the different libraries and tools for my analysis. First, I imported all the required modules I scraped the website with the method I defined below… A method to scrape live data from a website via the requests and beautiful-soup libraries. 2. Gentlemen You Have Two Minutes: If you watched the first debate, you’d have noticed it was a hard task for Chris to keep both men within the 2-minute limit. Trump made it particularly hard, and quite often, there were exchanges between Trump and Biden. Let’s look at what the data says… Trump dominates the debates… Of the total responses during the first debate, Trump had 56%, while Biden had 44% and it got worse for Joe during the second debate, as Trump dominated the responses further to 60%, leaving 40% to Joe. Trump spoke 314 times in debate one and 193 times in debate two. Biden spoke 250 times in debate one and 131 times in debate two. Note to Self: Trump may not be the brightest, but he sure gets his voice heard… 2. Lexical-Diversity: This simply means the cardinality or variety of words used in a conversation or document. In this case, it checks the number of unique words as a percentage of total words spoken by Trump and Biden. The data shows that Joe Biden is more creative with his words. He’s lexically-richer than Donald Trump, even though he consistently speaks fewer words than Trump. 
Biden speaks 7,936 total words with 2,020 unique words and a lexical-diversity score of 25%. Trump speaks 9,209 total words with 1,894 unique words and a lexical-diversity score of 21%. Note to self: Biden may be few on words, but he’s got a heart of creativity… Biden gets pretty creative with his words, can he match ’em with actions? | image_credit 3. TFIDF: Term-Frequency-Inverse-Document-Frequency is arguably the most popular text processing algorithm. It tells us the importance of certain words to a document in comparison to other documents. Simply put, TF-IDF shows the relative importance of a word or words to a document, given a collection of documents. So, in this case, I chose to lemmatize the words of Trump and Biden, rather than stemming them… def lemmatize_words(word_list): lemma = WordNetLemmatizer() lemmatized = [lemma.lemmatize(i) for i in word_list] return lemmatized Then I tokenize the words, remove punctuation and remove stopwords… Three Methods to tokenize words, remove punctuation and remove stopwords. Then I build a simple TFIDF class to compute the TFIDF scores for both men. A class of TFIDF processing functions So let’s see the words peculiar to Donald Trump using a word-cloud… Donald Trump’s top-10 TFIDF words It’s pretty interesting, or “uninteresting”, that Trump has words like ‘ago’, ‘built’ and ‘Chris’ (the Moderator’s name) in his top-10 TFIDF; as we can see, he made it a hard task for Chris. Others are ‘disaster’, ‘called’, ‘cage’ and ‘nobody’… Let’s see for Joe Biden… Joe Biden’s top-10 TFIDF words With words like ‘create’, ‘federal’, ‘serious’, ‘Americans’, ‘folk’ and ‘situation’, it appears Biden put more effort into his debate than Team-Trump, in terms of structure and theme. 4. Some Questions Asked: image from pixabay We have to commend Chris Wallace and Kristen Welker for being great moderators during the debates. 
In the first debate, Chris asked some interesting questions, some of which bordered on… Supreme Court Obama-Care Economy Race / Justice Law Enforcement Election Integrity COVID And during the second debate, Kristen held it down with questions on… COVID National-Security America / American-Families Minimum-Wage Immigration Race / Black-Lives-Matter Leadership 5. Some Answers and Inferences: image from John-Hain Pixabay In this section, I shall analyze Trump’s and Biden’s responses to questions on three important topics: Jobs, Wages and Taxes Racism The US Economy The analysis for this section is quite interesting, involving a few libraries and tools. For Sentiments-Analysis: AzureML Text-Analytics-Client SDK for python For Key-Phrase Extraction: AzureML Text-Analytics-Client SDK for python For Parts-Of-Speech-Tagging: spaCY For Visualization: Pywaffle, Matplotlib, Seaborn After signing up on the Microsoft azureML portal and obtaining my key and endpoints, I created two methods for sentiments analysis and key-phrase extraction. Sentiments-Analysis Method using AzureML TextAnalyticsClient Key-Phrase extraction Method using the AzureML TextAnalyticsClient Next, I define the method for extracting the Parts-Of-Speech (POS) tags, using the spaCY library. This is really important in understanding how Trump and Biden often construct their sentences. A method to tag parts-of-speech using the spaCY library At this point, I’ve defined my work structure; now I need a couple of helper functions to process the debates into required formats and to find sentences that match my queries. The first helper function is a search function. 
Given query words like 'jobs' and 'wages', it searches through Trump's and Biden's corpora respectively to extract sentences containing these query words… A method to perform keyword search in a corpus and return a list of matching sentences.

The others are a function to convert the sentiments received from the AzureML client into a DataFrame, and another that applies the methods above to a corpus and returns a DataFrame with all sentiments and key-phrases intact, plus a dictionary of overall sentiment scores. It converts the list of dictionaries returned by the sentiment-analysis function to a DataFrame. Method to call the search, sentiment-analysis and key-phrase extraction methods… With just a couple of extra plotting functions, we're good to go!

A. Trump and Biden on Jobs/Wages/Taxes:

Trump responds with 93 sentences with an overall sentiment score of 21% positive, 72% negative and 7% neutral. Biden responds with 127 sentences with an overall sentiment score of 33% positive, 60.3% negative and 6.7% neutral. Double pie-charts for Trump and Biden sentiment analysis on Jobs/Wages/Taxes. In both pie charts above, we can see the huge red portions indicating negative sentiments.

A2: Note that in a debate, negative sentiment should never be taken at face value; it should be explored to understand the context. This can be done by examining the extracted sentences and key-phrases. For example, Biden may start a sentence by criticizing Trump's approach severely in order to buttress his point, but doing so will cause the sentiment-analysis client to record that sentence as overly negative. Negative sentiment may only be taken at face value in a review or feedback session, where negativity may indicate dissatisfaction or unhappy customers.

Given A2 above, Trump's sentiment score is still somewhat unexpected… We would expect him to paint a good picture of the work he's been doing, if he believes he's been doing good work.
I mean, it's expected for Biden to criticize Trump, but since Trump is the sitting President, in charge of the present government, it's expected that his responses would be more positive.

Let's see a word cloud of Trump's key-phrases on Jobs/Wages/Taxes. Donald Trump's word-cloud on Jobs/Wages/Taxes

Trump talks about "country, job, tax, companies, taxes, depression"… Let's see a few of his positive-sentiment responses on Jobs/Taxes/Wages…

Trump talks about helping small business by raising the minimum wage, plus being on the road to success, amongst other things. He also dismisses the question about paying $750 in taxes as untrue, saying he paid millions in taxes. When challenged by Biden for exploiting the tax bill, he claimed the bill was passed by Biden and that it only gave "certain individuals" the privileges of depreciation and tax credits.

And for Trump's negative-sentiment responses… For the negatives, Trump talks about people dying, committing suicide and losing their jobs, saying there is depression, alcohol and drug abuse at a level nobody's seen before, and that's why he wants to open up the schools and the economy.

Let's see the word cloud of Biden's key-phrases on Jobs/Wages/Taxes. Joe Biden's word-cloud on Jobs/Wages/Taxes

Biden talks about "tax, job, people, millions, fact, economy, significant"… Let's see a few of Biden's positive-sentiment replies on Jobs/Taxes/Wages…

Biden talks about creating millions of jobs and investing in 50,000 charging stations on highways so as to own the electric-car market of the future. He talks about taking 4 million existing buildings and 2 million existing homes and retrofitting them so they don't leak as much energy, saving hundreds of millions of barrels of oil in the process and creating millions of jobs…

On Biden's negative-sentiment responses… Here he criticizes the Trump administration, saying the people who have lost their jobs have been those on the front lines.
Also, that Trump has almost half the states in America with a significant increase in COVID deaths, because he rushed to open the economy… Generally, Biden's negative sentiment scores come from his criticism of Trump's administration, which is expected. Trump's negative sentiments are a mix of sour and unfriendly remarks aimed at Biden, Obama and Hillary Clinton… He called Hillary crooked and a disgrace.

Let's see the part-of-speech tags for both Trump and Biden. Bubble-plot of part-of-speech tags for Trump and Biden on Jobs/Wages/Taxes. Bigger bubbles represent the most frequent part-of-speech tags used.

B. Trump and Biden on Racism:

Trump never said the word 'racism' during the debates. He called Biden a racist, though, and said people accuse him (Trump) of being a racist, but they're wrong… Trump responds with 47 sentences with an overall sentiment score of 10% positive, 87% negative and 3% neutral. Biden responds with 89 sentences with an overall sentiment score of 27.5% positive, 67% negative and 5.5% neutral. Double pie-charts for Trump and Biden sentiment analysis on Racism. Trump's sentences again appear overly negative at 87%, while Biden's are negative at 67%.

Let's see a word cloud of Trump's key-phrases used in describing racism. Donald Trump's word-cloud on Racism.

Trump uses terms like 'people, person, horrible, country, china, black, racist, terrible…' For some positive-sentiment responses from Trump… And for some negative-sentiment responses from Trump…

Trump calls Biden a racist, calls Hillary Clinton crooked and says that the first time he heard about Black Lives Matter, they were chanting 'pigs in a blanket' and 'fry them like bacon' at the police, and Trump says, 'that's a horrible thing'… Then Trump goes on to say he's the least racist person in the room and that he's been taking care of Black colleges and universities.

Note to self: Trump finds it hard to address racism constructively.
Often he thinks it's about him; he doesn't realize it's about the entire American system.

Let's see the racism word cloud for Joe Biden… Joe Biden's word-cloud on Racism

Here we have Biden using words like 'people, president, character, racist, racism, suburbs'… to tackle racism. Some of Biden's positive-sentiment responses are… On his positives, Biden talks about how most people don't wanna hurt nobody and how he's going to provide economic opportunities, better education and better health care…

And while whipping up negative sentiments, Biden talks like this… Biden reminds Trump that when George Floyd was killed, Trump asked the military to use tear gas on peaceful protesters at the White House so that he could pose at the church with a Bible. Biden states there's systemic racism in America, calls Trump a racist and reminds him that it's not 1950 no more…

Note to self: As a Black man, I'm happy that Biden openly agrees that there's systemic racism in America… This assertion is the only true route to a solution.

Now, let's see the part-of-speech tags for Trump and Biden on Racism… Bubble-plot of part-of-speech tags for Trump and Biden on Racism

C. Trump and Biden on The US Economy:

Trump responds with 44 sentences with an overall sentiment score of 16% positive, 80% negative and 4% neutral. Biden responds with 56 sentences with an overall sentiment score of 45% positive, 50% negative and 5% neutral. Double pie-charts for Trump and Biden sentiment analysis on The US Economy. And for the third time running, Trump seems overly negative with his responses on the US economy…

Let's see a word cloud of Trump's key-phrases about the economy. Trump's word-cloud on The US Economy

Trump uses terms like 'greatest economy in history, country, china, administration, spike, massive, world…' Let's see some of Trump's responses with positive sentiments. On his positives, Trump says that due to COVID he had to close 'the greatest economy in the history of our country'.
Which, by the way, is being built again and going up fast. He ends by saying they had the lowest unemployment numbers before the pandemic.

Let's see some of Trump's responses with negative sentiments. Trump talks about the negative effect of closing down the economy because of the 'China plague'. He accuses Biden of planning to shut down the economy again. He said that if not for his efforts, there'd be 2.2 million Americans dead from the virus, not the current 220k…

Let's see the word cloud of Biden's key-phrases about the economy. Biden talks about 'economy, jobs, fact, people, energy, covid, number, Putin…'

Let's see some of his remarks with positive sentiments about the economy. Biden talks repeatedly about creating millions of new jobs by making sure the economy is run, moved and motivated by clean energy. He talks specifically about curbing energy leaks and saving millions of barrels of oil, which leads to significant numbers of new jobs.

On Biden's negative-sentiment responses about the economy… From his negative-sentiment responses, Biden talks to the families who've lost loved ones to the pandemic. He challenges Trump that he can't fix the economy unless he first fixes the pandemic. He mentions systemic racism affecting the US economy. He accuses Trump of mismanaging the economy, stating the Obama administration handed him a booming economy which he's blown.

Finally, for this section, let's see the bubble-plot of the part-of-speech tags for Trump and Biden on The US Economy.

6. Bayesian Inference:

So, our task here is to find the conditional probability (P) of Trump and Biden mentioning the words we care most about, given the debates. We will build a Naive-Bayes classifier from scratch and use it to tell the conditional likelihood of Trump and Biden saying the words we care most about.
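Before looking at the article's results, here is how Bayes' rule looks in code, with made-up token lists standing in for the transcripts (my own sketch, not the classifier built in the article):

```python
from collections import Counter

def likelihoods(tokens, vocab):
    """P(word | speaker): relative frequency of each tracked word."""
    counts = Counter(tokens)
    return {w: counts[w] / len(tokens) for w in vocab}

def posterior(word, like_a, like_b, prior_a=0.5, prior_b=0.5):
    """Bayes' rule: P(speaker | word) = P(word | speaker) * P(speaker) / P(word),
    where P(word) is the total probability of the word over both speakers."""
    evidence = like_a[word] * prior_a + like_b[word] * prior_b
    return (like_a[word] * prior_a / evidence,
            like_b[word] * prior_b / evidence)

# toy token lists, not the real debate transcripts
trump_tokens = "jobs jobs economy tax".split()
biden_tokens = "jobs economy economy wage".split()
vocab = ["jobs", "economy"]
p_trump, p_biden = posterior("economy",
                             likelihoods(trump_tokens, vocab),
                             likelihoods(biden_tokens, vocab))
# 'economy' has likelihood 1/4 for Trump and 2/4 for Biden,
# so with equal priors the posteriors come out to 1/3 and 2/3
```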
This simply means that the conditional probability of event A given event B is the conditional probability of event B given event A, multiplied by the marginal probability of event A, all divided by the marginal probability of event B (which is the total probability of event B occurring at all).

First, let's define the prior: this is simply the probability of Trump and Biden participating in the debates. I say it's 50% each.

p_trump_speech = 0.5
p_biden_speech = 0.5

Now, I get a list of some of the words we care about (some may be stemmed):

['job','wage','tax','raci','race','economy','drugs','covid', 'pandemic','vaccine','virus','health','care','dr','doc','citizen', 'america','black','african','white','latin','hispanic','asian', 'minorit','immigra']

Next, I define a function that computes the individual conditional probability of Trump and Biden saying each word, given the debates. It returns a DataFrame with these values intact. So I get the DataFrame, scale it up uniformly by multiplying each value by some factor of 10, then normalize the values, and it looks like this…

Finally, I define a Bayes-inference method for computing the conditional probability of Trump and Biden given these words. I get 46.5% for Trump and 53.5% for Biden. Waffle chart showing the probability of Trump and Biden saying some of the words we care about…

So from these debates, and given the topics we care about, who's more likely to discuss them, and hopefully address them and proffer solutions? Bayes' rule says Biden is more likely, by a margin of 53.5% - 46.5% = 7% in favor of Joe Biden…

This is by no means a prediction of the result of the election nor a means to influence voter decisions; it's just my opinion, inferred solely from the presidential debates. But of course, we know there's more to life, to America, than just two debates. God Bless America, God Bless Africa, God Bless The World… Cheers!!

About Me: Lawrence is a Data Specialist at Tech Layer, passionate about fair and explainable AI and Data Science.
I believe that sharing knowledge and experiences is the best way to learn. I hold both the Data Science Professional and Advanced Data Science Professional certifications from IBM, as well as the IBM Data Science Explainability badge. I have conducted several projects using ML and DL libraries, and I love to code up my own functions as much as possible. Finally, I never stop learning and experimenting, and yes, I have written several highly recommended articles. Feel free to find me on: Github Linkedin Twitter
https://medium.com/towards-artificial-intelligence/analyzing-the-presidential-debates-5aaa7b328452
['Lawrence Alaso Krukrubo']
2020-11-01 20:02:14.569000+00:00
['Artificial Intelligence', 'Machine Learning', 'Programming', 'Technology', 'Data Science']
2,888
Marine Protected Areas Are Getting SMART
By Drew Cronin, Katherine Holmes, and Dayne Buddo March 3, 2019 [NOTE: This is the third in a series of blogs by staff in the WCS Marine Conservation Program in recognition of World Wildlife Day 2019. This story was originally published at Mongabay.] This year, World Wildlife Day will celebrate life in the world’s oceans. It’s a fitting tribute. Oceans cover more than 70 percent of the world’s surface, harbor hundreds of thousands of species, and provide important resources to coastal communities that house more than 35 percent of the global population. Oceans also face significant threats, including overexploitation. Marine Protected Areas (MPAs) are central to the efforts to protect Earth’s seas and the wildlife that call them home. In recent years, there has been a surge in their creation, spurred on by a global goal to secure 10 percent of the world’s seas, and all they provide, by 2020. Peoples, governments, and organizations everywhere have mobilized to make this a reality in their own countries and regions. Oceans cover more than 70 percent of the world’s surface and harbor hundreds of thousands of species, including the impressive whale shark. Photo © Caleb McClennen/WCS. In order for this strategy to succeed, though, new and existing MPAs must be managed effectively. That’s not yet occurring in many cases. Often, small teams of rangers and managers are understaffed, poorly equipped, and don’t have basic information on the damaging activities that are happening. Conservation groups are turning to tech solutions to generate such knowledge and make it accessible to others. The Spatial Monitoring and Reporting Tool (SMART) was developed by the SMART Partnership, a collaboration of nine global conservation organizations, to improve the performance of protected areas, both on land and at sea, and better use limited resources. 
Today, SMART is a holistic protected area management platform, encompassing desktop and online software and mobile data collection, as well as cloud and Internet of things (IoT) connectivity. SMART helps enforcement officers and rangers document where their patrols go, what they see, and how they respond. A stingray just off the coast of Belize, where SMART is being widely used to aid Marine Protected Area management. Photo © Alexander Tewfik/WCS. That data is fed into a central system back at headquarters where it is converted into mapped images to help managers understand where the greatest threats are and how best to plan future patrols. This helps them allocate their time and resources more effectively while also feeding clear results back to the rangers themselves. Its success derives from a bottom-up approach, drawing directly on the needs identified by staff working in the field. All told, SMART makes it possible to collect, store, communicate, and analyze ranger data on illegal activities, biodiversity, enforcement routes, and management actions to better deploy resources and evaluate patrol performance. The tool’s effectiveness has made it the global leader for protected area management solutions — it’s now used at more than 600 sites across 55 countries and it has been adopted as the official enforcement tool in 12 countries. Momentum and interest in SMART marine applications have been growing lately. Currently, more than 45 marine sites have implemented it. Belize is leading that charge by utilizing SMART throughout its MPA system. There, it has reduced the number of fisheries infractions by 85 percent. In fact, throughout Central America and the Caribbean, there are now seven countries using SMART in the marine realm. Jamaica is the latest. In response to significant pressures, the country had established a network of 17 Special Fishery Conservation Areas, which are each co-managed by the government and a Local Marine Management Agency. 
Belize, where this Hawksbill sea turtle was spotted, is leading the charge by utilizing SMART throughout its MPA system. Photo © Alexander Tewfik/WCS. These MPAs, where no fishing is allowed, were set up to protect and enhance near-shore marine environments and regulate destructive fishing practices. However, management effectiveness in the areas varies widely, due in part to challenges in implementing cost-effective patrols and reporting, and creating management approaches that are able to adapt to changing conditions and realities. To improve management, the Alligator Head Foundation, in collaboration with the Wildlife Conservation Society’s Marine Program and the SMART Partnership, initiated a pilot project at the East Portland Fish Sanctuary. It aims to build a foundation for national SMART implementation across Jamaica’s entire network. The project will also position Jamaica as a regional hub for knowledge exchange and build a foundation for SMART marine implementation in the Caribbean. As part of the next iteration of SMART, we’re developing additions that will help with marine implementation globally, including a cost-effective solution for integrating vessel monitoring system (VMS) data. We are also developing a predictive patrol planning function that uses machine learning, artificial intelligence, and spatial data to predict hotspots of illegal fishing and generate suggested patrol routes. Rangers patrol Payne’s Creek National Park in Belize. The adoption of SMART there has led to an 85 percent drop in fisheries infractions. Photo © WCS Belize. Predictive patrol planning test cases have already been developed and tested in Malaysia and Uganda, but these models have yet to be applied to fisheries where there is a significant need to maximize enforcement efficiency due to the difficulties associated with patrolling wide expanses of open ocean. 
By providing MPA enforcement officials with better information, such as predictions on where fishing aggregations might be found, we may increase the cost-effectiveness of fisheries management and reduce incidences of illegal fishing. As a result of all this positive momentum, we are seeing countries across the Caribbean and beyond begin SMART implementation. This embrace by the user community is exciting, but not totally new. We’ve seen that once SMART technology is implemented successfully, other practitioners in similar niches quickly recognize its potential at their sites, too. We’re excited for what the future of SMART marine holds. Through the development of innovative solutions, and the support and enthusiasm of the global marine community, we strive to continually improve marine conservation outcomes into the future. Drew T. Cronin is the SMART Partnership Program Manager, based at WCS; Katherine Holmes is the Associate Director of the WCS Global Marine Program; Dayne Buddo is the Chief Executive Officer of the Alligator Head Foundation.
https://medium.com/wcs-marine-conservation-program/marine-protected-areas-are-getting-smart-55d022c2f985
['Wildlife Conservation Society']
2019-03-14 17:37:21.212000+00:00
['Technology', 'Oceans', 'Wildlife', 'Environment', 'Conservation']
2,889
Hosting A Modded Minecraft 1.16.4 Server on a Raspberry Pi
Hosting A Modded Minecraft 1.16.4 Server on a Raspberry Pi Recently, some friends and I wanted to check out the recent major updates made to Minecraft. There’s something especially inviting about diving into a new world, exploring the infinite landscape and building huge castles with friends that has captured Minecraft players for years. What’s not particularly exciting is figuring out the complicated server hosting options. There are a plethora of options available that require a monthly fee. Some of them even allow you to use mods, but the configuration options can be limited and the fees can get expensive quickly. Instead, I decided to repurpose a Raspberry Pi 4b into a modded Minecraft server hosted on my local network. Now I can dive into the world made by my friends and me anytime, use exactly the mods we want to use and, best of all, not pay any monthly service fees. If you haven’t followed along with Part One, make sure you check it out first! You’ll want to get Forge up and running on single player before tackling this. You’ll need a couple of things to get started. Heads up! Some of these links are connected to my Amazon Affiliates account. Hardware Required: Raspberry Pi model 4b 8GB version. The Pi will need all the RAM it can get, so no skimping out for the 4GB version. SD card with decent read/write speeds. This 32GB SanDisk Extreme Pro worked well. Raspberry Pi case with active cooling. I used this case by Miuzei. It’s simple to set up and includes a fan to keep the Pi nice and cool during heavy loads. Cat6 Ethernet cable to connect the Pi to your WiFi router. Software Required: We’ll use the same Forge installer we used in Part One. If you haven’t downloaded it yet, do so here. The version of Forge must match your version of Minecraft. The most recent version as of this article is 1.16.4. Grab the file labeled ‘installer’ under the ‘Download Recommended’ banner. The exact same mods you installed in Part One. 
Remember, all players on the server must have a mod list matching the server's. If you haven't downloaded any yet, there are a ton of community-made mods for Forge. For now, download the mods you want and keep the zip files someplace you can find them later. The game version should match your version of Minecraft Java Edition, which should also match your version of Forge.

A note about mods: some mods, like the very popular Optifine, are client-side only. That means we won't need to put them on the server, and players are free to use them as they please. All other mods must be installed on the server and the client, so all players must have a matching mod list! If a mod doesn't specifically say it's client-side only, assume it's for both. Don't forget to go through all the steps in Part One before moving on!

Setting Up the Server

Now that you've got single player working with your mods, it's time to put a server on your Raspberry Pi so your friends can join you! The first step is to image your SD card with a copy of Raspberry Pi OS (Raspbian) 64-bit. We want the 64-bit version because we'll be running Minecraft with as much memory as possible. You can download it here. We'll also need to use the Raspberry Pi Imager to image our SD card with the OS. Select the 'Use Custom' option at the bottom and choose the .img file of the 64-bit OS.

Once the SD card is imaged, open it in Finder / File Explorer. The drive should be called BOOT. We're going to add a single empty file here simply called ssh, with no extension. Just make the file and drop it into BOOT. This is going to allow us to access our Raspberry Pi remotely. Safely remove the SD card and place it in the Raspberry Pi. Connect the Pi to your WiFi router or modem with the Cat6 cable and connect it to power with a USB-C cable. The Pi should light up and, if you installed it in the active-cooling case, the fan should kick on.
Connecting to the Pi Remotely

Assuming the Pi booted properly, it should now be accessible on your local network. We need to find its local IP address. To do this, you'll need to access your router's maintenance page. Typically, you can do so by navigating to 192.168.0.1 in your browser. If that doesn't work, you'll have to research your particular router's brand and model to find out how to access its web page. Once you're in your router's menu, find where the devices on your network are listed, called 'Device Table' or something similar. You should see a list of all connected devices, including one called raspberrypi. Somewhere next to the device name is the local IP address, likely 192.168.0.XX, where XX is some number between 1 and 255.

For Windows 10: recent builds include a native OpenSSH client (and scp) that you can run from PowerShell; on older builds, free tools like graSSHopper make it pretty easy. We'll need to be able to transfer files and issue commands on the Pi's CLI. Once you've got that figured out, come back!

For MacOS: Macs have native ssh support right from the terminal. Open your terminal from the Launchpad or by going to Applications > Utilities > Terminal. In your terminal, type ssh pi@<pi's IP address>. If all goes well, you'll probably get a warning message about storing a certificate. Go ahead and agree. You should be prompted for the Pi's password. By default, the password is raspberry.

Installing the Server and Adding Mods

Once you're in, you should be greeted by this command prompt in the terminal: pi@raspberrypi:~ $

Make a new folder by typing mkdir forge_server and pressing return. Navigate into that folder by typing cd forge_server. This is where we'll keep our server files. We need to transfer the Forge .jar file we used to install Forge on the client earlier. Doing so isn't quite as easy as dragging and dropping, but it's pretty close. On Windows, use whichever SSH client you downloaded for this.
On Mac, we can use the terminal. Open a new terminal tab by pressing CMD + T. Navigate to wherever you have the server file. If it's on your desktop, type cd ~/Desktop. We'll use scp to transfer the server file to our Pi. Before transferring, let's rename the server file to something easy, like forgeServerInstaller.jar. From the new terminal window, type the following command to move the server file:

scp ./<name of serverfile> pi@<Pi's IP>:/home/pi/forge_server

The scp command for me was this:

scp ./forgeServerInstaller.jar [email protected]:/home/pi/forge_server

You should be prompted for the Pi's password, then treated to a percentage bar as the file transfers. Go back to the Pi's terminal window and enter ls. You should see the server file there.

At this point, we need to get Java installed on the Pi. In the Pi's terminal window, type these two commands:

sudo apt update
sudo apt install default-jdk

The first will update the Pi's package lists, the second will install OpenJDK. Once it's finished, type java --version; you should see something like this.

Ok! It's finally time to install the server. Run the following command:

java -jar forgeServerInstaller.jar --installServer

You should be treated to a somewhat lengthy install process. Once complete, enter ls again to see the generated files. You won't have all of these; that's ok. The most important things here are eula.txt, forge-1.16.4.jar, server.properties, world, and mods.

You'll have to edit the eula.txt file before we continue. Enter sudo nano eula.txt. It should open in an editable interface. Simply change false to true. Press ctrl + x to close, agreeing to save in the process. server.properties is where you can edit your server rules. Feel free to edit it the same way, or leave it all on default. world is the folder that acts as your world's save file.
You can move this folder between servers if you want to save the landscape and all the buildings that you've made with your friends, or periodically back it up with scp to avoid losing progress. mods may not exist. If it does, we're actually going to remove it to make things easier. If you have a mods folder, remove it by entering rm -rf mods. This is where we'll put all of our downloaded mod .jar files. In fact, let's do that now. Go back to your other terminal window (the one pointed at your desktop). Place all of your downloaded mods in a folder called mods on your desktop. In the terminal, enter this:

scp -r ./mods pi@<Pi IP address>:/home/pi/forge_server

After entering the Pi's password and waiting, the mods folder should be on the server. Awesome! Our mods are installed. Finally, forge-1.16.4.jar is the file we'll use to start the server. I like to copy this file with a new name to make things easier:

cp ./forge-1.16.4.jar ./forge.jar

This will make a new file called forge.jar that we can use instead. Totally optional.

Starting the Server

Here's the big moment. Run this command to start the server:

java -Xmx7G -jar forge.jar

What are we actually doing here? We're using Java to run the .jar file like an executable, but there's an extra flag in there too, -Xmx7G. We're explicitly allowing Java to use as much RAM as it needs, up to 7GB. It's also why we needed the 64-bit version of Raspbian, since the 32-bit version won't allow this memory allocation. The Pi we're using has 8GB of RAM, so we're reserving 1GB for the OS. If your Pi is the 4GB model, adjust accordingly, but I highly recommend upgrading for this project. Cross your fingers and watch the terminal. You should see some messages roll past about the mods you're using, a progress bar as the world is created, then finally a blank line with the occasional message. Yes! It worked! If it didn't, you've got some troubleshooting to do.
Fortunately, the error messages are well documented and you'll find plenty of help online by copy/pasting them into Google.

Playing on the Server

At this point, you should be able to join a game on your server. Launch Forge from your Minecraft launcher. Remember that the mods you're using need to exactly match those on the server, with the exception of client-side-only mods. Select multiplayer, then direct connection. Type the local IP address of your Pi into the field and press Join Server. If you're able to connect, you should see a message in the terminal saying 'Player <player name> connected'. That's it! You're now playing on a server you created and are hosting on a Raspberry Pi! Pretty cool.

There are two more things we need to do to get your friends online, though. First, you'll need to establish port forwarding in your router's configuration. This will expose the Pi to the internet so people outside your local network can access it. WARNING: port forwarding can be dangerous. Don't ever give this IP address to people you don't trust. How to do this exactly will depend on your router, but open up the settings page the same way you did before. For my router, the configuration is under Advanced > Security > Port Forwarding. Set up port forwarding for your Pi's IP address; the port number is 25565.

Once port forwarding is established, we need the public IP address of your network. The easiest way to get it is by going to the Pi's terminal window, killing the server by typing ctrl + c, then entering the following:

curl icanhazip.com

You should get back the public IP address, which your friends can use to access the server from outside your network. The last step is to run the server as a background process. Right now, if you exit the ssh session on the Pi, the server will stop running. To run it as a background process, start the server like this instead:

nohup java -Xmx7G -jar forge.jar &

If it goes well, you won't see anything happening.
So how can you verify that the server is running? Use the top command to see a list of running processes. You should see one labeled java using a fair amount of resources. If you want to kill the server, use top to find the PID of the java process running the server. It should be a three-to-five-digit number. Once you've found it, exit top using ctrl + c, then enter kill <PID>. This will kill the server process. Run the server again, then enter exit to leave the ssh session. Your Pi will continue to happily run the server.
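One caveat with nohup: the server won't come back after a crash or a reboot of the Pi. A cleaner alternative (not covered in the walkthrough above) is a systemd service. A sketch of a unit file, assuming the /home/pi/forge_server directory and forge.jar name used above:

```ini
# /etc/systemd/system/minecraft.service
# enable and start with: sudo systemctl enable --now minecraft
[Unit]
Description=Forge Minecraft server
After=network.target

[Service]
User=pi
WorkingDirectory=/home/pi/forge_server
ExecStart=/usr/bin/java -Xmx7G -jar forge.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With this in place, sudo systemctl stop minecraft replaces hunting for the PID in top, and the server restarts automatically after crashes and reboots.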
https://medium.com/@curtmorgan3/hosting-a-modded-minecraft-1-16-4-server-on-a-raspberry-pi-45dec2fd14c6
['Curt Morgan']
2020-12-01 19:04:55.351000+00:00
['Raspberry Pi', 'Technology', 'Minecraft', 'Programming', 'Servers']
2,890
Stock Performance Analysis using Financial Functions for Python
Putting it to use In our framework, we’ll use ffn for all data retrieval and manipulation, as well as matplotlib for the graphical approach. %matplotlib inline is a ‘magic function’ in IPython, called with a command-line-style syntax. With this back-end, the output of plotting commands is displayed inline within front-ends like the Jupyter notebook, directly below the code cell that produced it. The resulting plots will then also be stored in the notebook document. Analogously, %pylab inline can be used to control the default size of figures in the notebook. We’ll start using ffn by importing data for analysis. By default, the ffn.get function uses Yahoo Finance to gather historical data on securities and futures negotiated in capital markets all over the world. We will restrict our analysis to four “Big Tech” companies, commonly called “GAFA”: Google, Apple, Facebook and Amazon. This should give us a data frame containing a ‘date’ indexed column, already in datetime format, along with four other columns, each providing the adjusted closing price of the chosen stocks. As a start, a simple line chart can tell us how all these securities performed over the period. Note: If you’re interested in an interactive chart approach, check out my earlier post. The .rebase() function allows us to plot the time series from a common starting point, so that we can see how each stock performed relative to the others. Our next step should focus on the returns of each security. For frequency distributions, histograms are powerful tools for distinguishing the shape, central points, variation, amplitude and symmetry of the returns. Since we’re looking at four companies from the technology sector, they naturally follow a similar return pattern.
This may be verified by finding the correlation between them, which can be observed by plotting a heat map, as follows: Now, for ffn’s main performance measurement, we can calculate and display a wide array of statistics using the calc_stats() method. Although interpreting the results requires a bit of statistical knowledge, this large set of metrics can provide us with simple conclusions, such as: Amazon stocks had the best performance among the Big Techs analysed, since, over the given period of the time series (nearly 65 months), the company provided a 671.14% return to its shareholders. Apple presented a 182.03% return, followed by Google and Facebook’s almost identical return rates of 139.95% and 139.85%, respectively. From there, we can also access the underlying performance stats for each series, including monthly returns, drawdown charts and more. Conclusion Financial Functions for Python provides a huge set of useful information regarding stock performance in a very simple and straightforward way. Combining it with matplotlib, we can intuitively detect stock trends or patterns, as well as technically interpret statistical insights to tell an enlightening story.
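The .rebase() step described above normalizes each price series to a common starting value (100 by default in ffn), so that stocks trading at very different price levels become directly comparable. A minimal plain-Python sketch of that idea (the prices below are illustrative, not real market data):

```python
def rebase(prices, base=100.0):
    """Rescale a price series so it starts at `base`.

    This mirrors the idea behind ffn's .rebase(): once every series
    starts from the same value, relative performance is comparable.
    """
    first = prices[0]
    return [p * base / first for p in prices]

# Two stocks trading at very different price levels...
goog = [1100.0, 1155.0, 1210.0]
aapl = [300.0, 330.0, 315.0]

# ...become directly comparable once rebased to a common start of 100.
print(rebase(goog))  # [100.0, 105.0, 110.0]
print(rebase(aapl))  # [100.0, 110.0, 105.0]
```

Plotting the rebased series instead of raw prices is what lets a single line chart show each stock's relative performance over the period.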
https://medium.com/swlh/stock-performance-analysis-using-financial-functions-for-python-f272bb624ac4
['Caio Milani']
2020-04-28 18:17:37.253000+00:00
['Programming', 'Technology', 'Finance', 'Business', 'Data Science']
2,891
What You Need to Know About Baidu’s Apollo Go Robotaxi in Beijing
Two weeks ago, we were thrilled to announce a huge step forward in the large-scale application of autonomous driving: On October 10, 2020, Baidu fully opened its Apollo Go Robotaxi service to the public in Beijing, free of charge and with no reservation necessary. After two weeks of widespread use throughout Beijing, here’s what you need to know: How can I experience Baidu’s Apollo Go Robotaxi service in Beijing? You can now hail a robotaxi in Beijing with one tap in Baidu Maps, Apollo GO (a standalone ride-hailing mobile app), or the Apollo GO mini program on Baidu App. In compliance with local regulations, our robotaxi can accommodate up to two passengers at a time between the ages of 18 and 60, from 10 a.m. to 4 p.m. daily. Baidu Maps What’s been the response from the public? We’ve seen huge traction among local residents for the service in Beijing, who are lining up to experience our robotaxis. We’ve received more than 2,600 ride requests in a single day. How large is the service area? Our Apollo Go Robotaxi service operates within a 700-kilometer area (currently the largest autonomous driving test area and longest road network in China). While we currently have 14 pick-up and drop-off stations approved and opened to the public, we’ll expand to nearly 100 pick-up and drop-off stations covering residential and business areas in Yizhuang, Haidian, and Shunyi districts in the near future. How many vehicles are part of the fleet? Our Apollo Go Robotaxi service currently consists of 40 Lincoln MKZs. The Lincoln MKZ is our 3rd-generation autonomous vehicle model approved for self-driving testing licenses in Beijing. Under the city’s “one-vehicle, one-license” policy, we continue to use Lincoln MKZs for the Beijing trial operation. 
However, we’re currently in discussion with local authorities to bring our newest-generation pre-installed FAW Hongqi EVs to the fleet in Beijing, which overcome the Lincoln MKZ’s capacity limitations in terms of software-hardware integration. FAW Hongqi EV Are there any other cities where Apollo Go Robotaxis are fully available? In addition to Beijing, our Apollo Go Robotaxi service is open to the public in Changsha, Hunan province, and Cangzhou, Hebei province. Why is the safety driver still sitting behind the wheel? Beijing hasn’t approved fully driverless testing without human safety drivers on public roads. When will Baidu Apollo start driverless trials? On September 15, 2020, Baidu Apollo was granted licenses for driverless road tests in Changsha, Hunan province. We are now testing robotaxis equipped with 5G-enabled teleoperation, or 5G Remote Driving, in the city. In September, we demonstrated fully automated driving in Beijing’s Shougang Park and showcased 5G Remote Driving, which can engage instantaneously to provide immediate assistance from remote human operators when the user or the system switches to parallel driving mode. 5G Remote Driving What’s significant about the Apollo Go Robotaxi service launch in Beijing? The launch of our Apollo Go Robotaxi service in Beijing, the capital of China, marks a new stage of autonomous driving development in the country. Beijing is the first city in China to regulate and open autonomous driving road test zones. The city has comprehensive infrastructure and policies to foster high-speed development of the industry. In 2019, Beijing ranked first in China for the number of test licenses and vehicle categories, as well as the diversity of test scenarios. In addition, Beijing has issued the most stringent safety requirements for manned autonomous driving tests in China to ensure the safety and reliability of the industry. What’s the prerequisite for operating robotaxis in Beijing?
All autonomous vehicles must be approved for self-driving licenses prior to road tests. In July 2019, we received T4 licenses, the highest-level self-driving license ever issued in China, which permits road testing under complex urban road conditions. Baidu is currently the only company that owns T4 licenses. Last December, we secured licenses to test autonomous vehicles carrying passengers on designated roads in Beijing. After eight months of small-scale manned testing, the Apollo Go Robotaxi fleet completed road tests totaling 519,000 kilometers in Beijing and obtained permission to enter the next stage: opening up to the general public.
https://medium.com/apollo-auto/what-you-need-to-know-about-baidus-apollo-go-robotaxi-in-beijing-7766bc71b20b
['Apollo Auto']
2020-10-27 17:05:57.448000+00:00
['Autonomous Cars', 'Baidu', 'Artificial Intelligence', 'Technology', 'Beijing']
2,892
The Inventor of the World Wide Web Says the Internet Is Broken
In 1989, Sir Tim Berners-Lee proposed the World Wide Web and went on to create Hypertext Markup Language (HTML), the language that powers it. He knew his invention could facilitate the rapid exchange of information globally and felt it would be a powerful force for good. That’s how it was for a while, but it didn’t take long for his invention to be hijacked, as he sees it, by a few dozen corporations. In the intervening years, the number of these corporations dwindled due to amalgamations and takeovers, and those that survived became huge. Berners-Lee points out that between them, these giant corporations dominate today’s internet by providing email, web, social communication, navigational, and other services ostensibly free of charge. Users, however, “pay” by giving these huge companies enormous amounts of personal information. The companies sell restricted access to selected parts of it to marketing organizations, enabling them to target advertisements at those same users. In addition to advertising, the big companies exploit their vast data banks for a myriad of projects. Some of these projects may benefit humanity, but crucially their sole product remains user data. Not only do users have no control over how their data is used, but despite what many people may believe, they don’t even legally own their data. In a March 2020 Guardian newspaper article, columnist Arwa Mahdawi highlights this fact by recounting her own experience with Yahoo. She reports how she lost access to all her old emails when the company deleted her account because she had not used it for some time. The company, she says, did not warn her before doing this, even though she had supplied them with a Gmail address for just such an eventuality. Berners-Lee is one of a number of internet scientists, entrepreneurs, and investors who aim to prevent such things happening in the future by giving control and ownership of data back to the individual. They plan to do this by establishing a decentralized web.
The concept known as the dWeb (which can mean decentralized, or distributed web, or both) allows direct user-to-user communications and eliminates the need for centralized platforms like those currently run by the major internet companies. The dWeb gets its storage space and processing power by tapping into the enormous spare capacity of the millions of internet connected devices worldwide, devices like personal computers, smartphones, and those that form the Internet of Things (IoT). In this new environment, users communicate directly with each other using an assortment of specially designed email, browser, and other applications, which ensure that only the users themselves can access the data related to their interactions. With colleagues at Massachusetts Institute of Technology (MIT), Berners-Lee designed an infrastructure software system called “Solid” to run the new applications. In this new model, data is stored in digital silos called “pods” (Personal Online Data Stores) owned by users. Pods reside either on the person’s own computer or on Solid servers located worldwide and connected via the peer-to-peer distributed network, which operates without a hierarchy or central control. Nobody can access any of this data without its creator’s express permission, and that permission can be revoked at any time. Solid is a response to one of Berners-Lee’s major concerns: that the huge amount of personal data held by the small number of giant internet companies threatens individual security and privacy. Users have no direct control over their data and can never be sure how that data is used or by whom. User data itself is not secure since a company holding it could close down a particular service and delete or render user data inaccessible (as The Guardian article highlights). Data is also vulnerable to serious hacker attacks because so much is centrally held by a small number of operators. 
In addition, user data passed to third parties could exist somewhere indefinitely. So, if users delete data they would prefer no longer existed, they can’t be sure that it actually no longer exists. Berners-Lee is probably the most high-profile individual of the many involved in the dWeb development environment. They all have one common aim: to decentralize the web and enable users to securely operate online without the tech giants that currently dominate the internet tracking their activities and controlling their data. “The goal of the Web is to serve humanity. We build it now so that those who come to it later will be able to create things that we cannot ourselves imagine.” — Tim Berners-Lee
https://medium.com/digital-diplomacy/the-inventor-of-the-world-wide-web-says-the-internet-is-broken-fbce1c8bf6cf
['George J. Ziogas']
2020-11-26 13:11:08.474000+00:00
['Internet', 'Privacy', 'Society', 'Future', 'Technology']
2,893
Vue 3 — Directives. Various things we can do with it.
Photo by Crystal Jo on Unsplash Vue 3 is in beta and it’s subject to change. Vue 3 is the up and coming version of Vue front end framework. It builds on the popularity and ease of use of Vue 2. In this article, we’ll look at how to create more complex directives. Directive Arguments We can get directive arguments by getting the value from the binding.arg property. For instance, we can write: <!DOCTYPE html> <html lang="en"> <head> <title>App</title> <script src="https://unpkg.com/vue@next"></script> </head> <body> <div id="app"> <p v-absolute:[direction]="50">foo</p> </div> <script> const app = Vue.createApp({ data() { return { direction: "right" }; } }); app.directive("absolute", { mounted(el, binding) { el.style.position = "absolute"; const s = binding.arg || "top"; el.style[s] = `${binding.value}px`; } }); app.mount("#app"); </script> </body> </html> We create the absolute directive which has a mounted hook. It takes a binding parameter which has the arg property with the argument value which we passed into the square brackets of the directive. Therefore, it’ll be the direction value. We set the property of the style with the binding.value , which is the value we passed into the directive right of the equal sign. Also, we can make the directive’s value by passing an expression as the value of the directive. 
For instance, we can write: <!DOCTYPE html> <html lang="en"> <head> <title>App</title> <script src="https://unpkg.com/vue@next"></script> </head> <body> <div id="app"> <input type="range" min="0" max="100" v-model="padding" /> <p v-absolute:[direction]="padding">foo</p> </div> <script> const app = Vue.createApp({ data() { return { direction: "left", padding: 0 }; } }); app.directive("absolute", { mounted(el, binding) { el.style.position = "absolute"; const s = binding.arg || "top"; el.style[s] = `${binding.value}px`; }, updated(el, binding) { const s = binding.arg || "top"; el.style[s] = `${binding.value}px`; } }); app.mount("#app"); </script> </body> </html> We have the absolute directive with the updated hook added. The updated hook will pick up any updates of the directive’s value . Therefore, when we move the slider, the ‘foo’ text will move along with it. Function Shorthand We can shorten our directive definition with the function shorthand. If we only have the mounted and updated hooks in our directive, then we can use it. For example, we can write: <!DOCTYPE html> <html lang="en"> <head> <title>App</title> <script src="https://unpkg.com/vue@next"></script> </head> <body> <div id="app"> <input type="range" min="0" max="100" v-model="padding" /> <p v-absolute:[direction]="padding">foo</p> </div> <script> const app = Vue.createApp({ data() { return { direction: "left", padding: 0 }; } }); app.directive("absolute", (el, binding) => { el.style.position = "absolute"; const s = binding.arg || "top"; el.style[s] = `${binding.value}px`; }); app.mount("#app"); </script> </body> </html> We shortened our absolute directive to include a callback instead of an object with the hooks. It does the same things like the one in the previous example since we only have the mounted and update hooks in it. This is a handy shorthand to avoid repeating code. Object Literals If we need multiple values in our directive, we can pass in an object literal to it. 
Then binding.value has the object we pass in. For instance, we can write: <!DOCTYPE html> <html lang="en"> <head> <title>App</title> <script src="https://unpkg.com/vue@next"></script> </head> <body> <div id="app"> <p v-custom-text="{ color: 'green', text: 'hello!' }"></p> </div> <script> const app = Vue.createApp({}); app.directive("custom-text", (el, binding) => { const { color, text } = binding.value; el.style.color = color; el.textContent = text; }); app.mount("#app"); </script> </body> </html> to create a custom-text directive that takes an object. We get the color and text properties of the object from binding.value . Photo by Mark König on Unsplash Conclusion We can create directives easier with some shorthands. Also, directives can take arguments and values. Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel!
https://medium.com/javascript-in-plain-english/vue-3-directives-698b0cd265c8
['John Au-Yeung']
2020-11-15 18:43:08.669000+00:00
['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming']
2,894
Impact On The Financial Sector Of Blockchain Technology
Impact On The Financial Sector Of Blockchain Technology A multitude of use cases across numerous industry segments has now grown out of the blockchain technology that began as the underlying system for Bitcoin trading. One of its main impacts has been felt in the financial sector. Blockchain technology has been publicly adopted by businesses like JP Morgan. The financial sector suffers from data protection problems, slow transactions, a lack of transparency, and other bottlenecks that hamper the growth of businesses that rely on monetary transactions through banks and NBFCs. Blockchain could therefore be a possible solution here. With Blockchain’s involvement, banks and financial institutions can address the drawbacks that hold back their smooth functioning. Some of the biggest developments we have seen in the Blockchain sector are platforms such as Hyperledger Sawtooth, Hyperledger Fabric, Corda, etc. These permissioned Blockchains not only ensure that the system operates efficiently, but also ensure that transactions take place at a faster rate, helping the banking system work far more effectively. How the financial sector is impacted by Blockchain: 1. Providing a safe platform- The need for a secure platform is one of the greatest challenges facing most banking and financial institutions. As most transactions and other work have now been digitized, most banks and allied companies are looking for a stable platform free of mistakes or defects. There is also a sharp rise in the need for a network that can efficiently combat data breaches, and that is where Blockchain comes in. This DLT platform works by time-stamping all information or data recorded on it, which guarantees full security. And with the introduction of permissioned Blockchain networks, the security feature is even more assured. 2.
No third party- Time lag and paperwork are two weaknesses of the financial sector that hold up processes and ultimately affect a company’s efficiency. We can solve these problems with the help of Blockchain technology and thus ensure quicker transactions. Blockchain operates on peer-to-peer transactions, so there is no need to rely on a third party for authentication and approval, which speeds up the transaction process. 3. Tracking and tracing- For banking firms, these features can be highly beneficial. Banks invest a large amount of money in authentication and verification, yet cases of false identity and fraud reports keep growing; with Blockchain, we can easily put an end to this. As data tracking and tracing become simpler and history can be easily traced back, it becomes easier to rely on this platform compared to the traditional technologies that banks use. These are the three big benefits that the banking and financial field can reap from Blockchain. Blockchain developers and Blockchain experts are in high demand because of this, and we will see this number increase in the times to come. Conclusion- The Blockchain Council provides the best online Blockchain certificate program. This detailed curriculum will allow you to absorb all Blockchain-related knowledge while also learning how to incorporate it. So what are you waiting for? Register for a Blockchain certification today.
https://medium.com/@yashasvi-9094/impact-on-the-financial-sector-of-blockchain-technology-1a4cf924a5e1
['Yashasvi Gupta']
2020-12-03 04:25:46.374000+00:00
['Blockchain Experts', 'Blockchainprofessional', 'Blockchain Technology', 'Blockchain Development', 'Blockchain Developer']
2,895
Vuetify — Bottom Sheet. A container to display content.
Photo by bantersnaps on Unsplash Vuetify is a popular UI framework for Vue apps. In this article, we’ll look at how to work with the Vuetify framework. Bottom Nav Bar Scroll Threshold We can set the scroll-threshold of the v-bottom-navigation to show the navbar depending on the threshold. For example, we can write: <template> <v-container> <v-row class="text-center"> <v-col col="12"> <v-card class="overflow-hidden mx-auto" height="200" max-width="500"> <v-bottom-navigation scroll-target="#scroll-area" hide-on-scroll absolute horizontal scroll-threshold="500" > <v-btn text color="deep-purple accent-4"> <span>History</span> <v-icon>mdi-history</v-icon> </v-btn> <v-btn text color="deep-purple accent-4"> <span>Favorites</span> <v-icon>mdi-heart</v-icon> </v-btn> <v-btn text color="deep-purple accent-4"> <span>Map</span> <v-icon>mdi-map-marker</v-icon> </v-btn> </v-bottom-navigation> <v-sheet id="scroll-area" class="overflow-y-auto" max-height="600"> <v-container style="height: 1500px;"></v-container> </v-sheet> </v-card> </v-col> </v-row> </v-container> </template> <script> export default { name: "HelloWorld", data: () => ({ activeBtn: undefined, showNav: false, }), }; </script> We add the scroll-threshold prop to set the number of pixels to scroll down until the navbar is shown. Bottom Sheets The bottom sheet is another container for content. It shows at the bottom of the page. 
We can add one with the v-bottom-sheet component: <template> <v-container> <v-row class="text-center"> <v-col col="12"> <v-card class="overflow-hidden mx-auto" height="200" max-width="500"> <v-bottom-sheet v-model="sheet" persistent> <template v-slot:activator="{ on, attrs }"> <v-btn color="green" dark v-bind="attrs" v-on="on">Open Persistent</v-btn> </template> <v-sheet class="text-center" height="200px"> <v-btn class="mt-6" text color="error" @click="sheet = !sheet">close</v-btn> <div class="py-3">Lorem ipsum</div> </v-sheet> </v-bottom-sheet> </v-card> </v-col> </v-row> </v-container> </template> <script> export default { name: "HelloWorld", data: () => ({ sheet: false, }), }; </script> We add the v-bottom-sheet component to show the sheet content when we click on the Open Persistent button. The Open Persistent button should be in the activator slot to let us toggle the bottom sheet. The v-model sets the open state of the bottom sheet. It’ll open when it’s true . v-model Control We can use v-model to control the bottom sheet. For example, we can write: <template> <v-container> <v-row class="text-center"> <v-col col="12"> <div class="text-center"> <v-btn color="blue" dark @click="sheet = !sheet">Open v-model</v-btn> <v-bottom-sheet v-model="sheet"> <v-sheet class="text-center" height="200px"> <v-btn class="mt-6" text color="red" @click="sheet = !sheet">close</v-btn> <div class="py-3">Lorem ipsum</div> </v-sheet> </v-bottom-sheet> </div> </v-col> </v-row> </v-container> </template> <script> export default { name: "HelloWorld", data: () => ({ sheet: false, }), }; </script> We have the Open v-model button to toggle the bottom sheet. Photo by Rebecca Matthews on Unsplash Conclusion We can add a bottom sheet to display content at the bottom of the page. It can be toggled.
https://medium.com/dev-genius/vuetify-bottom-sheet-8e153108240f
['John Au-Yeung']
2020-11-23 17:16:33.699000+00:00
['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming']
2,896
7 Reasons Why RoR is The Best Web Development Technology In 2020
Ruby on Rails is among the top web development frameworks for creating web applications. Even though Ruby on Rails has reached maturity, developers around the world still prefer it. E-commerce solutions are in high demand in 2020. Due to the pandemic, and for convenience and security, many businesses have switched to e-commerce sites to support their customers. The Ruby on Rails framework is a very stable and efficient platform for creating an e-commerce website. Web development with Ruby on Rails makes sense for eCommerce because it is an inexpensive platform for developing web applications. It also offers much quicker development than other frameworks. You can develop B2B, B2C, and subscription-based e-commerce projects without any problems. SpreeCommerce is a powerful e-commerce platform built on Ruby on Rails. By integrating the two, you can get a high-performance, stable e-commerce application that supports companies of all kinds. Another famous eCommerce site, Shopify, also uses Ruby on Rails to support its clients. Today, building e-commerce solutions with Ruby on Rails is more crucial than ever before. Facing imminent losses, you may want to save money and reach more customers. With a mobile eCommerce app developed with Ruby on Rails, you can reduce your existing operations burden. Here are seven reasons to embrace Ruby on Rails as your eCommerce web development framework in 2020–2021: Suitable for both responsive and modern needs Ruby on Rails can sync incredibly well with HTML, CSS, JavaScript, Ajax, and other web programming and scripting languages. It provides direct access to the code or scripts written using these languages. It helps to improve both the running and the design process in a well-coordinated manner. Developing modern, responsive, and robust applications using the Ruby on Rails framework is, therefore, the preferred route for most developers.
Large Developers’ Community Since Ruby on Rails is an open-source technology, it has a vast developer community. The technology is funded, managed, and supported by a broad group of developers who work hard to keep it up-to-date and bug-free and to provide assistance to developers caught in a pickle. Daily updates and community-shared extensions make it easy for RoR programmers to remain up-to-date and use new code libraries. Bug-Free Web Development Ruby on Rails promotes test-driven and behavior-driven development. As a consequence, the framework decreases the probability of code errors. At the same time, it contains a range of vital resources that provide a variety of useful testing features. Ruby on Rails is also one of the most reliable software development platforms, which developers worldwide trust to quickly and easily build all sorts of high-performance web applications. The technology has had a lasting impact and is likely to remain on the market for years to come. It is versatile, consistent, and efficient, which is why it can be easily extended to a wide variety of projects. It contains several plugins and modules that allow developers to save time and avoid writing repetitive code. The most potent argument that makes Ruby on Rails a future-proof framework is that it is scalable. This works especially well for start-ups that want to start small and expand as the business picks up. As traffic on the app increases, it can be scaled to cope with the rise in the number of users. Ruby on Rails allows a smooth workflow and the potential to make truly unique products. Rapid MVP development Perhaps the best thing about Ruby on Rails e-commerce development is the ability to generate a quick MVP. Ruby on Rails developers can easily create MVPs, allowing start-ups to test the water and check the feasibility of the product. The Ruby on Rails framework follows convention over configuration, which removes the need for unnecessary setup, unlike frameworks such as Django.
MVC and DRY conventions minimize development time and make the development process much more efficient. Even with a small team, you can quickly develop an MVP and make changes as you move along. A Framework That’s Flexible Enough The modular architecture of the Ruby on Rails framework makes it easy for developers to modify code and add plugins. It is a fantastic advantage if you choose to improve the code or add features that need many changes. Ruby on Rails e-commerce apps are modular and can be revised without tearing down the entire coding structure. Ruby on Rails is highly adaptable and allows you to shape the code according to your needs. Even if your MVP is ready, you can make changes without much damage. Summary Despite having been around for a long time, Ruby on Rails has adapted well to the app industry's changing environment. It is still considered the best development platform in 2020 for its numerous unique features. Try getting the assistance of skilled and experienced professionals so you can reach your e-commerce objectives with Ruby on Rails' help.
https://meetadeveloper.com/7-reasons-why-ror-is-the-best-web-development-technology-in-2020-41af9b685d0c
['Vishnu Narayan']
2020-12-15 08:35:26.792000+00:00
['Web Development Company', 'Website', 'Technology', 'Web Development', 'Ruby on Rails']
2,897
There is no such thing as bad technology
By Ete Davies, AnalogFolk London’s Managing Director With all the hype around cryptocurrency, growth accelerators, start-ups, entrepreneurs, venture capitalists and big tech giants looking for their next “advantage-gaining innovation”, I went to Web Summit 2018 expecting a festival of technology, opportunism and overt capitalism. Instead, I have been surprised by how much of the conversation has centred on humanity and “growing by doing good”. In 10 years, Web Summit has evolved from a tech start-up conference into a space for engaging in debates sparked by the role of technology in the wider world. This year, responsibility and accountability are the main topics of discussion, amid all the pitching, networking and wheeler-dealing. The event kicked off with world wide web creator Sir Tim Berners-Lee reminding us that the web was constructed to be a universal platform for all and he believed that if you “connect humanity with technology, great things will happen”. However, the outcome has been bittersweet. While still proud of his creation, Berners-Lee made it clear that he has been disappointed with how some humans have chosen to use it. Many speakers and panels explored the overriding question: “What’s the role of human agency — are we outsourcing our morals, ethics, democracy and free will?” Increased regulation In “Nurturing a digital future that is safe and beneficial for all”, United Nations secretary-general António Guterres led the call for greater government or cross-industry regulation of technology and, specifically, the web “to be essentially a force for good” — a notion urgently driven by the spread of fake news, hate speech, data misuse and invasions of privacy. For those of us working in the brand space, it’s clear that GDPR was just the beginning of the movement towards increased regulation, but we can expect much more in 2019 as governments, tech companies and consumers start to align. 
Mozilla executive chairperson Mitchell Baker, Guardian Media Group chief executive David Pemsel and European Commissioner for Justice Vera Jourová were among the many voices lamenting the loss of critical thinking. Social media platforms came under the most criticism, due to their incompetence in dealing with hate speech and questionable business practices (such as Cambridge Analytica). While social media platforms have not broken any laws, the question of the moral responsibility of these platforms as mass-media outlets is one they can no longer ignore. Consumer distrust Data, artificial intelligence and privacy were, of course, also on the agenda. The conversations centred on addressing growing consumer distrust about how companies are collecting and using, or misusing, our data. Jourová told the summit that “it is time to address non-transparent political advertising and the misuse of people’s personal data”. In “Winning trust: the delicate balance between technology and human emotion”, Zander Lurie, chief executive of SurveyMonkey, pointed out that trust is a concept ingrained in humanity that affects every decision we make, built through repeated useful value exchanges and respectful interactions. All human bonds are created this way, so if we’re asking data and AI to help build meaningful bonds between brands and consumers, then how we approach these relationships should be no different. Google EMEA chief Matt Brittin and Samsung president Young Sohn were among many speakers reiterating the concept of “technology in service of humans”. While it’s humans who buy products and who (for the moment) communicate and trade with each other, technology will exist to service those things. While I agree with the sentiment, I couldn’t help feeling that the elephant in the room was the question of which humans the technology is serving. Although technology may not be inherently good or bad, humans can be both and everything in-between.
We use technology to service both the good and bad in us, and there’s business to be made from either. But there is a choice to be made. If you create, provide and potentially profit from innocently intended technology that has been put to alarming use, what — if anything — are you going to do about it? That’s the question tech companies are seeking to answer at Web Summit and beyond.
https://medium.com/analogfolk/there-is-no-such-thing-as-bad-technology-4f50ed9f5032
[]
2019-04-10 13:05:37.252000+00:00
['Insights', 'Technology', 'Innovation']
2,898
The One Thing We Can All Agree on: Importance of Money
Few would contest the statement that money has been one of the most important catalysts in the evolution of civilizations. From cattle and sheep to wampum beads, to precious metals and eventually fiat, societies around the globe have always sought to create an effective medium of exchange in order to boost business. A perfect illustration of the importance of money can be seen in Saifedean Ammous’s recent The Bitcoin Standard, or Yuval Noah Harari’s bestselling Sapiens, which explores money as one of the ‘great unifiers,’ alongside religion and empires.

Money as a force that unites is a concept we’ve witnessed for millennia. Indeed, even during the times of the Crusades, both Muslims and Christians happily used the opposing side’s coins, despite these being adorned with the prophets of their enemies — gold was widely accepted due to its relative scarcity and the fact that it was (and still is) impossible to synthesize.

There’s a fascinating game theory dynamic at play with money — if a given form of currency is valuable solely to one group, it will, by extension, become valuable to trading partners involved with said group. Consider the case of the Americas: prior to the arrival of the Spaniards, the Native Americans did not perceive gold to have any value. It was only upon understanding the value the resource had in Spain that they changed their tune — the metal rapidly gained utility (and subsequent value) as it allowed them to purchase innumerable products with just small amounts of gold.

It’s easy to draw parallels between traditional conceptions of what makes materials ideal means of exchange and the burgeoning field of digital assets. Constraints such as divisibility, portability, and potential for counterfeiting evaporate as stores of value and mediums of exchange migrate to cyberspace. It would be absurd to overlook these digital assets, or to ignore the role they’ll inevitably play in the constantly evolving markets.
One of the most interesting properties we’re focused on at OpenFinance Network is liquidity — that is, ensuring that there are always participants available to buy or sell an asset, making it easily accessible on the market. The example of gold is an apt one, given that it is recognized around the world and has value to all, making it easy to exchange for goods or services.

The big challenge with liquidity is finding a suitable pool of buyers and sellers. Blockchain technology allows for the seamless and trustless transfer of digital assets around the world in a matter of seconds, and could ensure that this pool extends to 7 billion-plus people. That’s exactly what we aim to facilitate with the OpenFinance Network.

The emerging space lacks the refined UX/UI and services needed to make digital securities available to the more technophobic. There’s still a degree of familiarity required to transact “on-chain,” which may deter many from getting involved. We want to put an end to this, which is why we’re building an easy-to-use, intuitive platform for the trading of security tokens.

If money truly is one of the great unifiers, we need to ensure that it’s fit for purpose, particularly as its next iteration unfolds in cyberspace. New forms of money need to be understood, appreciated and subsequently made accessible to everyone. It’s time to get started.

As always, if you’re not already, join the conversation on Telegram and follow us on Twitter to stay up to date on all the latest developments with OFN.

###

Juan M. Hernandez is the Founder and CEO of OpenFinance Network, the trading platform for security tokens and other alternative assets. Juan is a serial entrepreneur, technologist, and polymath experienced in financial markets, exchanges, and blockchain technology. He holds a CS degree from Northwestern University and an MBA from the Kellogg Graduate School of Management.
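The liquidity challenge above, pairing willing buyers with willing sellers at agreeable prices, can be made concrete with a toy price-priority matching loop. This is a hypothetical sketch for illustration only (the `Order` type and `match` function are invented here), not a description of OpenFinance Network's actual matching engine.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price per token
    qty: int      # number of tokens

def match(buys, sells):
    """Cross resting orders; return executed trades as (price, qty) pairs."""
    # Price priority: highest bids meet lowest asks first.
    buys = sorted(buys, key=lambda o: -o.price)
    sells = sorted(sells, key=lambda o: o.price)
    trades = []
    # A trade happens only while the best bid covers the best ask;
    # with few participants this loop ends quickly -- that is illiquidity.
    while buys and sells and buys[0].price >= sells[0].price:
        qty = min(buys[0].qty, sells[0].qty)
        trades.append((sells[0].price, qty))  # execute at the ask
        buys[0].qty -= qty
        sells[0].qty -= qty
        if buys[0].qty == 0:
            buys.pop(0)
        if sells[0].qty == 0:
            sells.pop(0)
    return trades

trades = match(
    [Order("buy", 10.50, 100), Order("buy", 10.00, 50)],
    [Order("sell", 10.25, 80), Order("sell", 10.75, 200)],
)
print(trades)  # [(10.25, 80)]
```

The wider the pool of resting orders on both sides, the more often the bid/ask condition holds, which is exactly why extending that pool to billions of participants matters.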
https://medium.com/openfinance/the-one-thing-we-can-all-agree-on-importance-of-money-dafa5c31666e
['Juan M. Hernandez']
2018-10-15 16:04:33.424000+00:00
['Blockchain', 'Ethereum', 'Money', 'Currency Trading', 'Blockchain Technology']
2,899
Eureka Whirlwind Bagless Canister Vacuum Cleaner, Lightweight Vac for Carpets and Hard Floors, w/Filter, Blue
Product description:

Innovative multi-surface vacuum: Deep clean with the Whirlwind canister vacuum. This vacuum features an integrated airflow control on the handle that can be easily switched at your fingertips, with three settings: carpet, upholstery and hard floors.

2.5L dust container, no maintenance costs: No bags or replacement filters required. The Whirlwind has a bagless design and uses washable filters; rinse the filters as needed and you're ready to go.

Lightweight and easy to maneuver: Vacuum anywhere around your home with ease. The Eureka Whirlwind weighs less than 8 pounds and can easily maneuver under and around furniture and up stairs thanks to its swivel steering and telescoping metal wand.

2-in-1 integrated crevice tool: A crevice tool is integrated into the hose handle, so it's at your fingertips whenever you need it and will never get lost. Simply disconnect the handle from the hose to switch between the crevice tool and a dusting brush.

Easy to use: Spend more time cleaning and less time struggling with your vacuum cleaner. The Whirlwind has automatic cord rewind, saving you valuable time, and a one-button release for easy dust-cup emptying.

What's the difference between the NEN110A and the NEN110B? The NEN110B includes an extra washable filter.

Capacity: 2.5 liters
https://medium.com/@mughalnabeel049/eureka-whirlwind-bagless-canister-vacuum-cleaner-lightweight-vac-for-carpets-and-hard-floors-c874cf1d934e
['Nabeel Mughal']
2021-11-28 14:04:02.955000+00:00
['Electronics', 'Technology', 'Cleaning', 'Product', 'Vacuum Cleaner']