source (string, 2 classes) | author (string, 0-824 chars, nullable) | title (string, 0-475 chars) | description (string, 0-32.8k chars, nullable) | url (string, 13-713 chars) | urlToImage (string, 0-2k chars, nullable) | publishedAt (string, 20 chars, nullable) | content (string, 0-32.8k chars) | category_nist (string, 5-160 chars) | category (string, 5-239 chars) | id (string, 6-7 chars, nullable) | subreddit (string, 2-21 chars, nullable) | score (int64, 0-30.2k, nullable) | num_comments (int64, 0-2.13k, nullable) | created_time (timestamp[ns]) | top_comments (string, 3-32.7k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
news | PathAI Reports that Its ML-based NASH Drug Discovery Tool May Identify Clinical Trial Responders Based on Post-Hoc Analysis of Bristol Myers Squibb's FALCON 1 Study at The Liver Meeting 2021 | BOSTON, Nov. 12, 2021 /PRNewswire/ -- PathAI, a global provider of AI-powered technology applied to pathology, will announce results from a retrospective analysis of liver biopsy specimens from Bristol Myers Squibb's FALCON 1 study, a Phase 2b, randomized, multicenter, placebo-controlled... | https://www.prnewswire.com/news-releases/pathai-reports-that-its-ml-based-nash-drug-discovery-tool-may-identify-clinical-trial-responders-based-on-post-hoc-analysis-of-bristol-myers-squibbs-falcon-1-study-at-the-liver-meeting-2021-301423231.html | 2021-11-12T18:30:00Z | BOSTON, Nov. 12, 2021 /PRNewswire/ -- PathAI, a global provider of AI-powered technology applied to pathology, will announce results from a retrospective analysis of liver biopsy specimens from Bristol Myers Squibb's FALCON 1 study, a Phase 2b, randomized, multicenter, placebo-controlled study assessing the efficacy and safety of pegbelfermin (PGBF) as a treatment for non-alcoholic steatohepatitis (NASH) at The Liver Meeting, November 12-15, 2021 (NCT03486899). This exploratory post hoc analysis compared machine learning (ML)-based quantification of histological features with traditional pathology scoring methods, and the results will be presented in the poster Shevell et al., Comparison of manual vs machine learning approaches to liver biopsy scoring for NASH and fibrosis: a post hoc analysis of the FALCON 1 study. PathAI has developed the AI-based histologic measurement of NASH Drug Development Tool (AIM-NASH DDT), which has been accepted into the FDA Biomarker Qualification Program. The AIM-NASH DDT is intended for use in assessment of endpoints in clinical trials as well as clinical trial enrollment after FDA qualification. AIM-NASH has been trained to detect and quantify the key histological features required to score NASH disease severity using the standard NASH CRN scoring system, and it generates slide-level scores for those features (lobular inflammation, ballooning, steatosis, and fibrosis), mirroring the standard pathology workflow. In this study, biopsy slides, collected from clinical trial participants within 6 months prior to or during the screening period and after 24 weeks of PGBF treatment, were digitized into whole slide images and evaluated using AIM-NASH. The clinical study central pathologist manually scored these same biopsy samples during the study period. The FALCON 1 trial had 197 participants randomized to four arms: placebo, plus three treatment arms of PGBF dosed at 10mg, 20mg, and 40mg. Evaluation of the primary clinical trial endpoint (≥1-stage NASH CRN fibrosis improvement without NASH worsening, or NASH improvement without fibrosis worsening, at 24 weeks) identified a statistically significant proportion of responders in the treatment arms under the AIM-NASH DDT (p=0.013) that was not reported by manual assessment (p=0.148). AIM-NASH-based and manual scores for all CRN components showed distinct trends of improvement in all PGBF arms compared to placebo. The AIM-NASH DDT CRN scoring revealed significant improvements in ballooning (p=0.033) and lobular inflammation (p=0.019) in the treatment arms compared with placebo that were not seen by manual scoring (ballooning p=0.274; lobular inflammation p=0.716). 
Conversely, manual methods showed significant improvements in steatosis for treated patients (p=0.0022) that AIM-NASH did not (p=0.106). Treatment-associated improvements in fibrosis were not seen using either method. Additional assessment by AIM-NASH using a continuous scoring method showed significant differences between placebo and PGBF-treated patients for ballooning (p=0.0014), lobular inflammation (p=0.05), and steatosis (p=0.001). While this study suggests that AIM-NASH-based pathologic assessment of tissue may be more sensitive than manual assessment and may capture changes in histology that could be indicative of drug efficacy, further analyses with larger tissue datasets are required to support these claims. About PathAI: PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit pathai.com. SOURCE: PathAI. Related Links: www.pathAI.com | Prediction/Decision Making | Life, Physical, and Social Science/Healthcare Practitioners and Support | null | null | null | null | null | null |
||
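The responder analysis described in the row above reduces to a two-group comparison of proportions (pooled PGBF arms versus placebo). The press release gives only the p-values (0.013 for AIM-NASH scoring, 0.148 for manual scoring), not the underlying counts or the test used, so the sketch below is illustrative only: the responder counts are hypothetical (chosen merely to sum to the reported 197 participants), and Fisher's exact test is just one plausible choice of test.

```python
# Illustrative only: hypothetical responder counts; the actual FALCON 1
# analysis method and counts are not given in the press release.
from scipy.stats import fisher_exact

# Rows: [responders, non-responders]; one row per group.
hypothetical_table = [
    [40, 108],  # pooled PGBF treatment arms (hypothetical counts, 148 total)
    [8, 41],    # placebo arm (hypothetical counts, 49 total)
]

odds_ratio, p_value = fisher_exact(hypothetical_table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```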
news | Kolawole Samuel Adebayo | Chip developer Cerebras bolsters AI-powered workload capabilities with $250M | The company stated that this capital will let it globally expand its business and deploy its AI-powered CS-2 system to new customers. | https://venturebeat.com/2021/11/10/chip-developer-cerebas-bolsters-ai-powered-workload-capabilities-with-250m/ | 2021-11-10T20:20:01Z | Cerebras Systems, the California-based company that has built a brain-scale chip to power AI models with 120 trillion parameters, said today it has raised $250 million in funding at a valuation of over $4 billion. Cerebras claims its technology significantly accelerates the time involved in today's AI work processes at a fraction of the power and space. It also claims its innovations will support the multi-trillion-parameter AI models of the future. In a press release, the company stated that this additional capital will enable it to further expand its business globally and deploy its industry-leading CS-2 system to new customers, while continuing to bolster its leadership in AI compute. Cerebras cofounder and CEO Andrew Feldman noted that the new funding will allow Cerebras to extend its leadership to new regions. Feldman believes this will aid the company's mission to democratize AI and usher in what it calls the next era of high-performance AI compute, an era where the company claims its technology will help to solve today's most urgent societal challenges across drug discovery, climate change, and much more. Redefining AI-powered possibilities: "Cerebras Systems is redefining what is possible with AI and has demonstrated best-in-class performance in accelerating the pace of innovation across pharma and life sciences, scientific research, and several other fields," said Rick Gerson, cofounder, chairman, and chief investment officer at Falcon Edge Capital and Alpha Wave. "We are proud to partner with Andrew and the Cerebras team to support their mission of bringing high-performance AI compute to new markets and regions around the world," he added. Cerebras' CS-2 system, powered by the Wafer Scale Engine (WSE-2), the largest chip ever made and the fastest AI processor to date, is purpose-built for AI work. Feldman told VentureBeat in an interview that in April of this year, the company more than doubled the capacity of the chip, bringing it up to 2.6 trillion transistors, 850,000 AI-optimized cores, 40GB of on-chip memory, 20PB/s of memory bandwidth, and 220 petabits per second of fabric bandwidth. He noted that for AI work, big chips process information more quickly and produce answers in less time. With only 54 billion transistors, the largest graphics processing unit pales in comparison to the WSE-2, which has 2.55 trillion more transistors. With 56 times more chip size, 123 times more AI-optimized cores, 1,000 times more high-performance on-chip memory, 12,733 times more memory bandwidth, and 45,833 times more fabric bandwidth than its graphics processing unit competitors, the WSE-2 makes the CS-2 system the fastest in the industry. The company says its software is easy to deploy and enables customers to use existing models, tools, and flows without modification. It also allows customers to write new ML models in standard open source frameworks. New customers: Cerebras says its CS-2 system is delivering a massive leap forward for customers across pharma and life sciences, oil and gas, defense, supercomputing centers, national labs, and other industries. 
The company announced new customers including Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing center at the University of Edinburgh, Tokyo Electron Devices, GlaxoSmithKline, and AstraZeneca.The series F investment round was spearheaded by Alpha Wave Ventures, a global growth stage Falcon Edge-Chimera partnership, along with Abu Dhabi Growth (ADG).Alpha Wave Ventures and ADG join a group of strategic world-class investors including Altimeter Capital, Benchmark Capital, Coatue Management, Eclipse Ventures, Moore Strategic Ventures, and VY Capital. Cerebras has now expanded its frontiers beyond the U.S., with new offices in Tokyo, Japan, and Toronto, Canada. On the back of this funding, the company says it will keep up with its engineering work, expand its engineering force, and hunt for talents all over the world going into 2022.VentureBeatVentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:up-to-date information on the subjects of interest to youour newslettersgated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn Morenetworking features, and moreBecome a member | Unknown | Unknown | null | null | null | null | null | null |
|
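The comparison figures quoted in the Cerebras article can be sanity-checked from the specs the article itself states. The sketch below (Python, purely illustrative) derives the GPU baseline implied by the stated WSE-2 numbers and the quoted ratios; the suggestion that the derived values correspond to any particular GPU model is an assumption on my part, not something the article says.

```python
# Back-of-the-envelope check of the ratios quoted in the article.
# All WSE-2 figures and quoted ratios come from the article text; the
# "implied GPU" values are derived, not quoted, and mapping them to a
# specific GPU model would be an assumption.

wse2 = {
    "transistors": 2.6e12,        # 2.6 trillion transistors
    "cores": 850_000,             # AI-optimized cores
    "on_chip_memory_gb": 40,      # 40 GB on-chip memory
    "memory_bw_pb_s": 20,         # 20 PB/s memory bandwidth
    "fabric_bw_pbit_s": 220,      # 220 petabits/s fabric bandwidth
}

quoted_ratios = {
    "cores": 123,
    "on_chip_memory_gb": 1_000,
    "memory_bw_pb_s": 12_733,
    "fabric_bw_pbit_s": 45_833,
}

# "2.55 trillion more transistors" than a 54-billion-transistor GPU:
print(f"transistor gap: {wse2['transistors'] - 54e9:.3e}")  # ~2.55e12

# GPU baseline implied by each quoted ratio (same units as the wse2 entry):
for key, ratio in quoted_ratios.items():
    print(f"implied GPU {key}: {wse2[key] / ratio:.4g}")
```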
news | Lauren Hansen | Best Threat Intelligence Feeds of 2021 | As cyber attacks become increasingly common and sophisticated, the importance of threat intelligence cannot be overstated. Threat intelligence feeds in particular are digital tools that aggregate data to indicate emerging and existing security threats in real time, according to your company's key metrics. That way you can effectively sort out the most important and imminent […] The post Best Threat Intelligence Feeds of 2021 appeared first on CIO Insight. | https://www.cioinsight.com/security/threat-intelligence-feeds/ | 2021-11-29T18:40:55Z | As cyber attacks become increasingly common and sophisticated, the importance of threat intelligence cannot be overstated. Threat intelligence feeds in particular are digital tools that aggregate data to indicate emerging and existing security threats in real time, according to your company's key metrics. That way you can effectively sort out the most important and imminent threats. Feeds are a simple way to start building out your company's threat intelligence capabilities and assessing your threat posture. A feed serves as a first line of defense that detects outside threats with an internal security system that will alert security analysts according to targeted areas of interest. You can configure the feed to leverage your cyber intelligence by setting up automatic alerts and/or integrating it with your security information and event management (SIEM) platform. More robust threat intelligence feeds, however, will utilize machine learning on their own to automatically gather, process, and analyze incoming data from internal sources, such as logs and records, as well as external sources, such as the open web or dark web, in order to generate actionable insights. Read more: Best Threat Intelligence Platforms & Tools for 2021. Best Threat Intelligence Feeds: There are free, open-source threat intelligence feeds out there, but those may not provide the specific information your company needs. Moreover, other market comparisons focus on criteria that all of the below vendors share, such as integrations, analytics, alerts, and reporting. Below, however, we consider threat intelligence feed vendors according to key differentiating factors: predictive analytics, AI/ML, and natural language processing. The vendors compared on those criteria (predictive analytics, NLP, AI, and ML) are Cofense Intelligence, Crowdstrike Falcon X, Dataminr Pulse, Intezer Analyze, Recorded Future, Resecurity Context, and Webroot BrightCloud. The Cofense Intelligence suite contains various tools that use automated, AI-based techniques to analyze millions of messages daily from a variety of web sources. It specializes in preventing phishing scams and other security threats to your network, such as malware attacks. Cofense Intelligence delivers both alerts and actionable insights that are highly customizable. Cofense Intelligence operates with several of the big names in the SecOps landscape, so it will feed directly into your SIEM, TIP, SOAR tools, and more, so you can detect and guard against threats early on. Crowdstrike's Falcon X threat intelligence software provides automatic analysis and context based on a list of indicators of compromise (IoCs) tailored to your specific company. In fact, you can easily visualize your IoCs with a graph that shows the relationships among them. 
Based on user reviews, youll likely find Falcon Xs interface intuitive and easy to integrate with your own security solution.But depending on how robust your current tools are, you may get even more value out of Falcon X Premium or Falcon X Elite. While Falcon X focuses on threat scanning and alerts, Falcon X Premium and Elite offer up further intelligence and an assigned intelligence analyst from Crowdstrike (available with Elite).Dataminr Pulse is a threat intelligence feed designed to be scaled and customized for businesses of various sizes and industries. With the Hub feature in Dataminr Pulse, you get an overview of your geographical locations and their level of security. As your business grows, you can easily add and manage more locations in the Hub.Dataminr is also highly customizable. You can tailor security alerts and create dashboards based on various issues, levels of criticality, and geographical locations. Regardless of the metrics you use to customize your alerts, Dataminr Pulse Employs AI and mines public data (more than 200,000 sources, from 100+ languages) to produce real-time alerts, as well as visual and geographical context.Dataminr has many facets to its threat intelligence software, but can be parsed down to avoid scope creep and fit what your company needs. Users frequently mention its ease of use and integration into their companys own security workflow, as well as the depth and breadth of knowledge and support that Dataminr provides.Intezer Analyze is an all-in-one malware analysis platform that includes threat alerting and intelligence. It integrates with mainstream tools, such as Cortex XSOAR, Fortinet, Maltego, and Splunk. Users have noted integration issues, however, so check beforehand to ensure that Intezer Analyze is compatible with the SOC tools that youre currently using.Intezer Analyze conducts deep code analysis to compare new iterations of malware code to previous versions in order to quickly identify malware threats and alert users. Its designed with various audiences in mind, with easy-to-understand reporting. Recorded Futures platform includes its threat intelligence solution, which features predictive, actionable intelligence in real time. Planned attacks can take place in any language, but Recorded Future operates in multiple languages. It uses natural language processing to assign meta-tags to unstructured data that it gathers from around the world. The data that Recorded Future aggregates is delivered through a variety of channels: web portal, mobile app, or browser extension. It provides context and assigns risk scores for potential threats in a way thats not only intelligible to various stakeholders from security novices to experts but also takes the guesswork out of prioritizing risks.Recorded Future also employs machine learning to build rich data sets, from which it offers up predictive analytics, so you remain proactive in addressing security threats.Resecurity Context is a cyber threat intelligence platform that includes a portal for managing internal and external threat intelligence feeds. It allows you to configure your own alerts according to topic and how you want to be notified, whether via email or SMS.Once you configure your alerts and notification preferences, Context casts a wide source net. For example, it draws from over 300 million dark web data entries, as well as unstructured data from over 40 languages. 
After initial preference setting and data collection, Context then processes, analyzes, reports, and evaluates, as part of its six-step cycle. Popular among SMBs, Webroot BrightCloud Threat Intelligence is a low-commitment collection of SaaS that integrates seamlessly with apps and browsers you use on a daily basis. It automatically runs scheduled scans on billions of URLs and IP addresses to ensure peace of mind at both the network and endpoint levels.In fact, thats what most users report: ease of use and not needing to worry about being protected as they open emails and surf the web. Webroot BrightCloud employs machine learning to classify and organize IP addresses and URLs according to its level of threat to your business. BrightCloud encompasses six different services to address your companys particular needs.How to Choose a Threat Intelligence FeedKnow Your EnemiesThink through your intelligence goals first. What specific threats do you want your threat intelligence feed to be on the lookout for? If youre looking for a specific solution, like phishing, Cofense Intelligence, is a great place to start. If malware is your focus, check out Intezer Analyze.Read more: Top Cyber Security Threats to OrganizationsThe Importance of AI for Threat Intelligence All threat intelligence feeds employ some form of artificial intelligence to the extent that they perform automatic scans to pick up on and analyze IoCs. Only some of the vendors listed here (Falcon X, Recorded Future, Resecurity, and Webroot) deploy machine learning for generating actionable insights and predictive analytics, which is key for staying ahead of the curve on emerging threats. Moreover, some of the vendors use machine learning for natural language processing capabilities, such as Recorded Future and Resecurity. This is a standout feature that warrants more attention, given that bad actors plan and carry out attacks from around the world in various languages, and can target any organization regardless of its size and geographical reach. Feed CompatibilityIntegration is a standard benefit of threat intelligence feeds on the market today. To get the most out of your feed, integrate it with your companys existing security management solution and security workflows, but ensure compatibility with your particular solutions beforehand. Feed CustomizationTake advantage of the level of customization available in your feed of choice to fit your companys use case(s). This will allow the feed to generate the most relevant data and insights. If your company already employs a suite of security tools, your intelligence goals are likely more targeted, which will require a feed that allows for customization, such as Cofense Intelligence, Falcon X, or Dataminr Pulse.In terms of notification variety, Recorded Future is an outlier, as it is the only one to specifically mention functionality on the web, in a mobile app, and as a browser extension.Keep Audience in MindKeep in mind who is accessing the feed and their level of technical knowledge. If various stakeholders are using the feed, make sure there are corresponding levels of granularity for people with technical expertise, as opposed to management-level staff.If user-friendly analytics and reporting is a high-ranking criterion in your selection process, start by checking out Falcon X, Dataminr Pulse, Intezer Analyze, or Recorded Future.Choosing the Right Threat Intelligence FeedEach of the vendors here has something to offer in terms of level of specificity, scalability, and capability. 
Recorded Future is the all-around winner, based on its fulfillment of the three criteria of predictive analytics, natural language processing, and AI/ML capabilities, but there is no magic-bullet solution for every organization. Read more: Key Benefits of Threat Intelligence Platforms | Detection and Monitoring/Content Synthesis/Decision Making | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
|
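The integration pattern the article describes (poll a feed, filter indicators against your own watch criteria, forward matches to a SIEM) is vendor-neutral. The sketch below is a minimal illustration of that pattern only; the endpoint URLs, payload fields, and authentication scheme are entirely hypothetical and do not correspond to any of the vendors named in the article.

```python
# Generic integration pattern only: the feed URL, payload shape, and
# SIEM forwarding call are hypothetical, not any specific vendor's API.
import requests

FEED_URL = "https://threat-feed.example.com/v1/indicators"  # hypothetical
SIEM_URL = "https://siem.example.com/api/events"            # hypothetical
WATCH_TYPES = {"phishing", "malware"}  # your organization's priority threat types


def poll_and_forward(api_key: str) -> int:
    """Pull recent indicators and forward the relevant ones to the SIEM."""
    resp = requests.get(
        FEED_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        params={"since": "24h"},
        timeout=30,
    )
    resp.raise_for_status()

    forwarded = 0
    for indicator in resp.json().get("indicators", []):  # hypothetical payload shape
        if indicator.get("type") in WATCH_TYPES:
            requests.post(SIEM_URL, json=indicator, timeout=30).raise_for_status()
            forwarded += 1
    return forwarded
```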
news | ACCA, ACCA | Looking at AI through an ESG lens: how do ethical issues interconnect with ESG issues? | Global and EU experts discussed the impact of large-scale adoption of artificial intelligence on ESG priorities and ethical issues to consider in navigating this journey at recent joint ACCA-EY event, attended by over 300 participants | http://pr.euractiv.com/node/225549 | http://pr.euractiv.com/files/logo_pr.gif | 2021-12-08T23:00:00Z | Global and EU experts discussed the impact of large-scale adoption of artificial intelligence on ESG priorities and ethical issues to consider in navigating this journey at recent joint ACCA-EY event, attended by over 300 participantsThe data explosion we’ve witnessed over the past several years, coupled with increasing processing power and growing access to ‘smart’ technologies, have generated considerable enthusiasm about the transformative power of AI to accelerate the green transition. At national, EU and global level, political and regulatory action is shaping to channel the potential of AI towards the goals of the European Green Deal and UN SDGs. Sustainability standards and regulations continue to evolve, digital technologies like AI offer new ways to understand, track and improve performance – whether that’s using AI to collect and track ESG data, optimize business operations or validate the ESG performance of potential investments.But the introduction of these technologies is not without risks, and managing the use of AI in an ethical and responsible manner is essential if we are to create sustainable societal value from it. Legitimate ethical questions arise across a broad spectrum of topics, such as the environmental impact of data centers and supporting infrastructure, uneven access to technologies, potential hard-wiring of biases and risks of reduced human oversight in complex tradeoff decisions. So is AI an answer or solution to ESG? Or is it part of the problem and how can we make it net positive?These questions were the focus of a lively panel discussion jointly organised by ACCA (the Association of Chartered Certified Accountants) and EY, which explored the opportunities and risks presented by the use of AI; how public-private partnerships and regulatory policy decisions can accelerate the realization of the European Green Deal and UN Sustainable Development Goals; and what ethical issues we must consider in navigating this journey.Narayanan Vaidyanathan, head of Business Insights at ACCA opened the discussions : ‘ As highlighted by ACCA’s recent report Ethics for sustainable AI adoption: connecting AI and ESG, accountancy and finance professionals have a key role to ensure that AI adoption happens in an ethical manner, that will yield equitably distributed sustainable long-term benefits. 
With its explicit and long-standing commitment to ethical practices, the accountancy profession is well placed to guide organisations along a responsible path for AI adoption, through several actions: Setting tone at the top on AI adoption; delivering sustainable value; exercising professional judgement; challenging greenwashing; complying with AI regulation and ethics policies; Prioritising data management; adopting a strategic approach to oversight and delivery; understanding vendor landscape, and finally building knowledge and skills.Monica Dimitracopoulos Global Long-Term Value Leader at EY said: ‘We are at an inflection point with respect to the capacity of AI and related digital technologies to help us address pressing environmental, social, and business challenges. As sustainability standards continue to evolve, digital technologies offer new ways to understand, track and improve ESG performance – whether that’s using AI to collect and monitor data, optimize and de-risk business operations or validate the true environmental footprint of potential investments. According to EY’s 2021 Global Institutional Investor Survey, technology and data innovation are becoming increasingly important for both the companies issuing ESG data and for investors consuming those insights. We now face an historic opportunity – working across the public and private sectors, we can identify concrete actions that can help us navigate the ‘twin green and digital transitions’ in a way that creates value for this and future generations’.The panelists, Maikki Sipinen, from the European Commission’s Artificial Intelligence Policy Development and Coordination unit; Marianne Haarh, from the Green Digital Finance Alliance and Christine Chow, from HSBC Asset Management were also invited to share their views on the main adverse ESG impacts that should be considered as we move forward with broad-based adoption of these technologies, the role that policy makers and regulators can play, and on how companies can mitigate broad ethical and governance risks, including reputational risks from mishandling AI. At EU level, the 2021 Coordinated Plan on Artificial Intelligence (AI) is the next step in creating EU global leadership in trustworthy AI, and includes a strategic focus area on ‘AI for climate and environment’.Eva Kaili, MEP and Chair of the Panel for the Future of Science and Technology (STOA) at the European Parliament added: ‘As we talk about human centric and trustworthy AI, our role is also to understand the challenges of AI systems and the principles that should guide their use – sustainability being one of them. It's really important to recognise that algorithms are only as good as the humans who created them. So if we want to have AI-informed decisions, we have to ensure AI systems do not embed our biases and our failures – we definitely need to test them, as highlighted in recent STOA research. We need sandboxes and we have to protect our data, including biometrics. We have to decide that in Europe we want to lead with quality. 
We do invite input on the legislation in the making, your voice is needed in the debate!Brando Benifei, MEP, Member of the Special Committee on Artificial Intelligence in a Digital Age (AIDA), and European Parliament AI Regulation Rapporteur: ‘ EP special Committee AIDA, after having studied the various applications of AI and the way we can strengthen a sustainable human centric adoption of AI in the EU, is going to conclude its work in a few months with a report and we are now starting the more substantial work on the new AI regulation, for which I’ve been appointed co-rapporteur. The sustainability, ethical and ESG dimensions of AI uptake, including the prohibition for social scoring, the prohibition of mass surveillance in real time through biometric data, but also ecological transition, and social inclusion will be contentious issues. We will have to work together on this very complex file to make sure AI can contribute to these objectives and not to be used at their detriment.’ Andrew Hobbs, EMEIA Public Policy Leader, EY concluded: ‘Discussions clearly showed that understanding of AI is not just for “techies”, it's for everyone in an organization. So teamwork is absolutely critical both within the organization and the ecosystem to get the use of AI right - the use of AI in organisations needs to be done transparently with the workforce to build confidence. And it’s equally important to work all together- policy and decision makers, finance professionals and accountants, industry, etc. As highlighted by the European Commission, the Public sector can also lead the way in sustainable adoption by utilizing their remarkable purchasing power’.EndsAbout ACCAFor media enquiries, contact: Cecile Bonino Liti; [email protected] ; Mob: +32 (0) 493 29 17 66Click or tap if you trust this link.">www.accaglobal.com; Click or tap if you trust this link.">@ACCAViewsACCA is the Association of Chartered Certified Accountants. We’re a thriving global community of 233,000 members and 536,000 future members based in 178 countries and regions that upholds the highest professional and ethical values.We believe that accountancy is a cornerstone profession of society that supports both public and private sectors. That’s why we’re committed to the development of a strong global accountancy profession and the many benefits that this brings to society and individuals.Since 1904 being a force for public good has been embedded in our purpose. And because we’re a not-for-profit organisation, we build a sustainable global profession by re-investing our surplus to deliver member value and develop the profession for the next generation.Through our world leading ACCA Qualification, we offer everyone everywhere the opportunity to experience a rewarding career in accountancy, finance and management. And using our respected research, we lead the profession by answering today’s questions and preparing us for tomorrow. | Unknown | Management/Life, Physical, and Social Science | null | null | null | null | null | null |
news | elenak | Looking at AI through an ESG lens: how do ethical issues interconnect with ESG issues? | Innovation & EnterpriseGlobal and EU experts discussed the impact of large-scale adoption of artificial intelligence on ESG priorities and ethical issues to consider in navigating this journey at recent joint ACCA-EY event, attended by over 300 participantsThe data explosion we’ve witnessed over the past several years, coupled with increasing processing power and growing access to ‘smart’ technologies, have generated considerable enthusiasm about the transformative power of AI to accelerate the green transition. At national, EU and global level, political and regulatory action is shaping to channel the potential of AI towards the goals of the European Green Deal and UN SDGs. Sustainability standards and regulations continue to evolve, digital technologies like AI offer new ways to understand, track and improve performance – whether that’s using AI to collect and track ESG data, optimize business operations or validate the ESG performance of potential investments.But the introduction of these technologies is not without risks, and managing the use of AI in an ethical and responsible manner is essential if we are to create sustainable societal value from it. Legitimate ethical questions arise across a broad spectrum of topics, such as the environmental impact of data centers and supporting infrastructure, uneven access to technologies, potential hard-wiring of biases and risks of reduced human oversight in complex tradeoff decisions. So is AI an answer or solution to ESG? Or is it part of the problem and how can we make it net positive?These questions were the focus of a lively panel discussion jointly organised by ACCA (the Association of Chartered Certified Accountants) and EY, which explored the opportunities and risks presented by the use of AI; how public-private partnerships and regulatory policy decisions can accelerate the realization of the European Green Deal and UN Sustainable Development Goals; and what ethical issues we must consider in navigating this journey.Narayanan Vaidyanathan, head of Business Insights at ACCA opened the discussions : ‘ As highlighted by ACCA’s recent report Ethics for sustainable AI adoption: connecting AI and ESG, accountancy and finance professionals have a key role to ensure that AI adoption happens in an ethical manner, that will yield equitably distributed sustainable long-term benefits. With its explicit and long-standing commitment to ethical practices, the accountancy profession is well placed to guide organisations along a responsible path for AI adoption, through several actions: Setting tone at the top on AI adoption; delivering sustainable value; exercising professional judgement; challenging greenwashing; complying with AI regulation and ethics policies; Prioritising data management; adopting a strategic approach to oversight and delivery; understanding vendor landscape, and finally building knowledge and skills.Monica Dimitracopoulos Global Long-Term Value Leader at EY said: ‘We are at an inflection point with respect to the capacity of AI and related digital technologies to help us address pressing environmental, social, and business challenges. 
As sustainability standards continue to evolve, digital technologies offer new ways to understand, track and improve ESG performance – whether that’s using AI to collect and monitor data, optimize and de-risk business operations or validate the true environmental footprint of potential investments. According to EY’s 2021 Global Institutional Investor Survey, technology and data innovation are becoming increasingly important for both the companies issuing ESG data and for investors consuming those insights. We now face an historic opportunity – working across the public and private sectors, we can identify concrete actions that can help us navigate the ‘twin green and digital transitions’ in a way that creates value for this and future generations’.The panelists, Maikki Sipinen, from the European Commission’s Artificial Intelligence Policy Development and Coordination unit; Marianne Haarh, from the Green Digital Finance Alliance and Christine Chow, from HSBC Asset Management were also invited to share their views on the main adverse ESG impacts that should be considered as we move forward with broad-based adoption of these technologies, the role that policy makers and regulators can play, and on how companies can mitigate broad ethical and governance risks, including reputational risks from mishandling AI. At EU level, the 2021 Coordinated Plan on Artificial Intelligence (AI) is the next step in creating EU global leadership in trustworthy AI, and includes a strategic focus area on ‘AI for climate and environment’.Eva Kaili, MEP and Chair of the Panel for the Future of Science and Technology (STOA) at the European Parliament added: ‘As we talk about human centric and trustworthy AI, our role is also to understand the challenges of AI systems and the principles that should guide their use – sustainability being one of them. It's really important to recognise that algorithms are only as good as the humans who created them. So if we want to have AI-informed decisions, we have to ensure AI systems do not embed our biases and our failures – we definitely need to test them, as highlighted in recent STOA research. We need sandboxes and we have to protect our data, including biometrics. We have to decide that in Europe we want to lead with quality. We do invite input on the legislation in the making, your voice is needed in the debate!Brando Benifei, MEP, Member of the Special Committee on Artificial Intelligence in a Digital Age (AIDA), and European Parliament AI Regulation Rapporteur: ‘ EP special Committee AIDA, after having studied the various applications of AI and the way we can strengthen a sustainable human centric adoption of AI in the EU, is going to conclude its work in a few months with a report and we are now starting the more substantial work on the new AI regulation, for which I’ve been appointed co-rapporteur. The sustainability, ethical and ESG dimensions of AI uptake, including the prohibition for social scoring, the prohibition of mass surveillance in real time through biometric data, but also ecological transition, and social inclusion will be contentious issues. We will have to work together on this very complex file to make sure AI can contribute to these objectives and not to be used at their detriment.’ Andrew Hobbs, EMEIA Public Policy Leader, EY concluded: ‘Discussions clearly showed that understanding of AI is not just for “techies”, it's for everyone in an organization. 
So teamwork is absolutely critical both within the organization and the ecosystem to get the use of AI right - the use of AI in organisations needs to be done transparently with the workforce to build confidence. And it’s equally important to work all together- policy and decision makers, finance professionals and accountants, industry, etc. As highlighted by the European Commission, the Public sector can also lead the way in sustainable adoption by utilizing their remarkable purchasing power’.EndsAbout ACCAFor media enquiries, contact: Cecile Bonino Liti; [email protected] ; Mob: +32 (0) 493 29 17 66www.accaglobal.com; @ACCAViewsACCA is the Association of Chartered Certified Accountants. We’re a thriving global community of 233,000 members and 536,000 future members based in 178 countries and regions that upholds the highest professional and ethical values.We believe that accountancy is a cornerstone profession of society that supports both public and private sectors. That’s why we’re committed to the development of a strong global accountancy profession and the many benefits that this brings to society and individuals.Since 1904 being a force for public good has been embedded in our purpose. And because we’re a not-for-profit organisation, we build a sustainable global profession by re-investing our surplus to deliver member value and develop the profession for the next generation.Through our world leading ACCA Qualification, we offer everyone everywhere the opportunity to experience a rewarding career in accountancy, finance and management. And using our respected research, we lead the profession by answering today’s questions and preparing us for tomorrow.More information here09 Dec 2021 | http://pr.euractiv.com/pr/looking-ai-through-esg-lens-how-do-ethical-issues-interconnect-esg-issues-225549 | http://pr.euractiv.com/files/logo_pr.gif | 2021-12-09T12:46:13Z | Global and EU experts discussed the impact of large-scale adoption of artificial intelligence on ESG priorities and ethical issues to consider in navigating this journey at recent joint ACCA-EY event, attended by over 300 participantsThe data explosion we’ve witnessed over the past several years, coupled with increasing processing power and growing access to ‘smart’ technologies, have generated considerable enthusiasm about the transformative power of AI to accelerate the green transition. At national, EU and global level, political and regulatory action is shaping to channel the potential of AI towards the goals of the European Green Deal and UN SDGs. Sustainability standards and regulations continue to evolve, digital technologies like AI offer new ways to understand, track and improve performance – whether that’s using AI to collect and track ESG data, optimize business operations or validate the ESG performance of potential investments.But the introduction of these technologies is not without risks, and managing the use of AI in an ethical and responsible manner is essential if we are to create sustainable societal value from it. Legitimate ethical questions arise across a broad spectrum of topics, such as the environmental impact of data centers and supporting infrastructure, uneven access to technologies, potential hard-wiring of biases and risks of reduced human oversight in complex tradeoff decisions. So is AI an answer or solution to ESG? 
Or is it part of the problem and how can we make it net positive?These questions were the focus of a lively panel discussion jointly organised by ACCA (the Association of Chartered Certified Accountants) and EY, which explored the opportunities and risks presented by the use of AI; how public-private partnerships and regulatory policy decisions can accelerate the realization of the European Green Deal and UN Sustainable Development Goals; and what ethical issues we must consider in navigating this journey.Narayanan Vaidyanathan, head of Business Insights at ACCA opened the discussions : ‘ As highlighted by ACCA’s recent report Ethics for sustainable AI adoption: connecting AI and ESG, accountancy and finance professionals have a key role to ensure that AI adoption happens in an ethical manner, that will yield equitably distributed sustainable long-term benefits. With its explicit and long-standing commitment to ethical practices, the accountancy profession is well placed to guide organisations along a responsible path for AI adoption, through several actions: Setting tone at the top on AI adoption; delivering sustainable value; exercising professional judgement; challenging greenwashing; complying with AI regulation and ethics policies; Prioritising data management; adopting a strategic approach to oversight and delivery; understanding vendor landscape, and finally building knowledge and skills.Monica Dimitracopoulos Global Long-Term Value Leader at EY said: ‘We are at an inflection point with respect to the capacity of AI and related digital technologies to help us address pressing environmental, social, and business challenges. As sustainability standards continue to evolve, digital technologies offer new ways to understand, track and improve ESG performance – whether that’s using AI to collect and monitor data, optimize and de-risk business operations or validate the true environmental footprint of potential investments. According to EY’s 2021 Global Institutional Investor Survey, technology and data innovation are becoming increasingly important for both the companies issuing ESG data and for investors consuming those insights. We now face an historic opportunity – working across the public and private sectors, we can identify concrete actions that can help us navigate the ‘twin green and digital transitions’ in a way that creates value for this and future generations’.The panelists, Maikki Sipinen, from the European Commission’s Artificial Intelligence Policy Development and Coordination unit; Marianne Haarh, from the Green Digital Finance Alliance and Christine Chow, from HSBC Asset Management were also invited to share their views on the main adverse ESG impacts that should be considered as we move forward with broad-based adoption of these technologies, the role that policy makers and regulators can play, and on how companies can mitigate broad ethical and governance risks, including reputational risks from mishandling AI. At EU level, the 2021 Coordinated Plan on Artificial Intelligence (AI) is the next step in creating EU global leadership in trustworthy AI, and includes a strategic focus area on ‘AI for climate and environment’.Eva Kaili, MEP and Chair of the Panel for the Future of Science and Technology (STOA) at the European Parliament added: ‘As we talk about human centric and trustworthy AI, our role is also to understand the challenges of AI systems and the principles that should guide their use – sustainability being one of them. 
It's really important to recognise that algorithms are only as good as the humans who created them. So if we want to have AI-informed decisions, we have to ensure AI systems do not embed our biases and our failures – we definitely need to test them, as highlighted in recent STOA research. We need sandboxes and we have to protect our data, including biometrics. We have to decide that in Europe we want to lead with quality. We do invite input on the legislation in the making, your voice is needed in the debate!Brando Benifei, MEP, Member of the Special Committee on Artificial Intelligence in a Digital Age (AIDA), and European Parliament AI Regulation Rapporteur: ‘ EP special Committee AIDA, after having studied the various applications of AI and the way we can strengthen a sustainable human centric adoption of AI in the EU, is going to conclude its work in a few months with a report and we are now starting the more substantial work on the new AI regulation, for which I’ve been appointed co-rapporteur. The sustainability, ethical and ESG dimensions of AI uptake, including the prohibition for social scoring, the prohibition of mass surveillance in real time through biometric data, but also ecological transition, and social inclusion will be contentious issues. We will have to work together on this very complex file to make sure AI can contribute to these objectives and not to be used at their detriment.’ Andrew Hobbs, EMEIA Public Policy Leader, EY concluded: ‘Discussions clearly showed that understanding of AI is not just for “techies”, it's for everyone in an organization. So teamwork is absolutely critical both within the organization and the ecosystem to get the use of AI right - the use of AI in organisations needs to be done transparently with the workforce to build confidence. And it’s equally important to work all together- policy and decision makers, finance professionals and accountants, industry, etc. As highlighted by the European Commission, the Public sector can also lead the way in sustainable adoption by utilizing their remarkable purchasing power’.EndsAbout ACCAFor media enquiries, contact: Cecile Bonino Liti; [email protected] ; Mob: +32 (0) 493 29 17 66Click or tap if you trust this link.">www.accaglobal.com; Click or tap if you trust this link.">@ACCAViewsACCA is the Association of Chartered Certified Accountants. We’re a thriving global community of 233,000 members and 536,000 future members based in 178 countries and regions that upholds the highest professional and ethical values.We believe that accountancy is a cornerstone profession of society that supports both public and private sectors. That’s why we’re committed to the development of a strong global accountancy profession and the many benefits that this brings to society and individuals.Since 1904 being a force for public good has been embedded in our purpose. And because we’re a not-for-profit organisation, we build a sustainable global profession by re-investing our surplus to deliver member value and develop the profession for the next generation.Through our world leading ACCA Qualification, we offer everyone everywhere the opportunity to experience a rewarding career in accountancy, finance and management. And using our respected research, we lead the profession by answering today’s questions and preparing us for tomorrow. | Information Retrieval Or Search/Decision Making/Content Synthesis | Business and Financial Operations/Education, Training, and Library | null | null | null | null | null | null |
news | Luke Dormehl | Will Google ever lose its throne as king of search? You.com is betting on it | Dozens of companies have tried to disrupt Google's search dominance, and none have really succeeded -- but You.com thinks it has an approach that could work. | https://www.digitaltrends.com/features/you-dot-com-search-engine-challenge-google/ | 2021-12-04T17:00:26Z | You know that a technologys changed the world when it becomes a verb. It speaks to a level of popularity and ubiquity that goes beyond the wildest dreams of marketeers. Ill WhatsApp you. I spent the evening YouTubing. Disrupting any of these aforementioned brand-name products is beyond difficult — it requires a change in the default way that we relate to some standard action.To Google is a verb — and a powerful one. In Googles own words, its reason for being is no less than to organize the world’s information and make it universally accessible and useful.And Richard Socher wants to disrupt it.Socher is the former chief scientist of Salesforce, one of the worlds premier customer relationship management platforms and makers of enormously successful enterprise apps. During his career, he has started and sold the A.I. company MetaMind, and been published broadly in fields ranging from computer vision to machine translation to summarization within natural language processing. His new search engine — You.com — seeks to challenge the single gatekeeper of search that is Google. Hes not about to let a pesky thing like a near-$2 trillion giant stop him, either. Even if it is a gosh-darned verb.My first thought was, you know, it was a verb to Skype, Socher told Digital Trends at the start of a video call to showcase You in action. And you know what we’re [speaking] on right now? Not Skype.A different approach to searchThe idea driving You is to be the not Skype to Googles Skype. The contention of Socher and co-founder Bryan McCann is that the world is at an inflection point when it comes to search. The companys publicity materials drive this claim home: Today, a single gatekeeper controls nearly 90% of the search market, dictating everything you see. The advertising and SEO biases of current search engines result in a lack of control over what people read, watch, research, eat, and buy. All of this makes people an object of artificial intelligence algorithms designed to monetize them rather than utilizing technology to harness the worlds information in relevant ways that build trust and confidence with every search.The most noticeable difference between Google and You comes down to aesthetics and operation. Socher points out that, for years, search engines have all looked kind of the same. They assume that information can be — and, more importantly, should be — arranged in a text-based list, neatly sorted from the number one slot (most useful) downwards. But is this really the best way to arrange information? And, even if it once was, is it still? You, by contrast, leans more heavily into widgets, with a design that owes a bit to the tile layout of kanban boards or social media platforms.The tiled search results on You include the likes of Amazon pages, news stories, Yelp discoveries, Wikipedia pages, Reddit posts, Medium articles, coding snippets, LinkedIn listings, eBay sales, tweets (which can be retweeted and liked inside the search window), and more. 
Rather than Googles sequential list of search results, You offers something more akin to a topographical view of the internet that lets people view the different content islands at once before zooming in to explore the ones that seem relevant.“Can You displace Google? Can anything displace Google? This remains to be seen.”It actually took us a lot of iterations and thinking about design constraints and thinking about mobile, Socher said. When you think about Instagram and TikTok, people are very used to swiping left, right, and up and down. If you’re on Instagram, you swipe left to see more pictures of that story. Then, if you swipe down, you see the next story. We don’t want to have this massive engagement track of social networks. We want to help you search less and do more. Get things done, save your time, and summarize the web for you. But these are still very convenient ways to interact with content and are very intuitive — especially to younger generations.A screenshot of the You.com search results with “the metaverse” used as an example queryThese individual tiles can be upvoted and downvoted in something akin to Reddit. Searches consist firstly of preferred sources, followed by neutral sources, and then downvoted sources. Personalized search is nothing new: Google has been doing it since 2004. But Yous degree of transparent manipulation, the same way you can juggle around the apps that appear on your mobile home screen page, is fresh. In an effort to escape the filter bubble effect — whereby users may be shown slanted search results without realizing the slant — You makes it easier to separate the personalized searches from the objective ones. That is something that no one else does, really, Socher said. To give that kind of agency and control to their users on a search engine.You also emphasizes privacy in a big way. Again, this isnt a wholly unique claim to fame. DuckDuckGo has been leaning into private search for years. But Yous combining of this (the company wont sell private data and promises an impressive incognito mode) with its new reinvented approach to search could be enough to lure in some users.Taking on the mighty GoogleAll of this, of course, brings about the trillion-dollar question: Can You displace Google? Can anything displace Google? This remains to be seen. Search engines have certainly fallen before, replaced by faster, sleeker, better offerings. Remember W3Catalog, World Wide Web Wanderer, WebCrawler, Lycos, Jump Station, Magellan, Excite, Infoseek, Inktomi, Northern Light, Dogpile, Ask Jeeves, and AltaVista? All of these launched, rose to semi-prominence and were then crushed underfoot to varying degrees in the decade before Google established itself. Others like Yahoo and, more recently, Bing, have been successful in their own way — but theres no doubt which search engine trumps rules the roost.Logic dictates that, at some point, Google will falter. Empires have a habit of doing that, in the corporate world as much as anywhere else. Just 10 percent of the Fortune 500 companies for the year 1955 have remained on the list in the years since — and more than 89 percent have gone bankrupt, merged with or been acquired by others, or fallen off the Fortune 500 companies list at one time or another. When it comes to search, however, Google is a tricky customer to dislodge.The search engine business today is bigger and more profitable than its ever been. Google generates piles of cash that would have been unfathomable for the companies that preceded it. 
Furthermore, through deals with the likes of Apple (Google pays Apple billions of dollars per year to remain the default search engine for iOS), many of us use Google even when we dont explicitly think were using Google. This money means that Google can continue to innovate in search, hoovering up the best minds and, when needed, startups to fortify its castle walls.You has raised a not-impressive $20 million to date. But thats small potatoes next to the $183 billion that Google parent company Alphabet raked in in revenue in 2020, the overwhelming bulk of which came from advertising.Socher is under no illusions about the challenge of taking on a Google. However, he also notes that Googles focus on selling advertising could ultimately hurt its ability to nimbly experiment with new approaches and search layouts. (After all, if someones paying to be top of a list, theyre unlikely to be happy if they are suddenly one entry in a much larger grid.) At some point, the need to do pure search conflicts with the moneymaking model of selling ads. It’s becoming harder and harder to find just naturally relevant content [on Google], he said.The start of a journeyIts still the start of a long journey for You. The search engine has just entered a public beta, opening it up for critique and usage by the general public. There are also obvious ways that You could improve its offering — most notably in making it a touch-friendly interface for mobile.The interface is made to go on mobile, and we will very soon make more progress [in that area], Socher said. But the experience right now is much, much better on the desktop. We haven’t really put enough were just a small startup. We just haven’t had the time and resources to make it work on different kinds of platforms. [But over] the next couple of weeks and months, we’ll continue to improve the mobile experience.One things for certain, though: As tough a challenge as You has ahead of it, its got a whole lot of promise. Search is only going to become more important, and its requirements will continue to shift as the internet evolves. You has a smart team behind it, and some big-name investors, including Salesforce CEO Marc Benioff. Now it just remains to be seen if it can deliver.Taking on the mighty Google is an incredibly tall order. But then so was challenging Yahoo when Google co-founders Larry Page and Sergey Brin set out to build a page-ranking search algorithm for their Ph.D. thesis. And that turned out pretty darn well for them.Editors' Recommendations | Information Retrieval Or Search/Personalization | Unknown | null | null | null | null | null | null |
|
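The You.com article only sketches how source preferences affect results: items from upvoted sources come first, then neutral sources, then downvoted ones. The snippet below is a minimal illustration of that tiered ordering under made-up data shapes and relevance scores; it is not You.com's actual ranking logic.

```python
# Minimal sketch of preference-tiered ranking as described in the article:
# upvoted sources first, then neutral, then downvoted, with ordinary
# relevance ordering inside each tier. Hypothetical data shapes throughout.

def rank_results(results, upvoted, downvoted):
    """results: list of dicts with 'source' and 'relevance' keys (hypothetical)."""
    def tier(r):
        if r["source"] in upvoted:
            return 0  # preferred sources first
        if r["source"] in downvoted:
            return 2  # downvoted sources last
        return 1      # neutral sources in the middle
    return sorted(results, key=lambda r: (tier(r), -r["relevance"]))


results = [
    {"source": "reddit.com", "relevance": 0.90},
    {"source": "example-blog.com", "relevance": 0.95},
    {"source": "stackoverflow.com", "relevance": 0.70},
]
print(rank_results(results, upvoted={"stackoverflow.com"}, downvoted={"example-blog.com"}))
```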
news | WHSV | PathAI Reports that Its ML-based NASH Drug Discovery Tool May Identify Clinical Trial Responders Based on Post-Hoc Analysis of Bristol Myers Squibb's FALCON 1 Study at The Liver Meeting 2021 | Post-hoc evaluation of liver biopsies from patients in the Bristol Myers Squibb sponsored FALCON 1 study by AI-based histologic measurement of NASH (nonalcoholic steatohepatitis; AIM-NASH) drug development tool (DDT) suggests that clinical trial endpoints may have been met and shows treatment-associated improvements in key liver tissue features not identified by manual assessment | https://www.whsv.com/prnewswire/2021/11/12/pathai-reports-that-its-ml-based-nash-drug-discovery-tool-may-identify-clinical-trial-responders-based-post-hoc-analysis-bristol-myers-squibbs-falcon-1-study-liver-meeting-2021/ | 2021-11-12T18:30:00Z | BOSTON, Nov. 12, 2021 /PRNewswire/ -- PathAI, a global provider of AI-powered technology applied to pathology, will announce results from a retrospective analysis of liver biopsy specimens from Bristol Myers Squibb's FALCON 1 study, a Phase 2b, randomized, multicenter, placebo-controlled study assessing the efficacy and safety of pegbelfermin (PGBF) as a treatment for non-alcoholic steatohepatitis (NASH) at The Liver Meeting, November 12-15, 2021 (NCT03486899).PathAI Logo (PRNewsfoto/PathAI)This exploratory post hoc analysis compared machine learning (ML)-based quantification of histological features with traditional pathology scoring methods, and the results will be presented in the poster Shevell et al., Comparison of manual vs machine learning approaches to liver biopsy scoring for NASH and fibrosis: a post hoc analysis of the FALCON 1 study. PathAI has developed the AI-based histologic measurement of NASH Drug Development Tool (AIM-NASH DDT) that has been accepted into the FDA Biomarker Qualification Program. The AIM-NASH DDT is intended for use in assessment of endpoints in clinical trials as well as clinical trial enrollment after FDA qualification. AIM-NASH has been trained to detect and quantify the key histological features required to score NASH disease severity using the standard NASH CRN scoring system and generates slide-level scores for those features (lobular inflammation, ballooning, steatosis, and fibrosis) mirroring the standard pathology workflow. In this study, biopsy slides, collected from clinical trial participants within 6 months prior to or during the screening period and after 24 weeks of PGBF treatment, were digitized into whole slide images and evaluated using AIM-NASH. The clinical study central pathologist manually scored these same biopsy samples during the study period.The FALCON 1 trial had 197 participants randomized to four arms: placebo, plus three treatment arms of PGBF dosed at 10mg, 20mg, and 40mg.Evaluating the primary clinical trial endpoint of 1 stage NASH CRN fibrosis improvement without NASH worsening or NASH improvement without fibrosis worsening at 24 weeks, identified a statistically significant proportion of responders in the treatment arms by the AIM-NASH DDT (p=0.013) that were not reported by manual assessment (p=0.148).AIM-NASH-based and manual scores for all CRN components showed distinct trends of improvement in all PGBF arms compared to placebo. The AIM-NASH DDT CRN scoring revealed significant improvements in ballooning (p=0.033) and lobular inflammation (p=0.019) in the treatment arms compared with placebo that were not seen by manual scoring (ballooning p=0.274; lobular inflammation p=0.716). 
Conversely, manual methods showed significant improvements in steatosis for treated patients (p=0.0022) that AIM-NASH did not (p=0.106). Treatment-associated improvements in fibrosis were not seen using either method. Additional assessment by AIM-NASH using a continuous scoring method showed significant differences between placebo and PGBF treated patients for ballooning (p=0.0014), lobular inflammation (p=0.05), and steatosis (p=0.001). While this study suggests that AIM-NASH-based pathologic assessment of tissue may be more sensitive than manual assessment and may capture changes in histology that could be indicative of drug efficacy, further analyses with larger tissue datasets are required to further support these claims. About PathAI: PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit pathai.com. SOURCE PathAI | Detection and Monitoring/Information Retrieval Or Search/Content Synthesis | Healthcare Practitioners and Support/Life, Physical, and Social Science | null | null | null | null | null | null
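Illustrative note: the responder results above are comparisons of response proportions between treatment and placebo arms. The sketch below shows the general shape of such a 2x2 comparison; the counts are hypothetical placeholders, not FALCON 1 data, and Fisher's exact test is an assumed choice for illustration rather than the study's stated methodology.

```python
# Sketch only: comparing a binary responder rate between pooled treatment arms
# and placebo. Counts are hypothetical placeholders, NOT trial results.
from scipy.stats import fisher_exact

def compare_responder_rates(treated_responders, treated_total,
                            placebo_responders, placebo_total):
    """Return the odds ratio and two-sided p-value for a 2x2 responder table."""
    table = [
        [treated_responders, treated_total - treated_responders],
        [placebo_responders, placebo_total - placebo_responders],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    return odds_ratio, p_value

# Hypothetical example counts (for illustration only):
odds_ratio, p = compare_responder_rates(30, 100, 10, 50)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```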
|
news | nostalgebraist | larger language models may disappoint you [or, an eternally unfinished draft] | Published on November 26, 2021 11:08 PM GMT
| https://www.lesswrong.com/posts/pv7Qpu8WSge8NRbpB/larger-language-models-may-disappoint-you-or-an-eternally | 2021-11-26T23:08:56Z | what this post is: The following is an incomplete draft, which I'm publishing now because I am unlikely to ever finish writing it. I no longer fully endorse all the claims in the post. (In a few cases, I've added a note to say this explicitly.) However, there are some arguments in the post that I still endorse, and which I have not seen made elsewhere. This post is the result of me having lots of opinions about LM scaling, at various times in 2021, which were difficult to write down briefly or independently of one another. This post, originally written in July 2021, is the closest I got to writing them all down in one place. -nost, 11/26/21. 0. caveat: This post will definitely disappoint you. Or, anyway, it will definitely disappoint me. I know that even though I haven't written it yet. My drafts folder contains several long, abandoned attempts to write (something like) this post. I've written (something like) this post many times in my head. I just can't seem to get it right, though. The drafts always sprawl out of control. So, if I can't do it right, why not do it wrong? Here's the disorganized, incomplete, brain-dump version of the better post I wish I were writing. Caveat lector. 1. polarization: The topic of this post is large language models (LMs) like GPT-3. Specifically, what will happen as we make them larger and larger. By my lights, everyone seems either too impressed/scared by the concept of LM scaling, or not impressed/scared enough. On LessWrong and related communities, I see lots of people worrying in earnest about whether the first superhuman AGI will be a GPT-like model.
Both here and in the wider world, people often talk about GPT-3 like it's a far "smarter" being than it seems to me. On the other hand, the people who aren't scared often don't seem like they're even paying attention. Faced with a sudden leap in machine capabilities, they shrug. Faced with a simple recipe that can make those machines even better -- with eerie, physics-like regularity -- they . . . still shrug. I wrote about the most infamous of these detractors here. Meanwhile, I'm here in the middle. What do I think? Something like: The newer (i.e. large transformer) LMs really are a huge advance in NLP over the prior state of the art. The prior state of the art was really bad, though. Before the new LMs, neural nets simply couldn't "do" language the way they could "do" images, something I noted back in 2017. Most of the "huge advance" happened in the smallest of the new models, like BERT-Base and GPT-2-small. The effect of scaling up these models is mostly to "de-noise" capabilities already evident in the small ones. It makes their strengths more robust and easier to access, but doesn't add fundamentally new strengths. The larger language models of the future will be highly impactful, but banal. They will probably allow us to fully automate all the routine linguistic tasks you could almost imagine automating with GPT-3. People will make wonderful new things using them. They won't be "smart" in any way that GPT-3 is not, or indeed, really in any way that GPT-2 was not. They will get better at abstract reasoning -- in the sense that it will be easier to get them to spit out text that sounds like it is the product of abstract reasoning. (As even GPT-2 does frequently.) They will be weak at this relative to their other capabilities, as they are today, and little will come of it. They might end up as sub-systems in an AGI one day. The rest of the post will consist of some gestures where I try to make the above feel as natural to you as it does to me. 2. the enthusiast's argument: First, let's spell out the argument that has people thinking GPT will lead to AGI. Roughly the same argument has been made elsewhere by gwern and bmk, among others. Loss scaling will continue. It will be straightforward to achieve lower and lower language modeling loss simply by using more compute + data. We can do this without making any new conceptual advances (except perhaps in hardware). Therefore someone will do it. Loss scaling could well continue indefinitely. I.e., more compute + data might push the loss asymptotically all the way down to the "intrinsic entropy of text" -- the true noise left when all patterns have been accounted for, including arbitrarily hard ones. It could be the case that scaling will instead bottom out at some earlier point, when the only patterns left are "too hard" for the models. We don't have much evidence one way or another on this point, and even if we did, we would have no idea how hard "too hard" is. Language modeling is AGI-complete. A model that truly understood all patterns in a language modeling dataset would possess a large fraction of all human capabilities taken together. Can you write a textbook on Étale cohomology? (If you can't, then you're missing out on some language modeling loss.) Can you play both sides of a roundtable between the world's leading economists, imitating the distinct intellect of each one, the novel arguments they'd concoct on the spot, the subtle flecks of personal bias tainting those arguments? (If you can't, then...)
Can you translate between any pair of human languages that any linguist, anywhere, knows how to translate? (If you can't, then...) And so on. Loss scaling makes models "smarter" fast enough to matter. This point is a crucial bridge between the abstract potential from points 2 and 3, and the quantitative near-term predictions from point 1. This point is easiest to explain by showing what its negation looks like. Suppose that points 2 and 3 are really true -- that adding more compute/data eventually turns a transformer LM into an AGI. That doesn't tell you anything about how fast the process happens. How many orders of magnitude do we need to add to make the model non-negligibly smarter? If the answer is "1 OOM," then the scaling projections from point 1 are relevant. If the answer is "100 OOM" . . . not so much. (Or, consider a variant of this scenario: suppose most of the abilities we care about, when we use the term "AGI", are locked away in the very last tiny sliver of loss just above the intrinsic entropy of text. In the final 0.00[...many extra zeros...]1 bits/character, in a loss difference so tiny we'd need vastly larger validation sets for it to be distinguishable from data-sampling noise.) I agree with points 1-3. Point 4 is where I and the enthusiasts diverge. 3. are we getting smarter yet? Why do the enthusiasts believe point 4? That is, why would we expect a feasible, incremental scaling upgrade to yield a meaningful boost in intelligence? Because it already did: GPT-3 is meaningfully smarter than GPT-2. The enthusiast's argument, in its most common form, relies entirely on this premise. The enthusiast knows perfectly well that AGI-completeness in principle is not enough: we need, not just an asymptotic result, but some idea of when we might get close enough. As gwern puts it [my emphasis]: The pretraining thesis, while logically impeccable -- how is a model supposed to solve all possible trick questions without understanding, just guessing? -- never struck me as convincing, an argument admitting neither confutation nor conviction. It feels too much like a magic trick: "here's some information theory, here's a human benchmark, here's how we can encode all tasks as a sequence prediction problem, hey presto -- Intelligence!" There are lots of algorithms which are Turing-complete or 'universal' in some sense; there are lots of algorithms like AIXI which solve AI in some theoretical sense (Schmidhuber & company have many of these cute algorithms such as 'the fastest possible algorithm for all problems', with the minor catch of some constant factors which require computers bigger than the universe). Why think pretraining or sequence modeling is not another one of them? Sure, if the model got a low enough loss, it'd have to be intelligent, but how could you prove that would happen in practice? [...] It might require more text than exists, countless petabytes of data for all of those subtle factors like logical reasoning to represent enough training signal, amidst all the noise and distractors, to train a model. Or maybe your models are too small to do more than absorb the simple surface-level signals [...] But apparently, it would've worked fine. [...] It just required more compute & data than anyone was willing to risk on it until a few true-believers were able to get their hands on a few million dollars of compute. [...] If GPT-3 gained so much meta-learning and world knowledge by dropping its absolute loss ~50% when starting from GPT-2's level, what capabilities would another ~30% improvement over GPT-3 gain? But ...
are the GPTsgetting meaningfully smarter already, as we scale them?It's tempting to casually answer "yes," pointing to any one of the numerous ways that the bigger models just are better. (But see the section below on "continuity"!)However, we should not take this question so lightly. A yes answer would "complete the circuit" of the enthusiast's argument -- "turn it on" as a live concern. A no answer would leave the argument in limbo until more evidence comes in.So, let's assess the state of the evidence.4. on ecological evaluationConsider an organism, say, or a reinforcement learning agent. How do we know whether it has some capability?Easy. We put it in a situation where it needs to deploy that capability to get what it wants. We put food (or reward) at the end of the maze.Assessing capabilities by prompting GPT is not like this. GPT does not "want" to show off its capabilities to you, the way a mouse wants food and an RL agent wants reward.What GPT wants -- what it was directly optimized to do -- is to guess how a text will continue. This is not the same as "getting the right answer" or even "saying something sensible."GPT was trained on the writing of thousands of individual humans, possessed of various flavors and magnitudes of ignorance, and capable of saying all kinds of irrational, inexplicable, or just plain bizarre things on occasion. To put it rather over-dramatically: much of the task of language modeling is figuring out which capabilities you're not supposed to reveal right now. Figuring out what sorts of mistakes the current writer is likely to make, and making them right on cue.Thus, prompting tends to vastly underestimate (!) what any LM knows how to do in principle.What is special about the "food in the maze" type of evaluation: it removes any uncertainty as to whether the model knows it's supposed to do the thing you want. The model is given a direct signal, in its "native language," about exactly what you want. This will tend to elicit the capability if it exists at all.There's probably a standard term for the "food in the maze" thing, but I don't know it, so I'll just make one up: "ecological evaluation."4b. the road not takenIt's totally possible to do ecological evaluation with large LMs. (Indeed, lots of people are doing it.) For example, you can:Take an RL environment with some text in it, and make an agent that uses the LM as its "text understanding module."If the LM has a capacity, and that capability is helpful for the task, the agent will learn to elicit it from the LM as needed. See e.g. this paper.Just do supervised learning on a capability you want to probe.Both of these can be done with the LM weights frozen, or with full fine-tuning, or with a frozen LM plus a new "head" on top.A purist might argue that you have to freeze the LM weights, or else you aren't really probing what the LM "already" knows. (The gradients from fine-tuning could induce new capabilities that weren't there before.)But I doubt it really matters, since it turns out you can get the benefits of full fine-tuning even if you only tune the bias terms -- conceptually, just boosting or lowering the salience of patterns the LM could already recognize.There is a divide -- to me, a strange and inexplicable one -- in the LM community, as to who does this ecological stuff and who doesn't.The people who do fine-tuning / extra heads / etc...... generally don't care about scaling (an exception: section 3.4 here)...generally use comparatively "tiny" models like BERT (an exception: T5)... 
are often just trying to get practical things done, not deepen our understanding of LM capabilities (an exception: "probing tasks" in BERTology). The people who care about scaling and huge models... ...care about understanding LM capabilities ...mostly use non-ecological methods (prompting / few-shot), which are vastly unreliable measures of capability ...often use purely subjective (and thus bias-prone) measures, like whether samples from an LM "feel smart" or "sound human" to a particular reader. In other words, there are ways to really know what a big LM is capable of -- but the GPT enthusiasts aren't making use of them. 4c. non-ecological evaluation considered harmful: Non-ecological evaluation is epistemically bad. Whatever signal it provides is buried under thick layers of bias and noise, and can only be extracted with great care, if at all. I don't think the GPT enthusiasts realize just how bad it is. I think this is one crux of our disagreement. Let's survey some of the problems. (The names below are made-up and not meant very seriously -- I just need some headings to make this section readable.) Guess-what-I-mean bias, type 1: As discussed above, the model may not understand what specific thing you want it to do, even if it's perfectly capable of doing that thing. Result: a downward bias in capability estimates. Guess-what-I-mean bias, type 2: The observed signal mixes together two components: "Can the model guess what you're trying to make it do?", and "Can the model actually do that thing?" But when people interpret such results, they tend to round them off to measures only of the latter. That is, when people see a bigger model do better on a few-shot task, they tend to think, "the model got better at the task!" -- not "the model got better at guessing which task I mean!" But bigger models tend to get better at these two things simultaneously. The better results you get from bigger models reflect some mixture of "true capability gains" and "better guessing of what the prompt writer was trying to measure." Result: an upward bias in capability scaling estimates. Prompt noise and humans-in-the-loop: Guessing-what-you-mean is extremely sensitive to fine details of the prompt, even with huge models. (This is why "prompt programming" is a thing.) Thus, if you just pick the first reasonable-seeming prompt that comes into your head, you'll get a horribly noisy measure of the LM's true abilities. Maybe a slightly different prompt would elicit far better performance. (As you'd expect, the GPT-3 paper -- which took the "first reasonable-seeming prompt that comes into your head" approach -- ended up using severely suboptimal prompts for some tasks, like WiC.) If possible, you want less noisy estimates. So you do prompt programming. You try a bunch of different things. Even picking one "reasonable-seeming" prompt requires some human linguistic knowledge (to tell you what seems reasonable). Optimizing the prompt introduces more and more human linguistic knowledge, as you use what you know about language and the task to come up with new candidates and diagnose problems. Now we're not evaluating a machine anymore. We're evaluating a (human + machine) super-system. I don't want to make too much of this. Like, if you can find some prompt that always works across every variation of the task, surely the LM must "really know how to do the task," right? (Although there are dangers even here. Are you doing the same amount of prompt-optimization with bigger models as with smaller ones?
What performance might be coaxed out of GPT-2 124M, if you gave it as much attention as you're giving GPT-3. Probably not much, I agree -- but if you haven't tried, that's a source of bias.)The issue I'm raising here is not that big LMs can't be smart without humans in the loop. (I'm sure they can.) The issue is that, with a human involved, we can't see clearly which parts would be easy for a machine alone, and hence which parts get us straightforwardly closer to AGI.For example. In an ecological setting -- with no human, only a machine (say an RL agent with an LM sub-system) -- would the machine need to do its own "prompt programming"?How much worse would it be at this than you are? (The part that operates the LM from the outside knows nothing about language; that's what the LM is there for.) What algorithms would work for this?Or maybe that wouldn't be necessary. Maybe the right information is there in the LM's inner activations, even when it's fed a "bad" prompt. Maybe the problem with "bad" prompts is only that they don't propagate this interior info into the output in a legible way. I don't know. No one does.4d. just how much does prompting suck?But how much does all that really matter? Are we really missing out on nontrivial knowledge here?Two case studies.Case study: BERT in the mazeThe GPT-3 paper measured model capabilities with "few-shot" prompting, i.e. filling up a long prompt with solved task examples and letting the model fill in the final-unsolved one. Typically they used 10 to 100 examples.They compared GPT-3 against strong previous models on the same tasks.These reference models used fine-tuning, generally with many more than 100 examples -- but the gap here is not always very big. On some benchmarks of great academic interest, even the fine-tuned models only get to see a few hundred examples:SuperGLUE data sizesSome of the reference models were carefully designed by researchers for one specific task. Let's ignore those.In most cases, the paper also compared against a BERT baseline: literally just a vanilla transformer, like GPT-3, hooked up to the task with vanilla fine-tuning. (Fine-tuning BERT is literally so routine that a machine can do the entire process for you, even on a totally novel dataset.)How well did GPT-3 do? On most tasks, about as well as a fine-tuned BERT-Large. Which is a transformer 500 times smaller than GPT-3.These are not new feats of intelligence emerging at GPT-3's vast scale. Apparently they're already there inside models several orders of magnitude smaller. They're not hard to see, once you put food at the end of the maze, and give the model a reason to show off its smarts.(Once again, GPT-3 saw fewer examples than the reference models -- but often not by much, and anyway you can make BERT do just fine with only 10-100 examples if you try hard and believe in yourself)So. If even cute little BERT-Large is capable of all this ... then what on earth is GPT-3 really capable of?Either GPT-3 is far smarter than the few-shot results can possibly convey . . . . . . 
or it isn't -- which would be a dramatic failure of scaling, with those 499 extra copies of BERT's neural infrastructure hardly adding any intelligence! No one knows, and no amount of prompting can tell you. As I wrote last summer: I called GPT-3 a "disappointing paper," which is not the same thing as calling the model disappointing: the feeling is more like how I'd feel if they found a superintelligent alien and chose only to communicate its abilities by noting that, when the alien is blackout drunk and playing 8 simultaneous games of chess while also taking an IQ test, it then has an "IQ" of about 100. [Addendum 11/26/21: "No one knows" here was wrong. The P-tuning paper, from March 2021, described an ecological evaluation method for GPTs that makes them competitive with similarly-sized BERTs on SuperGLUE. I think I had heard of prompt tuning when I wrote this, but I had not read that paper and didn't appreciate how powerful this family of methods is. I'm not currently aware of any P-tuning-like results with very large models like GPT-3. End addendum] Case study: no one knows what few-shot results even mean: There's an excellent blog post by moire called "Language models are 0-shot interpreters." Go read it, if you haven't yet. I'll summarize parts of it below, but I'll probably get it a bit wrong. As stated above, the GPT-3 paper prompted the model with solved task examples. In fact, they compared three variants of this: zero-shot: no examples; one-shot: a single example; few-shot: many (10-100) examples. Most of the time, more "shots" were better. And the bigger the LM, the more it benefitted from extra shots. It is not immediately obvious what to make of this. The GPT-3 paper takes care to be technically 100% agnostic about the underlying mechanism . . . if you read it carefully, including the fine-print (i.e. footnotes and appendices). At the same time, in its choice of words, it gestures suggestively in exciting directions that a casual reader is likely to take at face value. For example, the paper makes extensive use of the term "meta-learning." Read casually, it seems to be saying that LMs as big as GPT-3 have a novel capability -- they can learn new tasks on the fly, without fine-tuning! But what the paper means by "meta-learning" is probably not what you mean by "meta-learning." The paper's own definition is provided in a footnote. It is (admirably) precise, non-standard, and almost tautologous. In short, meta-learning is "any mechanism that makes more 'shots' work better" (all emphasis mine): These terms ["meta-learning" and "zero/one/few-shot" -nost] are intended to remain agnostic on the question of whether the model learns new tasks from scratch at inference time or simply recognizes patterns seen during training -- this is an important issue which we discuss later in the paper, but "meta-learning" is intended to encompass both possibilities, and simply describes the inner-outer loop structure. The same passage is quoted by moire in "Language models are 0-shot interpreters," who goes on to say: The later discussion is not very extensive, mostly just acknowledging the ambiguity inherent to few-shot [...] This is the uncertainty that I will investigate in this blog post, expanding on the results published in Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. My purpose is also to challenge the ontology introduced by Language Models are Few-Shot Learners.
Although the authors are careful to remain agnostic as to the mechanism of few-shot/meta-learning, what we have found by probing the mechanism suggests that an alternative framework which emphasizes the means by which a task is communicated may be more salient in some contexts.What does moire mean? The post goes on to describe a number of experiments, whose results suggest thatWhat matters is not the number of task examples ("shots"), but how well the prompt specifies the desired task.Failures of zero- and one-shot are often failures to guess-what-I-mean on the basis of a legitimately ambiguous prompt.The value of additional "shots" may only lie in their value as a proxy for clarity in task communication.Once you have written a sufficiently clear one-shot (or even zero-shot) prompt, the model does not do any better with additional examples -- the task has already been communicated.In some cases, one-shot is actually worse than zero-shot -- because it adds a new kind of ambiguity. ("We noticed that sometimes the model would respond to one-shot prompts as if the semantic content of the example translation was relevant to the new translation. Without multiple examples, its less clear that the translation instances are meant to be parallel and independent.")OpenAI made much of the fact that adding "shots" helps larger models more. (This was the result behind the whole "meta-learning" framing.) However...Larger models are also far better at zero-shot.Comparing a zero-shot prompt to a "control" prompt with no task information, larger models get more value out of the jump from control to zero-shot, and less value from additional examples.In other words: GPT-3's size lets it extract more information from examples, if you provide them. But its size also lets it extract far more information from the original question. So it doesn't need the examples as much.moire's post delights me for two reasons. First, I enjoyed learning the new experimental evidence it presents. But second, and perhaps more importantly, there was the sense of relief that someone actually did the experiments!OpenAI's few-shot "learning" results are full of ambiguity. The GPT-3 paper left me confused on a basic philosophical level, as I noted at the time.Surely the model isn't learning French de novo from 100 paired sentences -- either it speaks French at the outset, or it doesn't. So what could it be "learning" from those 100 examples?Likewise for virtually every result in the paper: grammar, commonsense reasoning, book-learning trivia quizzes... all things it's clearly possible to learn from reading massive swaths of the internet, and all things it's clearly impossible to learn from reading 10 to 100 examples. Yet the examples help? And I'm supposed to think that makes the model . . . smarter, somehow?Well, for French --> English translation at least, it turns out that the examples help in pretty much the only way they possibly could: by informing an already competent translator that you are requesting a translation.As we were attempting to replicate [OpenAI's translation] results, we noticed that when the model was failing on the 0-shot prompt, the failures were often of catastrophic nature: the task was not attempted at all, e.g. the model would output a newline, or another (or the same) French phrase instead of an attempt at an English translation.BLEU assigns a score from 0 to 1 to the accuracy of a translation, and would assign a score close to 0 to a catastrophic failure. 
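To make the averaging point concrete, here is a toy sketch -- hypothetical sentences, scored with NLTK's corpus_bleu, not data from the paper -- of how a single catastrophic failure (the model echoing the French input instead of translating) drags down a corpus-level BLEU average even when the other attempts are fine:

```python
# Toy illustration with placeholder sentences: corpus-level BLEU pools n-gram
# statistics over all segments, so one catastrophic non-attempt pulls the
# aggregate score down sharply.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [
    [["the", "cat", "sat", "on", "the", "mat"]],
    [["i", "would", "like", "a", "coffee", "please"]],
]
hypotheses = [
    ["the", "cat", "sat", "on", "the", "mat"],                   # reasonable attempt
    ["je", "voudrais", "un", "café", "s'il", "vous", "plaît"],   # catastrophic: task not attempted
]
smooth = SmoothingFunction().method1
print("corpus BLEU:", corpus_bleu(references, hypotheses, smoothing_function=smooth))
```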
If the extra shots are just about clarifying the task, then what should we make of the claim that "larger models benefit more from extra shots"? That it's . . . easier to clarify tasks to them using this one particular mechanism? When people say GPT-3 displays some new, frightening kind of intelligence, emerging only at its massive scale, surely they can't mean that?

And that's not even all. As moire shows, even though it is easier to clarify tasks to GPT-3 through the "shots" mechanism, it's also easier for GPT-3 to guess what you mean with no shots at all.

"My friend has such sharp hearing. Why, you see, conditional on her not hearing what you say the first time you say it, she will definitely hear it when you repeat yourself." Quite probably true, but not a good way to make the point!

What does it even mean that "language models are few-shot learners"? What does that tell us about the model's capabilities? We don't know. We haven't studied it at the level of depth appropriate for something that might actually matter.

After all, moire did a simple and innocuous set of experiments -- just trying to figure out which prompts work best -- and ended up drawing radically different conclusions about the whole thing than OpenAI did.

Oh, surely GPT-3 is plenty smart, I don't doubt that. The key question is how much smarter it got from scale, and in which ways. I don't think we'll know that until we put the model to the test, ecologically.

5. what scaling does

LMs are trained with a convex loss function. This means they are not min-maxers. They prefer to spread out their capabilities.

Given two areas of skill, Thing A and Thing B, they'll try to become equally good at both, even if that means not doing especially well at either. Given an extra marginal unit of potential-for-greatness, they'll smear it out as far as possible over all the Things they know about.

Thanks to the convex loss, they do this in proportion to how bad they are at each Thing to begin with -- leveling up their very worst abilities first, making themselves as un-specialized as they can.
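One way to see where that pressure comes from is the per-token cross-entropy loss itself. A minimal sketch with illustrative numbers: because -log(p) is convex and steepest where p is small, the same small gain in assigned probability buys a much bigger loss reduction on poorly modeled text than on text the model already predicts well, so an optimizer minimizing average loss keeps pouring capacity into the stragglers.

```python
import numpy as np

# Per-token cross-entropy: loss = -ln(p), where p is the probability the
# model assigned to the correct next token.
def xent(p):
    return -np.log(p)

delta = 0.05  # the same small improvement in assigned probability

for p in [0.05, 0.30, 0.70]:
    reduction = xent(p) - xent(p + delta)
    print(f"p={p:.2f}: loss {xent(p):.3f} -> {xent(p + delta):.3f} "
          f"(reduction {reduction:.3f})")

# The reduction is largest where the model was worst (p=0.05), so minimizing
# the average loss rewards shoring up the weakest predictions first -- the
# "spread out, un-specialized" pressure described above.
```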
As we've discussed above, LMs are also trained on very wide-ranging text corpora. Everything from fourth-tier clickbait news to advanced physics preprints to advanced-looking but crackpot physics preprints to badly-written furry porn to astonishingly well-written furry porn to mangled OCR transcripts of 18th-century law texts to et cetera, et cetera. And as far as they can manage, they will do exactly as well at modeling each and every (information-theoretic) bit of it.

Larger LMs achieve lower loss. We know that from the scaling laws. And lower loss means being better at predicting each individual word in that corpus, as uniformly as possible.

What does this imply?

First: that larger LMs are better at everything. It is difficult to find any capability which is present at all in smaller LMs, yet which does not improve with scale.

And second: that LMs abhor a skill vacuum.

Take a tiny LM, so tiny it really can't make heads or tails of some particular type of text. Now start scaling it up. As its capacity grows, its first priority is to eliminate its greatest weaknesses.

That one type of text, that utterly baffled the tiny LM? Convex loss hates that. Every additional unit of capacity gets invested, disproportionately, in bringing such stragglers up to par. The LM desperately wants to be at least sort-of-decent-I-guess at everything -- more than it wants to be a master of anything.

Given any one Thing, it will reach sort-of-decent-I-guess performance at that Thing at the smallest scale it can manage -- given the competition from all the other Things it desperately needs to be sort-of-decent-I-guess at.

By subjective standards, to human eyes casually scanning over LM samples, this happens pretty fast.

GPT-3 is great at lots of individual Things. But take any one of those Things, and you can bet a much tinier LM can do it at the sort-of-decent-I-guess level.

Humans, I think, tend to expect intelligence to grow in discontinuous jumps. Stages of child development. They don't understand that at this age. And then, a year or two later, they do understand -- fully.

LMs work the other way around. They never perform a sudden jump into competence where they could instead make a slow, gradual rise from "sort of seeming like they understand 10% of the time" to "sort of seeming like they understand 11% of the time" and so on. And this for every Thing uniformly.

It's very hard to find any point where scaling suddenly "flips a switch," and the model didn't Get It before, but now it Gets It. (The one example I know of is GPT-3 arithmetic, for some reason. Note that few-shot learning -- whether you call it "meta-learning" or not -- is as gradual as everything else, not a switch that flips on with GPT-3.)

[Addendum 11/26/21: I wrote this in ignorance of the BIG-Bench project, which is tracking returns to scale for a large and diverse set of tasks.

BIG-Bench has not published results yet, but they livestreamed some preliminary results in May 2021; see also LW discussion here.

In the livestream, they give two examples of tasks with smooth scaling and two examples of tasks with a "sudden switch-flip" around 100B params (this slide). They also show, in the aggregate over all tasks, a "switch-flip" to faster scaling around 100B (this slide), although this is tricky to interpret since it depends on the task mixture. End addendum]

This confounds our intuitive assessments of LM scaling.

I have been on this train since the beginning, when tiny lil' GPT-2 124M blew my mind. I've used every new big model from almost the moment it came out, as excited as a kid on Christmas morning.

I did this with every step of the (in retrospect, rather silly) GPT-2 staged release. My tumblr bot started out as (I think) 774M. Then I jumped to 1.5B.

That was as far as free OpenAI models went, but when EleutherAI came out with a 2.7B model, I finetuned that one for my bot. I was willing to endure the absolute horrors of mesh-tensorflow (don't ask) to get that 2.7B model up and running.
Then, when EleutherAI made a 6.1B model, I got my bot using it in under a week. I feel a kind of double vision, se | Content Synthesis/Content Creation/Decision Making | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null
news | Michael Bellusci | Block Launches Cash App Feature Allowing Users to Gift Bitcoin | The new feature is similar to those offered by PayPal and Coinbase. | https://finance.yahoo.com/news/block-launches-cash-app-feature-193848128.html | https://s.yimg.com/uu/api/res/1.2/e_ZSFLR4NhDDdAcqMUL2bw--~B/aD02MDA7dz04MDA7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/coindesk_75/25113a6040efa013d7f114f08988fd2a | 2021-12-14T19:38:48Z | (Reuters) - "Do you want to see yourself acting in a movie or on TV?" said the description for one app on online stores, offering users the chance to create AI-generated synthetic media, also known as deepfakes. How increasingly sophisticated technology is applied is one of the complexities facing synthetic media software, where machine learning is used to digitally model faces from images and then swap them into films as seamlessly as possible. The technology, barely four years old, may be at a pivotal point, according to Reuters interviews with companies, researchers, policymakers and campaigners. "Once the entry point is so low that it requires no effort at all, and an unsophisticated person can create a very sophisticated non-consensual deepfake pornographic video - that's the inflection point," said Adam Dodge, an attorney and the founder of online safety company EndTab. | Content Creation/Detection and Monitoring | Unknown | null | null | null | null | null | null |
news | Mark Pesce | Kids Grok AI, But Not Its Pitfalls | The green light next to my computer's built-in webcam turned on. "Uh-oh," I thought to myself, realizing just then that the software I was trying out was recording me. My browser showed an image of my body with some dots placed over it, as the computer worked to map my posture. In using this software, I should have been down on the floor, stretched out, doing an isometric exercise known as the plank. But I wasn't, and the app eventually gave up trying to assess my performance, tut-tutting me for my poor form. Making all of that happen in an app required the integration of many technologies: webcam live streaming, computer vision, and most significantly, a machine-learning model trained to discriminate a well-performed plank from my nonexistent effort. That entails quite a bit of work, most of it at the high end of what a software designer would typically be asked to deliver. But this amazing app had been written by a seventh grader. For the last few decades, I've obsessively followed developments in interactive toys: the Furby and Lego Mindstorms, Sony's PlayStation and Bandai's Tamagotchi, all of it handed to kids without a second thought, and all of it shaping the way they think. We often learn by playing—particularly when we're young. The objects we play with help us build an enduring model of the way things work. A child who chatted with a Furby 25 years ago has no trouble as an adult engaging with Alexa. Today's kids have toys powered by artificial intelligence and are getting quite comfortable using it. Some of them are even able to apply AI to novel problems. I get to see this up close every year as a judge at an Australia-wide competition-cum-science-fair, where students prototype and present some incredibly creative IT projects. A decade ago, a typical project might have involved an Arduino or a Raspberry Pi doing something clever like operating a scheduling system for a school playground. (Kids often solve problems they experience themselves.) This year saw an explosion of projects using Google's TensorFlow—such as that plank evaluator—and others using the still-in-beta application programming interface (API) for the awesomely powerful GPT-3 text-analysis engine from OpenAI. Both have become accessible to secondary school students because Google and OpenAI recently released new APIs, making the sophisticated capabilities of these machine-learning systems easy to exploit. Kids dream up an application, then either adapt some existing code or just throw themselves into it and build something from scratch with the sort of obsessive focus adolescents find effortless. Alongside Internet-of-Things and robotic projects, this year's crop of applications demonstrated that the next generation already understands the potential of AI and knows exactly how to use it to solve a problem. But they don't always grasp the pitfalls. That was particularly obvious in one of the apps I reviewed: Trained using a million Reddit comments, it reflected the worldview and experience of your average Redditor—a narrow enough base to inadvertently generate (and reinforce) unconscious biases. These blind spots echo the broader challenge that AI poses. And they point to the growing importance of an education that includes both technical skills and a solid grounding in ethics. After all, with great power comes great responsibility.
Youngsters have shown themselves adept at exercising these new AI powers; let’s do what we can to make sure they’re equally good at applying them responsibly. This article appears in the January 2022 print issue as “Power Play.” | https://spectrum.ieee.org/kids-ai | 2021-12-16T20:00:00Z | Dronamics will run trials with its partners, including DHL and Hellmann Worldwide Logistics, in the hope of eventually fielding thousands of drones, each carrying as much as 350 kilograms of cargo up to 2,500 kilometers. The European Union has facilitated this sort of experimentation by instituting a single certification policy for drone aircraft. Once its aircraft are certified, Dronamics must get a route approved through one of the E.U.s member countries; that done, it should be fairly easy to get other member countries to agree as well.In October, Dronamics announced that it would use Malta as its base, with a view to connecting first to Italy and later to other Mediterranean countries.One thing Dronamics doesnt do is full-scale autonomy: Its planes do not detect and avoid obstacles. Instead, each flight is programmed in advance, in a purely deterministic way. Flights often take place in controlled airspace and always between drone ports that the company controls. Someone on the ground monitors the flight from afar, and if something unexpected arises, that person can redirect the plane.We operate like a proper airline, but we can intervene, says Svilen Rangelov, the cofounder and CEO of Dronamics. Were looking for underserved airports, using time slots where there is no passenger traffic. In the United States there are 17,000 airports, but only about 400 are commercially used. The rest dont have regular service at all.Unlike the multicopter burrito drones of years past, or even Amazons prototypes, these machines fly on fixed wings and are powered by internal combustion engines, the better to carry big loads long distances and to operate at off-the-grid airfields. Anything less than 200 miles [about 320 kilometers] is not appropriate because, given the time to get to the airport, fly, and then pick up, you may as well truck it, Rangelov says.The companys drone is called Black Swan, a phrase often used to describe important but unpredictable events. That was precisely the reasoning behind the name, Rangelov says, explaining what makes this drone so unique and rare. "We knew [the drone] had to be cheaper to produce and to operate than any existing models.The drone likely will not be carrying one pallet of the same things but multiple packages for many customers.Because this vehicle is intended to transport cargo with no people on board, Dronamics could design the interior to fit cargo pallets. Its exactly the right cargo size for this business, Rangelov says. It likely will not be carrying one pallet of the same things but multiple packages for many customers. And Dronamics claims it can carry cargo for half of what todays air freighters charge.Hellmann Worldwide Logistics sees a lot of potential for using Dronamics in Africa and other places with limited infrastructure. For now, though, the company is focused on the dense population, manageable distances, and supportive governmental institutions of Europe.Especially between north and south Europefrom Germany and Hungary, where theres a lot of automotive business, says Jan Kleine-Lasthues, Hellmanns chief operating officer for air freight. 
There are also supply lines going into Italy that service the cruise ships on the Mediterranean Sea, he says, and fresh fish would be ideal cargo. Indeed, Dronamics is working on a temperature-controlled container.What effect would massive fleets of such drones have had on todays supply-chain problems? It could help, he says. If the container isnt arriving with production material, we could use drones to keep production alive. But its not replacing the big flowits just a more flexible, more agile mode of transport.Before cargo drones darken the skies, though, Hellmann wants to see how the rollout goes.First of all, we want to try it, Kleine-Lasthues says. One use case is replacing commercial air freightfor example, Frankfurt to Barcelona by drone; also, theres a use case replacing vans. If it is working, I think it can be quickly ramped up. The question is how fast can Dronamics add capacity to the market. This article appears in the January 2022 print issue as Flying Pallets Without Pilots. | Content Creation/Discovery/Personalization | Education, Training, and Library/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
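The description in the row above mentions a seventh grader's plank evaluator built from a webcam stream, computer vision, and a trained model. A hypothetical sketch of the simplest geometric check such an app might run on pose-model output follows -- the landmark coordinates here are invented, and in practice they would come from an off-the-shelf pose estimator (for example MediaPipe or a TensorFlow MoveNet model), not from this snippet.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, given (x, y) coords."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

def looks_like_plank(shoulder, hip, ankle, tolerance_deg=15.0):
    """A plank keeps shoulder, hip, and ankle roughly collinear (angle near 180)."""
    return abs(180.0 - joint_angle(shoulder, hip, ankle)) <= tolerance_deg

# Made-up normalized image coordinates for one video frame.
print(looks_like_plank(shoulder=(0.30, 0.50), hip=(0.50, 0.52), ankle=(0.72, 0.55)))
```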
news | Will Douglas Heaven | 2021 was the year of monster AI models | It’s been a year of supersized AI models. When OpenAI released GPT-3, in June 2020, the neural network’s apparent grasp of language was uncanny. It could generate convincing sentences, converse with humans, and even autocomplete code. GPT-3 was also monstrous in scale—larger than any other neural network ever built. It kicked off a whole new trend in… | https://www.technologyreview.com/2021/12/21/1042835/2021-was-the-year-of-monster-ai-models/ | 2021-12-21T10:00:00Z | What does it mean for a model to be large? The size of a model (a trained neural network) is measured by the number of parameters it has. These are the values in the network that get tweaked over and over again during training and are then used to make the model's predictions. Roughly speaking, the more parameters a model has, the more information it can soak up from its training data, and the more accurate its predictions about fresh data will be. GPT-3 has 175 billion parameters -- 10 times more than its predecessor, GPT-2. But GPT-3 is dwarfed by the class of 2021. Jurassic-1, a commercially available large language model launched by US startup AI21 Labs in September, edged out GPT-3 with 178 billion parameters. Gopher, a new model released by DeepMind in December, has 280 billion parameters. Megatron-Turing NLG has 530 billion. Google's Switch-Transformer and GLaM models have one and 1.2 trillion parameters, respectively. The trend is not just in the US. This year the Chinese tech giant Huawei built a 200-billion-parameter language model called PanGu. Inspur, another Chinese firm, built Yuan 1.0, a 245-billion-parameter model. Baidu and Peng Cheng Laboratory, a research institute in Shenzhen, announced PCL-BAIDU Wenxin, a model with 280 billion parameters that Baidu is already using in a variety of applications, including internet search, news feeds, and smart speakers. And the Beijing Academy of AI announced Wu Dao 2.0, which has 1.75 trillion parameters. Meanwhile, South Korean internet search firm Naver announced a model called HyperCLOVA, with 204 billion parameters. Every one of these is a notable feat of engineering. For a start, training a model with more than 100 billion parameters is a complex plumbing problem: hundreds of individual GPUs (the hardware of choice for training deep neural networks) must be connected and synchronized, and the training data must be split into chunks and distributed between them in the right order at the right time. Large language models have become prestige projects that showcase a company's technical prowess. Yet few of these new models move the research forward beyond repeating the demonstration that scaling up gets good results. There are a handful of innovations. Once trained, Google's Switch-Transformer and GLaM use a fraction of their parameters to make predictions, so they save computing power. PCL-Baidu Wenxin combines a GPT-3-style model with a knowledge graph, a technique used in old-school symbolic AI to store facts. And alongside Gopher, DeepMind released RETRO, a language model with only 7 billion parameters that competes with others 25 times its size by cross-referencing a database of documents when it generates text. This makes RETRO less costly to train than its giant rivals. | Content Creation/Content Synthesis/Prediction/Information Retrieval Or Search | Computer and Mathematical/Arts, Design, Entertainment, Sports, and Media/Education, Training, and Library | null | null | null | null | null | null
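The article above notes that training models past 100 billion parameters becomes a plumbing problem spread over hundreds of GPUs. A rough back-of-envelope sketch of why -- the 16-bytes-per-parameter figure is a common rule of thumb for mixed-precision training state (weights, gradients, optimizer moments), not a number from the article, and it ignores activations entirely.

```python
# Crude lower bound on hardware needed just to hold training state in memory.
BYTES_PER_PARAM = 16   # rule-of-thumb for fp16 weights/grads + fp32 Adam state
GPU_MEMORY_GB = 80     # e.g. one 80 GB accelerator

for name, n_params in [("GPT-3", 175e9), ("Gopher", 280e9),
                       ("Megatron-Turing NLG", 530e9)]:
    state_gb = n_params * BYTES_PER_PARAM / 1e9
    min_gpus = state_gb / GPU_MEMORY_GB
    print(f"{name}: ~{state_gb:,.0f} GB of training state "
          f"-> at least ~{min_gpus:,.0f} x 80 GB GPUs, before activations "
          f"or any extra replicas for throughput")
```

Even this loose bound runs to dozens of cards per model copy; add activation memory and data parallelism for reasonable training times and the counts quickly reach the hundreds the article describes.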
news | Verge Genomics Secures $98 Million in New Financing | SAN FRANCISCO--(BUSINESS WIRE)--Verge Genomics, a tech-enabled drug discovery company pioneering the use of artificial intelligence (AI) and human data to develop new drugs, announced today it has closed an oversubscribed $98 million equity financing. The Series B financing was led by funds managed by BlackRock and supported by new healthcare and technology investors, including Eli Lilly and Company, Merck Global Health Innovation Fund (Merck GHI), Section 32, and Vulcan Capital, alongside Verg | https://www.businesswire.com/news/home/20211216005023/en/Verge-Genomics-Secures-98-Million-in-New-Financing | 2021-12-16T11:36:08Z | SAN FRANCISCO--(BUSINESS WIRE)--Verge Genomics, a tech-enabled drug discovery company pioneering the use of artificial intelligence (AI) and human data to develop new drugs, announced today it has closed an oversubscribed $98 million equity financing. The Series B financing was led by funds managed by BlackRock and supported by new healthcare and technology investors, including Eli Lilly and Company, Merck Global Health Innovation Fund (Merck GHI), Section 32, and Vulcan Capital, alongside Verges existing investors, including Threshold Ventures, ALS Investment Fund, Tao Capital Partners, Lifeforce Capital, and others.We are thrilled that multiple stakeholders in our space, from leading pharmaceutical companies to healthcare and technology investors, recognize the potential of our human-centric AI platform to transform drug discovery for the most challenging diseases of our generation, said Alice Zhang, co-founder and CEO of Verge Genomics. Our platform has demonstrated its ability to identify novel targets from human datasets for these diseases and rapidly develop them into proprietary clinical candidates. With our Series B completed, the quality and breadth of our shareholder base allows us to validate our platform in the clinic. The addition of new clinical datasets has the potential to dramatically improve our self-reinforcing learning platform and accelerate us towards our mission of developing better drugs, faster.Verge has built an end-to-end, AI-driven drug discovery platform that includes one of the fields largest proprietary genomic datasets from human brain tissue. Since its Series A in 2018, Verge has become one of the first AI-enabled drug discovery companies to discover a novel target and develop it internally into a proprietary clinical candidate entirely using its platform. In July 2021, the company announced a $706 million partnership with Lilly to develop new treatments for amyotrophic lateral sclerosis (ALS) using its platform. Funding from the Series B will be used to exponentially grow the number of preclinical and clinical programs in Verges pipeline and advance its lead programs through clinical testing. Additionally, Verge will continue to expand its proprietary technology to enable improvements in many steps of the drug discovery process including translational medicine and clinical development.Exponential improvements in technology are converging to create an inflection point in drug development, said William Abecassis, BlackRocks Head of Innovation Capital. Verge has uniquely positioned itself to harness this opportunity, having developed both a world-class AI discovery platform and the most comprehensive neurodegenerative genomic database of human patients in the world. 
Together with their team of prominent leaders in neuroscience, we are excited to be supporting them in shaping the next era of technology-enabled drug development.David M. Rubin, PhD., Managing Director of Merck GHI added: Identifying better targets is one of the critical challenges faced in drug discovery today. The speed with which Verge advanced their lead program underscores the platforms potential to develop new candidates with greater efficiency. We are pleased to be joining this financing alongside experts from pharma, healthcare and technology who share the belief that Verges platform has the potential to deliver life-changing medicines to patients.About Verge GenomicsVerge uses artificial intelligence and human data to develop better drugs faster for large unmet diseases. Verge has built an end-to-end drug discovery and development platform, featuring one of the fields largest and most comprehensive proprietary patient genomics datasets in neuroscience. Verge applies machine learning to reveal new targets from human datasets and develops drugs with greater probabilities of success using its human-centric biology and chemistry platforms.Verge is the first AI-enabled drug discovery company to internally develop a clinical candidate from a novel target discovered from its platform. Verge has further demonstrated the power of its platform by delivering a broad preclinical and discovery pipeline spanning diverse therapeutic areas, with its first program entering the clinic in 2022. The company is led by experienced drug developers and computational biologists with a shared belief that technology has created a new opportunity to deliver life-changing medicines more efficiently.For additional information, please visit www.vergegenomics.com. Follow us on LinkedIn and Twitter.About Merck Global Health Innovation FundMerck Global Health Innovation Fund (Merck GHI) is evolving corporate healthcare venture capital globally by utilizing their healthcare ecosystem strategy. This investment strategy connects innovative companies with complementary technologies to develop integrated healthcare solutions. Merck GHI has $500M under management and provides growth capital to emerging healthcare technology companies worldwide while leveraging the vast R&D-based, global resources of Merck. With a vision that data will be the currency in healthcare, Merck GHI invests broadly in digital health. Merck GHI invests in platform companies with proven technologies or business models where Mercks expertise and perspectives can accelerate revenue growth and enhance value creation. Since late 2010, Merck GHI has made over 50 investments in Digital Health companies. www.merckghifund.comShumaker, Loop & Kendrick LLP and Green Shoots Consulting, LLC acted as advisors to Merck GHI in connection with the Verge Genomics transaction. | Prediction/Discovery | Life, Physical, and Social Science/Computer and Mathematical | null | null | null | null | null | null |
news | Avaya ENGAGE 2021 Highlights Experience Builders™ and the Unique Value They Deliver Enhancing Customer and Employee Engagement | ORLANDO, Fla.--(BUSINESS WIRE)--AVAYA ENGAGE 2021 – Avaya (NYSE:AVYA), a global leader in solutions to enhance and simplify communications and collaboration, is highlighting the ecosystem of Avaya Experience Builders™ this week at its ENGAGE 2021 user conference. Experience Builders align Avaya services, partners, technology developers, customers and citizen developers in an extensive global network designed to help enterprises build better experiences for employees and customers, when a one-si | https://www.businesswire.com/news/home/20211213005200/en/Avaya-ENGAGE-2021-Highlights-Experience-Builders%E2%84%A2-and-the-Unique-Value-They-Deliver-Enhancing-Customer-and-Employee-Engagement | 2021-12-13T13:08:39Z | ORLANDO, Fla.--(BUSINESS WIRE)--AVAYA ENGAGE 2021 Avaya (NYSE:AVYA), a global leader in solutions to enhance and simplify communications and collaboration, is highlighting the ecosystem of Avaya Experience Builders this week at its ENGAGE 2021 user conference. Experience Builders align Avaya services, partners, technology developers, customers and citizen developers in an extensive global network designed to help enterprises build better experiences for employees and customers, when a one-size-fits-all solution is not sufficient.Avaya Experience Builders makes it easier for businesses to build and deliver customized experiences by providing co-development support, including existing or completely tailored experiences, or technology to compose their own. Experience Builders span a wide range of markets and use cases, including:Quantiphi is an Experience Builder specializing in Google Cloud Contact Center AI (CCAI) services for transforming the experiences of customers and staff across a number of verticals including healthcare, financial services, retail and more. The innovation we develop in AI and machine learning technologies brings ease of integration with the Avaya OneCloud portfolio, helping organizations reduce costs and improve the customer journey across channels. Avaya customers continue to see positive results and I am excited about the tremendous opportunity we have in front of us, said Gaurav Johar, Practice Leader for Conversational AI, Quantiphi. As our ecosystem of fellow developers and innovators expands, we are able to serve new customers with unique solutions designed to meet the specific needs of the many markets we serve, including healthcare, autonomous vehicles, banking, retail, telecom, media and entertainment, and government.ConvergeOne is a proven, services-led cloud solution provider that utilizes its intellectual property and unique methodologies to create value for customers and develop progressive solutions that connect people with purpose. The company has spent decades building upon its customer offerings which span the core technology markets including cloud, customer experience, cybersecurity, data center, enterprise networking, and the modern workplace. ConvergeOne is an Experience Builder leveraging Avaya OneCloud CPaaS as seamless and flexible architecture for its C1 Conversations solution. With C1 Conversations, customers have a platform to accelerate their digital transformation initiatives without fear of further fragmenting their service with point solutions, said Jeff Bloom, Senior Director of Business Development, ConvergeOne. 
C1 Conversations is easy to adopt, meeting customers where they are, requiring no rip and replace of existing communications solutions.Clemson University is one of the leading public research institutions in the U.S. and an Experience Builder leveraging Avaya OneCloud collaboration capabilities to create an exceptional digital learning experience for students and faculty. Developed pre-pandemic but accelerated due to the need for remote learning, the platform has been embraced by students and is being expanded for modern computational biology training activities. This cloud-based platform has made me a better teacher, and I will never go back to in-class only learning, said Clemson Professor Frank Alex Feltus. Currently comprised of interactive learning resources, experiential labs, virtual collaboration and online classrooms, AI-powered curation and more, we have a virtual, centralized learning experience that can grow with us, tailored to our needs.SpinSci is an established market innovator focused on digital healthcare solutions through a comprehensive set of patient access workflows. The company serves over 100 Fortune 500 companies and over 50,000 clinical and non-clinical agents. SpinScis cloud-based Electronic Health Record (EHR) integration drives context-based care management to health systems by leveraging Avaya OneCloud CPaaS and CCaaS. Providing optimal patient engagement and real-time data driven care management today cant be based on a monolithic solution, but requires intuitive multi-experience capabilities that orchestrates across disparate systems, said Rajit Kumar, CEO, SpinSci Technologies. Avaya is a leader in contact center solutions with an extensive ecosystem of developers and partners, and a natural fit for us to collaborate with.Journey is the developer of a patented, trusted identity platform and network enabling highly secure, privacy-preserving customer experiences in establishing identity in the contact center. The Journey digital trusted identity platform, integrated with Avaya OneCloud CCaaS offerings, enables enterprises to onboard, authenticate, interact and transact with customers in a secure and simple manner leveraging sensors and tools in smartphones or laptops. With its use of facial, voice and device biometric technologies and more, Journey can authenticate users with 99.9999% accuracy in less than two seconds. Journey's solutions provide Avaya contact centers with unique security, privacy, payment, and regulatory compliance capabilities that significantly enhance the experiences they are able to provide to their own customers, said Brett Shockley, CEO and co-founder, Journey. We are excited to be showing our solutions at Avaya ENGAGE 2021, and connecting with the growing innovation network that is represented here.We have tens of thousands of partners, over 150,000 developers, plus more than 100,000 customers worldwide, and the ambition to ensure every citizen developer can build experiences with the Avaya OneCloud platform. Avaya OneCloud and the innovation Experience Builders are delivering every day are creating stronger brands, changing entire industries, and in many cases improving lives, said Simon Harrison, Senior Vice President and CMO, Avaya. Experience as a Service is what we can provide enabling Experience Builders around the world to compose and wrap solutions around their own customers, tailored to every use case and every user. 
Avaya ENGAGE 2021 is the showcase for these innovators to tell their stories, and we embrace these partners who are changing the game.Avaya is providing demos of Avaya Experience Builders innovation at the annual Avaya ENGAGE user conference in Orlando, FL this week. For more information, go to Avaya ENGAGE 2021 at: https://avaya-engage.avaya.com/avaya-engage-2021Additional ResourcesAbout AvayaBusinesses are built by the experiences they provide, and everyday millions of those experiences are delivered by Avaya Holdings Corp. (NYSE: AVYA). Avaya is shaping what's next for the future of work, with innovation and partnerships that deliver game-changing business benefits. Our cloud communications solutions and multi-cloud application ecosystem power personalized, intelligent, and effortless customer and employee experiences to help achieve strategic ambitions and desired outcomes. Together, we are committed to help grow your business by delivering Experiences that Matter. Learn more at http://www.avaya.comCautionary Note Regarding Forward-Looking StatementsThis document contains certain forward-looking statements. All statements other than statements of historical fact are forward-looking statements for purposes of the U.S. federal and state securities laws. These statements may be identified by the use of forward-looking terminology such as "anticipate," "believe," "continue," "could," "estimate," "expect," "intend," "may," "might," our vision, "plan," "potential," "preliminary," "predict," "should," "will," or would or the negative thereof or other variations thereof or comparable terminology. The Company has based these forward-looking statements on its current expectations, assumptions, estimates and projections. While the Company believes these expectations, assumptions, estimates and projections are reasonable, such forward-looking statements are only predictions and involve known and unknown risks and uncertainties, many of which are beyond its control. The factors are discussed in the Companys Annual Report on Form 10-K and subsequent quarterly reports on Form 10-Q filed with the Securities and Exchange Commission (the SEC) available at www.sec.gov, and may cause the Companys actual results, performance or achievements to differ materially from any future results, performance or achievements expressed or implied by these forward-looking statements. The Company cautions you that the list of important factors included in the Companys SEC filings may not contain all of the material factors that are important to you. In addition, in light of these risks and uncertainties, the matters referred to in the forward-looking statements contained in this press release may not in fact occur. The Company undertakes no obligation to publicly update or revise any forward-looking statement as a result of new information, future events or otherwise, except as otherwise required by law.All trademarks identified by ®, TM, or SM are registered marks, trademarks, and service marks, respectively, of Avaya Inc. All other trademarks are the property of their respective owners.Source: Avaya Newsroom | Digital Assistance/Content Synthesis/Recommendation | Management/Business and Financial Operations/Healthcare Practitioners and Support/Education, Training, and Library | null | null | null | null | null | null |
news | ET Bureau | Courseplay gets Rs 3 crore in seed funding from Inflection Point, others | Founded in 2016 by Arjun Gupta, Courseplay captures data to provide growth suggestions to employers and their teams through its web application, chatbot, and mobile apps. | https://economictimes.indiatimes.com/tech/funding/courseplay-gets-rs-3-crore-in-seed-funding-from-inflection-point-others/articleshow/88745005.cms | 2022-01-07T00:36:41Z | Bengaluru:Courseplay, an employee growth enablement platform, has raised Rs 3 crore in a seed funding round led by Inflection Point Ventures.The capital will be used to acquire more customers, develop new AI-driven capabilities, as well as for expansion across India, Southeast Asia, and the Middle East.Founded in 2016 by Arjun Gupta, Courseplay captures data to provide growth suggestions to employers and their teams through its web application, chatbot, and mobile apps. In 2020, the startup engaged almost 400,000 employees, offering them over 100,000 activities. The Mumbai-based startup counts Amazon, SpiceJet, Emami, Swiggy, etc., as its clientele.Were ready to capitalise on the wave of digital transformation and accelerate our expansion plans across emerging markets like India, Southeast Asia and the Middle East, Gupta said.Corporate learning and talent development is a $450-billion market worldwide. Employee experience is one of the fastest growing segments in the enterprise SaaS sector. Coursera estimates that the market will grow at 20-25% annually for the next three years, given the recent push for digital transformation by companies in the region.Upskilling employees and improving workforce performance through regular staff training programmes has always been a must for companies across sectors. However, training employees offline is an expensive affair, said Vinay Bansal, founder and chief executive of Inflection Point Ventures. Courseplay provides the most cost-effective and scalable online learning technology solution. This online training programme has been the need of an hour especially with the world going digital due to Covid-19.Trusted by Industry LeadersKunal BahlCo-Founder & CEO, SnapdealRitesh AgarwalFounder & CEO, OyoDeepinder GoyalCo-founder & CEO, Zomato | Content Synthesis/Personalization/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null |
news | Greg Nichols | 2022: A major revolution in robotics | A world class roboticist on why everything is about to change. | https://www.zdnet.com/article/2022-prediction-a-major-revolution-in-robotics/ | 2021-12-14T12:00:02Z | For a while now, those who track robotics development have taken note of a quiet revolution in the sector. While self-driving cars have grabbed all the headlines, the work happening at the intersection of AI, machine vision, and machine learning is fast becoming the foundation for the next phase of robotics.By combining machine vision with learning capabilities, roboticists are opening a wide range of new possibilities like vision-based drones, robotic harvesting, robotic sorting in recycling, and warehouse pick and place. We're finally at the inflection point: The moment where these applications are becoming good enough to provide real value in semi-structured environments where traditional robots could never succeed.To discuss this exciting moment and how it's going to fundamentally change the world we live in, I connected with Pieter Abbeel, a professor of electrical engineering and computer science at the University of California, Berkeley, where he is also the director of the Berkeley Robot Learning Lab and co-director of the Berkeley AI Research lab. He is co-founder and Chief Scientist of Covariant and host of the excellent The Robot Brains podcast.In other words, he's got robotics bon fides, and what he says about the near future of automation is nothing short of astounding.GN: You call AI Robotics a quiet revolution. Why is it revolutionary and why do you think recent developments are still under the radar, at least in popular coverage?For the past sixty years, we've had physically highly capable robots. However, they just weren't that smart. So these physically highly capable robots ended up constrained to factories mostly car and electronics factories where they were trusted to execute carefully pre-programmed motions. These robots are very reliable at doing the same thing over and over. They create value, but it's barely scratching the surface of what robots could do with better intelligence. The quiet revolution is occurring in the area of artificial intelligence (AI) Robotics. AI robots are empowered with sophisticated AI models and vision. They can see, learn, and react to make the right decision based on the current situation. Popular coverage of robotics trends towards home-butler style robots and self-driving cars because they're very relatable to our everyday lives. Meanwhile, AI Robotics is taking off in areas of our world that are less visible but critical to our livelihoods think e-commerce fulfillment centers and warehouses, farms, hospitals, recycling centers. All areas with a big impact on our lives, but not activities that the average person is seeing or directly interacting with on a daily basis. GN: Semi-structured environments are sort of the next frontier for robots, which have traditionally been confined to structured settings like factories. Where are we going to see new and valuable robotics deployments in the next year or so?The three big ones I anticipate are warehouse pick and pack operations, recycling sortation, and crop harvesting/care. From a technological point of view, these are naturally in the striking range of recent AI developments. 
And also personally, I know people working on AI Robotics in each of those industries and they are making great strides.GN: Why is machine vision one of the most exciting areas of development in robotics? What can robots now do that they couldn't do, say, five years ago?Traditional robotic automation relied on very clever engineering to allow pre-programmed-motion robots to be helpful. Sure, that worked in car and electronics factories, but ultimately it's very limiting. Giving robots the gift of sight completely changes what's possible. Computer Vision, the area of AI concerned with making computers and robots see, has undergone a night-and-day transformation over the past 5-10 years --- thanks to Deep Learning. Deep Learning trains large neural networks (based on examples) to do pattern recognition, in this case pattern recognition enabling understanding of what's where in images. And then Deep Learning, of course, is providing capabilities beyond seeing. It allows for robots to also learn what actions to take to complete a task, for example, pick and pack an item to fulfill an online order.GN: A lot of coverage over the past decade has focused on the impact of sensors on autonomous systems (lidar, etc). How is AI reframing the conversation in robotics development?Before Deep Learning broke onto the scene, it was impossible to make a robot "see" (i.e. understand what's in an image). Consequently, in the pre-Deep Learning days, a lot of energy and cleverness went into researching alternative sensor mechanisms. Lidar is indeed one of the popular ones (how it works is that you send a laser beam out, measure how long it takes to get reflected, and then multiply by speed of light to determine distance to the nearest obstacle in that direction). Lidar is wonderful when it works, but the failure modes can't be discounted (e.g., Does the beam always make it back to you? Does it get absorbed by a black surface? Does it go right through a transparent surface? etc..). But in a camera image, we humans can see what's there, so we know the information has been captured by the camera, we just need a way for the computer or robot to be able to extract that same information from the image. AI advances, specifically Deep Learning, has completely changed what's possible in that regard. We're on a path to build AI that can interpret images as reliably as humans can, as long as the neural networks have been shown enough examples. So there is a big shift in robotics from focusing on inventing dedicated sensory devices to focusing on building the AI that can learn and empower our robots using the natural sensory inputs already available to us, especially cameras.GN: Robotics has always been a technology of confluences. In addition to AI and machine vision, what technologies have converged to make these deployments possible?Indeed, any robotic deployment requires a confluence of many great components and a team that knows how to make them all work together. Besides AI there is, of course, the long-existing technology of reliable industrial grade manipulator robots. And, crucially, there are cameras and computers, which are ever becoming better and cheaper.GN: What's going to surprise people about robots over the next five years?The magnitude at which robots are contributing to our everyday lives, most often without seeing any of these robots. 
Indeed, we likely won't personally see the robots physically interacting with the things we use everyday but there will be a day soon in which the majority of the items in our household were touched by a robot at least once before reaching us. | Unknown | Unknown | null | null | null | null | null | null |
news | Valkyrie Trading Society | GSI Technology's Product Could Revolutionise Deep Learning | Search and recommendation processes are already very important in how much mass-adopted technology functions. | https://seekingalpha.com/article/4475456-gsi-technologys-product-could-revolutionise-deep-learning | 2021-12-17T14:00:00Z | I'm not a computer scientist, and I don't have much experience with super deep models, but from what I do know, it seems that GSI Technology (GSIT) could go far beyond the sparse matrix operations that are inherent in the super large datasets held by companies that do a lot of search and recommendation like Netflix (NFLX) and Amazon (AMZN). Their APU, which does in-place operations on matrices within the memory, meaning no more memory caches needed, could change a lot of deep learning. I'll try to explain the concept to the best of my abilities for laymen, to help them understand why an APU could help improve lots of highly complex deep learning models that are at the forefront of mimicking some of our most human faculties like vision and general understanding.

Neurons That Fire Together, Wire Together

The idea that neurons that get activated at the same moments are probably connected is an idea that originates from the life sciences. Stimuli are likely to activate the same complex of neurons every time they are introduced. From a modelling perspective, this means that we can sort of meld together these neurons, since their activations will be the same for the same stimuli in various situations. This is the Hebbian principle. These sorts of networks, characterised by correlated activations, can be represented as sparse matrices, and are good representations in the sense that they accurately capture many very relevant real-life phenomena, as in computer vision. A network represented by a sparse matrix also means that we can reduce the complexity and number of parameters that we need to estimate in our model. This is good because with too many parameters, we risk overfitting the data, meaning we risk creating a combination of parameters that accounts for every observed outcome perfectly, but cannot be generalised and used in new situations not seen before in the observations that trained a model. So sparsity in deep neural networks is a really useful thing, because it helps us reduce the parameters while also attacking very accurately a lot of real-life modelling problems, in fact many of the most complex ones, which require a mimicking of entirely biological systems in the brain.

APUs Step In

The thing is, modern computing is so bad with sparse matrices. When you have even very sparse but non-uniform matrices (you'd never have uniform matrices unless you found a very smart way to compel that), so sparse that the number of operations might be reduced by a factor of 100x, it would still take longer because of all the lookup and cache misses. So basically, in practice there's no value in using sparse matrices, even if in theory it's a great way to reduce the number of parameters involved. In some computer vision applications, it is possible to imitate sparseness even when using non-sparse matrices, for which modern computing is most suited, but it requires specific assumptions and circumstances that, while possible in computer vision applications, may not be possible in other deep learning applications. But an APU would solve these issues.
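To make the tradeoff concrete, here is a minimal sketch on ordinary hardware (scipy, not GSIT's APU): a sparse format slashes memory, but whether the sparse matrix-vector product is actually faster depends on the density and on how badly the scattered index lookups miss the cache -- the overhead described above. Timings vary by machine, so only the memory comparison is printed.

```python
import numpy as np
from scipy import sparse

n = 5000
density = 0.01  # 1% of entries are nonzero

A_sparse = sparse.random(n, n, density=density, format="csr", random_state=0)
A_dense = A_sparse.toarray()
x = np.random.default_rng(0).standard_normal(n)

# Memory: CSR stores only the nonzero values plus their index arrays.
dense_mb = A_dense.nbytes / 1e6
sparse_mb = (A_sparse.data.nbytes + A_sparse.indices.nbytes
             + A_sparse.indptr.nbytes) / 1e6
print(f"dense: {dense_mb:.0f} MB, csr: {sparse_mb:.1f} MB")

# Same numerical result either way; whether the sparse matvec is *faster*
# depends on density and on how the scattered reads interact with the cache.
y_dense = A_dense @ x
y_sparse = A_sparse @ x
print(np.allclose(y_dense, y_sparse))
```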
With the ability for the APU to work on sparse matrices, model architectures in deep learning applications that use sparse matrices would become viable, while at the moment they are completely unviable. This would create a generation of models that are more parsimonious, i.e., don't have too many unnecessary parameters, but still tackle the problems at hand in a rational and effective way. It would mean a generation of models that could train faster and produce more accurate results, and also require smaller datasets to train on. Overall, it would be a great thing for deep learning models, solving the issue that sparsity presents in deep learning applications.

GSIT Business Update

So when is this finally going to become a reality for the industry? GSIT is planning on delivering their products, with the necessary APIs and software (which is actually the bottleneck at this stage, not hardware, despite semiconductor shortages), to beta testers. These deliveries were supposed to happen this quarter, but they will now happen at the beginning of 2022. Then we'll hopefully start to hear more concrete things about the Gemini, and be getting more information on how customers are going to get involved or what they'll have to do to make the product useful in their applications. It seems that GSIT is now working on fundamental APIs and libraries to help customers out, but eventually they'll have a much larger hand in getting the Gemini deployed for whatever they might need it for. The legacy businesses continue to deliver a cash cushion, with new radiation-resistant products being launched to keep that cash machine on for a while longer as Gemini is introduced to the market. The cash burn is between $4 million and $5 million a quarter, or about $16 million a year, on a cash balance that stands slightly above $54 million, so the dilution of equity required to keep the business developing the Gemini option is relatively limited. However, general execution risks remain for the business, which is after all making a loss. Continued delays are diluting our equity, and the success of the product is critical in order for an upside to be realized. While we continue to be confident in the idea of the APU, we still need to see some big players acknowledge the product and start incorporating it. While we've had reports that Amazon and others are interested, we trust nothing until we get something more concrete, and will just continue to hold our limited position in the company. Nonetheless, the opportunities offered by the APU for the world of machine learning remain revolutionary, and we think that this stock could easily be a multiple of its current worth if GSIT turns out to be the ones to bring this revolution to market.

Also, if you noticed any mistakes in my explanations thanks to your expertise, please comment publicly below so I can learn!

If you thought our angle on this company was interesting, you may want to check out our service, The Value Lab. We focus on long-only value strategies, where we try to find international mispriced equities and target a portfolio yield of about 4%. We've done really well for ourselves over the last 5 years, but it took getting our hands dirty in international markets. If you are a value-investor, serious about protecting your wealth, our group of buy-side and sell-side experienced analysts will have lots to talk about. Give our no-strings-attached free trial a try to see if it's for you.
| Content Synthesis/Recommendation/Prediction | Unknown | null | null | null | null | null | null |
news | Chris Whitmore, Chris Whitmore | What's new in Map Viewer (December 2021) | See what's new in the December 2021 Map Viewer update. | https://www.esri.com/arcgis-blog/products/arcgis-online/mapping/whats-new-in-map-viewer-december-2021/ | 2021-12-17T16:40:15Z | The December 2021 Map Viewer update is here! The update includes many enhancements, such as HTML source editing for pop-ups, feature-specific effects, an improved sketch experience, and many more. Read below to learn more about the recent features and enhancements now available in Map Viewer.

Feature-specific effects

With the latest update, you can now use effects to emphasize specific features in a layer. Effects such as Bloom and Grayscale, Drop Shadow and Blur, and more can be applied to highlight some features in a layer based on conditions that you define.

Bloom + Blur effect in Map Viewer

See Feature-Specific Effects: The next enhancement for Map Effects for more details and inspiration for your maps.

Pop-ups

The December release also includes several enhancements for pop-ups.

Editing source HTML

Whether starting from an existing map previously authored in Map Viewer Classic, or starting fresh in Map Viewer, you can add or modify the source HTML for your pop-ups. Improved integration with supported HTML helps provide a consistent WYSIWYG experience. To access the source HTML, use the source button on the pop-up text editor.

Accessing source HTML for pop-ups

Arcade-driven clustering pop-ups

Previously, only cluster attributes like average value, count, and predominant value were available with clustering pop-ups. With the latest update, you can use a few lines of Arcade to display additional information about the features in the cluster, such as all of the features contained in the cluster, or minimum/maximum values. Stay tuned for an upcoming blog for more details on using Arcade to drive your clustering pop-ups.

Arcade-driven content elements

The December Map Viewer update provides even more control and flexibility to build the pop-up you envision. In addition to using Arcade expressions to return attributes, you can now use Arcade to define an entire block of content, whether that is rich text, charts, a formatted list of fields, or any combination of these pop-up content elements. For more details, check out Introducing Arcade pop-up content elements.

Smart Mapping

The December 2021 Map Viewer update includes several enhancements in Smart Mapping.

Performance updates

Smart mapping optimizations in the JavaScript API for ArcGIS mean better performance in Map Viewer. Statistics, class breaks, and other internal computations are handled in a more efficient manner, providing improved responsiveness when styling your layer and adjusting class breaks or working with the Smart Mapping histogram. Smart mapping calculates summary statistics and histograms to help you understand your data better. The speed of these calculations significantly improved for many styles in the latest update of ArcGIS Online. For a deeper dive into the performance improvements, check out A better experience for styling layers using Arcade in ArcGIS Online.

More color ramps!

167 colorblind-friendly color ramps have been added to Map Viewer's gallery, bringing the total to over 500. The new ramps include many that are designed for highlighting middle values in above-and-below visualization styles. Previously, color ramps for above-and-below styles used a transparent or neutral color where middle values were de-emphasized.
However, sometimes the data values in the middle are what should be emphasized. With both approaches available in Map Viewer now, you can use the approach that works best for visualizing your data. (Image: New color ramps in Map Viewer offer ramps with three colors that emphasize middle values, as well as above and below values.) Sketch enhancements: Sketch enhancements include easier access with a Sketch button on the (light) toolbar, as well as improved snapping controls and updated default symbology for lines and polygons. For more details, see An enhanced sketch experience in Map Viewer. Labels: Labels received several enhancements this release. If you’ve used the layer effect Drop Shadow, you have seen the XY slider already. We have added the XY Slider to help with label placement. As you move the point around in the XY slider, you will see the label placement update in the map. Labels on lines have been updated with the ability to control how a label repeats along features. You can now disable label repetition completely, or use a distance interval. Line labels also support label overrun, which allows labels that are larger than the feature being labeled to be displayed on the map. A new FGDC GeoAge font has been added to label styles, a standard font for geologic map symbolization. Tables: For feature layer tables, you can now select rows in the table and zoom to the extent of all the selected features on the map. GeoJSON: You can now add GeoJSON feeds from a URL in Map Viewer. Have a link for GeoJSON from a data hub? Or an earthquake feed? With Map Viewer, you can use smart mapping to style these layers, as well as configure effects, blend modes, filtering, and more. Imagery Layer enhancements: The latest update includes a couple of enhancements for Imagery Layers as well. Time can be displayed (and configured!) for time-enabled Tiled Imagery layers. Additionally, you can now browse for Raster Function Templates to apply to your Imagery Layer. Managing layer properties: The December Map Viewer release includes improved authoring tools for managing layer properties. All layers include properties that can be configured in Map Viewer. For a feature layer, these properties include the layer’s style, pop-ups, filter, and more. Often, these properties are already configured and stored with the layer. When you open them in Map Viewer, you immediately see those configurations made by the layer owner. From there, you can further modify the layer as you see fit. When you save the web map, the configured properties for the layer are stored with the web map rather than the layer. Anyone viewing your web map will now see the changes you made to the layer’s properties. Any properties that haven’t been changed in the web map would continue being referenced from the source layer. The latest update provides helpful indicators that tell which properties are stored in the web map (and are not referenced from the source), and which are stored with the source layer, meaning any updates to layer properties will be automatically displayed in the web map. Currently, only feature layers display layer property status. Future updates will include support for more layer types. In addition to reporting the status for each layer property, you have the option to Reset to source layer properties, which will discard all properties that are stored in the web map and revert the layer to the properties that are stored with the layer. 
Along with the reset to source option, you can now Disconnect layer properties, which will store all properties in the current web map. This effectively overrides any inheritance from the source layer properties, ensuring the web map remains exactly as authored. Note, however, that the data contained in the layer will continue to be referenced from the source layer. Form authoring: You can now author forms directly in Map Viewer to create a tailored editing experience for your editors. Improved support for location tracking layers: Location tracking layers have improved interaction in Map Viewer. When adding tracks to Map Viewer, they are automatically grouped: Last Known Locations, Track Lines, and Tracks are added to Map Viewer as a group. Additionally, pop-ups automatically display location information similar to Track Viewer. Other enhancements: OGC WFS layers, as well as CSV layers added from a URL, now support filtering (look for smart mapping support for WFS in the next release!). Time-enabled bookmarks are now supported in the Map Viewer. This allows you to create bookmarks for specific time windows. To add a time-enabled bookmark, add a supported layer to your map and ensure that the time slider is open. Adjust the time slider to the desired time extent, and then open the Bookmarks tool from the Contents toolbar and add a new bookmark. You will see the time extent displayed on the bookmark. Effects applied to the whole layer are now reflected in the legend as well (look for print support in the next release!). Overlapping polygons have been improved so that outlines draw in the same order as the features on the map. Previously, overlapping polygons were rendered with outlines of underlying polygons displaying on top (left). With the latest release, outlines are only displayed in the order that each feature is displayed on the map. More resources: Participate on GeoNet to keep up on news, provide feedback, and join in discussions. | Content Synthesis/Discovery | Architecture and Engineering/Computer and Mathematical | null | null | null | null | null | null
|
news | Lisa Morgan | The business benefits of automating and embedding BI | Automated insights and embedded BI are getting decision-makers the data needed to make quick decisions, accelerate business processes and lessen manual work required. | https://searchbusinessanalytics.techtarget.com/tip/The-business-benefits-of-automating-and-embedding-BI | 2021-12-14T14:10:00Z | Business intelligence continues to serve as a competitive weapon and the use cases continue to become more sophisticated. At the same time, BI use has been democratized among organizational leaders and "citizen data scientists" so the organization can benefit from data-informed insights. Augmented analytics makes the democratization of BI possible because unlike traditional BI, it includes natural language processing (NLP), so users don't need to understand a query language to pose a business question. Augmented analytics also streamlines collaboration using NLP to explain or "narrate" data visualizations, so users don't have to guess what data visualizations mean. Automation enables augmented analytics to work as advertised, including its use of NLP. In an NLP context, it's the automation of natural language to machine language and vice versa. Automation also accelerates business processes and lessens the amount of manual work that's necessary to drive insights from data. "Automation is much more than a feature; it's at the core of analytics," said Ashley Kramer, chief product and marketing officer at augmented analytics platform provider Sisense. "Automation, via well-trained, responsible AI, is the driver behind the true promise of augmented analytics: giving users the actionable insights they need, when they need it, without having to ask or perhaps without realizing they need it in the first place." Augmented analytics is also being embedded into third-party applications and firmware for user convenience and to drive additional insights that weren't possible or practical to do before. Automated insights save time: The modern business environment made traditional BI on its own obsolete. As the global business environment continues to move toward real time, business leaders can no longer wait days, weeks or months just to receive a report. "The inflection point here is the difference between a rules-based [approach] and self-learning or training," said Tomás Puig, CEO and founder of marketing and analytics company Alembic Technologies. "There's good cases where the new fraud automation systems no longer require me to tell my bank I'm going on vacation because I have my cell phone with me that has GPS." The main benefit of automated analytics is faster time to insights. A secondary benefit is saving humans time. For example, in a call center the logs are analyzed to understand what's working, what's not working and why. It's now possible to have the call transcripts automatically generated and analyzed, with the key moments identified, tagged and highlighted for faster issue resolution. From an operational standpoint, call centers can achieve greater consistency among call center agents who are interacting with customers. 
If there's a human QA team combing through transcripts and identifying issues, some of them may use all the criteria provided while others only use partial criteria. "[The] current main use cases [in call centers] are automating the quality assurance process so users understand how the different agents are doing across different teams, geographies and time zones," said Jithendra Vepa, chief scientist at intelligent workforce platform provider Observe.ai. "It's important for the contact centers to know what their top agents' scores are compared to their bottom performers, where the bottom performers are lagging, how they can be coached and how they can improve their efficiency and performance over time." Embedded BI provides convenience: Embedded BI extends insights out to the "edge," which provides the business with insights it didn't have previously, such as the impact of agricultural field health on a wine producer's long-term real estate planning. It can also provide users with greater convenience when BI capabilities can be accessed from within another application, so users don't have to switch back and forth between the two. For example, Personica embeds Sisense BI into its SaaS platform so restaurant owners can launch a loyalty program and use analytics to understand how well that program is working and with which customers. A classic use case is providing diners with a free entree after a certain number of visits during a given month. Then, using the analytics, the restaurants can understand how many customers visited the requisite number of times and what their check size is, which indicates how profitable or unprofitable customers are. "Restaurants that analyze their diners' buying behaviors can target them with offers that appeal to them," said Dave Arthurs, chief product and technology officer at restaurant loyalty and personalization platform provider Personica. "If a customer always orders a certain type of burger, personalized email analytics show that emailing them offers for that particular burger will be more successful than plying them with a general burger or meal deal. It's a game changer for smaller shops and makes their diners' experience personalized and enjoyable." Restaurant owners use that intelligence to optimize many things including their marketing campaigns, food items on the menu and staffing -- business decisions that impact the bottom line. Augmented analytics helps organizations answer more types of questions about more areas of the business in a simplified way. Automated AI speeds time to insights. Embedded BI extends the reach of intelligence out to the edge and provides users with greater convenience when the BI capabilities can be accessed from within an application utilized on a regular basis. | Decision Making/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | Isaac Sacolick | 14 Expert Predictions on Winning in 2022: CX, Hybrid Work, Hyperautomation, Ecosystems, and AI | How should we look back at 2021 and 2020, and how should we think about what’s critical to win in 2022? I believe we’re at the inflection point defining what digital transformation 2.0 will be about over the next five years. But before I share this with you in an upcoming post, I wanted to see what some experts are predicting for 2022. The predictions fall | https://blogs.starcio.com/2022/01/2022-predictions-cx-hyperautomation-ai.html | https://blogger.googleusercontent.com/img/a/AVvXsEhQbWxhEQSsII1PJeJwFSR9XAWtOT7BQjDUESGRkI14jDVAzrcQ_Y--9I7iIPOC9cWQMl3LwkD4vV08474mZynMvRICqkDmHaG3Z2FbtAW_VAv_WWEUSkzjTwrYGZZgA3exmH6U3-CIh0c--rIv_UbQANsduCvFTsG2yYYyzeLVlCOORxOXNzs=w1200-h630-p-k-no-nu | 2022-01-03T13:30:00Z | How should we look back at 2021 and 2020, and how should we think about what’s critical to win in 2022? I believe we’re at the inflection point defining what digital transformation 2.0 will be about over the next five years. But before I share this with you in an upcoming post, I wanted to see what some experts are predicting for 2022. The predictions fall into several categories that have always been critical to digital transformation, like personalizing customer experiences. Others are ripples from the pandemic, including supporting talent, hybrid working, and enabling hyperautomation. And then, there are a few that point to a future of API-driven ecosystems and democratizing AI. Welcome to 2022. I’m hoping this inflection year will lead to growth and acceleration over the years to come. Are you ready to drive digital transformation? Personalize the Customer Experience: 1. "In 2022, expect CX and core marketing KPIs to evolve quickly from today’s outdated models, which don’t fully take into account the digital world. The pandemic has changed the habits of consumers, who are increasingly online, and marketers must develop new ways to assess the long-term effectiveness of marketing campaigns based on first-party data and identity. Finally, the end of third-party cookies will require experimentation and new expectations as these shifts will ultimately force a change in thought, technology, and measurement going forward." - Bill Bruno, CEO at D4t4 Solutions. 2. "Hyper-personalization will become a competitive differentiator in 2022. As the world becomes increasingly digital, customers will expect experiences that are tailored and can adapt to their needs and desires in the moment. Applications need to take advantage of AI versus executing simple rules." - Tim Srock, Mendix. Extend Talent and Hybrid Working Culture Changes: 3. "The most experienced subject matter experts will be identified, and great length will be taken to capture their expertise and make it transportable and shareable. The concept of ‘heroes’ will no longer be viewed as a good thing. In fact, the existence of technical heroes will be a red flag for higher business risk as those heroes are short in supply and (due to the pandemic) may consider other life choices. Technologies will be sought that allow these experts to codify their expertise and allow it to be applied in perpetuity and shared globally by others." - Song Pang, SVP customer engineering, NetBrain. 4. "A workload that feels unmanageable is one of the leading causes of stress in the workplace, and employees are 70 percent less likely to experience burnout if they have enough time in the day to handle their tasks. With more than half of U.S. 
workers feeling burnout over the last year, employers will make strategic changes in 2022 to free up employee time. In addition to more flexible working hours and fewer meetings, more companies will automate routine tasks within their work software tools, such as automatically sending emails, texting customers, uploading assets to advertising channels, and more. This can save employees up to three hours per day to focus on the work that matters most." - Daniel Lereya, Monday.com. 5. "Talent becomes the biggest barrier to growth for tech companies. The labor shortage – combined with high demand for tech skills and an influx of capital into services – means businesses are going to battle it out for the talent they need to hit growth targets. Attrition rates will be 2-3x higher than they were coming into the pandemic." - Chris Barbin, Tercera. 6. "The human element is the culprit behind 85 percent of all cybersecurity breaches. Yet, I believe the success rate of cyber attacks on businesses will decrease in 2022 but remain above the pre-pandemic levels. Predicting the opposite may seem counterintuitive after two years of exponential growth in cyberattacks. But security issues appeared on the radar for many companies, possibly enough to compel many to invest in cybersecurity. The forecasted retreat of the pandemic will lead a part of the workforce to return to the office or adapt a hybrid form of work instead of full WFH. This will reduce potential access points for hackers. Meanwhile, those who opt for permanent remote work will have had the time to address the security issues overlooked in the rushed transition from offices." - Tom Okman, Nord Security. Enable the Future of Work with Hyperautomation and Self-Service Tech: 7. "Self-Service and Process Automation are starting to peak now, but these are going to get stronger as CIOs look for ways to improve consumer, partner, and developer experience and streamline processes. For example, health plans are currently having to scale solutions to meet the demands of the Interoperability Rule, 21st Century Cures Act. The challenge is extensive, from enabling self-service connectivity, providing robust security and audit, to scaling up to meet a difficult-to-estimate traffic volume. Self-service and automation are the clear answer to minimizing the impact on staff and systems." - Ruby Raley, Axway.com. 8. "CIOs need to get in front of their architecture: More and more, automations are being driven by business users vs. IT. With the resurgence in no-code/low-code apps and platforms, the typical business user is becoming savvier in the world of tech. In 2021, we found that the percentage of users with a business title under the likes of business operations or product management made up nearly 50% of our users. We’ll start to see this trend take off next year, but CIOs must have their architecture set up to meet new requests from these folks. They will inevitably push to have a new tool added to the company’s tech stack with their new know-how, so I recommend they follow a GEARS framework to empower them in the best way: Govern, Enable, Adopt, Run, Scale." - Carter Busse, CIO, Workato. 9. "Video and voice will become critical for team collaboration. There will be an increasingly tight integration of messaging and group collaboration into our daily activities. 
Enterprises will accelerate their shift from dependency on a single company for their messaging needs, as real-time collaboration and engagement become even more of an imperative in the new work environment." - Gabriel Engel, Rocket.Chat. Accelerate Digital Transformation Through API-driven Ecosystems: 10. "B2B integration and collaboration will accelerate its digital transformation built on the backs of APIs and the cloud. Because cloud-native and API-first approaches have matured to an open everything architecture, the time and cost to innovation through partnerships and collaboration has significantly decreased. Furthermore, as the enterprise surface area is API-centric, more innovation is unlocked by unbundling and re-bundling offerings and supply chains across industries and verticals. Significant investment and start-up growth in B2B offerings for travel and logistics, warehousing, manufacturing, lending, insurance, and boutique retail. We won’t be talking much about GraphQL come the end of 2021. REST will continue to reign supreme." - Vince Padua, Axway. 11. "To create cohesive experiences across modalities and contexts, businesses will increase their use of out-of-the-box APIs that can easily connect disparate systems and data sources. Developers will appreciate the speed with which they can connect core systems of record and deliver what customers actually want, such as recognizing the customer at every point in their journey and providing contextually relevant experiences." - Tim Srock, Mendix. 12. "Developers will use more APIs in 2022 than ever before. Digitalization strategies will be more important than ever before, and APIs are driving digital transformation. According to recent research from RapidAPI, 71% of developers plan to use more APIs in 2022 than in 2021. In 2022 we will see a shift from digital transformation to digital acceleration. As most companies already have digital transformation efforts underway, it will become more about how companies can continue to innovate." - Iddo Gino, CEO and founder, RapidAPI. Enable AI with Distributed Databases and MLOps: 13. "Enterprises are eager to leverage AI to improve business outcomes. But as they gather massive volumes of data, they face challenges in extracting the right data mainly because of legacy databases and data silos. This will exponentially change next year as more enterprises begin to employ next-generation databases that can scale across their infrastructure and deliver the unified insights needed to support AI." - Max Liu, PingCAP. 14. "The AI market is expected to blow past $500 billion by 2024 - next year is bound to be the stepping stone toward the rise of AI implementation in software development. For one thing, AI will alter how code is written, updated, and released - DevOps will become increasingly automated and responsive with developers becoming a prime persona for vendors." - Jonathan Grandperrin, CEO of Mindee | Process Automation/Content Synthesis/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null
news | Stuart Barnes | How To Evolve Your Company’s AI Capabilities | Artificial intelligence (AI) has moved beyond being a buzzword and has made itself indispensable to modern business. With AI, companies can improve their customer retention, service quality and efficiency, and supplier interaction several times. The | https://blogs.sap.com/2021/12/28/how-to-evolve-your-companys-ai-capabilities/ | 2021-12-28T23:34:14Z | Artificial intelligence (AI) has moved beyond being a buzzword and has made itself indispensable to modern business. With AI, companies can improve their customer retention, service quality and efficiency, and supplier interactions several times over. The leaders in the field have already figured out what they should do to improve their business operations, but many companies are only just getting started on the road to adopting AI. There isn't any industry-based bible that can guide them to the best approach. Instead, they are left to search for their next steps forward. Their next steps might be a lot clearer based on where they are now, given what other companies have done. Where is your company on its route to full adoption? Level 1: Freshman. A company in the freshman stage of AI adoption is still experimenting with the technology. The directors and managers know what AI is capable of, but they haven't yet witnessed anything that makes them think that this could actually work in their organization. These companies need to invest in fast projects with tangible outcomes for this trust to bloom. The underlying culture of such an organization must celebrate innovation and push boundaries. Companies at this level of adoption need to seek out AI systems built on well-established platforms with proper documentation, so they have support in case of issues. Level 2: Sophomore. Sophomore organizations are already implementing cultural changes to move towards a more AI-fueled workflow. They have centered on creating a culture that relies on data for insights rather than gut feeling from managers and directors. These companies already have AI agents deployed and functioning in several areas of their organization. The next step is to seek out custom AI agents that can address their specific business needs. However, this is an iterative process, requiring building on past successes. Over time, this investment will net returns in well-developed AI agents that help the business shore up its operations. Level 3: Junior. Junior organizations start exploring the world outside their business bubble. In Junior organizations, the insight that AI will form a significant part of their business processes, and allow them to compete with others in the industry, drives them to develop better ways of implementing AI into their operations. The focus of these organizations is on creating and enhancing that culture of curiosity. They are comfortable taking calculated risks with their AI implementation projects because of how well it has worked for them so far. Junior companies are best served by tapping into hosted AI solutions, allowing them to focus on what the agent can do and avoid dealing with the complex maintenance of AI development. Level 4: Senior. Seniors are at the top of the pack for AI-based companies. Their operations go hand in hand with their AI implementation, using it to offer more efficient outcomes. However, they aren't just looking at the present but delving into AI's potential in their business operations and interactions. 
These companies have achieved AI integration, but they must be constantly vigilant to ensure that their system continues to perform well. Additionally, they should keep an eye out for areas that custom AI can improve. Continued experimentation is the core of this company's advancement. The Next Step in Evolution: AI is a core function of many business organizations. However, it needs proper UI and UX integrations to allow better customer interaction. Through it all, a business's AI capabilities depend on where it intends to go next based on where it is now. Aiming too high could spell disaster for the company, just as surely as not aiming at all will. Balancing the business's needs and its AI integration is a delicate task, but it is necessary for many organizations to achieve the promise of AI in their business operations. | Decision Making/Content Synthesis/Personalization | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | Corinna Makris | Codex aims to enable engineers to collaborate within an IDE | With Codex, users highlight a code block in their IDE and request context, which then immediately notifies the appropriate team members. | https://venturebeat.com/2021/12/14/codex-aims-to-enable-engineers-to-collaborate-within-an-ide/ | 2021-12-14T17:30:50Z | Codex, a company that provides a developer tool designed to let engineers communicate directly within an integrated development environment (IDE), today announced that it has secured $4.4 million in funding and is now in private beta. The seed round will help the company grow its team and onboard even more beta users from its waitlist of more than 200 companies. Codex was a member of the Y Combinator Summer 2021 startup funding cycle. A month after receiving its Y Combinator funding, Codex began a private beta with 25 companies. Today, Codex's beta release is a VS Code extension that enables context-sharing and collaboration as a local-first solution. Codex makes programming multiplayer: Generally, when a team member has a question about a code block, they would have to find its author in Slack or through a pull request. With Codex, users highlight a code block in their IDE and request context by asking a question. Codex performs the Git function git blame and then automatically notifies, via a notification in Codex, the members of the team who worked on the specific lines of code that you're asking a question about. Codex then holds that context in the correct location of the codebase. Codex is also designed to allow engineers to introduce context by annotating areas of a codebase. "We're out to save engineers time and headaches by automatically storing and sharing institutional knowledge," cofounder and CEO Brandon Waselnuk said. "I've heard horror stories from so many engineers about answering the same question over and over again in Slack DMs, or multiple pair programming sessions for the same topic filling their calendars." "Many companies have senior staff leaving with all this critical context that's never been written down or shared. This leads to teams having to, in the worst case, reverse engineer functionality to grok how it works. It's crazy how much time is spent on this work today," Waselnuk said. Staff retention is an issue affecting many industries, increasingly in tech. As seen in the recently released Work Trend Index survey from Edelman Data x Intelligence, nearly 41% of people are considering leaving their current employer this year, and there's a 4.5% increase in tech resignations. A quest for context: Codex's founders, Waselnuk along with Karl Clement, COO, and Saumil Patel, chief technology officer, say they started the company as a side project in their quest to add a context layer on top of a Git repo to help onboard new engineers into a codebase. They wanted to provide engineers with a tool that could essentially answer why the developer architected software in a certain way, such as what decisions were made to use certain design patterns, or why they chose to use a for loop instead of a dictionary. Codex plans to offer integrations to other modern IDEs, allowing everyone at a company to share context, as well as a desktop application that will let engineers author and share onboarding paths through the codebase. 
Codex never stores source code, and all processing happens locally on the user's machine, the company claims. The funding round is led by NFX, backed by Y Combinator, and joined by Ludlow Ventures, Emergence Capital, and operator angels. | Digital Assistance/Process Automation | Computer and Mathematical | null | null | null | null | null | null
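To make the git blame step described above concrete, here is a minimal Python sketch of how a tool might look up who last touched a highlighted range of lines. This is not Codex's actual implementation, and the file path and line numbers in the usage comment are hypothetical; the sketch simply shells out to git blame in porcelain mode and collects the author headers.

import subprocess

def authors_for_range(path, start, end):
    # Run `git blame -L start,end --porcelain path` in the current repository
    # and collect the "author " header lines from the porcelain output.
    result = subprocess.run(
        ["git", "blame", "-L", f"{start},{end}", "--porcelain", path],
        capture_output=True, text=True, check=True,
    )
    return {
        line[len("author "):]
        for line in result.stdout.splitlines()
        if line.startswith("author ")
    }

# Hypothetical usage: who should be notified about lines 40-55 of parser.py?
# print(authors_for_range("parser.py", 40, 55))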
|
news | ryan_greenblatt | Potential gears level explanations of smooth progress | Published on December 22, 2021 6:05 PM GMT. (Epistemic status: exploratory. Also, this post isn't thorough; I wanted to write quickly.) (Thanks to Mary Phuong, Pedro Freire, Tao Lin, Rohin Shah, and probably a few other people I'm forgetting for discussing this topic with me.) My perception is that it is a common belief that (after investment spending becomes sufficiently large) AI progress on domains of focus will likely be smooth. That is, it will consistently improve at similar rates on a year over year basis (or perhaps somewhat smaller time periods).[1] Note that this doesn't imply that growth will necessarily be exponential; the growth rate could steadily decelerate or accelerate and I would still consider it smooth growth. I find this idea somewhat surprising because in current ML domains there have been relatively few meaningful advancements. This seems to imply that each of these improvements would yield a spike in performance. Yet, empirically, I think that in many domains of focus ML progress has been relatively consistent. I won't make the case for these claims here. Additionally, empirically it seems that most domains are reasonably continuous even after selecting for domains which are likely to contain discontinuities. This post will consider some possible gears level explanations of smooth progress. It's partially inspired by this post in the 2021 MIRI conversations. Many parts: If a system consists of many parts and people are working on many parts at the same time, then the large number of mostly independent factors will drive down variance. For instance, consider a plane. There are many, many parts and some software components. If engineers are working on all of these simultaneously, then progress will tend to be smooth throughout the development of a given aircraft and in the field as a whole. (I don't actually know anything about aircraft, I'm just using this as a model. If anyone has anything more accurate to claim about planes, please do so in the comments.) So will AIs have many parts? Current AI doesn't seem to have many parts which merit much individual optimization. Architectures are relatively uniform and not all that complex. However, there are potentially a large number of hyperparameters and many parts of the training and inference stack (hardware, optimized code, distributed algorithms, etc.). While hyperparameters can be searched for far more easily than aircraft parts, there is still relevant human optimization work. The training stack as a whole seems likely to have smooth progress (particularly hardware). So if/while compute remains limiting, smooth training stack progress could imply smooth AI progress. Further, it's plausible that future architectures will be deliberately engineered to have more different parts to better enable optimization with more people. Knowledge as a latent variable: Perhaps the actual determining factor of the progress of many fields is the underlying knowledge of individuals. So progress is steady because individuals tend to learn at stable rates and the overall knowledge of the field grows similarly. This explanation seems very difficult to test or verify, but it would imply potential for steady progress even in domains where there are bottlenecks. Perhaps mathematics demonstrates this to some extent. Large numbers of low-impact discoveries: If all discoveries are low-impact and the number of discoveries per year is reasonably large (and has low variance), then the total progress per year would be smooth. 
This could be the case even if all discoveries are concentrated in one specific component which doesn't allow for parallel progress. It seems quite difficult to estimate the impact of future discoveries in AI. Some other latent variable: There do seem to be a surprisingly large number of areas where progress is steady (at least to me). Perhaps this indicates some incorrect human bias (or personal bias) that progress should be less steady. It could also indicate the existence of some unknown and unaccounted for latent variable common in many domains. If this variable was applicable in future AI development, then that would likely indicate that future AI development would be unexpectedly smooth. Note that while smooth progress probably correlates with slower takeoffs, takeoff speeds and smoothness of progress aren't the same: it's possible to have very fast takeoff in which progress is steady before and after some inflection point (but not through the inflection point). Similarly it's possible to have slower takeoffs where progress is quite erratic. ↩︎ | https://www.lesswrong.com/posts/ShrAZXjTs5HTxDmGM/potential-gears-level-explanations-of-smooth-progress | 2021-12-22T18:05:59Z | (Epistemic status: exploratory. Also, this post isn't thorough; I wanted to write quickly.) (Thanks to Mary Phuong, Pedro Freire, Tao Lin, Rohin Shah, and probably a few other people I'm forgetting for discussing this topic with me.) My perception is that it is a common belief that (after investment spending becomes sufficiently large) AI progress on domains of focus will likely be smooth. That is, it will consistently improve at similar rates on a year over year basis (or perhaps somewhat smaller time periods). Note that this doesn't imply that growth will necessarily be exponential; the growth rate could steadily decelerate or accelerate and I would still consider it smooth growth. I find this idea somewhat surprising because in current ML domains there have been relatively few meaningful advancements. This seems to imply that each of these improvements would yield a spike in performance. Yet, empirically, I think that in many domains of focus ML progress has been relatively consistent. I won't make the case for these claims here. Additionally, empirically it seems that most domains are reasonably continuous even after selecting for domains which are likely to contain discontinuities. This post will consider some possible gears level explanations of smooth progress. It's partially inspired by this post in the 2021 MIRI conversations. Many parts: If a system consists of many parts and people are working on many parts at the same time, then the large number of mostly independent factors will drive down variance. For instance, consider a plane. There are many, many parts and some software components. If engineers are working on all of these simultaneously, then progress will tend to be smooth throughout the development of a given aircraft and in the field as a whole. (I don't actually know anything about aircraft, I'm just using this as a model. If anyone has anything more accurate to claim about planes, please do so in the comments.) So will AIs have many parts? Current AI doesn't seem to have many parts which merit much individual optimization. Architectures are relatively uniform and not all that complex. However, there are potentially a large number of hyperparameters and many parts of the training and inference stack (hardware, optimized code, distributed algorithms, etc.). 
While hyperparameters can be searched for far more easily than aircraft parts, there is still relevant human optimization work. The training stack as a whole seems likely to have smooth progress (particularly hardware). So if/while compute remains limiting, smooth training stack progress could imply smooth AI progress. Further, it's plausible that future architectures will be deliberately engineered to have more different parts to better enable optimization with more people. Knowledge as a latent variable: Perhaps the actual determining factor of the progress of many fields is the underlying knowledge of individuals. So progress is steady because individuals tend to learn at stable rates and the overall knowledge of the field grows similarly. This explanation seems very difficult to test or verify, but it would imply potential for steady progress even in domains where there are bottlenecks. Perhaps mathematics demonstrates this to some extent. Large numbers of low-impact discoveries: If all discoveries are low-impact and the number of discoveries per year is reasonably large (and has low variance), then the total progress per year would be smooth. This could be the case even if all discoveries are concentrated in one specific component which doesn't allow for parallel progress. It seems quite difficult to estimate the impact of future discoveries in AI. Some other latent variable: There do seem to be a surprisingly large number of areas where progress is steady (at least to me). Perhaps this indicates some incorrect human bias (or personal bias) that progress should be less steady. It could also indicate the existence of some unknown and unaccounted for latent variable common in many domains. If this variable was applicable in future AI development, then that would likely indicate that future AI development would be unexpectedly smooth. Note that while smooth progress probably correlates with slower takeoffs, takeoff speeds and smoothness of progress aren't the same: it's possible to have very fast takeoff in which progress is steady before and after some inflection point (but not through the inflection point). Similarly it's possible to have slower takeoffs where progress is quite erratic. | Content Synthesis/Discovery/Prediction | Computer and Mathematical | null | null | null | null | null | null
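The "many parts" section of the post above rests on a statistical claim: summing many mostly independent improvements drives down the relative variance of total progress. The following short Python simulation is a toy illustration of that claim (it is my own sketch, not from the post); with each of n parts contributing an independent, exponentially distributed improvement, the relative spread of yearly progress shrinks roughly like one over the square root of n.

import random
import statistics

def relative_spread(n_parts, trials=10_000):
    # Relative std dev (std / mean) of total yearly progress when each of
    # n_parts contributes an independent improvement drawn from Exp(1).
    totals = [
        sum(random.expovariate(1.0) for _ in range(n_parts))
        for _ in range(trials)
    ]
    return statistics.pstdev(totals) / statistics.mean(totals)

for n in (1, 10, 100):
    print(n, round(relative_spread(n), 3))
# Typically prints values near 1.0, 0.32, and 0.10: more independent parts
# means smoother (lower-variance) aggregate progress.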
|
news | bullcitydev | Trying Out Generics in Go | In case you’ve been living under a rock these past couple of years, you might not know that Go is getting generics in version 1.18. If you were aware of this, you still may not have been giving it much attention like myself. The other night I saw this tweet from the Go team which gave me the motivation to try using generics myself: Go 1.18 Beta 1 is released! | https://markphelps.me/posts/trying-out-generics-in-go/ | 2021-12-16T17:41:12Z | In case you've been living under a rock these past couple of years, you might not know that Go is getting generics in version 1.18. If you were aware of this, you still may not have been giving it much attention like myself. The other night I saw this tweet from the Go team which gave me the motivation to try using generics myself: "Go 1.18 Beta 1 is released! Try it! File bugs! https://t.co/Ul1xGhvlkf Announcement: https://t.co/SQDn3AORMt Download: https://t.co/FL3qVZlruU #golang pic.twitter.com/tz0uODeVXO" — Go (@golang) December 14, 2021. This post aims to describe my initial experience converting my markphelps/optional library from using code generation to using generics instead. Hopefully, after reading this post you'll have a better understanding of what generics can do for you and what they can't. Some Background: My first response when the plan to add generics was announced was meh. In my 5+ years working in Go, I can probably count on one hand the number of times that I felt like I really needed generics. Most of the code I write in my day job is very specific to the domain and doesn't fit the use case that generics aim to fill. That being said, I still wanted to play with the shiny new thing and at least get a handle on the syntax before 1.18 is released. I quickly realized that I already had an existing library that would likely benefit from being updated to use generics, markphelps/optional. If you want to read more about why and how markphelps/optional was created, check out my previous post. The tl;dr of it is that I used code generation and Go's text/template library to support both primitive and custom types. This is what I wanted to try to replace with generics. Generics to the Rescue: Before jumping into code I took another quick read-through of the very detailed proposal to refresh my knowledge of the basic syntax. The basic syntax is this:

// Print prints the elements of any slice.
// Print has a type parameter T and has a single (non-type)
// parameter s which is a slice of that type parameter.
func Print[T any](s []T) {
    // same as above
}

Where the [T any] after the function name specifies that T can be any type (any is basically interface{}). Seems simple enough, so I created a new branch and got to deleting some code. I no longer needed any of the pre-generated optional types like byte, string, int, etc. that are present in the main branch. This took the library from:

ls
CONTRIBUTING.md README.md cmd error.go float64.go go.sum int32.go int_test.go string_test.go uint32.go uintptr.go
LICENSE.md bool.go complex128.go example_test.go generate.go int.go int64.go rune.go uint.go uint64.go
Makefile byte.go complex64.go float32.go go.mod int16.go int8.go string.go uint16.go uint8.go

To:

ls
CONTRIBUTING.md LICENSE.md Makefile README.md example_test.go go.mod go.sum optional.go optional_test.go

Much better. I created optional.go to contain the generic code and got to writing.
I should say I got to copying, as I simply inserted the code from one of the previously generated files and replaced the type names. Tip: I find that it's much easier to extract/replace previously typed code with generic code than it is to write the code generically from scratch. The code now looks like this:

// Optional is an optional T.
type Optional[T any] struct {
    value *T
}

// New creates an optional T from a T.
func New[T any](v T) Optional[T] {
    o := Optional[T]{value: &v}
    return o
}

// Set sets the value.
func (o *Optional[T]) Set(v T) {
    o.value = &v
}

Now the user of my library could write code like this:

o := New(42)
v, err := o.Get()
if err != nil {
    return err
}

I like the type inference here as well so that users don't have to write something like:

o := New[int](42)

You do have to add the type though when you just want to declare a new variable like:

var o optional.Optional[int]

This is not as nice looking, but I understand the necessity. I'll have to admit that overall the syntax was and still is a bit jarring to me. What are all these brackets doing here?! Over time I'm sure that I will get used to it, but I did notice that it took me longer to grok this code than normal. Trying Out Constraints: Generics wouldn't be as useful as they are if there wasn't a way to guard which types can actually use your generic code. The classic example is calling the String() method on a type T any like:

// This function is INVALID.
func Stringify[T any](s []T) (ret []string) {
    for _, v := range s {
        ret = append(ret, v.String()) // INVALID
    }
    return ret
}

This is invalid and will not compile because T being an any or interface{} doesn't guarantee that it will have a String() method. The fix here is to use constraints, which define which types are allowed to be used. Constraints are interfaces that can also contain type sets such as:

// SignedInteger is a constraint that matches any signed integer type.
type SignedInteger interface {
    ~int | ~int8 | ~int16 | ~int32 | ~int64
}

For the above case, we can get by with using the existing fmt.Stringer interface like:

// This function is valid.
func Stringify[T fmt.Stringer](s []T) (ret []string) {
    for _, v := range s {
        ret = append(ret, v.String())
    }
    return ret
}

At first, I tried to use constraints in my rewrite in order to not re-introduce this previously fixed bug. Not all types can marshal/unmarshal to and from JSON per this comment in the encoding/json source. In the pre-generics version I fixed this by simply not generating code for complex types such as complex64/complex128, but this wouldn't work in the generic version. Initially, I thought about using a constraint like:

type marshallable interface {
    constraints.Integer | constraints.Float | ~string | json.Marshaler
}

This would work for most use cases and prevent anyone from using the library with complex types such as complex64, but what about if they weren't using JSON at all and just wanted to do something like creating an optional of a func:

o := New(func() int {
    return 42
})

Using the above constraint when defining Optional would prevent this invocation because of the marshallable constraint. In the end, I decided to relax the constraints and just use any when defining T.
This would allow the user to write the above; however, if they were to call json.Marshal(o) with o being a func, they would get the error:

json: error calling MarshalJSON for type optional.Optional[func() int]: json: unsupported type: func()

I think it is OK, from a library perspective, to return the error at runtime instead of guarding against it at compile-time, since after all, that's why json.Marshal can return this error in the first place. Update: I removed the section on build tags because I realized I was using them incorrectly. A Few Tips: Before wrapping up, I'd like to provide a few tips for those getting started with 1.18 and generics: As stated earlier, it's usually easier to replace typed code with generics than to write it using generics from the beginning. Start with a sample implementation using any type, then replace that type with generics. Read the proposal/spec! It is pretty dense, but at least skim through it before starting to write code with generics. It helped me out a few times in this experiment with examples and explanations when I got stuck. Constraints can be tricky. They are often necessary to prevent compile-time errors, but can also introduce runtime errors. Thankfully the proposal authors thought of this and provided some suggestions on dealing with this problem. Remember, it's still in beta, so I wouldn't go updating your production code to use generics just yet. In Closing: I love that I was able to delete 95% of my code because of generics. I think that generics will be very beneficial to maintainers who create libraries for things like my own as well as for searching, sorting, transformations, and the like. I can also see some of them being extremely helpful for creating well-tested libraries around the various concurrency patterns that are sometimes tricky to get right. I'm also personally excited to never have to write min/max type functions for the various primitive types ever again. I'm not sure that most Go developers will be using generics daily, but it's nice to know that they exist if we need them. What do you think about the addition of generics to Go? Do you see yourself using them regularly or just occasionally? Are you looking forward to generics solving certain code duplication in your own code? Reach out to me on Twitter and let me know! FYI: If you want to check out the 1.18 branch of markphelps/optional and see the code, it's available here. I'll likely create a new release once 1.18 is released. | Content Creation/Decision Making | Computer and Mathematical | null | null | null | null | null | null
|
news | Vilhjálmur Þorsteinsson | PyPy: Natural Language Processing for Icelandic with PyPy: A Case Study | Natural Language Processing for Icelandic with PyPy: A Case Study. Icelandic is one of the smallest languages of the world, with about 370.000 speakers. It is a language in the Germanic family, most si | https://www.pypy.org/posts/2022/02/nlp-icelandic-case-study.html | 2022-02-06T15:00:00Z | Icelandic is one of the smallest languages of the world, with about 370.000 speakers. It is a language in the Germanic family, most similar to Norwegian, Danish and Swedish, but closer to the original Old Norse spoken throughout Scandinavia until about the 14th century CE. As with other small languages, there are worries that the language may not survive in a digital world, where all kinds of fancy applications are developed first - and perhaps only - for the major languages. Voice assistants, chatbots, spelling and grammar checking utilities, machine translation, etc., are increasingly becoming staples of our personal and professional lives, but if they don't exist for Icelandic, Icelanders will gravitate towards English or other languages where such tools are readily available. Iceland is a technology-savvy country, with world-leading adoption rates of the Internet, PCs and smart devices, and a thriving software industry. So the government figured that it would be worthwhile to fund a 5-year plan to build natural language processing (NLP) resources and other infrastructure for the Icelandic language. The project focuses on collecting data and developing open source software for a range of core applications, such as tokenization, vocabulary lookup, n-gram statistics, part-of-speech tagging, named entity recognition, spelling and grammar checking, neural language models and speech processing. My name is Vilhjálmur Þorsteinsson, and I'm the founder and CEO of a software startup Miðeind in Reykjavík, Iceland, that employs 10 software engineers and linguists and focuses on NLP and AI for the Icelandic language. The company participates in the government's language technology program, and has contributed significantly to the program's core tools (e.g., a tokenizer and a parser), spelling and grammar checking modules, and a neural machine translation stack. When it came to a choice of programming languages and development tools for the government program, the requirements were for a major, well supported, vendor-and-OS-agnostic FOSS platform with a large and diverse community, including in the NLP space. The decision to select Python as a foundational language for the project was a relatively easy one. That said, there was a bit of trepidation around the well known fact that CPython can be slow for inner-core tasks, such as tokenization and parsing, that can see heavy workloads in production. I first became aware of PyPy in early 2016 when I was developing a crossword game Netskrafl in Python 2.7 for Google App Engine. I had a utility program that compressed a dictionary into a Directed Acyclic Word Graph and was taking 160 seconds to run on CPython 2.7, so I tried PyPy and to my amazement saw a 4x speedup (down to 38 seconds), with literally no effort besides downloading the PyPy runtime. This led me to select PyPy as the default Python interpreter for my company's Python development efforts as well as for our production websites and API servers, a role in which it remains to this day. 
We have followed PyPy's upgrades along the way, being just about to migrate our minimally required language version from 3.6 to 3.7. In NLP, speed and memory requirements can be quite important for software usability. On the other hand, NLP logic and algorithms are often complex and challenging to program, so programmer productivity and code clarity are also critical success factors. A pragmatic approach balances these factors, avoids premature optimization and seeks a careful compromise between maximal run-time efficiency and minimal programming and maintenance effort. Turning to our use cases, our Icelandic text tokenizer "Tokenizer" is fairly light, runs tight loops and performs a large number of small, repetitive operations. It runs very well on PyPy's JIT and has not required further optimization. Our Icelandic parser Greynir (known on PyPI as reynir) is, if I may say so myself, a piece of work. It parses natural language text according to a hand-written context-free grammar, using an Earley-type algorithm as enhanced by Scott and Johnstone. The CFG contains almost 7,000 nonterminals and 6,000 terminals, and the parser handles ambiguity as well as left, right and middle recursion. It returns a packed parse forest for each input sentence, which is then pruned by a scoring heuristic down to a single best result tree. This parser was originally coded in pure Python and turned out to be unusably slow when run on CPython - but usable on PyPy, where it was 3-4x faster. However, when we started applying it to heavier production workloads, it became apparent that it needed to be faster still. We then proceeded to convert the innermost Earley parsing loop from Python to tight C++ and to call it from PyPy via CFFI, with callbacks for token-terminal matching functions (business logic) that remained on the Python side. This made the parser much faster (on the order of 100x faster than the original on CPython) and quick enough for our production use cases. Even after moving much of the heavy processing to C++ and using CFFI, PyPy still gives a significant speed boost over CPython. Connecting C++ code with PyPy proved to be quite painless using CFFI, although we had to figure out a few magic incantations in our build module to make it compile smoothly during setup from source on Windows and macOS in addition to Linux. Of course, we build binary PyPy and CPython wheels for the most common targets so most users don't have to worry about setup requirements. With the positive experience from the parser project, we proceeded to take a similar approach for two other core NLP packages: our compressed vocabulary package BinPackage (known on PyPI as islenska) and our trigrams database package Icegrams. These packages both take large text input (3.1 million word forms with inflection data in the vocabulary case; 100 million tokens in the trigrams case) and compress it into packed binary structures. These structures are then memory-mapped at run-time using mmap and queried via Python functions with a lookup time in the microseconds range. The low-level data structure navigation is done in C++, called from Python via CFFI. The ex-ante preparation, packing, bit-fiddling and data structure generation is fast enough with PyPy, so we haven't seen a need to optimize that part further. To showcase our tools, we host public (and open source) websites such as greynir.is for our parsing, named entity recognition and query stack and yfirlestur.is for our spell and grammar checking stack. 
The server code on these sites is all Python running on PyPy using Flask, wrapped in gunicorn and hosted on nginx. The underlying database is PostgreSQL, accessed via SQLAlchemy and psycopg2cffi. This setup has served us well for 6 years and counting, being fast, reliable and having helpful and supportive communities. As can be inferred from the above, we are avid fans of PyPy and commensurately thankful for the great work by the PyPy team over the years. PyPy has enabled us to use Python for a larger part of our toolset than CPython alone would have supported, and its smooth integration with C/C++ through CFFI has helped us attain a better tradeoff between performance and programmer productivity in our projects. We wish for PyPy a great and bright future and also look forward to exciting related developments on the horizon, such as HPy. | Content Creation/Content Synthesis/Discovery | Unknown | null | null | null | null | null | null
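The C++-via-CFFI-with-Python-callbacks pattern described in this case study can be illustrated with a small, self-contained Python sketch. This is generic CFFI usage, not Greynir's actual code: the heavy looping happens in C (here, libc's qsort), while the comparison logic stays in Python, analogous to keeping the token-terminal matching callbacks on the Python side. Note that ffi.dlopen(None) loads the C standard library on Linux and macOS but not on Windows.

from cffi import FFI

ffi = FFI()
ffi.cdef("""
    void qsort(void *base, size_t nmemb, size_t size,
               int (*compar)(const void *, const void *));
""")
libc = ffi.dlopen(None)  # the C standard library (POSIX only)

@ffi.callback("int(const void *, const void *)")
def compare(a, b):
    # The "business logic" stays in Python, like Greynir's matching callbacks.
    x = ffi.cast("int *", a)[0]
    y = ffi.cast("int *", b)[0]
    return (x > y) - (x < y)

data = [5, 2, 9, 1, 7]
values = ffi.new("int[]", data)
libc.qsort(values, len(data), ffi.sizeof("int"), compare)
print([values[i] for i in range(len(data))])  # [1, 2, 5, 7, 9]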
|
news | Aidéo Technologies | Aidéo Technologies Expands Executive Team with Two Key New Hires | Jason Sroka joins as Chief Data Sciences Officer; Brent Backhaus becomes Chief Technology OfficerWEST PALM BEACH, Fla., Jan. 19, 2022 (GLOBE NEWSWIRE... | https://finance.yahoo.com/news/aid-o-technologies-expands-executive-224400710.html | https://s.yimg.com/ny/api/res/1.2/Mq.FJAfVyzE7wKg8LlqrTg--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD0xNzA-/https://s.yimg.com/uu/api/res/1.2/uVhv0NFzWIqNKiZ9kxIKWQ--~B/aD0yMTcxO3c9MTUyOTc7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/globenewswire.com/0d8f0d6fb518504c687f5061abdb5a77 | 2022-01-19T22:44:00Z | Jason Sroka joins as Chief Data Sciences Officer; Brent Backhaus becomes Chief Technology OfficerWEST PALM BEACH, Fla., Jan. 19, 2022 (GLOBE NEWSWIRE) -- Aidéo Technologies a leader in AI-enabled automation technology for the healthcare industry announced today that it has added two top executives to its senior leadership team: Chief Data Sciences Officer Jason Sroka, PhD, and Chief Technology Officer Brent Backhaus.Over his 30-year career, Sroka has led large data science teams in high-growth companies, most recently as Chief Analytics Officer at SmartSense by Digi, a leading provider of IoT solutions. Sroka will lead the companys expanding data sciences team and will help lead the companys expansion of its capabilities in artificial intelligence and machine learning. He holds a bachelors degree in both electrical engineering and managerial sciences from the Massachusetts Institute of Technology, as well as a doctorate from the Harvard University/MIT Health Sciences and Technology Program.I am very excited to join the Aidéo team, Sroka said. I believe Aidéo has identified an impactful business model, and the company has assembled a talented management team that uniquely positions it to capitalize on the changing healthcare IT environment.Backhaus has more than 30 years of experience in engineering and information technology, including as co-founder of a number of healthcare IT startups. In addition to holding executive positions for companies in the radiology field, he co-founded and served as Chief Information Officer of Verata Health, an AI-based healthcare IT firm. Backhaus holds both a bachelors and masters degree from the Massachusetts Institute of Technology.The vision and opportunity to scale at Aidéo Technologies captured my attention from the very beginning, Backhaus said. Im excited about solving complex issues that will help the business deliver on our core mission with transformative technology. Im looking forward to leading and mentoring this team of talented engineers.Aidéo Technologies provides software automation tools using artificial intelligence and machine learning to the healthcare industry. Its innovative Gemini AutoCode tool interprets structured and unstructured clinical data to deliver a People Assisted Coding (PAC) solution for Revenue Cycle Management (RCM) that continuously improves with each interaction. Its accuracy matches or exceeds human coders.Jason and Brent bring a wealth of experience to help build on the companys success, said Rob Gontarek, President and CEO of Aidéo Technologies. 
They are both immensely talented and seasoned executives who will allow us to continue our growth trajectory and scale the company to meet market demand.

Established in 2009, Aidéo Technologies has development centers in West Palm Beach, Fla., Silicon Valley and Mumbai, India.

Contact:
Laura Krejca
Vice President, Client [email protected] | Content Synthesis/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null
news | Doug Turnbull | LambdaMART in Depth | Reimplementing LambdaMART in Python for endless tinkering and learning | https://softwaredoug.com/blog/2022/01/17/lambdamart-in-depth.html | 2022-01-21T15:35:52Z | LambdaMART is a classic. It's the endlessly tinkerable classic car of ranking algorithms. If you can grok the algorithm, you can play with the model architecture, coming up with your own variations on this learning to rank staple. Last time I went over the intuition behind how LambdaMART learns to rank in pseudocode. Now let's develop a full, performant, end-to-end LambdaMART implementation in Python (w/ Pandas & SkLearn). All the code for this notebook is in this Colab notebook. So follow along! :)

Quick Review: Shop Smart, Shop LambdaMART!

LambdaMART lets us plug-and-play how we optimize the relevance of the system. We can use ranking metrics familiar to search or recommendations practitioners. Need to get just the top item right? Like in a search for an item by name? Then perhaps optimize the precision of the first position. In other words, forget all the other results; if the first position is right, the algorithm has done well. There's no prize for second place! Or need to show the user a variety of relevant results? Like a user searching for shoes seeing a variety of different products to compare/contrast? Then choosing a metric like Discounted Cumulative Gain (DCG) computed over the top 10 works best. In this case, a range of relevant results (with a bias towards top positions) captures what we want. The flexibility to optimize whatever we want makes LambdaMART an enduring Information Retrieval algorithm. You can even invent metrics for your specific algorithm or needs - for example this blog post on optimizing user-product marketplaces.

Reviewing the LambdaMART algorithm

How does LambdaMART achieve such flexibility? Last time, I described how LambdaMART optimizes search relevance via pairwise swapping. Zeroing in on each query, we swap each potential search result in the training data and look at the impact on a metric like DCG. For example, our ideal search results for a query rambo might be something like:

Rank  Movie         Label (0-4)
1     Rambo         Very Relevant (4)
2     First Blood   Very Relevant (4)
3     Rambo III     Very Relevant (4)
50    Forrest Gump  Very Irrelevant (0)

Let's say we computed DCG for this ideal ranking for the query, and it was DCG = 25. We know in our training data that the movie Forrest Gump is not relevant for the rambo query. So we can play what if. What if we didn't achieve a perfect relevance ranking? What if instead Forrest Gump jumped up the result set? We can perform this swap, and see the impact to DCG.

Rank  Movie         Label (0-4)
1     Rambo         Very Relevant (4)
2     First Blood   Very Relevant (4)
3     Forrest Gump  Very Irrelevant (0)
50    Rambo III     Very Relevant (4)

DCG = 19.9

Ooof, DCG got 5.1 worse with this scenario. We can track the positive / negative impact of that swap in a table.

Query  Result        DCG Swap Impact
rambo  Rambo III     5.1
rambo  Forrest Gump  -5.1

We can repeat this what if swapping for every pair of labeled results for rambo. We accumulate these swaps in a value we call lambdas.

Query  Result        Total DCG Swap Impact (aka lambdas)
rambo  Rambo III     201.1
rambo  Forrest Gump  -50.1

Of course, we don't just do this on rambo!
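Before building that table for every query, it may help to see one swap worked out in code. The grades and positions below are illustrative (a toy query with four results, not the article's exact data); the point is that the metric delta from a swap can be computed directly, or via the shortcut formula the article uses later.

import numpy as np

# Toy example: grades by rank position for a single query (rank 1 first).
# gain = 2**grade - 1, discount = 1 / log2(rank + 1), DCG = sum(gain * discount).
grades = np.array([4, 4, 4, 0])

def dcg(grades):
    ranks = np.arange(1, len(grades) + 1)
    return float(np.sum((2.0 ** grades - 1) / np.log2(ranks + 1)))

before = dcg(grades)

# Swap ranks 3 and 4 (0-based indices 2 and 3): the irrelevant result moves up.
swapped = grades.copy()
swapped[[2, 3]] = swapped[[3, 2]]
after = dcg(swapped)
print(before - after)  # how much DCG suffers from this single swap

# The same delta via the shortcut used later in the article:
# |(discount_x - discount_y) * (gain_x - gain_y)|
def discount(rank):
    return 1.0 / np.log2(rank + 1)

def gain(grade):
    return 2.0 ** grade - 1

print(abs((discount(3) - discount(4)) * (gain(4) - gain(0))))  # matches the value above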
We build this table for every query in our training set, creating a table of lambdas for each row, by playing what if within each query in our training set.QueryResultTotal DCG Swap Impact (aka lambdas)ramboRambo III201.1ramboForrest Gump-50.1rockyRocky220.0rockyRocky & Bullwinkle-21.5Along with each example, we also have a vector of features we think might predict relevance for this document. In our case, we use the Elasticsearch relevance scores for the Query column in the title and overview fields. We can, and should, of course experiment with any and all features we might want to learn relevance on - but this is a blog post about the algo, not finding good features, so well stick with these!QueryResultTotal DCG Swap Impact (aka lambdas)Features [title, overview]ramboRambo III201.1[20.6, 4.0]ramboForrest Gump-50.1[0.0, 0.0]rockyRocky220.0[21.0, 7.0]rockyRocky & Bullwinkle-21.5[11.0, 3.0]Wed suppose that high title and overview scores correlate with search results that maximize DCG. But we have to express that mathematically of course. We need to connect it to the DCG swap impact weve seen to our proposed features.We need:Where X is our features (title, overview, whatever) and y are the lambdas we want to predict.We could use just about any model to make this prediction. Last time, we trained a decision tree using the lambdas as labels and the features as predictors. When we did this, we created a ranking function that could use features derived from a search system like Elasticsearch to approximate our chosen ranking metric like DCG.Adding in Gradient BoostingLambdaMART isnt just pair-wise swapping and predicting though. Its a lot more.LambdaMART is an ensemble model. This means the final prediction is a sum of little kiddy models. The final prediction is something likeprediction = model1.predict(features) + model2.predict(features) + + modelN.predict(features)How could this possibly work?We train modelN with knowledge on how well prediction = model1 + + modelN-1 ranks relative to the ideal (as expressed by the lambdas). Round-by-round we predict NOT the lambdas, but something (in spirit) closer to last_prediction - expected_prediction.We dont learn the lambdas, but the error in the current models prediction.Of course, the lambdas still matter. In the very first round, thats all we have to go on! So the error really is the lambdas themselves!We call this iterative technique gradient boosting. An ensemble of models where each round we learn to compensate for the error in the sum of the previous rounds. Gradient boosting is an extremely powerful technique used extensively throughout the industry. Its crucial to me to deeply understand it and its power, especially when it comes to ranking and relevance.Sound abstract? Lets dive inLambdaMART with Python and PandasOK ok, to the code already. First we set up what we need.We will assume we ran the loading steps in the notebook. Those steps load a movie corpus into Elasticsearch (thanks to TheMovieDB!) 
with a simple toy training set and features (remember title and overview Elasticsearch relevance scores).Lets quickly inspect the starting training set dataframe, judgments, were starting with:uidqidkeywordsdocIdgradefeatures01_75551rambo75554[11.657399, 10.083591]11_13701rambo13703[9.456276, 13.265001]21_13691rambo13693[6.036743, 11.113943]31_132581rambo132582[0.0, 6.869545]41_13681rambo13684[0.0, 11.113943].....................138540_3707940star wars370790[0.0, 0.0]138640_12675740star wars1267570[0.0, 0.0]You see in the output that:We have search keywords, doc id, and a grade (the label for relevance for this query, from 0-4 with 0 meaning completely irrelevant, 4 very relevant)Each query has a unique id qidEach example has its own unique id uidA feature vector, holding title and overview well use to predict on each round of the ensembleNext, well copy this dataframe and perform some initialization. Most importantly, we need toSet up the last rounds model prediction (at first 0.0)Sort by this prediction within each query - (important to do this with a stable sort so we avoid seeing the impact swapping equivalent results)lambdas_per_query=judgments.copy()lambdas_per_query['last_prediction']=0.0lambdas_per_query.sort_values(['qid','last_prediction'],ascending=[True,False],kind='stable')uidqidkeywordsdocIdgradefeatureslast_prediction01_75551rambo75554[11.657399, 10.083591]0.011_13701rambo13703[9.456276, 13.265001]0.021_13691rambo13693[6.036743, 11.113943]0.031_132581rambo132582[0.0, 6.869545]0.041_13681rambo13684[0.0, 11.113943]0.0........................138540_3707940star wars370790[0.0, 0.0]0.0138640_12675740star wars1267570[0.0, 0.0]0.0138740_3979740star wars397970[0.0, 0.0]0.0138840_1811240star wars181120[0.0, 0.0]0.0138940_4305240star wars430520[0.0, 0.0]0.0We next compute statistics specific to DCG:display_rank - what rank the document would appear for this query if sorted by the models current relevance score (ie last_prediction)discount - the weight of this position (positions on the top of the page matter more)gain - the importance of this result as a function of the relevance grade (2^grade - 1 here)lambdas_per_query.sort_values(['qid','last_prediction'],ascending=[True,False],kind='stable')# DCG statslambdas_per_query['display_rank']=lambdas_per_query.groupby('qid').cumcount()lambdas_per_query['discount']=1/np.log2(2+lambdas_per_query['display_rank'])lambdas_per_query['gain']=(2**lambdas_per_query['grade']-1)lambdas_per_query[['qid','display_rank','discount','grade','gain']]Outputqiddisplay_rankdiscountgradegain0101.0000004151110.630930372120.500000373130.430677234140.386853415..................138540250.21031000138640260.20801500138740270.20584700138840280.20379500138940290.20184900We compute display_rank here by first sorting on the models score (last_prediction) then using Pandas cumcount which simply assigns a counter to each row in each query, with items sorted first receiving a lower value.Computing Pairwise DeltasLambdaMART works by accumulating the impact to our ranking metric when a querys potential result X is swapped with query result Y.To compute pairwise deltas, you might imagine we need to visit each query and loop over each labeled result, then within that result loop again to do a swap.Something like this mangled psuedocode:forquery_judgmentsinjudgments.groupby('qid'):forresult_xinquery_judgments:forresult_yinquery_judgments:dcg_x+=swap(result_x,result_y)But pandas lets us be cleverer and more efficient than that.In just two lines of Pandas we can compute the DCG 
impact of swapping a given position with every other position. We do this by joining the dataframe with itself! Recall your relational joins, an outer join joins every instance with every-other instance.# each querys result paired with every other resultswaps=lambdas_per_query.merge(lambdas_per_query,on='qid',how='outer')# Compute impact of x swapped with yswaps['delta']=np.abs((swaps['discount_x']-swaps['discount_y'])*(swaps['gain_x']-swaps['gain_y']))swaps[['qid','display_rank_x','display_rank_y','delta']]qiddisplay_rank_xdisplay_rank_ydelta01000.00000011012.95256221024.00000031036.83188141040.000000...............490194029250.000000490204029260.000000490214029270.000000490224029280.000000490234029290.000000A lot happens is these two lines:Self-join of our judgments on qid using an outer join. Every xth position for a query is now paired with every yth positionDeltas computed - After the swap the _x version of each value has the higher grade. It turns out the impact of the swap on DCG is (discount_x - discount_y) * (gain_x - gain_y).Note that its not entirely self evident that the DCG swap impact is (discount_x - discount_y) * (gain_x - gain_y). But if you work out the math behind DCG, youll see that indeed this would be the impact of DCG_x minus DCG_y.Luckily, the Ranklib Learning to Rank library has conveniently figured out how to compute the swap impact of supported ranking statistics as an optimization. For example, you can find ERRs here.Important to note, we havent accumulated anything yet!* *Were still in a pairwise space with every potential query result paired with every other one. This dataframes delta shows the component of every accumulated DCG change for a given x. Recall we need to accumulate back each xth rows total DCG swap impact aka the lambdas.In this space, we also need to examine another important variable - how wrong the model is. AKA, how well last_prediction predicts the correct ordering between two potential search results. We do that next.Rho rho rho your boat - computing the models current errorIn the previous blog post, we focused on just the pairwise swapping trick to compute labels. We just computed our swaps in Python above with a cute Pandas outer-join.Of course we know now, reality is more complex. LambdaMART is an ensemble model. An ensemble model sums the predictions of each of its constituent models. In other words, we actually rank not by a single decision tree, but by summing all the little kiddy models together:# relevance scores for this query relative to a docfeatures=compute_features(query,document)prediction=0.0formodelinensemble:prediction+=model(features)At each round of training, we already have a set of models that have been trained.But these are bad models on Santas Naughty List. Our next model will be nice, right? Can it compensate for all those disappointing ones?To compensate, we dont learn the deltas directly, but learn the error between the naughty list of existing models and the ground-truth ranking. In other words, we only care about each pairwise delta in proportion to its error in the current model.A value rho computes how well (or not well!) the existing, naughty models predict the current pairs relative relevance. If the xth result appears to be more relevant than the yth, and the model predicts it as so, then were good. Otherwise, the existing naughty models get a demerit, and we have to try to make up for their negligence.We do this by weighing each delta by rho which computes the models current error. 
Then we use rho * delta instead of delta. Instead of the DCG delta of swapping these two positions, its DCG delta * how far prediction is from capturing that DCG delta.Luckily rho isnt hard to know, its simply the difference between prediction at x and prediction - y. But rho dresses up fancy by wrapping itself in a function scaling it to between 0-1: 1 / (1 + e^(prediction_x - prediction_y)).Lets walk through this step-by-step to make it clearer.swaps['rho'] = 1 / (1 + np.exp(swaps['last_prediction_x'] - swaps['last_prediction_y']))swaps[['qid', 'display_rank_x', 'display_rank_y', 'delta', 'last_prediction_x', 'last_prediction_y', 'rho']]qiddisplay_rank_xdisplay_rank_ydeltalast_prediction_xlast_prediction_yrho01000.0000000.00.00.511012.9525620.00.00.521024.0000000.00.00.531036.8318810.00.00.541040.0000000.00.00.5........................490194029250.0000000.00.00.5490204029260.0000000.00.00.5490214029270.0000000.00.00.5490224029280.0000000.00.00.5490234029290.0000000.00.00.5Starting out, because all the predictions are 0.0 (1 / (1 + e^0) == 0.5), every rho is equal. This means, well first try to directly learn the deltas computed in the previous section.In subsequent rounds, things get more interesting!Consider what it means if the naughty model actually predicts the relationship between potential search results x and y correctly: last_prediction_x >> last_predction_y.1 / (1 + e^100) -> approaches 0Here because rho approaches 0, the new, nice model will not attempt to not impact the previous models score. The models already correct, no need to compensate any for this particular pair.But what if the model is waaay off for this pair last_prediction_x << last_predction_y?1 / (1 + e^-100) -> approaches 1For this pair of search results, the delta will be weighted much higher. The next model in the ensemble will attempt to predict delta, thus compensating for the error in previous rounds.With this code in place, we can compute the lambdas for this round. What the decision tree predicts.swaps['lambda'] = 0# Only look at places where x > yslice_x_better =swaps[swaps['grade_x'] > swaps['grade_y']]swaps.loc[swaps['grade_x'] > swaps['grade_y'], 'lambda'] = slice_x_better['delta'] * slice_x_better['rho']swaps[['qid', 'display_rank_x', 'display_rank_y', 'delta', 'last_prediction_x', 'last_prediction_y', 'rho', 'lambda']]qiddisplay_rank_xdisplay_rank_ydeltalast_prediction_xlast_prediction_yrholambda01000.0000000.00.00.50.00000011012.9525620.00.00.51.47628121024.0000000.00.00.52.00000031036.8318810.00.00.53.41594141040.0000000.00.00.50.000000...........................490194029250.0000000.00.00.50.000000490204029260.0000000.00.00.50.000000490214029270.0000000.00.00.50.000000490224029280.0000000.00.00.50.000000490234029290.0000000.00.00.50.000000Note the slice_x_better line. We actually only care about cases where x > y. This is an optimization, but also avoids the values canceling each other out. The important line has slice_x_better['delta'] * slice_x_better['rho']Accumulate lambdas and train!Were still with a dataframe dealing with each pair. Each delta has been computed, reflecting the impact of that swap to DCG. Weve also weighed each swap according to how far off the current model predicts that value.But remember this psuedocode?forquery_judgmentsinjudgments.groupby(qid):forresult_xinquery_judgments:forresult_yinquery_judgments:dcg_x+=swap(result_x,result_y)We done the swap part. 
But we haven't done the += part - we need to accumulate all these back to dcg_x.

We do this by merging in the difference: lambdas_x - lambdas_y

# Better minus worse
lambdas_x = swaps.groupby(['qid', 'display_rank_x'])['lambda'].sum().rename('lambda')
lambdas_y = swaps.groupby(['qid', 'display_rank_y'])['lambda'].sum().rename('lambda')
lambdas = lambdas_x - lambdas_y
lambdas_per_query = lambdas_per_query.merge(lambdas, left_on=['qid', 'display_rank'], right_on=['qid', 'display_rank_x'], how='left')
lambdas_per_query[['qid', 'docId', 'grade', 'features', 'lambda']]

This captures the rho-weighted impact of each pairwise swap.

As with a great deal of machine learning, the actual line of code that trains a model gets attention, but in reality is perhaps the least interesting:

features = lambdas_per_query['features'].tolist()
tree = DecisionTreeRegressor()
tree.fit(features, lambdas_per_query['lambda'])

We add this back to our ensemble.

Rinse and Repeat!

We next recompute last_prediction for each row (simply by accumulating in this next model's prediction). Then we go back to the top of this article to recompute everything! We iterate until we reach diminishing returns. Remember, at search time, evaluating each model in the ensemble for every candidate result will add time! So you have to decide on a good model size. I encourage you to walk through the full loop in the Colab notebook; a compact sketch of the outer loop also appears just below.

Learning Rate

I left out a few (I feel minor or easy to learn) details of LambdaMART. You can inspect the notebook to learn more. But I'll quickly mention that in reality there's a learning rate - we don't take the existing models' output as-is for the current ranking when computing last_prediction, but temper it down by multiplying it by learning_rate. This helps us more gradually learn the error and prevents overfitting.

Get in touch!

As always, please get in touch and teach me what I'm missing. As I'm lame and old, the easiest place to find me is LinkedIn! And I'd love to work with you at Shopify - please consider applying as a relevance engineer on our team. We've got a great deal of search and recommendation problems at pretty intense WebScale.

Special Thanks to Simon Eskildsen for reviewing this post and giving substantive edits and feedback! | Content Creation/Content Synthesis | Unknown | null | null | null | null | null | null
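A compact, illustrative sketch of that outer boosting loop, under stated assumptions: compute_lambdas is a placeholder standing in for the pairwise swap, rho weighting and accumulation steps walked through above, and the hyperparameters are arbitrary rather than values from the article's notebook.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_lambdamart(judgments, compute_lambdas, n_rounds=50, learning_rate=0.1):
    # judgments: DataFrame with 'qid' and 'features' columns.
    # compute_lambdas(judgments): returns one lambda per row, recomputed each round
    # against the current 'last_prediction' (the part the article builds by hand).
    judgments = judgments.copy()
    judgments['last_prediction'] = 0.0
    features = np.array(judgments['features'].tolist())
    ensemble = []
    for _ in range(n_rounds):
        lambdas = compute_lambdas(judgments)
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(features, lambdas)
        ensemble.append(tree)
        # Temper each tree's contribution so later rounds learn the remaining error gradually.
        judgments['last_prediction'] += learning_rate * tree.predict(features)
    return ensemble

def score(ensemble, features, learning_rate=0.1):
    # At search time, a document's ranking score is the tempered sum over all trees.
    return sum(learning_rate * tree.predict(features) for tree in ensemble)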
|
news | Ganesh Bell | Investing in Artificial Intelligence: How MLOps Drives Enterprise AI Wins | Insight Partners had a banner year in 2021, with more than $50 billion in capital commitments across over 200 investments. While we invest in founders across a wide spectrum of businesses, this series focuses on outlining our theses on four verticals that we’re particularly excited about in 2022: Artificial intelligence (AI), fintech, cybersecurity and healthtech. […]The post Investing in Artificial Intelligence: How MLOps Drives Enterprise AI Wins appeared first on DevOps.com. | https://devops.com/investing-in-artificial-intelligence-how-mlops-drives-enterprise-ai-wins/ | 2022-01-27T09:00:26Z | Insight Partners had a banner year in 2021, with more than $50 billion in capital commitments across over 200 investments. While we invest in founders across a wide spectrum of businesses, this series focuses on outlining our theses on four verticals that were particularly excited about in 2022: Artificial intelligence (AI), fintech, cybersecurity and healthtech. Artificial intelligence has remained a super-hot category for good reason: It has the potential to transform nearly every industry and business.At Insight, weve long been bullish on the many use cases for AI. In the past year, weve invested in image recognition software from Netherlands-based ScreenPoint Medical, which improves early detections of breast cancer, Covera Health, which provides a quality analytics platform to reduce medical errors in radiology, CARTO, which helps companies use and understand spatial analytics, and Laminar, a cloud data security platform to continuously monitor and protect against data leaks among other game-changing companies. In total, Insight invested in 49 different companies across a broad spectrum of artificial intelligence and machine learning use cases in 2021, which represents a 172% increase from the year prior. As we look ahead into 2022, we expect artificial intelligence tools to continue to dominate. We see the ecosystem dividing into three primary categories: Layer 1 – Base platform companies: Algorithms; frameworks; infrastructure and workbenches for creating ML systems Layer 2 – Cross-industry capability companies: Turnkey machine learning-based systems that solve specific problems spanning multiple industries (e.g., cybersecurity) Layer 3 – Industry-specific companies: Applications powered by prediction or classification systems that target specific, niche use cases in one domain Companies and investors will find valuable AI/ML software across all three layers. At Insight, we initially focused on layers two and three. We invested in startups creating robust ML systems that addressed specific problems, either vertically (like credit underwriting company Zest AI) or horizontally (like cybersecurity company SentinelOne). We thought that economic moats were hardest to build at layer one; in part as a result of robust open source ecosystems and because large public cloud vendors deliver many of these tools at low prices.Insight Partners Three-Layer Framework of Machine Learning CompaniesThroughout this year, however, we realized we were missing a piece of the puzzle. While there will always be market demand for cross-industry capabilities and industry-specific applications, 2021 taught us that as companies try to productize AI for the first time, base platforms will become increasingly important as well. 
After all, more companies than ever are charging forward on artificial intelligence projects to try to guide business decisions and save time and money. A whopping 90% of enterprises are either actively running AI projects or plan to in the next 12 months, according to a recent survey. But heres the catch: Most of these projects will fail. There are many reasons why including a lack of sufficient data or black-box models that spit out indecipherable results. Regardless of the reason, high failure rates can hurt the ecosystem if they result in corporations hesitation to move forward with future AI projects. Accordingly, Insight is particularly excited about a subsection of AI tools that can significantly increase companies likelihood of succeeding at their project goals: Machine earning operations or MLOps. MLOps tools can improve a machine learning pipeline from start to finish by helping gather, manage and label data, experiment and test the model selection, deploy multiple models in production at once and protect against model and data drifts and attacks. Holistically, the goal is to improve communication and collaboration between data scientists, data engineers and business analysts through the machine learning life cycle, similar to how DevOps tools help improve communication in the software development life cycle. Machine learning is an iterative process with constant feedback loops and the need for continuous monitoring, which makes MLOps tools to manage this process even more critical. Overview of some of the key areas of an MLops pipelineWhile every company rolling out AI projects can benefit from MLOps, the type of tools they require will depend on their needs, resources and strategies. Companies should ask themselves: How sophisticated is our data science team? How mission-critical do we want our models to be? Are our data sources structured or unstructured? Do we want open source, commercial or homegrown tooling? The answers to those questions will point companies to specific MLops tools that are best suited for their specific challenges. For example, some platformslike Dataiku or DataRobotare end-to-end products with wide capabilities, which are typically easier to use and thus encourage model creation and experimentation from workers across job functions, regardless of technical knowledge. These so-called ‘citizen data scientists’ can boost analytic workflows for companies with fewer data science experts and create significant value, but there’s a risk that the deployed models are not as well understood or controlled. These types of tools are best for companies that want to use machine learning to drive business insights and analytics rather than as part of their core business function. If a company has a sophisticated data science team, however, it may lean towards specific point solutions, or what Insight calls best-in-class MLOps solutions. While these products require greater technical knowledge to maintain and deploy, they allow for much more control, depth and sophistication in a system. One startup offered the apt analogy that using platforms versus best-in-class solutions is like driving an automatic car versus a stick shift. There are multiple MLOps tools for each part of the pipeline, and we see a world where each tool has enough market space to support it becoming a large company. 
At Insight, we spent 2021 learning about and investing in many of what we consider to be the best-in-class MLOps tools&mdash?as determined by highly satisfied customer feedback, clear market momentum and well-respected and knowledgeable teams. This perspective led to our investments in Explorium, Rasgo, Weights & Biases, Deci, Run AI, Fiddler, Tonic, Dataiku and Databricks, as well as several others that have yet to be announced. Insight Partners MLOps market map While Insight expects MLOps to continue to be an essential driver of AI success in the new year, we also expect to see several other trends take shape in 2022: First, we will see the emergence of two machine learning pipelines, one for structured and one for unstructured data. Unstructured data sourceslike images, videos and audiorequire specific data warehouses, data management, pipelines and model management tools that maximize productivity and accuracy. Given the nuances and complexities between the two data types, we expect to see more MLOps tools shift toward owning their category for one or the other. As enterprises scale from dozens of models to hundreds or even thousands of models, we expect the increased industrialization of AI. This will result in the rise of orchestration layers that sit on top of machine learning pipelines to help manage all the different tooling. These layers will integrate across multiple tools in the ML pipelinewhether open source, commercial or homegrownand provide an integrated environment (or so-called single pane of glass) to better track and manage ML pipelines. Orchestration layers will help enterprises gain more control over their pipelines while still supporting best-in-class tools. Insight believes we are at an inflection point of moving from a model-centric to a data-centric world. In a model-centric approach, you ask how you can change a models code to improve the performance of the system, whereas, in a data-centric model, you ask how you can change the data to improve it [system performance]. We expect the most successful tools of 2022 to be focused on supporting the new tasks, workflows, and jobs spawned by the data-centric movement.We will also see a new class of MLOps tools focused on closing the growing AI skills shortage and gap across businesses. These tools will focus on increasing the efficiency of the model production process and lower the bar for ML development.Finally, we expect to see a significant increase in the overall enterprise adoption of ML, with a shift from experimentation to deploying models in full production. Last year, we saw many enterprises form AI research teams and hire data scientists to experiment with machine learning. In 2022, we believe these enterprises will start realizing the full capacity of their AI projects and driving significant business value.This increase in enterprise adoption will also be driven by the explosion of edge devices that will enable new use cases of ML adoption. As active machine learning and artificial intelligence investors, we are excited to continue to watch the market and support new entrants in what will surely be another dynamic and exciting year. George Mathew, Lonne Jaffe and Sophie Beshar co-authored this piece. | Process Automation/Decision Making/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Christine Hall | New Greylock venture partner Mustafa Suleyman is looking for AI’s next best thing | Mustafa Suleyman has been working in artificial intelligence for 12 years, trying to figure out how to use machine learning systems and AI to do important... | https://techcrunch.com/2022/01/20/new-greylock-venture-partner-mustafa-suleyman-is-looking-for-ais-next-best-thing/ | https://s.yimg.com/ny/api/res/1.2/SOftqXcaH0WnRvl5hNFN0A--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD02NzI-/https://s.yimg.com/uu/api/res/1.2/M7aPSLuGdgHJcnq3Ubyn1w--~B/aD0xMDQ5O3c9MTg3MjthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/techcrunch_350/7f34c2bee7834ab6154f4cc0c173b132 | 2022-01-20T18:00:58Z | Mustafa Suleyman has been working in artificial intelligence for 12 years, trying to figure out how to use machine learning systems and AI to do important things in the work and have impact at scale.And over the years, I've been lucky enough to be at the forefront of a lot of cutting-edge applications of AI, he told TechCrunch. Over the years, that experience has given me a really good intuition, for when a piece of AI is ready for the real world, and when it's not. The projects that I've seen fail are mostly because people overestimate how good AI is. People think that AI is this silver bullet and it can solve all your problems, but actually you have to craft the environment and the application problem correctly.Suleyman is now putting that intuition to good use on the venture capital side. After previously investing in companies alongside Greylock partner Reid Hoffman, Suleyman made the full leap to join Greylock as a venture partner.He joins the firm from Google, where he was vice president of AI product management and AI policy. Prior to that, he co-founded and led DeepMind, an AI company acquired by Google in 2014."AI will no doubt touch every aspect of our lives in the coming years, and we at Greylock believe there's an abundance of opportunity for entrepreneurs to continue building in this space. Hoffman said via email. Mustafa is visionary, knowledgeable and connected across the vast AI landscape, and we know he'll be a valuable resource to our existing portfolio, and a board member of choice for new AI investments."Suleyman is eager to work with early-stage founders, who he said are super fearless and really energetic people who arent afraid to take a risk when they see opportunity.He believes there is much opportunity in the AI companies that are out there. He says the time has come for AI due to the ecosystem maturing, and more people understand the strengths and limitations of AI and how to use it. AI is also becoming more accessible and usable by people who dont necessarily have a technical background, but are creating new ways to use machine learning.As such, he says that AI is at an inflection point: We now have AI systems that can generate new text, conversational sentences and whole paragraphs, which is approaching human-level performance.The range of things that entrepreneurs can do when you can basically have an API that can talk to your users in natural language is amazing and the imagination is the limit, Suleyman added. 
Combined with the explosion of the metaverse and all the excitement around that in the last couple years, I definitely think that AI has a central role to play in virtual reality, in gaming and the metaverse.While those are a few areas where he sees AI winning, there are some areas where he thinks AI is not quite there yet, including large-scale infrastructure, manufacturing and logistics distribution, and he is looking for those companies stepping up to build AI into scheduling and coordination of shipping, shipment tracking and route optimization.When looking at what is down the pipeline for AI, he again sees the metaverse and gaming dominating the space as characters and avatars come to life -- think Ready Player One, where there is an animated parallel world in tandem with ours.It'll be able to do planning and prediction, so it won't just be kind of generating scripted or written responses, it will actually be emergent and responsive to the environment, Suleyman said. On the enterprise side, the time for AI and medical imaging is definitely now that we've proven it works in research, and now we're ready for large-scale production that's going to be very successful. | Content Creation/Decision Making/Discovery | Management/Computer and Mathematical | null | null | null | null | null | null |
news | PR Newswire | Ankura Adds Transformative Data and Technology Capability with Acquisition of Noragh Analytics | Ankura Consulting Group, LLC, an independent global expert services and advisory firm, today announced that it has acquired Noragh Analytics, a globally... | https://finance.yahoo.com/news/ankura-adds-transformative-data-technology-141000137.html | https://s.yimg.com/uu/api/res/1.2/EXveDK92SUX_uxBzGbGUTQ--~B/aD04MTt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/805b09a2ffa37ea4cc2920f24b627495 | 2022-02-09T14:10:00Z | Transaction enables Ankura to bring advanced data analytics solutions to complex problems facing clients and strengthens Ankura's position as a global leader in technology-enabled consultingNEW YORK, Feb. 9, 2022 /PRNewswire/ -- Ankura Consulting Group, LLC, an independent global expert services and advisory firm, today announced that it has acquired Noragh Analytics, a globally recognized leader in advanced data analytics and the enabling of machine learning and artificial intelligence to actionable use of complex data. Noragh is a proven leader in delivering market solutions related to some of the world's most challenging business issues and further positions Ankura as a leading innovator in addressing the needs of clients both today and in the future.Ankura Logo (PRNewsfoto/Ankura)The Noragh proprietary platforms gather both structured and unstructured data from internal and external data sources into comprehensive database networks. This enables a collection of information and data solutions that combine analytic and data management technology with artificial intelligence, machine learning and graph analytics to deliver an unmatched ability to transform business processes, address fraud and predict behavior, patterns and outcomes. The acquisition complements and expands Ankura's market-leading Data and Technology offerings and places the Firm squarely at the forefront of the application of transformative artificial intelligence and machine learning solutions for business and commercial application.Noragh Analytics was founded by retired Four Star Admiral and former U.S. Navy Commander in Chief of the Atlantic Fleet, William "Bud" Flanagan, Jr. Admiral Flanagan and his team have vast experience in bringing successful analytic solutions enabled by their technology platforms, in both the government and commercial business sectors. Noragh's capabilities provide real-time actionable analytics that are saving the insurance industry hundreds of millions of dollars annually in fraud avoidance. The platforms' methods of applying data solutions enables artificial intelligence technology and expands Ankura's ability to generate key insights from big data to deliver solutions for clients facing complex, cross-disciplinary issues."Acquiring Noragh Analytics marks an exciting milestone in the continued growth of Ankura's advanced data analytics and technology enabled consulting capabilities worldwide. As we continue to push the envelope on innovation, the Noragh platform allows us to find answers and provide solutions within data that our competitors simply cannot reach. 
Today's announcement also reflects our ongoing commitment to delivering the best cutting-edge technology and solutions to clients for their most pressing challenges," said Kevin Lavin, Ankura's Chief Executive Officer."I'm very excited that the Noragh team and these game changing solutions are becoming part of Ankura's global technologies solutions offering," said Olaf Larson, Senior Managing Director and leader of Ankura's Data and Technology Business Group. "These innovative platforms and associated customized solutions will further expand our leading position in bringing advanced data and analytics solutions to bear on our clients' toughest challenges as we continue building upon the expansion of our data analytics and technology enabled consulting capabilities in the Americas, Europe, Asia Pacific, the Middle East and Africa.""What started off as a critical initiative to help the intelligence community fight terror has grown to become one of the most advanced analytics companies of its kind," said Admiral Flanagan. "I'm looking forward to unleashing the combined talent of Noragh and Ankura to bring the power of our platforms to bear for our clients in the commercial and government marketplaces. Ankura's ability to leverage our platforms to solve complex problems, mitigate risk, inform strategy and create competitive advantage for clients will be world-leading and strongly positions us for the future."To learn more about the functionality of the Noragh platforms, visit: https://noragh.com/.Sidley Austin LLP served as legal advisor to Noragh Analytics. Davis Polk & Wardwell LLP served as legal advisor, together with Jayaram Law, Inc as special IP legal advisor to Ankura.About AnkuraAnkura Consulting Group, LLC is an independent global expert services and advisory firm that delivers services and end-to-end solutions to help clients at critical inflection points related to change, risk, disputes, finance, performance, distress, and transformation. The Ankura team consists of more than 1,700 professionals serving 3000+ clients across 55 countries who are leaders in their respective fields and areas of expertise. Collaborative lateral thinking, hard-earned experience, expertise, and multidisciplinary capabilities drive results and Ankura is unrivaled in its ability to assist clients to Protect, Create and Recover Value. For more information, please visit, www.ankura.com.View original content to download multimedia:https://www.prnewswire.com/news-releases/ankura-adds-transformative-data-and-technology-capability-with-acquisition-of-noragh-analytics-301478788.htmlSOURCE Ankura | Detection and Monitoring/Decision Making/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null |
news | Richard Yonck | AI2 releases demo of question-answering model it claims outperforms GPT-3 | From the beginning of the digital age, we’ve looked to our computers for answers. Nowhere is this so evident as in the computer science discipline known as question answering, or QA. Overlapping the fields of natural language processing and information retrieval, QA initially utilized handcrafted knowledge bases to answer questions. Today, however, these systems increasingly use machine learning and pre-trained language models like OpenAI’s GPT-3 to achieve their results. One of the newest and most innovative of these QA models has recently been developed at the Allen Institute for AI (AI2) in Seattle. Macaw, which loosely stands for “Multi-angle c(q)uestion… Read More | https://www.geekwire.com/2022/ai2-releases-demo-of-question-answering-model-it-claims-outperforms-gpt-3/ | 2022-01-21T16:36:15Z | From the beginning of the digital age, weve looked to our computers for answers. Nowhere is this so evident as in the computer science discipline known as question answering, or QA. Overlapping the fields of natural language processing and information retrieval, QA initially utilized handcrafted knowledge bases to answer questions. Today, however, these systems increasingly use machine learning and pre-trained language models like OpenAIs GPT-3 to achieve their results.One of the newest and most innovative of these QA models has recently been developed at the Allen Institute for AI (AI2) in Seattle. Macaw, which loosely stands for Multi-angle c(q)uestion answering, was developed as an open-source project and is available to the community via GitHub.If youd like to see how Macaw works, AI2 is making their interactive demo available to the public starting today. You can use the demo to explore Macaws answers and compare them to those given by the GPT-3 language model on a benchmark set of questions.Macaw is built on top of Googles pre-trained open-source T5 language model, which is less than a tenth the size of the well-known GPT-3 language model. Yet, despite its considerably smaller size, Macaw outperformed GPT-3 by more than 10% on Challenge300, a suite of 300 questions designed to push various limits of question-answering systems. In a performance comparison with three other QA systems, Macaw scored 75%, compared with 65% for both GPT-3 and AI2s Jurassic-1 and 57% for Googles T5-CBQA. (T5-Closed Book QA)Whats so interesting to me is Macaw produces quite remarkable answers, to the extent it can even surprise someone like me whos worked in AI for years, said Peter Clark, project lead and senior research manager at AI2. Clark has worked in artificial intelligence for more than three decades.Of the existing pretrained QA systems, none have previously been able to perform as well as GPT-3s few-shot model. A few-shot model generates answers based on a limited number of samples.But that was before Macaw. The relative performances between Macaw and GPT-3 may seem counterintuitive given GPT-3 is based on 175 billion parameters, while Macaws T5 model uses only 11 billion. These parameters are the weights and biases in the models neural network. This can be thought of as a general indication of the scale and overall complexity for pretrained language models and in recent years, increased scale has been accompanied by improved capabilities. 
But Macaws approach to QA makes a huge difference.Many early QA systems relied on querying a structured database for their answers: input a question and the system would output a corresponding answer. But more recently, QA systems have been based on pre-trained language models which have the potential for much greater versatility. In Macaws case, its multi-angle approach allows it to use different combinations of inputs and outputs to achieve surprisingly impressive results. Instead of just giving it one permutation, Clark explains, were giving it all of these different permutations and that has two advantages. One is, in principle, it should improve its performance in all of these individual tasks. And secondly, it allows a bit more flexibility in using the system.Macaw achieves this by using a combination of slots as its inputs and outputs. These slots are the Context, Question, Multiple-choice options, Answer and Explanation. By using different angles or combinations of these slots as the input, a different, often more accurate output can be generated. (see figure 1 below)For example, you might input a question along with its context in order to get an answer. Or you might give Macaw a question, an answer and the context and the system would return a set of multiple-choice options as its output. Macaw can even generate explanations to accompany its answers, though the studys researchers consider these to be of lower quality than the other kinds of results the model generates. Weve used it for generating explanations for questions and answers, Clark explains. So, we can say, we have an answer to this question. Can you explain it for us? And Macaw was able to do that as well.Macaws output is further improved by recursively assembling its inputs and outputs in different combinations, so they can be fed back into the system, often improving the accuracy of the final output. The result is a much stronger zero-shot performance. Zero-shot in this context refers to generating answers to questions for which Macaw has no prior labeled examples. This amounts to a kind of inference, a variation of the kind of reasoning people perform, reaching conclusions based on evidence. While its no surprise the system isnt as good as we are at this, its still impressive. Though Macaw reaches its answers very differently from how we do, its a little analogous to our own reasoning. Several pieces of information are often more helpful than a single item or data point, even though they may not all be directly relevant. Different contexts may also alter the conclusions we reach. At a certain level, the same can be said for Macaw.One of the ongoing challenges in artificial intelligence is to give it general common sense about the world, much as people have. To this end, AI2 has its Mosaic project, a team led by Yejin Choi that focuses on developing machine common sense reasoning. But Macaw also demonstrates a considerable degree of common sense as a result of its being trained on millions of real-world questions and answers. Combined with its ability to perform zero-shot reasoning, its feasible that Macaw and other common sense systems could one day support each other, contributing to and reinforcing each others capabilities.Clark acknowledges this. There is a huge overlap and our two teams do work very closely together, he said. Details about Macaws approach and methods can be found in the study paper, General-Purpose Question-Answering with Macaw by Oyvind Tafjord and Peter Clark, both of AI2. 
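For readers who want to poke at the model directly, a minimal sketch of querying Macaw through the Hugging Face transformers library follows. The checkpoint name and the "$slot$"-style angle syntax are assumptions based on AI2's public release, so check the Macaw GitHub repository for the exact identifiers and input format before relying on this.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed checkpoint name; AI2 has published Macaw weights in several sizes.
tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

# One "angle": ask for an answer slot given only a question slot.
prompt = "$answer$ ; $question$ = Which force pulls a dropped ball toward the ground?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# Other angles reorder the slots, e.g. supplying an answer and context and asking
# the model to generate multiple-choice options or an explanation instead.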
| Information Retrieval Or Search/Prediction | Unknown | null | null | null | null | null | null |
|
news | Tachyum Selected for Pan-European Project Enabling 1 AI Zettaflop in 2024 | LAS VEGAS--(BUSINESS WIRE)--Tachyum Selected for Pan-European Project Enabling 1 AI Zettaflop in 2024 | https://www.businesswire.com/news/home/20220125005273/en/Tachyum-Selected-for-Pan-European-Project-Enabling-1-AI-Zettaflop-in-2024 | 2022-01-25T13:06:51Z | LAS VEGAS--(BUSINESS WIRE)--Tachyum today announced that it was selected by the Slovak Republic to participate in the latest submission for the Important Projects of Common European Interest (IPCEI), to develop Prodigy 2 for HPC/AI. Prodigy 2 for HPC/AI will enable 1 AI Zettaflop and more than 10 DP Exaflops computers to support superhuman brain-scale computing by 2024 for under 1B. As part of this selection, Tachyum could receive a 49 million Euro grant to accelerate a second-generation of its Tachyum Prodigy® processor for HPC/AI in a 3-nanometer process.The IPCEI program can make a very important contribution to sustainable economic growth, jobs, competitiveness and resilience for industry and the economy in the European Union. IPCEI will strengthen the EUs open strategic autonomy by enabling breakthrough innovation and infrastructure projects through cross-border cooperation and with positive spill-over effects on the internal market and society as a whole.Even though global semiconductor demand has exploded, Europe's share of the market has shrunk. By announcing the proposed European Chips Act, President Ursula von der Leyen sent a strong economic signal to EU member states about the need to achieve technological sovereignty. The Act builds on the European Alliance on Processors and Semiconductor Technologies which was launched in July 2021. The strategic project that Tachyum has been selected for will result in a major inflection point for the data center industry and the democratization of AI.According to Europe 2020s economic strategy, IT is one of the key drivers for smart, sustainable and inclusive growth. Its ability to transform the structures and dynamics of European society enables people to organize their lives and businesses in new ways, manage information and learn throughout their lives, and contribute to the pool of online knowledge. Because of the importance of these projects, pan-European research and innovation, like that in which Tachyum has been invited to participate, represent an advancement of common European interests.Tachyums Prodigy processor can run HPC applications, convolutional AI, explainable AI, general AI, bio AI, and spiking neural networks, plus normal data center workloads, on a single homogeneous processor platform, using existing standard programming models. Without Prodigy, hyperscale data centers must use a combination of disparate CPU, GPU and TPU hardware, for these different workloads, creating inefficiency, expense, and the complexity of separate supply and maintenance infrastructures. Using specific hardware dedicated to each type of workload (e.g. data center, AI, HPC), results in underutilization of hardware resources, and more challenging programming, support, and maintenance. 
Prodigys ability to seamlessly switch among these various workloads dramatically changes the competitive landscape and the economics of data centers.As we near the completion of our first phase of delivering Prodigy to market in order to transform data centers throughout the world, we have already begun work on the next generation of Prodigy processors that will enable us to help transform economic opportunities for society at large, said Dr. Radoslav Danilak, founder and CEO of Tachyum. The importance of AI and supercomputers cannot be underestimated. Being selected as a key participant in this IPCEI allows us to reimagine how we can best revolutionize industry by developing a 3nm version of Prodigy that will enable superhuman brain-scale computing. We look forward to working with the commission and contributing to the success of this project.Follow Tachyumhttps://twitter.com/tachyum https://www.linkedin.com/company/tachyum https://www.facebook.com/Tachyum/About TachyumTachyum is transforming AI, HPC, public and private cloud data center markets with Prodigy, the worlds first Universal Processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When Prodigy processors are provisioned in a hyperscale data center, they enable all AI, HPC, and general-purpose applications to run on one hardware infrastructure, saving companies billions of dollars per year. A fully functional Prodigy emulation system is currently available to select customers and partners for early testing and software development. With data centers currently consuming over 3% of the planets electricity, predicted to be 10% by 2025, the ultra-low power Prodigy Universal Processor is critical, if we want to continue doubling worldwide data center capacity every four years. Tachyum, Co-founded by Dr. Radoslav Danilak with its flagship product Prodigy, is marching towards tape out and chip sampling in 2022, with software emulations and an FPGA-based emulator running native Linux available to early adopters. The company is building the worlds fastest 64 AI exaflops supercomputer in 2022 in the EU with Prodigy chips. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/. | Unknown | Unknown | null | null | null | null | null | null |
||
news | PR Newswire | AI-Enabled E-Commerce Solutions Market worth US$ 16.8 Billion by 2030 - Exclusive Report by InsightAce Analytic | The newly published report titled "Global AI-Enabled E-Commerce Solutions Market– By Trends, Industry Competition/Company Profiles Analysis, Revenue (US... | https://finance.yahoo.com/news/ai-enabled-e-commerce-solutions-113000291.html | https://s.yimg.com/uu/api/res/1.2/WKwbqK3i7twRn3pvlDLYiA--~B/aD0yMDA7dz0yMDA7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/prnewswire.com/6b4d9c5cf03ecf3ef815aa7e40c40b29 | 2022-02-02T11:30:00Z | JERSEY CITY, N.J., Feb. 2, 2022 /PRNewswire/ -- The newly published report titled "Global AI-Enabled E-Commerce Solutions Market By Trends, Industry Competition/Company Profiles Analysis, Revenue (US$ Billions) and Forecast Till 2030." features in-depth industry analysis and an extensive market study, exploring its significant factors.According to InsightAce Analytic's latest market intelligence research report, the global AI-Enabled E-Commerce Solutions Market size was valued at US$ 3.71 Billion in 2021, and it is expected to reach US$ 16.8 Billion in 2030, record a promising CAGR of 15.7% from 2021 to 2030.Request Sample Report:https://www.insightaceanalytic.com/request-sample/1198Software development can be easier with ML and A.I. technologies, making predictive analytics more accurate. The AI-based platform enables a retailer to increase its sales target by reaching the right customer with critical analysis based on the information collected. E-commerce A.I. has transformed the online shopping field with features such as image search, customer-focused search, re-identifying potential customers, visual assistants, and bid data analytics. Newly developed A.I. applications consider various parameters such as purchasing history, product searches, and demographics of customers to measure future buying trends and make product recommendations based on browsing patterns, likely to drive the market growth.The fast implementation of cloud-based platforms, reduced rate of manual errors in development processes, surging use of machine learning-based applications, cost-effective procedures, rapid adoption of advanced technologies, easy access to real-time data, quick resolution of complaints through artificial intelligence-enabled chat boxes, and increasing government support for the R&D of AI-based platforms are estimated to drive the AI-enabled E-Commerce solutions market over the projected period. Additionally, the recent emergence of Covid-19 has had a significant impact on the AI-enabled E-Commerce solutions market as it has created the need for warehouse automation and management. However, factors such as the complex and time-consuming development procedures, high cost of A.I. solutions, and the lack of skilled professionals may hinder the market growth in the upcoming years.In terms of Region, North America will dominate the AI-enabled E-Commerce solutions market in upcoming years. It will continue its trend over the forecast period 2022-2030, attributed to the fast adoption of advanced technologies, rising online shopping trend, easy access to cloud applications or platforms, increasing R&D investments, and entry of new players in the market. On the other hand, in Europe, growing partnerships within key players and fast adoption of cloud-based platforms will help bloom the market. Asia-Pacific's market is going to expand faster during the forecast years. 
This growth is attributable to changing lifestyles, technology advancement specifically in Artificial Intelligence, and the increasing rate of online shopping due to digital transformations.Request for ToC/Proposal:https://www.insightaceanalytic.com/report/global-ai-enabled-e-commerce-solutions-market/1198Major players included in the AI-enabled E-Commerce solutions market are Riskified, Reflektion, Inc., Shelf.ai, Osaro, Sift, AntVoice SAS, Appier Inc, Eversight, Inc., Granify Inc., LivePerson, Inc., Manthan Software Services Pvt. Ltd., PayPal, Inc., Sidecar Interactive, Inc., Tinyclues SAS, Twiggle Ltd., Celect, Inc., Cortexica Vision Systems Ltd., Crobox B.V., Deepomatic SAS, Dynamic Yield Ltd., Emarsys eMarketing, Systems AG, Satisfi Labs, Inc., Staqu Technologies Pvt. Ltd., ViSenze Pte Ltd., and Other Prominent Players. Product innovations, acquisitions, collaborations, partnerships, and R&D activities are key strategies used by players in this market.In July 2021, LivePerson, Inc. acquired German conversational A.I. company e-bot7. The strategic acquisition propels LivePerson's self-service capabilities empowering brands of all sizes to quickly launch AI-powered messaging experiences as well as its continued growth across Europe.In February 2021, LivePerson, Inc. launched A.I. Annotator, a new tool automating brand-consumer conservations faster than ever by harnessing agents' expertise to improve Conversational A.I.In August 2019, Nike acquired Boston-based predictive analytics company Celect (A.I. platform), marking its latest acquisition in a string of deals to bolster its direct-to-consumer strategy. This acquisition allowed Nike to integrate their inventories with the website and app of the company.In August 2017, PayPal launched new Innovation Labs at the Chennai and Bangalore Tech centres. The lab is the first by PayPal in India and it is third after the USA and Singapore. The lab works as a platform to promote innovation which is a core value for PayPal globally. The labs support Machine Learning, A.I., Data Science, IoT, AR, V.R., and basic robotics projects.Get the report? 
@ https://www.insightaceanalytic.com/customisation/1198AI-Enabled E-Commerce Solutions Market SegmentsAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on TechnologyDeep LearningMachine LearningNLPAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on ApplicationsCustomer Relationship ManagementSupply Chain AnalysisFake Review AnalysisWarehouse AutomationMerchandizingProduct RecommendationCustomer ServiceAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on DeploymentAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on RegionNorth AmericaEuropeAsia PacificLatin AmericaMiddle East & AfricaNorth America AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030Europe AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030GermanyFranceItalySpainRussiaRest of EuropeAsia Pacific AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030IndiaChinaJapanSouth KoreaAustralia & New ZealandLatin America AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030MexicoRest of Latin AmericaThe Middle East & Africa AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030South AfricaRest of the Middle East & AfricaWhy should buy this report:To receive a comprehensive analysis of the prospects for the global AI-enabled E-Commerce solutions marketTo receive an industry overview and future trends of the AI-enabled E-Commerce solutions marketTo analyze the AI-enabled E-Commerce solutions market drivers and challengesTo get information on AI-enabled E-Commerce solutions market size value (US$ Mn) forecast till 2030Significant investments, mergers & acquisitions in the AI-enabled E-Commerce solutions market industryFor More Information @https://www.insightaceanalytic.com/enquiry-before-buying/1198Other Related Reports Published by InsightAce Analytic:Global Next-Generation Personalized Beauty MarketGlobal Artificial Intelligence (A.I.) In Beauty and Cosmetics MarketGlobal Artificial Intelligence in Genomics MarketAbout Us:InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions inform the need for market and competitive intelligence to expand businesses. We help clients gain a competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets, and repositioning products. Our expertise is in providing syndicated and custom market intelligence reports with an in-depth analysis with key market insights in a timely and cost-effective manner.Contact Us:Priyanka TilekarInsightAce Analytic Pvt. Ltd.Tel : +1 551 226 6109Asia: +91 79 72967118Visit: www.insightaceanalytic.comEmail: [email protected] Us on LinkedIn @ bit.ly/2tBXsgSFollow Us On Facebook @ bit.ly/2H9jnDZView original content:https://www.prnewswire.com/news-releases/ai-enabled-e-commerce-solutions-market-worth-us-16-8-billion-by-2030---exclusive-report-by-insightace-analytic-301473632.htmlSOURCE InsightAce Analytic Pvt. Ltd. | Prediction/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null |
news | Kyle Wiggers | o9 Solutions raises $295M to apply analytics to the supply chain and beyond | o9 Solutions, a company developing analytics solutions for supply chain challenges and beyond, has raised a whopping $295 million in capital. | https://venturebeat.com/2022/01/26/o9-solutions-raises-295m-to-apply-analytics-to-the-supply-chain-and-beyond/ | 2022-01-26T11:07:25Z | Did you miss a session from the Future of Work Summit? Head over to our Future of Work Summit on-demand library to stream.Investors are throwing capital behind supply chain products as the appetite for ecommerce explodes. Supply chain startups raised $24.3 billion in venture funding during the first three quarters of 2021 alone, according to Pitchbook 58% more than the full-year total for 2020. Many of the companies drawing big investments focus on managing warehouses, matching freight loads to transportation, and mapping out cost-effective routes solutions for which demand is rising due to both the climbing cost of logistics and rise of bottlenecks, the Wall Street Journal notes in a recent piece.The many beneficiaries of boom include U.K.-based digital supply chain and freight platform maker Beacon and Altana AI, a startup creating a platform to unify global supply chain data. Other companies including Verusen, Paxafe, and NextBillion, and SourceDay have collectively raised tens of millions of dollars of capital. The investment craze has extended to the public market, where issuers like ProShares and Breakwave now offer exchange-traded funds that track the index of companies involved in goods and raw materials shipping.One of the more successful players in the market is o9 Solutions, a Dallas, Texas-based company that applies AI to help organizations plan their supply chains and more. o9 today announced that it raised $295 million from General Atlantic and General Atlantics BeyondNetZero and Generation Investment Management with participation from existing investors including KKR, valuing the company at $2.7 billion.Supply chain analyticso9 was founded in 2009 by Sanjiv Sidhu and Chakri Gottemukkala. Sidhu, a former member of the AI technical staff at Texas Instruments, previously founded i2 Technologies, which developed supply chain management software in the early 1990s and 2000s. Gottemukkala served in a range of roles at i2 across product development, sales, and strategy.After JDA Software acquired i2 in 2008, Sidhu says he saw an opportunity to employ technologies like AI, machine learning, and analytics to build a platform focused on tackling major supply chain and business intelligence problems. He and Gottemukkala formed a team and consulted potential customers to begin developing o9s platform, which launched in late 2014. Today, o9s platform supports sales and marketing decision-making in addition to supply chain management and planning.The sweeping effects of global supply chain shortages, climate change and a raging pandemic that can be felt at the individual level underscore the fact that we are clearly at an inflection point, Gottemukkala said in a statement. Our AI-powered, cloud-native o9 [platform] was born from the need to give organizations the ability to make faster, more integrated business decisions that create customer value and drive better financial results while making efficient use of the planets precious resources. 
Our purpose is to develop the best platform and solutions to help our clients in this critical pursuit.Fundamentally, o9 is an analytics platform designed to run on public cloud providers e.g., Amazon Web Services, Google Cloud Platform, or Microsoft Azure with prebuilt predictive models tailored for particular scenarios. o9 can draw on data to drive forecasting from both unstructured and structured internal data sources, including customer relationship management software, procurement apps, warehouse and factory machines and internet of things sensors. It can also connect to external sources, tapping into consumer market research, point-of-sales systems, and even smartphone hardware.Over time, o9 reconciles the data to create a knowledge graph. Like other knowledge graphs, o9s represents a network of objects, events, situations, or concepts and illustrates the relationship between them putting data in context and providing a framework for analysis.A commercial planning dashboard built by o9 Solutions.[T]he o9 platform was designed as an open platform, allowing companies to leverage new sources of data and new algorithms for completely new use cases, the company explains on its website. Beyond supply chain management and supply chain logistics for retail, o9 offers models for revenue management, integrated business planning, merchandising and assortment management.Applied analyticsIn many ways, o9, whose customers include Anheuser-Busch InBev, Caterpillar, and Walmart, competes not only with supply chain management solutions but with platforms like Fractal Analytics, which ingest data from disparate sources to anticipate trends in various markets and lines of business. Other vendors in the big data analytics segment include Noogata, Imply, Unsupervised, Pecan.ai, Tata Consultancy Services, Wipro, Tredence, LatentView, and Mu Sigma. Big data analytics refers to the use of analytic techniques to make sense of large, diverse datasets that include structured, semi-structured, and unstructured data from different sources and in different sizes, ranging from terabytes to zettabytes.Despite the potential of and record investment in big data analytics, some research paints a mixed picture of its return on investment. A 2021 NewVantage Partners report found that only 24% of executives believe their organizations have realized the goal of becoming data-driven, with cultural barriers including organizational alignment, business processes, change management, communication, people skill sets, and resistance or lack of understanding presenting the biggest hurdles.As Harvard Business Review explored in a 2013 piece, big data is often hyped so heavily that companies are expecting it to deliver more value than it actually can. Turning insights into competitive advantage requires changes that businesses might be incapable of making. And most companies dont do a good job with the information they already have according to Forrester, between 60% and 73%. of all data within an organization goes unused for analytics.[W]hile our most recent survey found that businesses are beginning to reap AI [and analytics] benefits, the reality is theyre not often seeing a financial return or worse, not even covering their investments, PricewaterhouseCoopers analysts wrote in a July 2021 report. 
Compounding the challenge is the fact that many organizations struggle to define ROI for AI [and analytics] in the first place.Still, the global big data and business analytics segment could be worth approximately $684 billion by 2030, according to Valuates Reports assuming that the current trend holds. o9 claims that one of the worlds largest beer companies used its platform to run demand and supply algorithms that reduced supply chain costs and bolstered inventory across more than 32 countries. Another customer an American clothing and home decor retailer tapped o9 to replace manual and Excel-driven processes for inventory planning and management.Not only is an agile, intelligent, and resilient supply chain one of the most important growth accelerators, it also inherently leads to a reduced carbon footprint especially for organizations that operate on a global scale, Sidhu said in a press release, placing on emphasis on o9s ostensible potential to improve supply chain efficiency. A sustainable supply chain requires companies to digitally transform their planning and decision-making capabilities.o9 Solutions, which took on its first external financing in April 2020, from KKR (which took a minority stake), has raised $200 million in capital to date. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn More | Content Synthesis/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null |
news | PR Newswire | NIH funds CUNY SPH and West Point AI Center for Precision Nutrition and Health | The National Institutes of Health (NIH) has awarded the CUNY Graduate School of Public Health and Health Policy (CUNY SPH) and the United States Military... | https://finance.yahoo.com/news/nih-funds-cuny-sph-west-175800550.html | https://s.yimg.com/uu/api/res/1.2/7IDf40ha6STyJ4yMmigkIQ--~B/aD0xNjt3PTE2O2FwcGlkPXl0YWNoeW9u/https://media.zenfs.com/en/prnewswire.com/20450e317d237ed4008db5f55a177492 | 2022-01-20T17:58:00Z | The center will develop and use new computational, data science, and tech approaches to advance precision nutrition, improve health and reduce chronic diseases.NEW YORK, Jan. 20, 2022 /PRNewswire/ -- The National Institutes of Health (NIH) has awarded the CUNY Graduate School of Public Health and Health Policy (CUNY SPH) and the United States Military Academy at West Point an estimated $8.1 million over five years, pending available funds, to establish the world's first artificial intelligence (AI) and computational modeling center for precision nutrition and health. Precision nutrition is an emerging area aimed at better tailoring diets to different people's characteristics and circumstances to achieve better health outcomes. This award is part of the Nutrition for Precision Health, powered by the All of Us Research Program (NPH) initiative, a $170 million NIH-wide effort and first independent study that will recruit a diverse pool of participants from All of Us to inform more personalized nutrition recommendations. The NPH and the center are part of the NIH's Common Fund, a special program aimed at catalyzing multiple biomedical disciplines.The center will develop state-of-the-art AI, machine learning (ML), Big Data methods, and other data science approaches to better understand and improve diet and nutrition. This will include new ways to better understand how individuals have different dietary needs and avoid potential biases and disparities that may result from various nutrition recommendations. The center will be co-led by two world-renowned AI and computational modeling experts, Bruce Y. Lee, MD, MBA, professor of health policy and management at CUNY SPH and executive director of PHICOR (Public Health Informatics, Computational, and Operations Research) and Diana M. Thomas, PhD, professor and research chair of mathematics at West Point."As the nation's leading urban public university, CUNY is proud to help drive cutting-edge research, in partnership with West Point, that aims to bring more equity to nutrition and health approaches," said CUNY Chancellor Félix V. Matos Rodríguez. "The Nutrition for Precision Health initiative will leverage the expertise of CUNY SPH as well as the University's great diversity, reach and dedication to social justice. With their renowned work in artificial intelligence and computational modeling, Drs. Lee and Thomas are the ideal scholars to lead this ambitious new center.""Our society is at a key inflection point," says Lee. "We now have much more data and technology available to guide diet and nutrition in ways that have not been previously done. This could greatly improve the health of people around the world; however, if not done correctly, it could worsen health outcomes and deepen disparities in health.""This is the first time that leading experts in data science, statistics, and systems modeling will collaborate with the top nutrition clinical and nutrition research centers in the U.S.," says Thomas. 
"The effort is unique and extremely timely as we can combine new AI approaches with unprecedented levels of computing power to develop algorithms for personalized nutrition guidance."Media Contact: Sarah [email protected] original content:https://www.prnewswire.com/news-releases/nih-funds-cuny-sph-and-west-point-ai-center-for-precision-nutrition-and-health-301465204.htmlSOURCE CUNY SPH | Content Synthesis/Discovery | Life, Physical, and Social Science/Healthcare Practitioners and Support | null | null | null | null | null | null |
news | delton137 | How I'm thinking about GPT-N | Published on January 17, 2022 5:11 PM GMT

There has been a lot of hand-wringing about accelerating AI progress within the AI safety community since OpenAI's publication of their GPT-3 and Scaling Laws papers. OpenAI's clear explication of scaling provides a justification for researchers to invest more in compute and provides a clear path forward for improving AI capabilities. Many in the AI safety community have rightly worried that this will lead to an arms race dynamic and faster timelines to AGI.

At the same time there's also an argument that the resources being directed towards scaling transformers may have counterfactually been put towards other approaches (like reverse engineering the neocortex) that are more likely to lead to existentially dangerous AI. My own personal credence on transformers slowing the time to AGI is low, maybe 20%, but I think it's important to weigh in.

There is also a growing concern within the AI safety community that simply scaling up GPT-3 by adding more data, weights, and training compute could lead to something existentially dangerous once a few other relatively simple components are added.

I have not seen the idea that scaling transformers will lead to existentially dangerous AI (after combining with a few other simple bits) defended in detail anywhere, but it seems very much an idea "in the water" based on the few discussions with AI safety researchers I have been privy to. It has been alluded to in various places online as well:

Connor Leahy has said that a sufficiently large transformer model could serve as a powerful world model for an otherwise dumb and simple reinforcement learning agent, allowing it to rapidly learn how to do dangerous things in the world. For the record, I think this general argument is a super important point and something we should worry about, even though in this post I'll mainly be presenting reasons for skepticism.

Gwern is perhaps the most well-known promoter of scaling being something we should worry about. He says "The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is 'just' simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale."

Observe the title of Alignment Newsletter #156: "The scaling hypothesis: a plan for building AGI". Note: I'm not sure what Rohin Shah's views are exactly, but from what I read they are pretty nuanced.

Zac Hatfield-Dodds (an engineer at Anthropic) commented on 16 July 2021: "Now it looks like prosaic alignment might be the only kind we get, and the deadline might be very early indeed."

lennart: "The strong scaling hypothesis is stating that we only need to scale a specific architecture, to achieve transformative or superhuman capabilities — this architecture might already be available."

MIRI is famously secretive about what they are doing, but they've been pretty public that they've made a shift towards transformer alignment as a result of OpenAI's work. Eliezer Yudkowsky told me he thinks GPT-N plus "a few other things" could lead to existentially dangerous AI (personal communication that I believe is consistent with his public views as they were expressed recently in the published MIRI conversations).

I do think a GPT-N model or a close cousin could be a component of an existentially dangerous AI. A vision transformer could serve a role analogous to the visual cortex in humans.
A GPT-type model trained on language might even make a good "System 1" for language, although I'm a little less certain about that. So it definitely makes sense to devote a substantial amount of resources to transformer alignment when thinking about how to reduce AI x-risk.

While I've seen a lot of posts making the bullish case on LessWrong and the EA Forum, I've seen fewer posts making a bearish case. The only ones I have seen are a series of insightful and interesting posts from nostalgebraist. [Interestingly, the bearish points I argue are very much distinct from the lines of attack nostalgebraist takes, so it's worth looking at his posts too, especially his last one.] Another reason for writing this stems from my suspicion that too many AI safety resources are being put towards transformer alignment. Transformers are taking over AI right now, but I suspect they will be overtaken by a completely different architecture and approach soon (some strong candidates to take over in the near term are the Perceiver architecture, Hopfield networks, energy-based models, genetically/evolutionarily designed architectures, gated multi-layer perceptrons, and probably others I'm missing). The fact is we don't really have any understanding of what makes a good architecture, and there is no good reason to think transformers are the final story. Some of the transformer alignment work (like dataset sanitization) may transfer to whatever architecture replaces transformers, but I don't think we can predict with any certainty how much of it will transfer to future architectures and methods.

Given the number of AI safety orgs and academics already working on transformer alignment, I question if it is a good investment for EAs on the current margin. A full discussion of neglectedness is beyond the scope of this post; however, you can look at this EA Forum post that touches on the academic contribution to transformer alignment, and I'll note there is also much work on aligning transformers going on in industry too.

Summary of main points

Transformers, like other deep learning models that came before, appear to work primarily via interpolation and have trouble finding theories that extrapolate. Having the capability to find theories that can extrapolate is at the very least a key to scientific progress and probably a prerequisite for existentially dangerous AI.

A recent paper shows CNNs have trouble Grokking Conway's Game of Life. Discussing the Rashomon effect, I make the case that Grokking will be a pretty circumscribed / rare phenomenon.

The degree to which GPT-3 can do common sense reasoning seems extremely murky to me. I generally agree with people who have said GPT-3 mostly does System 1 type stuff, and not System 2 stuff.

There are numerous other problems with transformers which appear solvable in the near term, some of which are already well on their way to being solved.

The economic utility of very large transformer models is overhyped at the moment.

Hypothesis: transformers work by interpolation only

[Figure: Some double descent curves, from [1].]

Double descent is a phenomenon which is critical to understanding how deep learning models work. Figure 1 shows double descent curves for two language models from OpenAI's "Deep Double Descent" paper,[1:1] which Evan Hubinger has summarized on LessWrong. Notice how the test loss first decreases, bottoms out, and then increases. The error bottoms out and starts to increase because of overfitting.
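To make the shape of this experiment concrete, here is a minimal sketch of a capacity sweep in scikit-learn. Everything specific here (the toy dataset, the noise level, the widths) is an illustrative assumption of mine rather than the setup of the cited papers, and a run this small may not show the second descent clearly; the point is simply what gets varied and what gets measured.

```python
# Sketch of a model-capacity sweep of the kind used to plot double descent curves.
# Dataset, noise, and widths are illustrative choices, not the cited papers' setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 1))
y_train = np.sin(3 * X_train[:, 0]) + 0.3 * rng.normal(size=50)  # small, noisy training set
X_test = rng.uniform(-1, 1, size=(500, 1))
y_test = np.sin(3 * X_test[:, 0])

for width in [2, 8, 32, 128, 512, 2048]:
    model = MLPRegressor(hidden_layer_sizes=(width,), max_iter=5000, tol=1e-6, random_state=0)
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))  # approaches zero past the interpolation threshold
    test_err = mean_squared_error(y_test, model.predict(X_test))     # traces the U-shape (and, in larger sweeps, a second descent)
    print(f"width={width:5d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")
```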
This rise in test loss is the bias-variance trade-off, which can be derived from the classical theory of statistical modeling. Notice however that as model size continues to increase, the test loss curve bends back down. This is the double descent phenomenon. At large enough model size the test loss eventually becomes lower than it was in the regime where the bias-variance trade-off applied, although you can't see it in this particular figure.

Notice that the double descent test loss curve peaks when the training loss bottoms out near zero. This is the interpolation threshold. The model has memorized the training data precisely (or nearly so; in CNNs it is typical for the training loss to reach precisely zero).

An important point about interpolation is that it works locally. Algorithms that work via interpolation are incapable of discovering global trends. My favorite illustration of this is the following figure:[2]

[Figure from Hasson et al.: Direct fit to nature.[2:1]]

No matter how many parameters or data you put in a neural network, it will never figure out that the underlying trend is y = x^2.

What deep learning models appear to do, in effect, is dimensionality reduction to a lower-dimensional manifold followed by piece-wise linear interpolation, which is very similar to k-nearest neighbors. If I understand things correctly, Trenton Bricken has shown something similar for transformers, by drawing out a mathematical correspondence between the attention mechanism in transformers and sparse distributed memory, a high-level model of how memory works in the brain (the main difference is that transformer representations aren't actually sparse).[3]

At least three forms of double descent have been discovered. The first occurs as you increase the number of parameters. The second occurs during training -- oddly enough, during training a model's test error can get better, then worse, and then better again! (It seems historically this was hidden by the widespread practice of early stopping.) The last occurs as more training data is added.

Why do I bring up these other forms of double descent? Mainly to point out that this is evidence these systems are very different from biological brains. Imagine working through some flashcards and then getting worse after a certain point. Or imagine a situation where adding more flashcards to the deck actually makes you worse at a language. These odd properties of transformers (which are shared with most if not all deep learning models) are clearly sub-optimal, which leads me to assign higher credence to the view that eventually transformers (and a lot of other deep learning stuff) will be replaced by something significantly different.

CNNs trained past the interpolation threshold memorize their training data (input-label relationships, assuming one-to-one correspondence). Memorization is a big part of how GPT-3 works, too. When unprompted, about 1% of the text produced by large language models is copied verbatim from the training corpus.[4] (As a reminder of some of the relevant numbers: GPT-3 has 175 billion parameters, and its training data was ~45 TB of compressed plaintext before filtering.) Using adversarial techniques, it may be possible to extract specific data about people, etc., that is in the training data.[5] It appears that as models get larger they memorize more - the extraction of people's names and personal information from much smaller models like BERT was found to be difficult. OpenAI's Codex seems to utilize a lot of memorization, often returning small code samples verbatim that were in the training data from GitHub.
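A crude way to quantify this kind of verbatim copying is plain substring matching between model samples and the training corpus. The sketch below is a much-simplified stand-in for the methodology of the deduplication and extraction papers cited above; the span length and file names are placeholder assumptions of mine.

```python
# Rough sketch: estimate how much of a generated sample is copied verbatim from a
# training corpus via n-word substring matching. Span length and file names are
# placeholder assumptions, not the settings used in the cited papers.
def verbatim_fraction(sample: str, corpus: str, span_words: int = 8) -> float:
    words = sample.split()
    if len(words) < span_words:
        return 0.0
    spans = [" ".join(words[i:i + span_words]) for i in range(len(words) - span_words + 1)]
    copied = sum(1 for s in spans if s in corpus)  # naive scan; real pipelines index the corpus (e.g. with suffix arrays)
    return copied / len(spans)

corpus_text = open("training_corpus.txt").read()    # hypothetical corpus dump
generated_text = open("model_samples.txt").read()   # hypothetical unprompted model samples
print(f"{verbatim_fraction(generated_text, corpus_text):.1%} of generated 8-word spans appear verbatim in the corpus")
```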
Some memorization is of course necessary and important (for instance models need to memorize how to spell words). However, when a lot of a model's capabilities come from memorization, I tend to be less impressed. On the other hand, perception and System 1 in the brain also seem to rely on a lot of brute-force memorization and interpolation.[2:2]

Sometimes GPT-3's interpolation abilities can be quite impressive: for instance, Alyssa Vance gave the prompt "Early this morning, in a shocking surprise attack, the international credit card and finance company Visa launched a full-scale invasion of the island nation of Taiwan" and GPT-3's continuation is quite impressive. So is GPT-2's "extrapolation" of Ginsberg's Moloch. However, this is only extrapolation in a loose sense; a different way of looking at it may be "interpolation within the space of Moloch and Moloch-like sentences". In general, though, it appears that transformers struggle to find models/theories/explanations that extrapolate, that reach outside the context they were discovered in to give a truly new prediction. The best examples of such theories are in science. The generation of such theories seems to often require a creative leap and can't be done just by brute-force fitting or induction (more on this below). A different way of saying this is that by optimizing for an objective (like next-word prediction) you don't explore the landscape of possible models/theories enough (cf. Kenneth Stanley's well-known arguments about this). [My favorite example of a creative leap, by the way, is when a Greek astronomer hypothesized that the stars are glowing orbs like the sun, just very far away.]

To give a simple example of the distinction I'm trying to flesh out here - Galileo observed that the period of a pendulum doesn't depend on the amount of mass attached to it but does depend on the length (longer length = longer period), which are two high-level rules / principles that extrapolate. Could a GPT-N model, reading about the properties of pendulums, come up with similar rules and apply them consistently? I have a hard time believing that it would, at least not without a ton of training data. On the other hand, humans had trouble discovering this simple law too (pendulums, I think, were around long before Galileo). A better thing to look at here is how GPT-3 particularly struggles with multiple-choice conceptual physics questions at the ~High School / Early College level, achieving only 35% accuracy (random guessing = 25%). For college-level physics questions it does just barely better than random chance.[6] Learning how to think abstractly and apply a small number of powerful rules and principles to an infinite number of diverse situations is the key to doing physics.
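For context on how results like these are typically obtained, one common way to score a language model on multiple-choice questions is to compare the model's likelihood of each answer letter given the question. The sketch below shows the general shape of that procedure; `choice_logprob` is a hypothetical stand-in for a real model API, not an actual library function, and the example question is my own.

```python
# Sketch of multiple-choice scoring: pick the answer letter the model assigns the
# highest probability to, given the question and choices as context.
# `choice_logprob` is a hypothetical stand-in for a real language-model API call.
def choice_logprob(prompt: str, continuation: str) -> float:
    """Hypothetical: log-probability the model assigns to `continuation` after `prompt`."""
    raise NotImplementedError("wire this up to an actual language model")

def answer_question(question: str, choices: dict) -> str:
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in choices.items()) + "\nAnswer:"
    scores = {letter: choice_logprob(prompt, " " + letter) for letter in choices}
    return max(scores, key=scores.get)

question = "A ball is thrown straight up. At the very top of its flight, its acceleration is"
choices = {"A": "zero", "B": "9.8 m/s^2, directed downward", "C": "9.8 m/s^2, directed upward", "D": "undefined"}
# Benchmark accuracy is then just the fraction of questions where answer_question()
# returns the labeled answer (here "B").
```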
My guess is that the equations and principles of physics were in GPT-3's training data, along with some physics problems; they were just a tiny part, so the model didn't prioritize them much.

To try to summarize these various points, I think there's a fairly strong argument that GPT-3, like other deep learning models, works mainly via some form of interpolation between stuff in its training data, and this constitutes a significant limitation which makes me less concerned about a scaled-up GPT-like model being existentially dangerous.

There is an important exception to all this, however, where GPT-N does discover rules that extrapolate, called "Grokking":

Why I'm not so worried about Grokking and emergent behavior during scaling

For most tasks in the GPT-3 paper, performance scales smoothly with model size. For a few, however, there are sudden jumps in performance. The tasks exhibiting significant jumps were addition, subtraction, and symbol substitution. Labeling these jumps "phase changes" is a terrible abuse of terminology - on closer inspection they are not at all discontinuous jumps, and the term misleadingly suggests the emergence of a new internal order (phase changes should occur uniformly throughout a medium/space - the interpolation threshold in double descent may be a sort of phase change, but not Grokking).

More recently it has been shown that with enough training data and parameters, simple transformer models can learn how to reproduce certain mathematical transformations exactly.[7] During training, the models exhibit jumps upwards to 100% accuracy, with varying degrees of sharpness in the jump. The authors call this "grokking". The set of transformations they studied involved addition, subtraction, multiplication, division, and the modulo operator.

As a result of these findings, AI safety researchers are worried about unexpected emergent behavior appearing in large models as they are scaled up.[8]

Here's the thing about Grokking though -- the network can't Grok (get perfect accuracy) unless the architecture is able to literally do the algorithm and SGD finds it. In the case of transformers, that means the algorithm must be easily decomposable into a series of matrix multiplies (it appears that maybe repeated multiplication is Turing complete, so that's why I stress "easily"). Notice that all the examples of Grokking with transformers involve simple operations that can be decomposed into things like swapping values or arithmetic, which can be easily expressed as a series of matrix multiplications. Division is notably absent from both the Grokking and GPT-3 papers; I wonder why...

But Grokking doesn't always work, even when we know that the network can do the thing easily in principle. This was shown in a paper by Jacob M. Springer and Garrett T. Kenyon recently.[9] (I did a summer internship with Dr. Kenyon in 2010 and can vouch for his credibility.) The authors set up a simple CNN architecture that in principle can learn the rules for Conway's Game of Life, so that, given the right parameters, the CNN can reproduce the Game of Life exactly from an input board state. The network was trained on over one million randomly generated examples, but despite all this data the network could not learn the exact solution. In fact, the minimal architecture couldn't even learn how to predict just two steps out! They then tested what happens when they duplicate the filter maps in several layers, creating m times as many weights as are necessary.
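To make the setup concrete, here is a sketch of the Game of Life update rule (the ground truth the network is asked to learn) and a small two-layer convolutional network of the general kind studied. The exact layer widths below are my own illustrative choice, not necessarily the paper's minimal architecture.

```python
# Sketch: one step of Conway's Game of Life as ground truth, plus a small CNN that is
# in principle expressive enough to represent that step. Widths are illustrative; see
# Springer & Kenyon for the architectures actually tested.
import numpy as np
import torch
import torch.nn as nn

def life_step(board: np.ndarray) -> np.ndarray:
    """Ground-truth Game of Life update on a 2D 0/1 array (with toroidal wrap-around)."""
    neighbors = sum(np.roll(np.roll(board, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((board == 1) & (neighbors == 2))).astype(board.dtype)

class LifeCNN(nn.Module):
    """Two conv layers: in principle enough to compute one Life step from a board."""
    def __init__(self, hidden: int = 2):  # hidden=2 is roughly minimal; larger values are "overcomplete"
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Training data: random boards paired with their true next step.
boards = (np.random.rand(1024, 1, 16, 16) < 0.4).astype(np.float32)
targets = np.stack([life_step(b[0]) for b in boards])[:, None, :, :]
x, y = torch.from_numpy(boards), torch.from_numpy(targets)
# ...train LifeCNN on (x, y) with SGD/Adam and a pixelwise loss; the paper's finding is
# that near-minimal widths routinely fail to recover the exact rule even with plentiful data.
```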
They found that the degree of overcompleteness m scaled very quickly with the number of steps the network could predict.

The authors argue that their findings are consistent with the Lottery Ticket Hypothesis (LTH), which says that deep neural nets must get lucky by having a subset of initial parameters that are close enough to the desired solution. In other words, SGD alone can't always find the right solution - some luck is involved in the initial parameter settings - which explains why bigger models with a larger pool of parameters to work with do better. (I feel compelled to mention that attempts to validate the LTH have produced a mixed bag of murky results, and it remains only a hypothesis, not a well-established theory or principle.)

There is another important fact about data modeling that implies Grokking or even semi-Grokking will be exceptionally rare in deep learning models - the Rashomon effect, first described by Leo Breiman.[10] The effect is simply the observation that for any dataset, there is an infinite number of functions which fit it exactly yet are mechanistically very different from each other. In his original paper, Breiman demonstrates this effect empirically by training a bunch of decision trees which all get equivalent accuracy on a test set but work very differently internally. Any model that works by fitting a ton of parameters to large data is subject to the Rashomon effect. The Rashomon effect implies that in the general case SGD is very unlikely to converge to the true model - i.e., very unlikely to Grok. In fact, I doubt SGD would even find a good approximation to the true model. (By "true model" I mean whatever algorithm or set of equations is generating the underlying data.)

Solomonoff induction tries to avoid "Rashomon hell" by biasing the Bayesian updating towards models with shorter algorithmic descriptions, with the assumption that shorter descriptions are always closer to the truth. [Side note: I'm skeptical of Occam's razor, and how successfully this strategy works in any real-world setup is, to my knowledge, rather poorly understood, which is just one of many reasons Solomonoff induction is a bad model for intelligence in my view.]

Even if biasing towards simpler models is a good idea, we don't have a good way of doing this in deep learning yet, apart from restricting the number of parameters, which usually hurts test set performance to some degree. It used to be thought that SGD sought out "flat minima" in the loss (minima with low curvature), which result in simpler models in terms of how compressible they are, but further studies have shown this isn't really true.[11] So we have reasons to believe transformers will be subject to the Rashomon effect and Grokking will be very hard.

The big debate - to what extent does GPT-3 have common sense?

I don't have a strong interest in wading through the reams of GPT-3 outputs people have posted online, much of which I suspect has been hand-picked to fit whatever narrative the author was trying to push. It's not my cup of tea reading GPT-3 prose/outputs, and Gwern has already done it far more thoroughly than I ever could.

I think the failures are much more illuminating than the successes, because many of the failures are ones a human would never make (for instance answering "four" to "how many eyes does a horse have"). Just as humans are easy to mislead with the Cognitive Reflection Test, especially when sleep-deprived or tired, GPT-3 is very easy to mislead too, sometimes embarrassingly so.
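A tiny harness for this kind of probing is easy to write down; `query_model` below is a hypothetical stand-in for whatever completion API is being tested (it is not a real library function), and the probes are just the flavor of trick questions discussed here, not an established benchmark.

```python
# Sketch of a trick-question probe of "common sense". `query_model` is a hypothetical
# stand-in for a real completion API; the probe set is illustrative, not a benchmark.
def query_model(prompt: str) -> str:
    raise NotImplementedError("call the language model being tested here")

probes = [
    ("Q: How many eyes does a horse have?\nA:", "two"),
    ("Q: Which is heavier, a kilogram of feathers or a kilogram of lead?\nA:", "they weigh the same"),
    ("Q: If I put cheese in the fridge, will it melt?\nA:", "no"),
]

for prompt, expected in probes:
    answer = query_model(prompt)
    print(f"{prompt!r} -> {answer!r} (expected something like {expected!r})")
```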
My favorite examples of such failures come from Alyssa Vance; yet more can be found in Marcus and Davis' MIT Tech Review article.

It seems GPT-3, like its predecessor GPT-2, has some common sense, but mainly only the System 1 gut-reaction type - it still struggles with common sense reasoning. Many have made this observation already, including both Sarah Constantin and Scott Alexander in the context of GPT-2 (as a side note, I highly recommend people read Sarah's brilliant disquisition on System 1 vs System 2 entitled "Distinctions in Types of Thought").

Issues that seem solvable

There are some issues with transformers that appear very solvable to me and are in the process of being solved:

The first is lack of truthfulness. GPT-3 is great at question answering; the issue is it's often plausible but wrong (see Alyssa Vance's post "When GPT-3 Is Confident, Plausible, And Wrong"). Part of this is due to a garbage-in, garbage-out problem with transformers right now, where they mimic human falsehoods that are in their training data.[12] Another issue is just not having enough memory to memorize all the relevant facts people may want to ask about. DeepMind seems to have solved the latter issue with their Retrieval-Enhanced Transformer (RETRO), which utilizes a 2 trillion token database.[13]

A related issue is lack of coherence/lack of calibration. An optimal Bayesian agent considers all possibilities all the time, but any agent with finite resources can't afford to do that - real-world agents have finite memory, so they have to figure out when to forget disproven theories/facts/explanations. In the context of resource-bounded systems, it may be best to stick with a single best explanation rather than trying to hold multiple explanations [as an example, it seems reasonable to disregard old scientific theories once they have been robustly falsified, even though from a Bayesian perspective they still have a tiny amount of non-zero probability attached to them]. Indeed, the human brain seems to have an in-built bias against holding multiple contradictory theories at once (cognitive dissonance). Transformers, on the other hand, often give conflicting answers to similar questions, or even the same question when prompted multiple times. In other situations it makes sense for resource-bounded agents to keep track of multiple theories and weight them in a Bayesian manner. Just as CNNs are not well-calibrated for mysterious reasons, I suspect transformers are not well calibrated either. However, just as there are methods for fixing calibration in CNNs, I suspect there are methods to fix calibration in transformers too.

Another issue is lack of metacognition, or alerting the user about confidence. This is a big problem right now since humans want a question-answering system to give correct answers and know when it doesn't know something or isn't sure. Interestingly, Nick Cammarata figured out that with careful prompting GPT-3 can identify nonsense questions (whether it counts as metacognition isn't very clear). I think this is solvable by tweaking RETRO so it alerts the user when something isn't in its database (maybe it already does this?). As with models like CNNs, where uncertainty can be added via dropout during inference or by adopting Bayesian training, there are probably other ways to add uncertainty quantification to transformers. MIRI's "visible thoughts" approach is another way of attacking this problem.

Another issue is very weak compositionality.
Like RNNs which came before,[14] transformers are really not good at composition, or chaining together a sequence of discrete tasks in a way they haven't seen before. Look, for instance, at how bad OpenAI's Codex model is at chaining together components:[15]

[Figure from the OpenAI Codex paper.[15:1]]

This is very different behavior from humans, where the ability to accurately chain together two things implies the ability to accurately chain together a long sequence of things well. At least intuitively, this seems solvable for many applications of interest by writing ad-hoc hard-coded methods to detect when chaining is needed and then do it.

The final issue is bias/toxicity. This problem is addressable both through dataset sanitization and via de-biasing word embeddings.[16] There have recently been a number of papers discussing and making progress on this.[17][18][19]

Aside: prediction vs explanation

"For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology "oracle" which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind? Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And if it predicted that the spaceship we had designed would explode on takeoff, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then could we have any chance of discovering what might cause an explosion on takeoff. Prediction – even perfect, universal prediction – is simply no substitute for explanation." - David Deutsch, The Fabric of Reality

Of course, one could also ask a truly God-like oracle to predict how a human would write an instruction manual for building a spaceship, and then just follow that. The point of quoting this passage is to distinguish prediction from understanding. I don't want to wade into the deep philosophical waters about what 'explanation' is, the Chinese Room, and all the rest. Rather, I just want to convince the reader that for the purpose of thinking about what GPT-N models can and can't do, the distinction is real and important. Next-word prediction is not everything. When we relentlessly optimize deep learning models only on predictive accuracy, they take shortcuts. They learn non-robust features, making them prone to adversarial examples. They memorize individual cases rather than trying to extract high-level abstract rules.
And they then suffer when applied out of distribution.

Final thoughts - transformers are overhyped, at least right now

"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run" - Roy Amara ("Amara's Law")

The debut of GPT-3 was accompanied by a lot of hype about how it would lead to a boom in startups and various economic activity. As far as I can tell, no company is actually making a profit with GPT-3 yet (I have Googled extensively and asked on Twitter about this multiple times. If you know an example, please comment below). It wasn't until June 2021 that Microsoft themselves released their first commercial product that uses GPT-3, when they integrated a GPT-3-like model into Power Apps. The system allows users to put in a natural language input and get an output which is a string of code in a bespoke language developed at Microsoft called "Power Fx". The resulting code can do things like manipulate Excel spreadsheets. This is cool, but also a bit underwhelming relative to the hype. In December 2021, a South Korean company called Naver said they were starting to use a larger language model (trained on 6,500 times more tokens than GPT-3) to help with product recommendations. This is also neat but underwhelming.

There is a pattern in AI where huge buzz around cool demos and lab demonstrations hits a brick wall when it comes to actually deploying things. I see this all the time in my own field of AI for medical imaging. People drastically underestimate the difficulty of deploying things into the real world (AI systems that can be plugged into existing online systems, like for targeting ads, are a somewhat different matter). This is one of the skeptical arguments from Rodney Brooks I agree with (for his argument, see section 7 here). The compute costs of training and inferencing GPT-like models also present significant headwinds to translation to real-world use. Thompson et al. have argued that barring significant algorithmic improvements, hardware and compute costs will soon be fatal to the entire enterprise of scaling.[20][21] However, I am skeptical about the conclusions of their work since it appears to me they didn't factor in Moore's law well enough or the possibility of special-purpose hardware. See also Gwern's comments in the comments section here.

As far as I can tell, in the next year we will see the following applications move from the lab to commercialization and real-world use:
incrementally better NPCs in videogames
incrementally better text summarization for things like product reviews or press releases
incrementally better translation
better code completion

Acknowledgements

Thank you to Stephen "Cas" Casper for proofreading an earlier draft of this post and providing useful comments.

References

[1] Nakkiran, et al. "Deep Double Descent: Where Bigger Models and More Data Hurt". 2019.
[2] Hasson et al. "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks". Neuron. 105(3), pages 416-434. 2020.
[3] Bricken, Trenton and Pehlevan, Cengiz. "Attention Approximates Sparse Distributed Memory". In Proceedings of Advances in Neural Information Processing Systems (NeurIPS) 34. 2021.
[4] Lee, et al. "Deduplicating Training Data Makes Language Models Better". arXiv e-prints. 2021.
[5] Carlini et al. "Extracting Training Data from Large Language Models". In Proceedings of the 30th USENIX Security Symposium. 2021.
[6] Hendrycks et al. "Measuring Massive Multitask Language Understanding". In Proceedings of the International Conference on Learning Representations (ICLR). 2021.
[7] Power et al. "Grokking: Generalization Beyond Overfitting On Small Algorithmic Datasets". In Proceedings of the 1st Mathematical Reasoning in General Artificial Intelligence Workshop, ICLR. 2021.
[8] Steinhardt, Jacob. "On The Risks of Emergent Behavior in Foundation Models". 2021.
[9] Springer, J. M., & Kenyon, G. T. "It's Hard for Neural Networks to Learn the Game of Life". In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN). 2021. (also on arXiv)
[10] Breiman, Leo. "Statistical Modeling: The Two Cultures". Statistical Science. 16(3), pages 199-231. 2001.
[11] Dinh et al. "Sharp Minima Can Generalize For Deep Nets". 2017.
[12] Lin et al. "TruthfulQA: Measuring How Models Mimic Human Falsehoods". arXiv e-prints. 2021.
[13] Borgeaud et al. "Improving language models by retrieving from trillions of tokens". arXiv e-prints. 2021.
[14] Lake et al. "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks". In Proceedings of the 35th International Conference on Machine Learning (ICML). 2018.
[15] Chen et al. "Evaluating Large Language Models Trained on Code". arXiv e-prints. 2021.
[16] Bolukbasi, et al. "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings". In Proceedings of the 30th International Conference on Neural Information Processing Systems (NeurIPS). 2016.
[17] Welbl et al. "Challenges in Detoxifying Language Models". In Findings of EMNLP. 2021.
[18] Weidinger, et al. "Ethical and social risks of harm from Language Models". arXiv e-prints. 2021.
[19] Askell, et al. "A General Language Assistant as a Laboratory for Alignment". arXiv e-prints. 2021.
[20] Thompson et al. "Deep Learning's Diminishing Returns". IEEE Spectrum. 2021.
[21] Thompson et al. "The Computational Limits of Deep Learning". arXiv e-prints. 2020.
 | https://www.lesswrong.com/posts/iQabBACQwbWyHFKZq/how-i-m-thinking-about-gpt-n | 2022-01-17T17:11:49Z |
It has been alluded to various places online also:Connor Leahy has said that a sufficiently large transformer model could serve as a powerful world model for an otherwise dumb and simple reinforcement learning agent, allowing it to rapidly learn how to do dangerous things in the world. For the record, I think this general argument is a super important point and something we should worry about, even though in this post I'll mainly be presenting reasons for skepticism.Gwern is perhaps the most well-known promoter of scaling being something we should worry about. He says "The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is just simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale."Observe the title of Alignment Newsletter #156: "The scaling hypothesis: a plan for building AGI". Note: I'm not sure what Rohin Shah's views are exactly, but from what I read they are pretty nuanced.Zac Hatfield Dobbs (an engineer at Anthropic) commented on 16 July 2021: "Now it looks like prosaic alignment might be the only kind we get, and the deadline might be very early indeed."lennart : "The strong scaling hypothesis is stating that we only need to scale a specific architecture, to achieve transformative or superhuman capabilities this architecture might already be available."MIRI is famously secretive about what they are doing, but they've been pretty public that they've made a shift towards transformer alignment as a result of OpenAI's work. Eliezer Yudkowsky told me he thinks GPT-N plus "a few other things" could lead to existentially dangerous AI (personal communication that I believe is consistent with his public views as they were expressed recently in the published MIRI conversations).I do think a GPT-N model or a close cousin could be a component of an existentially dangerous AI. A vision transformer could serve a role analogous to the visual cortex in humans. A GPT type model trained on language might even make a good "System 1" for language, although I'm little less certain about that. So it definitely makes sense to be focusing a substantial amount of resources to transformer alignment when thinking about how to reduce AI x-risk.While I've seen a lot of posts making the bullish case on LessWrong and the EA Forum, I've seen fewer posts making a bearish case. The only I have seen are a series of inciteful and interesting posts from nostalgebraist. [Interestingly, the bearish points I argue are very much distinct from the lines of attack nostalgebraist takes, so it's worth looking at his posts too, especially his last one.] Another reason for writing this stems from my suspicion that too many AI safety resources are being put towards transformer alignment. Transformers are taking over AI right now, but I suspect they will be overtaken by a completely different architecture and approach soon (some strong candidates to take over in the near-term are the perciever architecture, Hopfield networks, energy based models, genetically/evolutionarily designed architectures, gated multi-layer perceptrons, and probably others I'm missing). The fact is we don't really have any understanding of what makes a good architecture and there is no good reason to think transformers are the final story. 
Some of the transformer alignment work (like dataset sanitization) may transfer to whatever architecture replaces transformers, but I don't we can predict with any certainty how much of it will transfer to future architectures and methods.Given the number of AI safety orgs and academics already working transformer alignment, I question if it is a good investment for EAs on the current margin. A full discussion of neglectedness is beyond the scope of this post, however you can look at this EA Forum post that touches on the academic contribution to transformer alignment, and I'll note there is also much work on aligning transformers going on in industry too.Summary of main pointsTransformers, like other deep learning models that came before, appear to work primarily via interpolation and have trouble finding theories that extrapolate. Having the capability to find theories that can extrapolate is at the very least a key to scientific progress and probably a prerequisite for existentially dangerous AI.A recent paper shows CNNs have trouble Grokking Conway's Game of Life. Discussing the Rashomon effect, I make the case that Grokking will be a pretty circumscribed / rare phenomena.The degree to which GPT-3 can do common sense reasoning seems extremely murky to me. I generally agree with people who have said GPT-3 mostly does System 1 type stuff, and not System 2 stuff.There are numerous other problems with transformers which appear solvable in the near term, some of which are already well on their way to being solved.The economic utility of very large transformer models is overhyped at the moment.Hypothesis: transformers work by interpolation onlySome double descent curves, from [1].Double descent is a phenomena which is critical to understanding how deep learning models work. Figure 1 shows double descent curves for two language models from OpenAI's "Deep Double Descent" paper,[1:1] which Evan Hubinger has summarized on LessWrong. Notice how the test loss first decreases, bottoms out, and then increases. The error bottoms out and starts to increase because of overfitting. This is the bias-variance trade-off which can be derived from the classical theory of statistical modeling. Notice however that as model size continues to increase, the test loss curve bends back down. This is the double descent phenomena. At large enough model size the test loss eventually becomes lower than it was in the regime were the bias-variance trade-off applied, although you can't see it in this particular figure.Notice that the double descent test loss curve peaks when the training loss bottoms out near zero. This is the interpolation threshold. The model has memorized the training data precisely. (or nearly so. In CNNs it is typical for the training loss to reach precisely zero).An important point about interpolation is that it works locally. Algorithms that work via interpolation are incapable of discovering global trends. My favorite illustration of this is the following:[2]From Hasson et al: Direct fit to nature.[2:1]No matter how many parameters or data you put in a neural network, it will never figure out that the underlying trend is y = x^2.What deep learning models appear to in effect is dimensionality reduction to a lower-dimension manifold followed by piece-wise linear interpolation, which is very similar to k-nearest neighbors. 
If I understand things correctly, Trenton Bricken has shown something similar for transformers, by drawing out a mathematical correspondence between the attention mechanism in transformers and sparse distributed memory, a high-level model of how memory works in the brain (the main difference is that transformer representations aren't actually sparse).[3]

At least three forms of double descent have been discovered. The first occurs as you increase the number of parameters. The second occurs during training -- oddly enough, during training a model can have better test error, then worse, and then better again! (It seems historically this was hidden by the widespread practice of early stopping.) The last occurs as more training data is added. Why do I bring up these other forms of double descent? Mainly to point out that this is evidence these systems are very different from biological brains. Imagine working through some flashcards and then getting worse after a certain point. Or imagine a situation where adding more flashcards to the deck actually makes you worse at a language. These odd properties of transformers (which are shared with most if not all deep learning models) are clearly sub-optimal, which leads me to assign higher credence to the view that eventually transformers (and a lot of other deep learning stuff) will be replaced by something significantly different.

CNNs trained past the interpolation threshold memorize their training data (input-label relationships, assuming one-to-one correspondence). Memorization is a big part of how GPT-3 works, too. When unprompted, about 1% of the text produced by large language models is copied verbatim from the training corpus.[4] (As a reminder of some of the relevant numbers: GPT-3 has 175 billion parameters and the size of the training data was ~45 TB.) Using adversarial techniques it may be possible to extract specific data about people, etc., that is in the training data.[5] It appears that as models get larger they memorize more - the extraction of people's names and personal information from much smaller models like BERT was found to be difficult. OpenAI's Codex seems to utilize a lot of memorization, often returning small code samples verbatim that were in the training data from GitHub. Some memorization is of course necessary and important (for instance models need to memorize how to spell words). However, when a lot of the model's capabilities come from memorization, I tend to be less impressed. On the other hand, perception and System 1 in the brain also seem to rely on a lot of brute force memorization and interpolation.[2:2]

Sometimes GPT-3's interpolation abilities can be quite impressive; for instance, Alyssa Vance gave the prompt "Early this morning, in a shocking surprise attack, the international credit card and finance company Visa launched a full-scale invasion of the island nation of Taiwan" and GPT-3's output is quite impressive. GPT-2's "extrapolation" of Ginsberg's Moloch is quite impressive. However, this is only extrapolation in a loose sense; a different way of looking at it may be "interpolation within the space of Moloch and Moloch-like sentences". In general, though, it appears that transformers struggle to find models/theories/explanations that extrapolate, that reach outside the context they were discovered in to give a truly new prediction. The best examples of such theories are in science. The generation of such theories seems to often require a creative leap and can't be done just by brute force fitting or induction (more on this below).
A different way of saying this is that by optimizing for an objective (like next-word prediction) you don't explore the landscape of possible models/theories enough (cf. Kenneth Stanley's well-known arguments about this). [My favorite example of a creative leap, by the way, is when a Greek astronomer hypothesized that the stars are glowing orbs like the sun, just very far away.]

To give a simple example of the distinction I'm trying to flesh out here - Galileo observed that the period of a pendulum doesn't depend on the amount of mass attached to it but does depend on the length (longer length = longer period), which are two high-level rules / principles that extrapolate. Could a GPT-N model, reading about the properties of pendulums, come up with similar rules and apply them consistently? I have a hard time believing that it would, at least not without a ton of training data. On the other hand, humans had trouble discovering this simple law too (pendulums, I think, were around long before Galileo). A better thing to look at here is how GPT-3 particularly struggles with multiple-choice conceptual physics questions at the ~High School / Early College level, achieving only 35% accuracy (random guessing = 25%). For college physics level questions it does just barely better than random chance.[6] Learning how to think abstractly and apply a small number of powerful rules and principles to an infinite number of diverse situations is the key to doing physics. My guess is the equations and principles of physics were in GPT-3's training data, along with some physics problems; they were just a tiny part, so it didn't prioritize them much.

To try to summarize these various points, I think there's a fairly strong argument that GPT-3, like other deep learning models, works mainly via some form of interpolation between stuff in its training data, and this constitutes a significant limitation which makes me less concerned about a scaled-up GPT-like model being existentially dangerous. There is an important exception to all this, however, where GPT-N does discover rules that extrapolate, called "Grokking":

Why I'm not so worried about Grokking and emergent behavior during scaling
For most tasks in the GPT-3 paper, the performance scales smoothly with model size. For a few, however, there are sudden jumps in performance. The tasks exhibiting significant jumps were addition, subtraction, and symbol substitution. Labeling these jumps "phase changes" is a terrible abuse of terminology - on closer inspection they are not at all discontinuous jumps, and the term misleadingly suggests the emergence of a new internal order (phase changes should occur uniformly throughout a medium/space - the interpolation threshold in double descent may be a sort of phase change, but not Grokking).

More recently it has been shown that with enough training data and parameters simple transformer models can learn how to reproduce certain mathematical transformations exactly.[7] During training, the models exhibit jumps upwards to 100% accuracy, with varying degrees of sharpness in the jump. The authors call this "grokking". (A sketch of the kind of small algorithmic dataset involved is given just below.)
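For concreteness, here is a minimal sketch (my own illustration, not code from the grokking paper) of the sort of dataset involved: a tiny, exhaustively enumerable table of a binary operation over a prime modulus, with the model trained on a random fraction of the table and tested on the rest. The modulus and train fraction below are arbitrary.

# Build a small modular-arithmetic dataset of the kind used in grokking
# experiments: every (a, b) pair labeled with (a * b) mod p, split into
# train/test subsets.
import random

p = 97                                   # prime modulus (arbitrary choice)
pairs = [(a, b) for a in range(p) for b in range(p)]
random.seed(0)
random.shuffle(pairs)

split = int(0.5 * len(pairs))            # train on half the table, test on the rest
train = [((a, b), (a * b) % p) for a, b in pairs[:split]]
test = [((a, b), (a * b) % p) for a, b in pairs[split:]]

print(f"{len(train)} training examples, {len(test)} held-out examples")
print("sample:", train[:3])

# A small transformer trained to predict the label token from the (a, b) tokens
# can sit near chance test accuracy for a long time and then jump to ~100%.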
The set of transformations they studied involved addition, subtraction, multiplication, division, and the modulo operator. As a result of these findings, AI safety researchers are worried about unexpected emergent behavior appearing in large models as they are scaled up.[8]

Here's the thing about Grokking, though -- the network can't Grok (get perfect accuracy) unless the architecture is able to literally do the algorithm and SGD can find it. In the case of transformers, that means the algorithm must be easily decomposable into a series of matrix multiplies (it appears repeated multiplication may be Turing complete, so that's why I stress easily). Notice that all the examples of Grokking with transformers involve simple operations that can be decomposed into things like swapping values or arithmetic, which can be easily expressed as a series of matrix multiplications. Division is notably absent from both the Grokking and GPT-3 papers; I wonder why...

But Grokking doesn't always work, even when we know that the network can do the thing easily in principle. This was shown in a paper by Jacob M. Springer and Garrett T. Kenyon recently.[9] (I did a summer internship with Dr. Kenyon in 2010 and can vouch for his credibility.) The authors set up a simple CNN architecture that in principle can learn the rules for Conway's Game of Life, so that given an input board state the CNN can reproduce the Game of Life exactly, given the right parameters. The network was trained on over one million randomly generated examples, but despite all this data the network could not learn the exact solution. In fact, the minimal architecture couldn't even learn how to predict just two steps out! They then tested what happens when they duplicate the filter maps in several layers, creating m times as many weights as are necessary. They found that the degree of overcompleteness m scaled very quickly with the number of steps the network could predict.

The authors argue that their findings are consistent with the Lottery Ticket Hypothesis (LTH): that deep neural nets must get lucky by having a subset of initial parameters that are close enough to the desired solution. In other words, SGD alone can't always find the right solution - some luck is involved in the initial parameter settings - which explains why bigger models with a larger pool of parameters to work with do better. (I feel compelled to mention that attempts to validate the LTH have produced a mixed bag of murky results and it remains only a hypothesis, not a well-established theory or principle.)

There is another important fact about data modeling that implies Grokking or even semi-Grokking will be exceptionally rare in deep learning models - the Rashomon effect, first described by Leo Breiman.[10] The effect is simply the observation that for any dataset, there is an infinite number of functions which fit it exactly but which are mechanistically very different from each other. In his original paper, Breiman demonstrates this effect empirically by training a bunch of decision trees which all get equivalent accuracy on a test set but work very differently internally. Any model that works by fitting a ton of parameters to large data is subject to the Rashomon effect. The Rashomon effect implies that in the general case SGD is very unlikely to converge to the true model - i.e., very unlikely to Grok. In fact, I doubt SGD would even find a good approximation to the true model. (By "true model" I mean whatever algorithm or set of equations is generating the underlying data.) A toy version of Breiman's decision-tree demonstration is sketched below.
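Here is a minimal sketch in the spirit of Breiman's demonstration (my own toy example, not his original experiment): several decision trees trained on the same synthetic dataset reach roughly the same test accuracy while leaning on different features internally.

# Rashomon effect, toy version: equally accurate trees that disagree about
# which features matter.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for seed in range(5):
    # Randomizing which feature is considered at each split yields structurally
    # different trees with near-identical test accuracy.
    tree = DecisionTreeClassifier(splitter="random", max_depth=6, random_state=seed)
    tree.fit(X_tr, y_tr)
    top = tree.feature_importances_.argsort()[::-1][:3]
    print(f"seed={seed}  accuracy={tree.score(X_te, y_te):.3f}  top features={top.tolist()}")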
Solomonoff induction tries to avoid "Rashomon hell" by biasing the Bayesian updating towards models with shorter algorithmic descriptions, with the assumption that shorter descriptions are always closer to the truth. [Side note: I'm skeptical of Occam's razor, and how successfully this strategy works in any real-world setup is, to my knowledge, rather poorly understood, which is just one of many reasons Solomonoff induction is a bad model for intelligence in my view.]

Even if biasing towards simpler models is a good idea, we don't have a good way of doing this in deep learning yet, apart from restricting the number of parameters, which usually hurts test set performance to some degree. It used to be thought that SGD sought out "flat minima" in the loss (minima with low curvature), which result in simpler models in terms of how compressible they are, but further studies have shown this isn't really true.[11] So we have reasons to believe transformers will be subject to the Rashomon effect and Grokking will be very hard.

The big debate - to what extent does GPT-3 have common sense?
I don't have a strong interest in wading through the reams of GPT-3 outputs people have posted online, much of which I suspect has been hand-picked to fit whatever narrative the author was trying to push. It's not my cup of tea reading GPT-3 prose/outputs, and Gwern has already done it far more thoroughly than I ever could. I think the failures are much more illuminating than the successes, because many of the failures are ones a human would never make (for instance answering "four" to "how many eyes does a horse have"). Just as humans are easy to mislead with the Cognitive Reflection Test, especially when sleep deprived or tired, GPT-3 is very easy to mislead too, sometimes embarrassingly so. My favorite examples of this come from Alyssa Vance, and yet more can be found in Marcus and Davis' MIT Tech Review article. It seems GPT-3, like its predecessor GPT-2, has some common sense, but mainly only the System 1 gut-reaction type - it still struggles with common sense reasoning. Many have made this observation already, including both Sarah Constantin and Scott Alexander in the context of GPT-2 (as a side note, I highly recommend people read Sarah's brilliant disquisition on System 1 vs System 2 entitled "Distinctions in Types of Thought").

Issues that seem solvable
There are some issues with transformers that appear very solvable to me and are in the process of being solved:

The first is lack of truthfulness. GPT-3 is great at question answering; the issue is it's often plausible but wrong (see Alyssa Vance's post "When GPT-3 Is Confident, Plausible, And Wrong"). Part of this is due to a garbage-in garbage-out problem with transformers right now, where they mimic human falsehoods that are in their training data.[12] Another issue is just not having enough memory to memorize all the relevant facts people may want to ask about. DeepMind seems to have solved the latter issue with their Retrieval-Enhanced Transformer (RETRO), which utilizes a 2 trillion token database.[13]

A related issue is lack of coherence / lack of calibration. An optimal Bayesian agent considers all possibilities all the time, but any agent with finite resources can't afford to do that - real-world agents have finite memory, so they have to figure out when to forget disproven theories/facts/explanations.
In the context of resource-bounded systems, it may be best to stick with a single best explanation rather than trying to hold multiple explanations [as an example, it seems reasonable to disregard old scientific theories once they have been robustly falsified, even though from a Bayesian perspective they still have a tiny amount of non-zero probability attached to them]. Indeed, the human brain seems to have an in-built bias against holding multiple contradictory theories at once (cognitive dissonance). Transformers, on the other hand, often give conflicting answers to similar questions, or even the same question when prompted multiple times. In other situations it makes sense for resource-bounded agents to keep track of multiple theories and weight them in a Bayesian manner. Just as CNNs are not well-calibrated for mysterious reasons, I suspect transformers are not well calibrated either. However, just as there are methods for fixing calibration in CNNs, I suspect there are methods to fix calibration in transformers too.

Another issue is lack of metacognition, or alerting the user about confidence. This is a big problem right now, since humans want a question answering system to give correct answers and know when it doesn't know something or isn't sure. Interestingly, Nick Cammarata figured out that with careful prompting GPT-3 can identify nonsense questions (whether it counts as metacognition isn't very clear). I think this is solvable by tweaking RETRO so it alerts the user when something isn't in its database (maybe it already does this?). As with models like CNNs, where uncertainty can be added via dropout during inference or by adopting Bayesian training, there are probably other ways to add uncertainty quantification to transformers. MIRI's "visible thoughts" approach is another way of attacking this problem.

Another issue is very weak compositionality. Like the RNNs which came before,[14] transformers are really not good at composition, or chaining together a sequence of discrete tasks in a way they haven't seen before. Look for instance at how bad OpenAI's Codex model is at chaining together components:[15]
From the OpenAI Codex paper.[15:1]
This is very different behavior from humans, where the ability to accurately chain together two things implies the ability to accurately chain together a long sequence of things well. At least intuitively, this seems solvable, at least for many applications of interest, by writing ad-hoc hard-coded methods to detect when chaining is needed and then do it.

The final issue is bias/toxicity. This problem is addressable both through dataset sanitization and via de-biasing word embeddings.[16] There have recently been a number of papers discussing and making progress on this.[17][18][19]

"For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology oracle which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind?
Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And if it predicted that the spaceship we had designed would explode on takeoff, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then could we have any chance of discovering what might cause an explosion on takeoff. Prediction - even perfect, universal prediction - is simply no substitute for explanation." - David Deutsch, The Fabric of Reality

Of course, one could also ask a truly God-like oracle to predict how a human would write an instruction manual for building a spaceship, and then just follow that. The point of quoting this passage is to distinguish prediction from understanding. I don't want to wade into the deep philosophical waters about what 'explanation' is, the Chinese Room, and all the rest. Rather, I just want to convince the reader that for the purpose of thinking about what GPT-N models can and can't do, the distinction is real and important. Next word prediction is not everything. When we relentlessly optimize deep learning models only on predictive accuracy, they take shortcuts. They learn non-robust features, making them prone to adversarial examples. They memorize individual cases rather than trying to extract high-level abstract rules. And they then suffer when applied out of distribution.

Final thoughts - transformers are overhyped, at least right now
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run" - Roy Amara ("Amara's Law")

The debut of GPT-3 was accompanied by a lot of hype about how it would lead to a boom in startups and various economic activity. As far as I can tell, no company is actually making a profit with GPT-3 yet (I have Googled extensively and asked on Twitter about this multiple times. If you know an example, please comment below). It wasn't until June 2021 that Microsoft themselves released their first commercial product that uses GPT-3, when they integrated a GPT-3-like model into Power Apps. The system allows users to put in a natural language input and get an output which is a string of code in a bespoke language developed at Microsoft called "Power Fx". The resulting code can do things like manipulate Excel spreadsheets. This is cool, but also a bit underwhelming relative to the hype. In December 2021, a South Korean company called Naver said they were starting to use a larger language model (trained on 6,500 times more tokens than GPT-3) to help with product recommendations. This is also neat but underwhelming.

There is a pattern in AI where huge buzz around cool demos and lab demonstrations hits a brick wall when it comes to actually deploying things. I see this all the time in my own field of AI for medical imaging. People drastically underestimate the difficulty of deploying things into the real world (AI systems that can be plugged into existing systems online, like for targeting ads, are a somewhat different matter).
This is one of the skeptical arguments from Rodney Brooks that I agree with (for his argument, see section 7 here). The compute costs of training and running inference with GPT-like models also present significant headwinds to translation to real-world use. Thompson et al. have argued that barring significant algorithmic improvements, hardware and compute costs will soon be fatal to the entire enterprise of scaling.[20][21] However, I am skeptical about the conclusions of their work since it appears to me they didn't factor in Moore's law well enough or the possibility of special-purpose hardware. See also Gwern's comments in the comments section here.

As far as I can tell, in the next year we will see the following applications move from the lab to commercialization and real-world use:
- incrementally better NPCs in videogames
- incrementally better text summarization for things like product reviews or press releases
- incrementally better translation
- better code completion

Acknowledgements
Thank you to Stephen "Cas" Casper for proofreading an earlier draft of this post and providing useful comments.

References
Nakkiran et al. "Deep Double Descent: Where Bigger Models and More Data Hurt". 2019.
Hasson et al. "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks". Neuron. 105(3), pages 416-434. 2020.
Bricken, Trenton and Pehlevan, Cengiz. "Attention Approximates Sparse Distributed Memory". In Proceedings of Advances in Neural Information Processing Systems (NeurIPS) 34. 2021.
Lee et al. "Deduplicating Training Data Makes Language Models Better". arXiv e-prints. 2021.
Carlini et al. "Extracting Training Data from Large Language Models". In Proceedings of the 30th USENIX Security Symposium. 2021.
Hendrycks et al. "Measuring Massive Multitask Language Understanding". In Proceedings of the International Conference on Learning Representations (ICLR). 2021.
Power et al. "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets". In Proceedings of the 1st Mathematical Reasoning in General Artificial Intelligence Workshop, ICLR. 2021.
Steinhardt, Jacob. "On the Risks of Emergent Behavior in Foundation Models". 2021.
Springer, J. M. and Kenyon, G. T. "It's Hard for Neural Networks to Learn the Game of Life". In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN). 2021. (also on arXiv)
Breiman, Leo. "Statistical Modeling: The Two Cultures". Statistical Science. 16(3), pages 199-231. 2001.
Dinh et al. "Sharp Minima Can Generalize For Deep Nets". 2017.
Lin et al. "TruthfulQA: Measuring How Models Mimic Human Falsehoods". arXiv e-prints. 2021.
Borgeaud et al. "Improving Language Models by Retrieving from Trillions of Tokens". arXiv e-prints. 2021.
Lake et al. "Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks". In Proceedings of the 35th International Conference on Machine Learning (ICML). 2018.
Chen et al. "Evaluating Large Language Models Trained on Code". arXiv e-prints. 2021.
Bolukbasi et al. "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings". In Proceedings of the 30th International Conference on Neural Information Processing Systems (NeurIPS). 2016.
Welbl et al. "Challenges in Detoxifying Language Models". In Findings of EMNLP. 2021.
Weidinger et al. "Ethical and Social Risks of Harm from Language Models". arXiv e-prints. 2021.
Askell et al. "A General Language Assistant as a Laboratory for Alignment". arXiv e-prints. 2021.
Thompson et al.
"Deep Learning's Diminishing Returns". IEEE Spectrum. 2021. Thompson et al. "The Computational Limits of Deep Learning". arXiv e-prints. 2020. | Discovery/Decision Making/Content Synthesis/Prediction | Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | AleksandarK | (PR) Tachyum Selected for Pan-European Project Enabling 1 AI Zettaflop in 2024 | Tachyum today announced that it was selected by the Slovak Republic to participate in the latest submission for the Important Projects of Common European Interest (IPCEI), to develop Prodigy 2 for HPC/AI. Prodigy 2 for HPC/AI will enable 1 AI Zettaflop and more than 10 DP Exaflops computers to support superhuman brain-scale computing by 2024 for under €1B. As part of this selection, Tachyum could receive a 49 million Euro grant to accelerate a second-generation of its Tachyum Prodigy processor for HPC/AI in a 3-nanometer process. The IPCEI program can make a very important contribution to sustainable economic growth, jobs, competitiveness and resilience for industry and the economy in the European Union. IPCEI will strengthen the EU's open strategic autonomy by enabling breakthrough innovation and infrastructure projects through cross-border cooperation and with positive spill-over effects on the internal market and society as a whole. Read full story | https://www.techpowerup.com/291325/tachyum-selected-for-pan-european-project-enabling-1-ai-zettaflop-in-2024 | 2022-01-26T06:39:46Z | Tachyum today announced that it was selected by the Slovak Republic to participate in the latest submission for the Important Projects of Common European Interest (IPCEI), to develop Prodigy 2 for HPC/AI. Prodigy 2 for HPC/AI will enable 1 AI Zettaflop and more than 10 DP Exaflops computers to support superhuman brain-scale computing by 2024 for under €1B. As part of this selection, Tachyum could receive a 49 million Euro grant to accelerate a second-generation of its Tachyum Prodigy processor for HPC/AI in a 3-nanometer process. The IPCEI program can make a very important contribution to sustainable economic growth, jobs, competitiveness and resilience for industry and the economy in the European Union. IPCEI will strengthen the EU's open strategic autonomy by enabling breakthrough innovation and infrastructure projects through cross-border cooperation and with positive spill-over effects on the internal market and society as a whole.
Without Prodigy, hyperscale data centers must use a combination of disparate CPU, GPU and TPU hardware for these different workloads, creating inefficiency, expense, and the complexity of separate supply and maintenance infrastructures. Using specific hardware dedicated to each type of workload (e.g., data center, AI, HPC) results in underutilization of hardware resources, and more challenging programming, support, and maintenance. Prodigy's ability to seamlessly switch among these various workloads dramatically changes the competitive landscape and the economics of data centers. "As we near the completion of our first phase of delivering Prodigy to market in order to transform data centers throughout the world, we have already begun work on the next generation of Prodigy processors that will enable us to help transform economic opportunities for society at large," said Dr. Radoslav Danilak, founder and CEO of Tachyum. "The importance of AI and supercomputers cannot be underestimated. Being selected as a key participant in this IPCEI allows us to reimagine how we can best revolutionize industry by developing a 3 nm version of Prodigy that will enable superhuman brain-scale computing. We look forward to working with the commission and contributing to the success of this project." | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | AI-Driven Autonomous Testing Pioneer Appvance Raises $13 million to Disrupt the $120 billion Software Testing Market | SANTA CLARA, Calif., Jan. 18, 2022 /PRNewswire/ -- Appvance, the inventor, first to market and industry leader of AI–driven software testing technology announces today that it has secured $13 million in Series C funding to accelerate global expansion and product roadmap development. This... | https://www.prnewswire.com/news-releases/ai-driven-autonomous-testing-pioneer-appvance-raises-13-million-to-disrupt-the-120-billion-software-testing-market-301461848.html | 2022-01-18T15:00:00Z | SANTA CLARA, Calif., Jan. 18, 2022 /PRNewswire/ -- Appvance, the inventor, first to market and industry leader of AI-driven software testing technology, announces today that it has secured $13 million in Series C funding to accelerate global expansion and product roadmap development. This round is led by US growth equity firm Arrowroot Capital with participation from existing investors including Javelin Venture Partners and TRI HoldCo. Appvance is disrupting the $120 billion software QA market through its patented Appvance IQ (AIQ) unified test platform - the world's first Level 5 autonomous testing solution. Appvance delivers automated testing to leading global brands using the most advanced AI/ML engine in the industry: this technology designs, generates, and independently executes tests for sophisticated web and mobile applications with no human intervention. Every day AIQ drives out risk and drives up value on business-critical applications by automatically delivering comprehensive application coverage. "We are ecstatic about this new round and the addition of Arrowroot Capital as our investment partner," said Andre Liao, Appvance's CEO. "2021 was a momentous year - our business grew over 200% benefiting from enterprises embracing AI to radically improve testing efficacy and product quality. Today's announcement is a significant milestone, and we couldn't have done it without the amazing support of our customers, partners, and employees around the world. Thank you!" As part of the financing, Appvance welcomes Kareem El Sawy, Founding Partner at Arrowroot Capital, to the company's Board of Directors. He added, "Being the AI pioneer within the increasingly demanding QA function, Appvance is poised to carve out a leadership role in accelerating DevOps productivity at scale. With blue chip customers and tier 1 partnerships globally, market demand for the AIQ product has hit an undeniable inflection point. We are thrilled to partner with Andre and his incredible team to help build the next gen autonomous QA platform of choice for the modern enterprise."
About Appvance
Appvance is the inventor of AI-driven autonomous testing technology and is leading the charge to revolutionize the software development lifecycle. The company's premier product is Appvance IQ (AIQ), the world's first Level 5 unified test automation system. AIQ helps enterprises improve the quality, performance and security of their applications, while transforming the efficiency and output of testing teams. Appvance is headquartered in Santa Clara, CA, with additional offices in Costa Rica and India. Learn more at https://appvance.com.
About Arrowroot Capital
Arrowroot Capital is a global growth equity firm based in Los Angeles, CA focused on minority, majority, and buyout investments in B2B software companies.
The firm serves as a catalyst for growth-related initiatives by partnering with management and leveraging its deep enterprise software expertise to deliver meaningful, tangible value. Arrowroot has the flexibility to pursue opportunities of varying sizes, as well as a broad range of transaction types. Arrowroot also assists in targeting and executing add-on acquisitions for its portfolio companies to help drive growth and increased market position. Learn more at www.arrowrootcapital.com.
SOURCE Appvance | Process Automation/Detection and Monitoring | Unknown | null | null | null | null | null | null
||
news | 7 Feb 22 CrowdStrike Announces General Availability of Falcon XDR, Extending Industry-Leading Threat Detection, Investigation, Response, and Hunting Capabilities Across the Security Stack Falcon XDR brings together world-class threat hunting, machine learning (ML), artificial intelligence (AI) and indicators of attack (IOAs) with third-party data sources to correlate events and deliver real-time detections AUSTIN, Texas – Feb 7, 2022 – CrowdStrike Holdings, Inc. (Nasdaq: | CrowdStrike announces the general availability of its Falcon XDR module, extending CrowdStrike’s industry-leading endpoint detection and response (EDR) capabilities. | https://www.crowdstrike.com/press-releases/crowdstrike-announces-general-availability-of-falcon-xdr/ | 2022-02-07T17:59:00Z | Falcon XDR brings together world-class threat hunting, machine learning (ML), artificial intelligence (AI) and indicators of attack (IOAs) with third-party data sources to correlate events and deliver real-time detections
AUSTIN, Texas - Feb 7, 2022 - CrowdStrike Holdings, Inc. (Nasdaq: CRWD), a leader in cloud-delivered protection of endpoints, cloud workloads, identity and data, today announced the general availability of its Falcon XDR module, extending CrowdStrike's industry-leading endpoint detection and response (EDR) capabilities to improve threat visibility across the enterprise, simplify security operations and dramatically speed up response time, containment and remediation of the most sophisticated attacks.
“One of the ways to address the cybersecurity skills gap is to empower security teams to work more effectively,” said Amol Kulkarni, chief product and engineering officer at CrowdStrike. “Falcon XDR helps to address this problem by correlating weak, siloed threat signals into prioritized alerts from a centralized console for security teams to ensure their investigations are meaningful and efficient.”
Falcon XDR enables security teams to:
- Unify detection and response security data. Falcon XDR takes third-party data (including network security, email security, web security, cloud security and cloud access security broker [CASB]) from third-party vendors, including CrowdXDR Alliance partners, and correlates it with data from the CrowdStrike Security Cloud to optimize real-time threat detection, investigation, response and hunting.
- Get the right answers fast. Falcon XDR speeds up triage and investigation for security operations center (SOC) analysts and threat hunters by delivering one central console for accurate alert prioritization, flexible search scheduling and detection customization, full attack context and interactive graph visualization.
- Turn XDR insight into action. To orchestrate and automate response across security workflows, Falcon Fusion, a security orchestration, automation and response (SOAR) framework, is built natively into the Falcon platform. Security teams can improve SOC and IT efficiencies by building real-time notification and response capabilities, along with customizable triggers based on detection and incident categorizations. Falcon Fusion is free for CrowdStrike customers.
- Increase efficiency of SOC operations. Falcon XDR automatically correlates and provides high-quality detection data across the security stack. It dramatically speeds investigation and hunting by providing a common search interface directly from the CrowdStrike Security Cloud.
- Improve return on investment (ROI) of existing security investments.
Falcon XDR uncovers actionable insights from previously siloed data in disparate, disconnected security products from across the IT stack.
"CrowdStrike have spent years building and refining their detection and response automation capabilities," said Dave Gruber, principal analyst at Enterprise Strategy Group (ESG). "As market interest in XDR continues to accelerate, CrowdStrike is well-positioned to expand into XDR, capitalizing on their existing, mature and scalable EDR infrastructure, as they invest in new data ingest, analysis and advanced threat detection capabilities required to respond to a more sophisticated threat landscape. CrowdStrike's alliance-driven XDR strategy should enable them to readily ingest telemetry from a broad range of third-party security solutions into their Security Cloud, offering security teams flexibility in their choice of other core security controls."
For more information on Falcon XDR, please visit our blog.
To watch a Falcon XDR demo, please click here.
About CrowdStrike
CrowdStrike Holdings, Inc. (Nasdaq: CRWD), a global cybersecurity leader, has redefined modern security with one of the world's most advanced cloud-native platforms for protecting critical areas of enterprise risk - endpoints and cloud workloads, identity and data. Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities. Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.
CrowdStrike: We stop breaches.
Learn more: https://www.crowdstrike.com/
Follow us: Blog | Twitter | LinkedIn | Facebook | Instagram
Start a free trial today: https://www.crowdstrike.com/free-trial-guide/
© 2022 CrowdStrike, Inc. All rights reserved. CrowdStrike, the falcon logo, CrowdStrike Falcon and CrowdStrike Threat Graph are marks owned by CrowdStrike, Inc. and registered with the United States Patent and Trademark Office, and in other countries. CrowdStrike owns other trademarks and service marks, and may use the brands of third parties to identify their products and services.
Contact
Kevin Benacci
CrowdStrike Corporate [email protected] | Detection and Monitoring/Prediction | Computer and Mathematical | null | null | null | null | null | null
||
news | Bob Reselman | Rest vs GraphQL: Performance and ease of use comparisons | Representational State Transfer, popularly known as REST, is all the rage these days. Since the specification was first described in Roy Fielding’s seminal dissertation in 2000, REST has become the ... | https://www.theserverside.com/blog/Coffee-Talk-Java-News-Stories-and-Opinions/Rest-vs-GraphQL-Performance-and-ease-of-use-comparisons | 2022-01-25T06:50:53Z | Representational State Transfer, popularly known as REST, is all the rage these days. Since the specification was first described in Roy Fielding's seminal dissertation in 2000, REST has become the de facto standard by which applications work with data exposed by APIs running on the Internet. It seems as if every major information source, from SpaceX to the FDIC, publishes a REST interface. It makes sense. REST is easy to work with. You make an HTTP GET call to a URL that represents a collection of data entities, a.k.a. resources, and you get back data in a well-known format, such as XML or JSON. Also, REST allows you to add data, provided of course you have permission to do so as provided by the API publisher. But, for all the benefits that REST provides, it has a problem: the caller has no control over the structure of the data returned from an API. The immutable nature of REST data structures has performance implications.
REST has performance implications
Take a look at the JSON below in Listing 1. It's an excerpt of a REST call to the SpaceX API that gets a list of all landing attempts. The URL used to make the call is: https://api.spacex.land/rest/landpads
The excerpt below is a single landing attempt extracted from an array of landing attempts that was returned from the REST call.
{ "attempted_landings": "3", "details": "SpaceX's first east coast landing pad is Landing Zone 1, where the historic first Falcon 9 landing occurred in December 2015. LC-13 was originally used as a launch pad for early Atlas missiles and rockets from Lockheed Martin. LC-1 was later expanded to include Landing Zone 2 for side booster RTLS Falcon Heavy missions, and it was first used in February 2018 for that purpose.", "full_name": "Landing Zone 2", "id": "LZ-2", "landing_type": "RTLS", "location": { "latitude": 28.485833, "longitude": -80.544444, "name": "Cape Canaveral", "region": "Florida" }, "status": "active", "successful_landings": "3", "wikipedia": "https://en.wikipedia.org/wiki/Landing_Zones_1_and_2"}
Listing 1: A JSON object that is an element in a collection of landing attempts returned by a call to the SpaceX REST API
JSON verbosity
The properties displayed in the JSON above are immutable. As a caller, I can't tell the REST API to give me only the full_name and wikipedia data. No matter what, whether the REST API returns a single landing attempt or a thousand landing attempts, I am always going to get those nine properties. This might not seem like a big deal, but it is. If I could make it so that REST only returned the two fields of data that are interesting to me instead of the nine that I will always get, the returned data set would be considerably smaller. Not only does REST make it so that a lot of uninteresting data has to make its way over the network, but also, that uninteresting data has to sit in memory on the client side. There's an intrinsic inefficiency at hand. But, wouldn't it be cool if there was a way to query an API for data according to a data structure you define, asking only for the data you want, as you want it? Fortunately, there is, and that way is not REST.
It's GraphQL.
When you compare REST vs GraphQL, performance is one criterion where GraphQL outshines REST and JSON
GraphQL returns only the data structure you want
GraphQL is an API architecture that was first developed at Facebook in 2012, released to the public in 2015 and since 2018 has been supported by the GraphQL Foundation, which is hosted by the Linux Foundation. GraphQL specifies a query language that allows a developer to call a GraphQL-enabled API and get data returned according to a data structure defined by the GraphQL query. Take a look at Listing 2, below. It's an example of a query that will run against the same land pads data published by the SpaceX REST API used earlier in this article. Only this time it's a GraphQL query made against the SpaceX GraphQL API, which is published at https://api.spacex.land/graphql/. Notice the query declares that it's only interested in two fields, full_name and wikipedia.
{ landpads { full_name wikipedia }}
Listing 2: A GraphQL query against the SpaceX GraphQL API for all landing attempts by full_name and wikipedia URL
REST vs GraphQL data exchanges
Now take a look below at the data in Listing 3. It's the result of the GraphQL query shown above in Listing 2.
{ "data": { "landpads": [ { "full_name": "Landing Zone 1", "wikipedia": "https://en.wikipedia.org/wiki/Landing_Zones_1_and_2" }, { "full_name": "Landing Zone 2", "wikipedia": "https://en.wikipedia.org/wiki/Landing_Zones_1_and_2" }, { "full_name": "Landing Zone 4", "wikipedia": "https://en.wikipedia.org/wiki/Vandenberg_AFB_Space_Launch_Complex_4#LZ-4_landing_history" }, { "full_name": "Of Course I Still Love You", "wikipedia": "https://en.wikipedia.org/wiki/Autonomous_spaceport_drone_ship" }, { "full_name": "Just Read The Instructions V1", "wikipedia": "https://en.wikipedia.org/wiki/Autonomous_spaceport_drone_ship" }, { "full_name": "Just Read The Instructions", "wikipedia": "https://en.wikipedia.org/wiki/Autonomous_spaceport_drone_ship" }, { "full_name": "A Shortfall of Gravitas", "wikipedia": "https://en.wikipedia.org/wiki/Autonomous_spaceport_drone_ship" } ] }}
Listing 3: The result of a GraphQL query made against the SpaceX GraphQL API
As you see, only the full_name and wikipedia data is returned. This is the data the developer wanted; no more, no less. The actual size of the GraphQL response is 1.09 kilobytes. When we ask the SpaceX REST API for the land pad data, the size of the JSON array returned is 8.18 kilobytes. Remember, the REST API has to return all the fields, all the time. The efficiency provided by GraphQL is apparent.
REST vs GraphQL performance analysis
REST is a very popular API architecture and is not going away anytime soon. REST provides the degree of standardization needed to make working with public APIs an easy, commonplace occurrence. There were other API architectures before REST came along, for example SOAP, but REST's simplicity made it hard to ignore. There's a good argument to be made that REST was the engine that drove the early days of API adoption. But REST comes with overhead, as demonstrated above. GraphQL, on the other hand, is the new kid on the block. Yet it's a technology that's enjoying widespread adoption. A lot of big names have gotten behind GraphQL, for example the NY Times, JPMorgan Chase, PayPal and Bank of America, to name a few. GraphQL has a lot to offer in terms of flexibility of data management. Also, the way data is conceptualized under the hood in GraphQL is more in line with the principles of the Semantic Web.
The Semantic Web is a way of thinking about all the data on the Internet as an infinite number of concretely describable relationships. Nonetheless, both REST and GraphQL are here to stay. As such, understanding the details, benefits and tradeoffs of each API architecture is useful knowledge for developers working in the API economy. | Content Synthesis/Decision Making | Computer and Mathematical | null | null | null | null | null | null
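As a supplement to the REST vs GraphQL comparison above, here is a minimal Python sketch (not from the article) of issuing the Listing 2 query as a standard GraphQL-over-HTTP POST. It assumes the requests library is installed and that the SpaceX GraphQL endpoint cited in the article is still reachable.

# Send the Listing 2 GraphQL query as an HTTP POST and print each landing pad.
import requests

GRAPHQL_ENDPOINT = "https://api.spacex.land/graphql/"   # endpoint cited in the article
QUERY = """
{
  landpads {
    full_name
    wikipedia
  }
}
"""

response = requests.post(GRAPHQL_ENDPOINT, json={"query": QUERY}, timeout=30)
response.raise_for_status()

for pad in response.json()["data"]["landpads"]:
    print(pad["full_name"], "-", pad["wikipedia"])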
|
news | Kyle Alspach | Lacework hires Facebook VP of engineering to advance cloud security platform | Cloud security firm Lacework has hired Arash Nikkar, formerly VP of engineering at Facebook, to drive development of its ML-powered platform. | https://venturebeat.com/2022/01/24/lacework-hires-facebook-vp-of-engineering-to-advance-cloud-security-platform/ | 2022-01-24T13:00:00Z | Lacework, developer of a data-driven cloud security platform that has seen rapid growth in the market, disclosed that Facebook vice president of engineering Arash Nikkar has joined the company. Nikkar had previously been with Facebook since 2013, and he started with Lacework last fall as the company's vice president of engineering. The hire was not announced until today.
In an interview, Nikkar said he joined Lacework in part because the security industry feels like it's at an inflection point driven by the increased adoption of the public cloud. "The traditional approach to securing your footprint just will no longer suffice. And with a huge amount of room for growth in cloud, I'm really excited to be a part of that evolution and that transformation, and tackle it with Lacework," Nikkar told VentureBeat.
Parallels at Lacework
Before joining Facebook, Nikkar had spent five years at Hulu as its principal software development lead. Nikkar said that while he hadn't previously worked in the security space, he saw a number of parallels at Lacework with his experiences at Facebook and Hulu. Like those companies, Lacework is a fast-paced organization that is anchored in principles such as being customer-first, data-driven, and highly collaborative, he said. There are also parallels between his past experiences and the way Lacework is approaching cloud security from a technical perspective, Nikkar said. "Both of them deal with tremendous amounts of data, and require building out real-time data stream processing systems. Both require building out complex, distributed systems. There's lots of machine learning and AI and other types of statistical analysis, as well as building out this common shared infrastructure," he said. "So while the domains are clearly different than those I've worked with in the past, there are lots of similarities in the underlying technologies."
Polygraph
Central to Lacework's platform is its Polygraph technology, which collects and correlates massive amounts of data, detects potential security issues, and prioritizes the biggest threats for response. Key capabilities include ML-powered anomaly detection and deep visibility across cloud and container workloads. Additionally, the platform reduces alerts to an average of 1.4 per day and false positives by 95%, according to the company. In terms of technology, Polygraph is what caught my attention the most at Lacework, Nikkar said. The technology contrasts with the traditional model where a security team must constantly evaluate and rewrite rules for anomaly detection, he noted. "Polygraph just takes a much smarter approach, where you've built a baseline and you've applied machine learning to detect anomalies," Nikkar said. "And you can cover far more breadth and different types of use cases."
Founded in 2015, Lacework ranks among the best-funded and highest-valued privately held cybersecurity vendors, with the company most recently raising a $1.3 billion funding round in November that brought a post-money valuation of $8.3 billion.
Lacework was originally incubated at Sutter Hill Ventures, following a model used to launch two other tech industry success stories, Pure Storage and Snowflake.
Growth surge
While Lacework doesn't disclose specific metrics for its growth, the company has been "growing at 3.5X, year-over-year, on most of these metrics," Lacework co-CEO Jay Parikh said in an interview. Customers include Snowflake and Pure Storage, as well as VMware, Cloudera, Reynolds Consumer Products, Nextdoor, and Brightcove. With its strong growth and recent funding, "we're scaling up what we can offer, investing in R&D, investing in our channel and partner program ecosystem build-out," Parikh said. Before coming to Lacework last July, Parikh had also previously worked at Facebook, from late 2009 until February 2021, in the role of vice president of engineering. Nikkar is working under Parikh at Lacework, and he is the first major hire at the company for the co-CEO. In terms of product development priorities looking ahead, Lacework will continue looking to advance its platform and its patented Polygraph technology, the executives said.
Advancing the platform
Lacework is built atop the Snowflake data platform and excels at collecting, processing, and normalizing data, and then deriving insights for customers, according to the executives. Lacework will continue to look at increasing the visibility and insights its platform can provide around the data being gathered from customer environments, the executives said. "You'll see us continue to evolve in terms of what types of insights we can provide, because we have both the understanding of the developer environment as well as what's happening in your actual runtime environment," Parikh said. For example, with the Apache Log4j vulnerability disclosed in December, Lacework brings the ability to both scan for the vulnerability - and detect which specific binaries have the bad packages - while also showing in production where the flaw might be exploited, he said. "Some companies can just do the scanning, but they can't do the production analysis," Parikh said. "We can do both, and it's all on the same platform." This makes Lacework stand out from other cloud security players that are offering point solutions, he said. "We fundamentally bring a different approach," Parikh said.
"And we can innovate faster and we can provide a much more comprehensive, end-to-end approach, because we've invested in building out this unique platform approach."
Building the team
Along with continuing to drive the development of greater visibility and insights for customers in the Lacework platform, Nikkar's priorities at the company will include building out a world-class engineering team, he said. This will include hiring cybersecurity experts who bring deep knowledge of the space, as well as bringing aboard people who don't necessarily have any experience in the security domain, Nikkar said. "We look for folks who are intellectually curious, eager to learn, and comfortable dealing with a ton of whitespace and ambiguous problems - and who are genuinely unsatisfied with the status quo," he said. Because Lacework's platform takes a holistic view across a customer's cloud footprint, having a team that can bring different ways of thinking about big problems is essential, Nikkar said. "We're building really strong security domain expertise, as well as taking people who've solved problems within different domains - but have a lot of parallels to a lot of what we're doing here in building these large-scale systems," he said. Ultimately, Lacework is building technical systems that folks have never built before, Nikkar said. | Detection and Monitoring/Content Synthesis/Decision Making | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null
|
news | crowdstrike-falconpy-dev added to PyPI | The CrowdStrike Falcon OAuth2 API SDK for Python 3 | https://pypi.org/project/crowdstrike-falconpy-dev/ | 2022-02-04T02:39:51Z | This is the development package. Please check https://pypi.org/project/crowdstrike-falconpy/ for the stable release.
The FalconPy SDK contains a collection of Python classes that abstract CrowdStrike Falcon OAuth2 API interaction, removing duplicative code and allowing developers to focus on just the logic of their solution requirements.
Overview
There are a large number of CrowdStrike Falcon API service collections (45+) collectively containing hundreds of individual operations, all of which are accessible to your project via FalconPy. The CrowdStrike Falcon SDK for Python also supports interaction with all CrowdStrike regions, custom connection and response timeouts, routing requests through a list of proxies, disabling SSL verification, and custom header configuration. If the CrowdStrike APIs were rings of great power, that the Dark Lord Sauron gifted to the kings of dwarves, elves and men, then CrowdStrike's FalconPy would be the One Ring. "One SDK to rule them all, One SDK to find them, One SDK to bring them all and in the darkness bind them."
Supported versions of Python
The CrowdStrike Falcon SDK for Python was developed for Python 3, and does not support versions of Python below 3.6. Every commit to the FalconPy code base is unit tested for functionality using all versions of Python the library currently supports. While Python 3.5 should not have problems running FalconPy, as of February 2021 this version is no longer analyzed as part of our unit testing.
Supported Operating Systems
The FalconPy SDK is unit tested on the following operating systems. FalconPy will also run on any of the following operating systems. Details regarding supported operating systems and Python versions, and project security and testing procedures can be found here.
Components
The FalconPy SDK provides two distinct methods for interacting with CrowdStrike's API.
Service Classes
Representing a single CrowdStrike Falcon API service collection, each Service Class has a method defined for every operation available within that service collection.
Available Service Classes
For each CrowdStrike Falcon API service collection, a matching Service Class is available in the FalconPy library.
Service Class benefits
- Closely follows Python and OpenAPI best practice for code style and syntax. PEP-8 compliant.
- Completely abstracts token management, automatically refreshing your token when it expires.
- Provides simple programmatic patterns for interacting with CrowdStrike Falcon APIs.
- Supports cloud region autodiscovery for the CrowdStrike US-1, US-2 and EU-1 regions.
- Supports dynamic configuration based upon the needs of your environment.
- Supports CrowdStrike Falcon API parameter abstraction functionality.
- Supports CrowdStrike Falcon API body payload abstraction functionality.
The Uber Class
Operating as a single harness for interacting with the entire CrowdStrike Falcon API, the Uber Class can access every available operation within every API service collection.
Code Location
api_complete.py
The Uber Class provides an interface to all CrowdStrike APIs with a single handler.
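For illustration, a minimal Uber Class sketch might look like the following (this is my own example, assuming valid API credentials and that the host-query operation IDs used here match your installed FalconPy version):

"""Minimal Uber Class sketch (illustrative only)."""
from falconpy import APIHarness

falcon = APIHarness(client_id="CROWDSTRIKE_API_CLIENT_ID",
                    client_secret="CROWDSTRIKE_API_SECRET")

# Every API operation is reached through the command() method by its operation ID.
response = falcon.command("QueryDevicesByFilter", filter="hostname:'example-host'")

if response["status_code"] == 200:
    # Print the device IDs (AIDs) returned by the query.
    for device_id in response["body"]["resources"]:
        print(device_id)
else:
    print(response["body"]["errors"])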
This solution supports communicating with API endpoints that do not have an available Service Class or are recently released. Uber Class benefits: Access every CrowdStrike Falcon API service collection with only one import and only one class. Completely abstracts token management, automatically refreshing your token when it expires. Interact with newly released API operations not yet available in the library via the override keyword. Provides simple programmatic patterns for interacting with CrowdStrike Falcon APIs. Supports cloud region autodiscovery for the CrowdStrike US-1, US-2 and EU-1 regions. Supports CrowdStrike Falcon API parameter abstraction functionality. Supports all environment configuration options supported by FalconPy Service Classes. Comparing FalconPy class types: While the usage syntax varies slightly, the Uber Class provides the same performance and output as FalconPy Service Classes, and can perform all of the same operations. The Uber Class does not support body payload abstraction but does provide unique override functionality that is not available when you are using Service Classes. Quick Start: Stable releases of FalconPy are available on the Python Package Index. In a terminal, execute the following command:

python3 -m pip install crowdstrike-falconpy

Once installed, you can immediately begin using CrowdStrike functionality in your Python projects.

"""CrowdStrike FalconPy Quick Start."""
from falconpy import Hosts

hosts = Hosts(client_id="CROWDSTRIKE_API_CLIENT_ID",
              client_secret="CROWDSTRIKE_API_SECRET"
              )

SEARCH_FILTER = "hostname-search-string"

# Retrieve a list of hosts that have a hostname that matches our search filter
hosts_search_result = hosts.query_devices_by_filter(filter=f"hostname:'{SEARCH_FILTER}'")

# Confirm we received a success response back from the CrowdStrike API
if hosts_search_result["status_code"] == 200:
    hosts_found = hosts_search_result["body"]["resources"]
    # Confirm our search produced results
    if hosts_found:
        # Retrieve the details for all matches
        hosts_detail = hosts.get_device_details(ids=hosts_found)["body"]["resources"]
        for detail in hosts_detail:
            # Display the AID and hostname for this match
            aid = detail["device_id"]
            hostname = detail["hostname"]
            print(f"{hostname} ({aid})")
    else:
        print("No hosts found matching that hostname within your Falcon tenant.")
else:
    # Retrieve the details of the error response
    error_detail = hosts_search_result["body"]["errors"]
    for error in error_detail:
        # Display the API error detail
        error_code = error["code"]
        error_message = error["message"]
        print(f"[Error {error_code}] {error_message}")

More samples: If you are interested in reviewing more examples of FalconPy usage, this repository also maintains a collection of samples to help get you started with integrating CrowdStrike Falcon into your DevOps processes. Documentation and Support: FalconPy is a community-driven open source project designed to assist developers with implementing CrowdStrike's APIs within their applications, and is not a formal CrowdStrike product. As such it carries no formal support, expressed or implied. Official Project Documentation: falconpy.io. Extended documentation is also available via the wiki for this repository. Issues and Questions: Is something going wrong?
GitHub Issues are used to report bugs and errors.Have a question you can't find answered in the documentation?Please submit usage questions to the Q&A section of our discussion board.Community forumsThe discussion board for this repository also provides the community with means to communicate regarding enhancements ideas, integration examples and new releases.Additional contentThe following materials have been produced by the maintainers and members of the community regarding FalconPy.More information regarding FalconPy documentation and support can be found here.Contribute to FalconPy Interested in being acknowledged as a member of an elite community of security-focused Python developers that stop breaches?There are many ways you can contribute to the FalconPy project!Providing feedback by opening a GitHub ticket. Even a fly-by "hey, this worked..." is appreciated and helps validate approaches. Ideas on improving the project are most welcome.Documenting, blogging, or creating videos, of how you've used FalconPy! This type of content is invaluable and helps our community grow. Open a pull request for inclusion in the Additional content section of this page.Fix a bug or implement a new feature. Check out our open issues on GitHub or our discussion board for inspiration.Review pull requests by going through the queue of open pull requests on GitHub and giving feedback to the authors.To get started, review the Code of Conduct for community guidelines, and the contribution guide for more detail regarding contributing to the CrowdStrike FalconPy project.WE STOP BREACHES | Process Automation/Content Creation | Computer and Mathematical | null | null | null | null | null | null |
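The Uber Class pattern described above can be illustrated with a short sketch. This is not part of the package description; it is a minimal example assuming the APIHarness import name and the "QueryDevicesByFilter" operation ID documented at falconpy.io, so verify both against the version you install.

"""Illustrative Uber Class sketch (assumes APIHarness and the QueryDevicesByFilter operation ID)."""
from falconpy import APIHarness

# A single handler for every service collection; token refresh is abstracted away.
falcon = APIHarness(client_id="CROWDSTRIKE_API_CLIENT_ID",
                    client_secret="CROWDSTRIKE_API_SECRET"
                    )

# Operations are addressed by operation ID instead of a per-collection method,
# which is also how recently released endpoints can be reached.
response = falcon.command("QueryDevicesByFilter", filter="hostname:'example-host'")

if response["status_code"] == 200:
    # Print the list of matching device IDs returned by the API
    print(response["body"]["resources"])

Because the Uber Class trades body payload abstraction for this operation-ID addressing, the Service Class quick start shown earlier remains the more idiomatic choice when a matching Service Class exists.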
||
news | BS Reporter | ORAI raises Rs 6.5 crore in pre-series A funding round led by IPV | ORAI is a platform that offers customers end to end platform + solution + service. | https://www.business-standard.com/article/companies/orai-raises-rs-6-5-crore-in-pre-series-a-funding-round-led-by-ipv-122011300489_1.html | 2022-01-13T06:53:00Z | Conversational AI platform ORAI has raised Rs 6.5 crore in a pre-Series A round led by Inflection Point Ventures, one of India’s largest angel investment platforms. The funds raised will be utilised to expand sales and marketing, capture larger markets, fund product development and enhancements, and support R&D. ORAI offers 100 per cent automation powered by AI with all the advanced features in a single bot that does all the work related to customer support, customer outreach, customer engagement, marketing, sales support, and post-sales services. The platform has also introduced WhatsApp commerce, automating business conversations over WhatsApp with the power of AI. Vinay Bansal, founder & CEO, Inflection Point Ventures, says, “AI driven chatbot has witnessed an upsurge in demand as it helps business to grow and function holistically. Similarly, ORAI has evolved exponentially as a robust AI conversational platform catering to 14 sectors including healthcare, real estate, education, automobile among others. We at IPV look forward to supporting ORAI’s vision to grow beyond boundaries and be the leaders in AI conversational space.” ORAI is a platform that offers customers an end-to-end platform + solution + service. It is an all-in-one conversational AI and customer acquisition platform enabling clients to gain higher revenues and increased ROI using a humanised virtual assistant, real-time three-way communication, and automated lead qualification. It charges a one-time integration fee and a monthly subscription fee based on usage scale. ORAI currently has over 100 customers including Sayaji Hotel, GMR Delhi, Group Landmark, Kataria Group, Emcure etc. Swapnil Jain, co-founder and CEO, ORAI, says, “As IPV continues to show trust in our business growth, with the 2nd round led by the Platform, we are all set to expand our operations nationally and internationally. With WhatsApp Commerce becoming the biggest demand in the industry, ORAI has received multiple appreciations from customers for its high tech, high performing platform.” The global conversational AI market size is expected to grow from $4.8 billion in 2020 to $13.9 billion by 2025, at a compound annual growth rate of 21.9% during the forecast period. ORAI aims to be the leader in the conversational AI space with constant innovations helping businesses connect better with customers.
| Process Automation/Digital Assistance | Management/Business and Financial Operations/Sales and Related | null | null | null | null | null | null
|
news | CoreWeave partners with EleutherAI & NovelAI to make open-source AI more accessible | SPRINGFIELD, N.J., Feb. 2, 2022 /PRNewswire/ -- CoreWeave, a specialized cloud provider built for GPU-accelerated workloads, today announced the release of the largest publicly available language model in partnership with EleutherAI, a grassroots collective of researchers working to open... | https://www.prnewswire.com/news-releases/coreweave-partners-with-eleutherai--novelai-to-make-open-source-ai-more-accessible-301474123.html | 2022-02-02T17:58:00Z | SPRINGFIELD, N.J., Feb. 2, 2022 /PRNewswire/ -- CoreWeave, a specialized cloud provider built for GPU-accelerated workloads, today announced the release of the largest publicly available language model in partnership with EleutherAI, a grassroots collective of researchers working to open source AI research. The model - GPT-NeoX-20B - was trained by EleutherAI on CoreWeave's state-of-the-art NVIDIA A100 training cluster and is set to provide businesses and researchers alike with access to build innovative products, applications, and advance scientific research.Effective today, GPT-NeoX-20B is available on GooseAI, a fully managed inference service delivered by API, prior to a full open-source release next week. With feature parity to other well known APIs, GooseAI delivers a plug-and-play solution for serving open source language models at over 70% cost savings by simply changing 2 lines of code. GooseAI is also being released today as a joint venture between CoreWeave and partner Anlatan - the creators of NovelAI.At 20 billion parameters, GPT-NeoX-20B is a powerhouse that was trained on EleutherAI's curated collection of datasets, The Pile. This was the same dataset used to train well-known models like Beijing Academy of Artificial Intelligence's Wu Dao (1.75T parameters, multimodal), AI21's Jurassic-1 (178B parameters), Anthropic's language assistant (52B parameters), and Microsoft and NVIDIA's Megatron-Turing NLG (340B parameters). GPT-NeoX-20B is a glimpse into the next generation of what open-sourced AI systems could look like. EleutherAI hopes to remove the current barriers to research on the understanding and safety of such powerful models.Founded in 2017, CoreWeave offers industry-leading, scalable, on-demand computing resources for bleeding-edge Machine Learning and Artificial Intelligence use cases. Its leading infrastructure, unparalleled scale across the broadest selection of NVIDIA GPUs, and specialized DevOps expertise give clients the flexibility and freedom that they need to manage complex workloads. With the release of GooseAI, CoreWeave and Anlatan are delivering a massive step forward for visionary businesses who are building products on top of large language models, while making it even easier to deploy NLP services on top of CoreWeave Cloud.Visit CoreWeave's blog to read more about the release of GPT-NeoX-20B and why GooseAI's NLP-as-a-Service is the next evolution of AI.About CoreWeave: CoreWeave is a specialized cloud provider, delivering a massive scale of accelerated compute resources on top of the industry's fastest & most flexible infrastructure. 
CoreWeave builds cloud solutions for compute intensive use cases - VFX / Rendering, Machine Learning & AI, Batch Processing, Pixel Streaming and Blockchain - that are up to 35x faster and 80% less expensive than the large, generalized public clouds.About EleutherAI: EleutherAI is a decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open source AI research. Founded in July of 2020, our flagship project is the GPT-Neo family of models designed to replicate those developed by OpenAI as GPT-3. Our Discord server is open and welcomes contributors.About NovelAI: NovelAI is a monthly subscription service for AI-assisted authorship, storytelling, virtual companionship, or simply a GPT-J powered sandbox for your imagination.SOURCE CoreWeave | Unknown | Business and Financial Operations/Life, Physical, and Social Science | null | null | null | null | null | null |
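The GooseAI announcement above emphasises feature parity with well-known APIs and a two-line switch; as a rough illustration of what that kind of change usually looks like, the sketch below redirects an OpenAI-style completion client to a GooseAI-compatible endpoint. The base URL, engine identifier and environment variable here are assumptions made for the example, not details taken from the announcement.

"""Hypothetical sketch: pointing an OpenAI-style client at GooseAI (endpoint and engine name assumed)."""
import os
import openai

# The two changed lines: endpoint and API key (assumed values, not from the announcement).
openai.api_base = "https://api.goose.ai/v1"
openai.api_key = os.environ["GOOSEAI_API_KEY"]

# Request a completion from GPT-NeoX-20B; the engine identifier is assumed for illustration.
completion = openai.Completion.create(engine="gpt-neo-20b",
                                      prompt="Once upon a time",
                                      max_tokens=40)
print(completion.choices[0].text)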
||
news | Deepsekhar Choudhury | Indian AI-SaaS start-ups can create $500 bn of market value by 2030: Report | India's increasing role in the global AI market could create employment for around 4.5 million people, says a study by venture capital firm Stellaris and International Finance Corporation | https://www.business-standard.com/article/companies/indian-ai-saas-start-ups-can-create-500-bn-of-market-value-by-2030-report-122021000658_1.html | 2022-02-10T07:54:00Z | India’s artificial intelligence (AI) and software as a service (SaaS) start-ups are well-positioned to create more than $500 billion of market value by 2030, according to a report by venture capital firm Stellaris and World Bank-backed International Finance Corporation (IFC). From a talent perspective, India’s AI and SaaS opportunity could create around 4.5 million jobs – more than 900,000 white collar jobs and 3.6 million new indirect jobs, according to the report. “We believe India could potentially generate a market value of $500 billion from both AI applications and services by 2030, driven by the country’s thriving startup ecosystem, experience of building successful SaaS companies, and world-class data and analytics talent,” said Alok Goyal, Partner, Stellaris Venture Partners. In 2021 alone, $4.5 billion was invested in SaaS startups in India, an increase of 170 per cent from 2020, according to Bain & Company. India has a large talent base of developers and strong process expertise, along with a fast-growing pool of niche talent such as designers and data scientists, in addition to the availability of unique data sets and sophisticated algorithms, said the report. Moreover, the overall ecosystem for early-stage companies is backed by adequate high-risk capital, with individual and institutional investors who are focusing on software investments and industry-specific accelerators that are willing to support these companies in their early stages. “In India, there are several inflection points such as an abundance of data, storage capacity, and computing power, that are priming AI to drive the next wave of business transformation”, said Ruchira Shukla, Head, South Asia, Disruptive Technologies – Direct Equity and VC Funds of International Finance Corporation (IFC). “Several critical sectors such as education, healthcare, agriculture, logistics, and financial services stand to gain tremendously from AI-led solutions, driving inclusion and economic development,” she added.
| Unknown | Unknown | null | null | null | null | null | null
|
news | Dr. Roy Schestowitz | A Soup of Buzzwords From Brussels (the European Commission) | The European Commission (EC) is very much guilty of what the man on the right recently bemoaned; European officials are shaping our policies based on misconceptions and nebulous terms, brought forth by multinational corporations, their media, and their lobbyists in the Brussels area; today we focus | http://techrights.org/2022/01/11/buzzwords-from-brussels/ | http://techrights.org/wp-content/uploads/2022/01/hey-hi-overuse.jpg | 2022-01-11T15:56:00Z | Gemini version available Posted in Deception, Europe, Patents at 10:55 am by Dr. Roy SchestowitzVideo download link | md5sum 8e48a186bb1cfab6d4a7ab99da1d0094Say Hello to BuzzwordsCreative Commons Attribution-No Derivative Works 4.0Summary: The European Commission (EC) is very much guilty of what the man on the right recently bemoaned; European officials are shaping our policies based on misconceptions and nebulous terms, brought forth by multinational corporations, their media, and their lobbyists in the Brussels area; today we focus on a new consultation which sparingly uses the buzzwords “Hey Hi” and “IoT” (in almost every paragraph, on average)HIS morning we belatedly published this relatively short post concerning a misguided or misframed consultation, which is probably well-meant (or well-meaning) but makes inherently flawed/false assumptions about the problem at hand (liability for harm and/or defects). The above video is a very short version of what would otherwise take several hours to cover (e.g. addressing pertinent questions). It’s repeatedly noted that the questions themselves are loaded, wrong, ill-advised, and potentially unhelpful. They compel us to believe that there’s this “magic” called “AI” and that it cannot be understood, governed, and nobody can be held accountable for computerised systems anymore. It’s a form of defeatism. Inducing a sense of helplessness.In recent years we pointed out that the gross overuse of the term “AI” (which we’ve spun as “Hey Hi” for the sake of ridicule) is being exploited by patent maximalists. They want patents to be granted to computers and for patents to also cover computer programs (algorithms) by framing those programs as “AI”. This is really bad (both things) as it defies common sense, not just patent law or its raison d’être.An associate of ours has studied the document and more or less agrees. “I’d say it’s more likely misframed,” he adds. “By this decade, more ought to know about what general-purpose computing is about, so there is not the excuse of it being novel. Same with machine learning, where the innovation was in the 1970s and 1980s, but the computing power didn’t catch up with the theory until the last decade or so.”If one visits the page in question right now it says: “This survey has not yet been published or has already been unpublished in the meantime.”We’ve chosen not to comment until it’s officially over (“EU Survey – Adapting liability rules to the digital age and Artificial Intelligence”).As it states right there in the title, it’s all about “Artificial Intelligence” — a concept which is hardly defined or ill-defined.“The sad situation in the world nowadays is that the politicians neither know anything at all about ICT nor know anyone they can turn to who will give then an honest answer” the associate adds. 
“Therefore it is important that I at least go through the motions of providing the feedback they have requested from EU citizens.” The text below concerns Directive 85/374/EEC on liability for defective products (more in [1, 2] and it mentions “AI” about 100 times in total:2000 character(s) maximum for each of the following:Question: What do you think is the appropriate approach for consumers toclaim compensation when damage is caused by a defective product boughtthrough an online marketplace and there is no EU-based producer or importer?Question: Please elaborate on your answers or specify other grounds oflegal uncertainty regarding liability for damage caused by AI:Question: Please elaborate on your answers. You may reflect inparticular on the recently proposed AI Act and on the complementaryroles played by liability rules and the other safety-related strands ofthe Commissions AI policy in ensuring trust in AI and promoting theuptake of AI-enabled products and services:Question: Please elaborate on your answers, in particular on whetheryour assessment is different for AI-enabled products than for AI-enabledservicesQuestion: Please elaborate on your answers, in particular on whetheryour assessment is different for AI-enabled products than for AI-enabledservices, as well as on other impacts of possible legal fragmentationQuestion: Please elaborate on your answers and describe any othermeasures you may find appropriate:Question: Please elaborate on your answer, describe any other approachesregarding strict liability you may find appropriate and/or indicate towhich specific AI-enabled products and services strict liability shouldapply:Question: Please elaborate on your answers, also taking into account theinterplay with the other strands of the Commissions AI policy (inparticular the proposed AI Act). Please also describe any other measuresyou may find appropriate:Question: Please elaborate on your answer and specify if you wouldprefer a different approach, e.g. an approach differentiating by area ofAI application:Question: Are there any other issues that should be considered?-----English ENEuropean Commission EU Survey Save a backup on your local computer (disable if you are using apublic/shared computer)Adapting liability rules to the digital age and Artificial IntelligenceFields marked with * are mandatory.IntroductionThis public consultation aims to:confirm the relevance of the issues identified by the 2018 evaluation ofthe Product Liability Directive (e.g. how to apply the Directive toproducts in the digital and circular economy), and gather informationand views on how to improve the Directive (Section I);collect information on the need and possible ways to address issuesrelated specifically to damage caused by Artificial Intelligencesystems, which concerns both the Product Liability Directive andnational civil liability rules (Section II).You can respond to both sections or just to Section I. It is notpossible to respond only to Section II.About you* Question: Language of my contribution* Question: I am giving my contribution as* Question: First name* Question: Surname* Question: Email (this won't be published)______* Question: Country of origin. Please add your country of origin, orthat of your organisation.The Commission will publish all contributions to this publicconsultation. You can choose whether you would prefer to have yourdetails published or to remain anonymous when your contribution ispublished. 
For the purpose of transparency, the type of respondent (forexample, business association, consumer association, EU citizen)country of origin, organisation name and size, and its transparencyregister number, are always published. Your e-mail address will neverbe published. Opt in to select the privacy option that best suits you.Privacy options default based on the type of respondent selected* Question: I agree with the personal data protection provisionsSection I Product Liability DirectiveThis section of the consultation concerns Council Directive 85/374/EECon liability for defective products (Product Liability Directive),which applies to any product marketed in the European Economic Area (27EU countries plus Iceland, Liechtenstein and Norway). See also SectionII for more in-depth questions about the Directive and AI.According to the Directive, if a defective product causes damage toconsumers, the producer must pay compensation. The injured party mustprove the product was defective, as well as the causal link between thedefect and the damage. But the injured party does not have to prove thatthe producer was at fault or negligent (strict liability). In certaincircumstances, producers are exempted from liability if they prove, e.g.that the products defect was not discoverable based on the bestscientific knowledge at the time it was placed on the market.Injured parties can claim compensation for death, personal injury aswell as property damage if the property is intended for private use andthe damage exceeds EUR 500. The injured party has 3 years to seekcompensation. In addition, the producer is freed from liability 10 yearsafter the date the product was put into circulation.The Evaluation of the Directive in 2018 found that it was effectiveoverall, but difficult to apply to products in the digital and circulareconomy because of its outdated concepts. The Commissions 2020 Reporton Safety and Liability for AI, Internet of things (IoT) and roboticsalso confirmed this.The Evaluation also found that consumers faced obstacles to makingcompensation claims, due to thresholds and time limits, and obstacles togetting compensation, especially for complex products, due to the burdenof proof.* Question: How familiar are you with the Directive? Answer: I have detailed knowledge of the Directive, its objectives, rules and application Answer: I am aware of the Directive and some of its contents Answer: I am not familiar with the Directive Answer: No opinionAdapting the Directive to the digital ageQuestion: The Directive holds importers strictly liable for damagecaused by defective products when the producer is based outside the EU.Nowadays online marketplaces enable consumers to buy products fromoutside the EU without there being an importer.Online marketplaces intermediate the sale of products between traders,including those established outside the EU, and consumers. Typically,they are not in contact with the products they intermediate and theyfrequently intermediate trade between many sellers and consumers.Under the current rules, online marketplaces are covered by aconditional liability exemption (Article 14 of the e-CommerceDirective). The new proposal for a Digital Services Act includesobligations for online marketplaces to tackle illegal products online,e.g. gathering information on the identity of traders using theirservices. 
Moreover, the new proposal for a General Product SafetyRegulation includes provisions for online marketplaces to tackle thesale of dangerous products online.Do you agree or disagree with the following statements?Strongly agree Agree Neutral Disagree Strongly disagree No opinionThe proposals for a Digital Services Act and General Product SafetyRegulation are sufficient to ensure consumer protection as regardsproducts bought through online marketplaces where there is no EU-basedproducer or importer.The Product Liability Directive needs to be adapted to ensure consumerprotection if damage is caused by defective products bought throughonline marketplaces where there is no EU-based producer or importer.Question: What do you think is the appropriate approach for consumers toclaim compensation when damage is caused by a defective product boughtthrough an online marketplace and there is no EU-based producer or importer? (2000 character(s) maximum) 0 out of 2000 characters used.Question: Digital technologies may bring with them new risks and newkinds of damage.Regarding risks, it is not always clear whether cybersecurityvulnerabilities can be considered a defect under the Directive,particularly as cybersecurity risks evolve throughout a products lifetime.Regarding damage, the Directive harmonises the rights of consumers toclaim compensation for physical injury and property damage, although itlets each Member State decide itself whether to compensate fornon-material damage (e.g. privacy infringements, psychological harm).National rules on non-material damage differ widely. At EU level bothmaterial and non-material damage can be compensated under the GeneralData Protection Regulation (GDPR) when a data controller or processorinfringes the GDPR, and the Environmental Liability Directive providesfor the liability of companies for environmental damage.Do you agree or disagree with the following statements? Strongly agree Agree Neutral Disagree Strongly disagree No opinionProducers should potentially be held strictly liable for damages causedas a result of failure to provide necessary security updates for smartproductsThe Directive should harmonise the right of consumers to claimcompensation from producers who are not simultaneously data controllersor processors, for privacy or data protection infringements (e.g. a leakof personal data caused by a defect)The Directive should harmonise the right of consumers to claimcompensation for damage to, or destruction of, data (e.g. data beingwiped from a hard drive even if there is no tangible damage)The Directive should harmonise the right of consumers to claimcompensation for psychological harm (e.g. abusive robot in a caresetting, home-schooling robot)Some products, whether digital or not, could also cause environmentaldamage. The Directive should allow consumers to claim compensation forenvironmental damage (e.g. caused by chemical products)Coverage of other types of harmAdapting the Directive to the circular economyQuestion The Directive addresses defects present at the moment a productis placed on the market. However, changes to products after they areplaced on the market are increasingly common, e.g. in the context ofcircular economy business models.The Evaluation of the Directive found that it was not always clear whoshould be strictly liable when repaired, refurbished or remanufacturedproducts were defective and caused damage. It is worth noting here thatthe Directive concerns the defectiveness of products and not thedefectiveness of services. 
So, a third-party repair that was poorlycarried out would not lead to the repairer being held liable under theDirective, although remedies may be available under national law.Do you agree or disagree with the following statements? Strongly agree Agree Neutral Disagree Strongly disagree No opinionCompanies that remanufacture a product (e.g. restoring vehiclecomponents to original as-new condition) and place it back on the marketshould be strictly liable for defects causing damageCompanies that refurbish a product (e.g. restoring functionality of aused smartphone) and place it back on the market should be strictlyliable for defects causing damageThe manufacturer of a defective spare part added to a product (e.g. to awashing machine) during a repair should be strictly liable for damagecaused by that spare partPolicy approach and impacts of adapting the Directive to the digital andcircular economyReducing obstacles to getting compensationQuestion: The Evaluation of the Directive found that in some casesconsumers face significant difficulties in getting compensation fordamage caused by defective products.In particular it found that difficulties in proving the defectiveness ofa product and proving that the product caused the damage accounted for53% of rejected compensation claims. In particular, the technicalcomplexity of certain products (e.g. pharmaceuticals and emergingdigital technologies) could make it especially difficult and costly forconsumers to actually prove they were defective and that they caused thedamage.To what extent do you think that the following types of product presentdifficulties in terms of proving defectiveness and causality in theevent of damage? (See additional burden of proof question concerning AIin Section II) To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answerAll productsTechnically complex productsPharmaceuticalsAI-enabled productsIoT (Internet of Things) productsQuestion: Other types of product (please specify): (50 character(s) maximum) 0 out of 50 characters used.Reducing obstacles to making claimsQuestion: The Evaluation of the Directive found that in some casesconsumers faced or could face significant difficulties in makingcompensation claims for damage caused by defective products. The currentrules allow consumers to claim compensation for personal injury orproperty damage. Time limits apply to all compensation claims andseveral other limitations apply to compensation for property damage.To what extent do the following features of the Directive createobstacles to consumers making compensation claims? 
To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answerProducers are released from liability for death/personal injury 10 yearsafter placing the product on the marketProducers are released from liability for property damage 10 years afterplacing the product on the marketConsumers have to start legal proceedings within 3 years of becomingaware of the damageConsumers can claim compensation only for damage to property worth morethan EUR 500Consumers can claim compensation only for damage to property intendedand used for private purposesPolicy approach and impacts of reducing obstacles to gettingcompensation and making claimsEnd of Section I on Product Liability Directive*QuestionIn Section II of this consultation the problems linked to certain typesof Artificial Intelligence which make it difficult to identify thepotentially liable person, to prove that persons fault or to prove thedefect of a product and the causal link with the damage are exploredfurther.Would you like to continue with Section II on Artificial Intelligence? AnswerContinue with Section II on Artificial Intelligence AnswerClose the questionnaireSection II - Liability for AIIntroductionAs a crucial enabling technology, AI can drive both products andservices. AI systems can either be provided with a physical product(e.g. an autonomous delivery vehicle) or placed separately on the market.To facilitate trust in and the roll-out of AI technologies, theCommission is taking a staged approach. First, on 21 April 2021, itproposed harmonised rules for development, placing on the market and useof certain AI systems (AI Act). The AI Act contains obligations onproviders and users of AI systems, e.g. on human oversight, transparencyand information. In addition, the recent proposal for a Regulation onMachinery Products (published together with the AI act) also covers newrisks originating from emerging technologies, including the integrationof AI systems into machinery.However, safety legislation minimises but cannot fully excludeaccidents. The liability frameworks come into play where accidentshappen and damage is caused. Therefore, as a next step to complement therecent initiatives aimed at improving the safety of products when theyare placed on the EU market, the Commission is considering a revision ofthe liability framework.In the White Paper on AI and the accompanying 2020 Report on Safety andLiability, the Commission identified potential problems with liabilityrules, stemming from the specific properties of certain AI systems.These properties could make it difficult for injured parties to getcompensation based on the Product Liability Directive or nationalfault-based rules. This is because in certain situations, the lack oftransparency (opacity) and explainability (complexity) as well as thehigh degree of autonomy of some AI systems could make it difficult forinjured parties to prove a product is defective or to prove fault, andto prove the causal link with the damage.It may also be uncertain whether and to what extent national strictliability regimes (e.g. for dangerous activities) will apply to the useof AI-enabled products or services. National laws may change, and courtsmay adapt their interpretation of the law, to address these potentialchallenges. 
Regarding national liability rules and their application toAI, these potential problems have been further explored in this recentstudy.With this staged approach to AI, the Commission aims to provide thelegal certainty necessary for investment and, specifically with thisinitiative, to ensure that victims of damage caused by AI-enabledproducts and services have a similar level of protection to victims oftechnologies that operate without AI. Therefore, this part of theconsultation is looking at all three pillars of the existing liabilityframework.The Product Liability Directive, for consumer claims against producersof defective products. The injured party has to prove the product wasdefective and the causal link between that defect and the damage. Asregards the Directive, the proposed questions build on the first sectionof the consultation.National fault-based liability rules: The injured party has to prove thedefendants fault (negligence or intent to harm) and a causal linkbetween that fault and the damage.National strict liability regimes set by each Member State fortechnologies or activities considered to pose an increased risk tosociety (e.g. cars or construction activities). Strict liability meansthat the relevant risk is assigned to someone irrespective of fault.This is usually justified by the fact that the strictly liableindividual benefits from exposing the public to a risk.In addition to this framework, the General Data Protection Regulation(GDPR) gives anyone who has suffered material or non-material damage dueto an infringement of the Regulation the right to receive compensationfrom the controller or processor.Problems generalQuestion: Do you agree or disagree with the following statements? Strongly agree Agree Neutral Disagree Strongly disagree No opinionThere is uncertainty as to how the Product Liability Directive (i.e.liability for defective products) applies to damage caused by AIThere is uncertainty as to whether and how liability rules undernational law apply to damage caused by AIWhen AI operates with a high degree of autonomy, it could be difficultto link the damage it caused to the actions or omissions of a human actorIn the case of AI that lacks transparency (opacity) and explainability(complexity), it could be difficult for injured parties to prove thatthe conditions of liability (such as fault, a defect, or causation) arefulfilledBecause of AIs specific characteristics, victims of damage caused by AImay in certain cases be less protected than victims of damage thatdidnt involve AIIt is uncertain how national courts will address possible difficultiesof proof and liability gaps in relation to AIQuestion: Please elaborate on your answers or specify other grounds oflegal uncertainty regarding liability for damage caused by AI: (2000 character(s) maximum) 0 out of 2000 characters used.Question: Do you agree or disagree with the following statements? Strongly agree Agree Neutral Disagree Strongly disagree No opinionThe lack of adaptation of the current liability framework to AI maynegatively affect trust in AIThe lack of adaptation of the current liability framework to AI maynegatively affect the uptake of AI-enabled products and servicesQuestion: Please elaborate on your answers. 
You may reflect inparticular on the recently proposed AI Act and on the complementaryroles played by liability rules and the other safety-related strands ofthe Commissions AI policy in ensuring trust in AI and promoting theuptake of AI-enabled products and services: (2000 character(s) maximum) 0 out of 2000 characters used.Question: If the current liability framework is not adapted, to whatextent do you expect the following problems to occur in relation to theproduction, distribution or use of AI-enabled products or services, nowor in the foreseeable future? This question is primarily aimed atbusinesses and business associations.To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answerCompanies will face additional costs (e.g. legal information costs,increased insurance costs)Companies may defer or abandon certain investments in AI technologiesCompanies may refrain from using AI when automating certain processesCompanies may limit their cross-border activities related to theproduction, distribution or use of AI-enabled products or servicesHigher prices of AI-enabled products and servicesInsurers will increase risk-premiums due to a lack of predictability ofliability exposuresIt will not be possible to insure some products/servicesNegative impact on the roll-out of AI technologies in the internal marketQuestion: Please elaborate on your answers, in particular on whetheryour assessment is different for AI-enabled products than for AI-enabledservices (2000 character(s) maximum) 0 out of 2000 characters used.Question: With the growing number of AI-enabled products and services onthe market, Member States may adapt their respective liability regimesto the specific challenges of AI, which could lead to increasingdifferences between national liability rules. The Product LiabilityDirective could also be interpreted in different ways by national courtsfor damage caused by AI.If Member States adapt liability rules for AI in a divergent way, ornational courts follow diverging interpretations of existing liabilityrules, to what extent do you expect this to cause the following problemsin the EU? This question is primarily aimed at businesses and businessassociations.To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answerAdditional costs for companies (e.g. legal information costs, increasedinsurance costs) when producing, distributing or using AI-equippedproducts or servicesNeed for technological adaptations when providing AI-based cross-borderservicesNeed to adapt AI technologies, distribution models (e.g. 
sale versusservice provision) and cost management models in light of divergingnational liability rulesCompanies may limit their cross-border activities related to theproduction, distribution or use of AI-enabled products or servicesHigher prices of AI-enabled products and servicesInsurers will increase premiums due to more divergent liability exposuresNegative impact on the roll-out of AI technologiesQuestion: Please elaborate on your answers, in particular on whetheryour assessment is different for AI-enabled products than for AI-enabledservices, as well as on other impacts of possible legal fragmentation (2000 character(s) maximum) 0 out of 2000 characters used.Policy optionsQuestion: Due to their specific characteristics, in particular theirlack of transparency and explainability (black box effect) and theirhigh degree of autonomy, certain types of AI systems could challengeexisting liability rules.The Commission is considering the policy measures, described in thefollowing questions, to ensure that victims of damage caused by thesespecific types of AI systems are not left with less protection thanvictims of damage caused by technologies that operate without AI. Suchmeasures would be based on existing approaches in national liabilityregimes (e.g. alleviating the burden of proof for the injured party orstrict liability for the producer). They would also complement theCommissions other policy initiatives to ensure the safety of AI, suchas the recently proposed AI Act, and provide a safety net in the eventthat an AI system causes damage.Please note that the approaches to adapting the liability frameworkpresented below relate only to civil liability, not to state or criminalliability. The proposed approaches focus on measures to ease thevictims burden of proof (see next question) as well as a possibletargeted harmonisation of strict liability and insurance solutions(subsequent questions). They aim to help the victim recover damage moreeasily.Do you agree or disagree with the following approaches regarding theburden of proof? The answer options are not mutually exclusive.Regarding the Product Liability Directive, the following approachesbuild on the general options in the first part of this questionnaire. Strongly agree Agree Neutral Disagree Strongly disagree No opinionThe defendant (e.g. producer, user, service provider, operator) shouldbe obliged to disclose necessary technical information (e.g. log data)to the injured party to enable the latter to prove the conditions of theclaimIf the defendant refuses to disclose the information referred to in theprevious answer option, courts should infer that the conditions to beproven by that information are fulfilledSpecifically for claims under the Product Liability Directive: if anAI-enabled product clearly malfunctioned (e.g. driverless vehicleswerving off the road despite no obstacles), courts should infer that itwas defective and caused the damageIf the provider of an AI system failed to comply with their safety orother legal obligations to prevent harm (e.g. those proposed under theproposed AI Act), courts should infer that the damage was caused due tothat persons fault or that, for claims under the Product LiabilityDirective, the AI system was defectiveIf the user of an AI system failed to comply with their safety or otherlegal obligations to prevent harm (e.g. 
those proposed under theproposed AI Act), courts should infer that the damage was caused by thatpersons faultIf, in a given case, it is necessary to establish how a complex and/oropaque AI system (i.e. an AI system with limited transparency andexplainability) operates in order to substantiate a claim, the burden ofproof should be shifted from the victim to the defendant in that respectSpecifically for claims under the Product Liability Directive: if aproduct integrating an AI system that continuously learns and adaptswhile in operation causes damage, the producer should be liableirrespective of defectiveness; the victim should have to prove only thatthe product caused the damageCertain types of opaque or highly autonomous AI systems should bedefined for which the burden of proof regarding fault and causationshould always be on the person responsible for that AI system (reversalof burden of proof)EU action to ease the victims burden of proof is not necessary or justifiedQuestion: Please elaborate on your answers and describe any othermeasures you may find appropriate: (2000 character(s) maximum) 0 out of 2000 characters used.Question: Separately from the strict liability of producers under theProduct Liability Directive, national laws provide for a wide range ofdifferent strict liability schemes for the owner/user/operator. Strictliability means that a certain risk of damage is assigned to a personirrespective of fault.A possible policy option at EU level could be to harmonise strictliability (full or minimum), separately from the Product LiabilityDirective, for damage caused by the operation of certain AI-enabledproducts or the provision of certain AI-enabled services. This couldnotably be considered in cases where the use of AI (e.g. in autonomousvehicles and autonomous drones) exposes the public to the risk of damageto important values like life, health and property. Where strictliability rules already exist in a Member State, e.g. for cars, the EUharmonisation would not lead to an additional strict liability regime.Do you agree or disagree with the following approaches regardingliability for operating AI-enabled products and providing AI-enabledservices creating a serious injury risk (e.g. life, health, property)for the public? Strongly agree Agree Neutral Disagree Strongly disagree No opinionFull harmonisation of strict liability for operating AI-enabled productsand providing AI-enabled services, limited to cases where theseactivities pose serious injury risks to the publicHarmonisation of strict liability for the cases mentioned in theprevious option, but allowing Member States to maintain broader and/ormore far-reaching national strict liability schemes applicable to otherAI-enabled products and servicesStrict liability for operating AI-enabled products and providing ofAI-enabled services should not be harmonised at EU levelQuestion: Please elaborate on your answer, describe any other approachesregarding strict liability you may find appropriate and/or indicate towhich specific AI-enabled products and services strict liability shouldapply: (2000 character(s) maximum) 0 out of 2000 characters used.Question: The availability, uptake and economic effects of insurancepolicies covering liability for damage are important factors inassessing the impacts of the measures described in the previousquestions. Therefore, this question explores the role of (voluntary ormandatory) insurance solutions in general terms.The subsequent questions concern possible EU policy measures regardinginsurance. 
To what extent do you agree with the following statements? Strongly agree Agree Neutral Disagree Strongly disagree No opinionParties subject to possible harmonised strict liability rules asdescribed in the previous question would likely be covered by (voluntaryor mandatory) insuranceIn cases where possible facilitations of the burden of proof would apply(as described in the question on approaches to burden of proof), thepotentially liable party would likely be covered by (voluntary ormandatory) liability insuranceInsurance solutions (be they voluntary or mandatory) could limit thecosts of potential damage for the liable person to the insurance premiumInsurance solutions (be they voluntary or mandatory) could ensure thatthe injured person r | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
news | AI-Enabled E-Commerce Solutions Market worth US$ 16.8 Billion by 2030 - Exclusive Report by InsightAce Analytic | JERSEY CITY, N.J., Feb. 2, 2022 /PRNewswire/ -- The newly published report titled "Global AI-Enabled E-Commerce Solutions Market– By Trends, Industry Competition/Company Profiles Analysis, Revenue (US$ Billions) and Forecast Till 2030." features in-depth industry analysis and an extensive... | https://www.prnewswire.com/news-releases/ai-enabled-e-commerce-solutions-market-worth-us-16-8-billion-by-2030---exclusive-report-by-insightace-analytic-301473632.html | 2022-02-02T11:30:00Z | JERSEY CITY, N.J., Feb. 2, 2022 /PRNewswire/ -- The newly published report titled "Global AI-Enabled E-Commerce Solutions Market By Trends, Industry Competition/Company Profiles Analysis, Revenue (US$ Billions) and Forecast Till 2030." features in-depth industry analysis and an extensive market study, exploring its significant factors.According to InsightAce Analytic's latest market intelligence research report, the global AI-Enabled E-Commerce Solutions Market size was valued at US$ 3.71 Billion in 2021, and it is expected to reach US$ 16.8 Billion in 2030, record a promising CAGR of 15.7% from 2021 to 2030.Request Sample Report:https://www.insightaceanalytic.com/request-sample/1198Software development can be easier with ML and A.I. technologies, making predictive analytics more accurate. The AI-based platform enables a retailer to increase its sales target by reaching the right customer with critical analysis based on the information collected. E-commerce A.I. has transformed the online shopping field with features such as image search, customer-focused search, re-identifying potential customers, visual assistants, and bid data analytics. Newly developed A.I. applications consider various parameters such as purchasing history, product searches, and demographics of customers to measure future buying trends and make product recommendations based on browsing patterns, likely to drive the market growth. The fast implementation of cloud-based platforms, reduced rate of manual errors in development processes, surging use of machine learning-based applications, cost-effective procedures, rapid adoption of advanced technologies, easy access to real-time data, quick resolution of complaints through artificial intelligence-enabled chat boxes, and increasing government support for the R&D of AI-based platforms are estimated to drive the AI-enabled E-Commerce solutions market over the projected period. Additionally, the recent emergence of Covid-19 has had a significant impact on the AI-enabled E-Commerce solutions market as it has created the need for warehouse automation and management. However, factors such as the complex and time-consuming development procedures, high cost of A.I. solutions, and the lack of skilled professionals may hinder the market growth in the upcoming years. In terms of Region, North America will dominate the AI-enabled E-Commerce solutions market in upcoming years. It will continue its trend over the forecast period 2022-2030, attributed to the fast adoption of advanced technologies, rising online shopping trend, easy access to cloud applications or platforms, increasing R&D investments, and entry of new players in the market. On the other hand, in Europe, growing partnerships within key players and fast adoption of cloud-based platforms will help bloom the market. Asia-Pacific's market is going to expand faster during the forecast years. 
This growth is attributable to changing lifestyles, technology advancement specifically in Artificial Intelligence, and the increasing rate of online shopping due to digital transformations. Request for ToC/Proposal:https://www.insightaceanalytic.com/report/global-ai-enabled-e-commerce-solutions-market/1198Major players included in the AI-enabled E-Commerce solutions market are Riskified, Reflektion, Inc., Shelf.ai, Osaro, Sift, AntVoice SAS, Appier Inc, Eversight, Inc., Granify Inc., LivePerson, Inc., Manthan Software Services Pvt. Ltd., PayPal, Inc., Sidecar Interactive, Inc., Tinyclues SAS, Twiggle Ltd., Celect, Inc., Cortexica Vision Systems Ltd., Crobox B.V., Deepomatic SAS, Dynamic Yield Ltd., Emarsys eMarketing, Systems AG, Satisfi Labs, Inc., Staqu Technologies Pvt. Ltd., ViSenze Pte Ltd., and Other Prominent Players. Product innovations, acquisitions, collaborations, partnerships, and R&D activities are key strategies used by players in this market.In July 2021, LivePerson, Inc. acquired German conversational A.I. company e-bot7. The strategic acquisition propels LivePerson's self-service capabilities empowering brands of all sizes to quickly launch AI-powered messaging experiences as well as its continued growth across Europe. In February 2021, LivePerson, Inc. launched A.I. Annotator, a new tool automating brand-consumer conservations faster than ever by harnessing agents' expertise to improve Conversational A.I. In August 2019, Nike acquired Boston-based predictive analytics company Celect (A.I. platform), marking its latest acquisition in a string of deals to bolster its direct-to-consumer strategy. This acquisition allowed Nike to integrate their inventories with the website and app of the company. In August 2017, PayPal launched new Innovation Labs at the Chennai and Bangalore Tech centres. The lab is the first by PayPal in India and it is third after the USA and Singapore. The lab works as a platform to promote innovation which is a core value for PayPal globally. The labs support Machine Learning, A.I., Data Science, IoT, AR, V.R., and basic robotics projects.Get the report? 
@ https://www.insightaceanalytic.com/customisation/1198AI-Enabled E-Commerce Solutions Market SegmentsAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on TechnologyDeep Learning Machine Learning NLPAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on ApplicationsCustomer Relationship Management Supply Chain Analysis Fake Review Analysis Warehouse Automation Sorting and Placing Inventory StorageMerchandizing Facets and Filter Selection Multi Device InteractionProduct Recommendation Customer Service AI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on DeploymentAI-Enabled E-Commerce Solutions Market Size (Value US$ Bn) & Forecasts and Trend Analyses, 2022 to 2030 based on RegionNorth AmericaEuropeAsia PacificLatin AmericaMiddle East & AfricaNorth America AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030Europe AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030GermanyFranceItalySpainRussiaRest of EuropeAsia Pacific AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030IndiaChinaJapanSouth KoreaAustralia & New ZealandLatin America AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030MexicoRest of Latin AmericaThe Middle East & Africa AI-enabled E-Commerce solutions market revenue (US$ Million) by Country, 2020 to 2030South AfricaRest of the Middle East & AfricaWhy should buy this report:To receive a comprehensive analysis of the prospects for the global AI-enabled E-Commerce solutions market To receive an industry overview and future trends of the AI-enabled E-Commerce solutions market To analyze the AI-enabled E-Commerce solutions market drivers and challenges To get information on AI-enabled E-Commerce solutions market size value (US$ Mn) forecast till 2030 Significant investments, mergers & acquisitions in the AI-enabled E-Commerce solutions market industryFor More Information @https://www.insightaceanalytic.com/enquiry-before-buying/1198Other Related Reports Published by InsightAce Analytic:Global Next-Generation Personalized Beauty MarketGlobal Artificial Intelligence (A.I.) In Beauty and Cosmetics MarketGlobal Artificial Intelligence in Genomics MarketAbout Us:InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions inform the need for market and competitive intelligence to expand businesses. We help clients gain a competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets, and repositioning products. Our expertise is in providing syndicated and custom market intelligence reports with an in-depth analysis with key market insights in a timely and cost-effective manner.Contact Us:Priyanka TilekarInsightAce Analytic Pvt. Ltd.Tel : +1 551 226 6109Asia: +91 79 72967118Visit: www.insightaceanalytic.comEmail: [email protected] Follow Us on LinkedIn @ bit.ly/2tBXsgSFollow Us On Facebook @ bit.ly/2H9jnDZSOURCE InsightAce Analytic Pvt. Ltd. | Personalization/Recommendation | Business and Financial Operations/Sales and Related | null | null | null | null | null | null |
||
news | AE Studio Wins First Place in Neural Latents Benchmark Challenge | VENICE, Calif., Jan. 12, 2022 /PRNewswire/ -- AE Studio, a team of developers, designers, and data scientists, has earned the recognition as the world's top machine learning team in BCI (Brain Computer Interface) following its win in the Neural Latents Benchmark Challenge. Among its... | https://www.prnewswire.com/news-releases/ae-studio-wins-first-place-in-neural-latents-benchmark-challenge-301459430.html | 2022-01-12T13:14:00Z | VENICE, Calif., Jan. 12, 2022 /PRNewswire/ -- AE Studio, a team of developers, designers, and data scientists, has earned the recognition as the world's top machine learning team in BCI (Brain Computer Interface) following its win in the Neural Latents Benchmark Challenge. Among its entrants were six of the finest neurological research labs on the planet.AE Studio is developing the software component of BCI machine learning algorithms to interpret brain activity (neural decoding). By collaborating with research groups to advance the state of the art (and participating in machine learning competitions), AE Studio is amplifying the impact of the world's best research labs and providing better algorithms, better software, and better standards. "We believe BCI will be how humans and computers interact in the years to come. Developing BCI technology to increase human agency has always been a goal since AE was created, and we're excited that we're getting exponentially better, faster than we could have imagined," said Judd Rosenblatt, CEO at AE Studio. "To support the great work of those pushing the boundaries of knowledge and the future of Brain Computer Interfaces, AE will be reinvesting any prize money back into the neuroscience community. We're discussing with the NLB team and academic research groups, so look for another announcement shortly!"The competition featured five primary datasets on which teams' models were tested. Each model was deployed on neural data in different regions of a monkey's brain from electrode arrays implanted in the motor cortex. More detail available here:MC_Maze: Delayed reaching into a maze to chart a path to a target MC_RTT: Self-paced reaching, moving an arm towards a square within a grid Area2_Bump: Sensory responses while the monkey attempts to reach towards a target on a screen, while sometimes its arm is bumped DMFC_RSG: The monkeys attempting to reproduce a time interval between two stimuli with their hand or eye movements (in other words, imagine "Ready", "Set", "Go" while trying to match the time between "Set" and "Go" with the time between "Ready" and "Set") MC_Maze Scaling: Smaller versions of the first dataset used to test how models perform with limited amounts of dataAE's data science team offers the world's best models for extracting patterns from neurological data, as proven by the victory in the neural latents benchmark challenge (NLB). "BCI technology will one day enable human beings to interact with computers in entirely new ways beyond keyboards and mice," said Darin Erat Sleiter, Senior Data Scientist at AE Studio. "This is a small step in an exponentially-accelerating field that needs to have human agency as the focus of every discussion and advance."AE (Agency Enterprise) is currently hiring the best machine learning engineers and neuroscientists on planet earth to maximize human agency. 
Advances in BCI mean strides in improving mental health, transcription fluency, restoration of agency for the paralyzed, and most importantly, inspiration for the future tech giants to place human agency at the center of their objectives.About AE StudioAE Studio specializes in working with growing startups and enterprises to launch and rapidly develop new products and startup MVPs, increase revenue by expanding existing feature sets, or integrate cutting-edge data science and machine learning into products. AE is a team of seasoned designers, developers, product managers, and data scientists who work with companies closely to reach their next inflection point - whether it's raising capital, partnerships, creating business efficiency, or launching a new product or initiative. For more information visit: www.ae.studio.Press ContactRick Medeiros[email protected](510) 556-8517SOURCE AE Studio | Decision Making/Discovery/Content Synthesis | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
||
news | Aratrika Dutta | Top 10 Cybersecurity Companies Using AI to the Fullest in 2022 - Analytics Insight | AI and machine learning can help augment a company's cybersecurity by constantly monitoring for any suspicious activity and correcting the problem before it takes effect. Explore the top 10 cybersecurity companies using AI to the fullest in 2022. | https://www.analyticsinsight.net/top-10-cybersecurity-companies-using-ai-to-the-fullest-in-2022/ | 2022-01-11T14:21:00Z | Here's an elite group of innovative cybersecurity companies building AI into products in order to defeat attackers and win customers. The emergence of IoT devices with the integration of cutting-edge technologies like artificial intelligence and computer vision has driven significant growth in cybersecurity measures. Multiple cybersecurity companies are gaining popularity for helping companies combat cyberattacks. There are different cybersecurity companies using AI that can protect internet-connected systems or other IoT devices. AI and machine learning can help augment a company's cybersecurity by constantly monitoring for any suspicious activity and correcting the problem before it takes effect. Let's explore some of the top cybersecurity companies using AI to the fullest in 2022.
CrowdStrike is a relatively new name in the cybersecurity market. The business started up in 2011 and is officially called CrowdStrike Holdings, Inc. Its key security system is called CrowdStrike Falcon, and this combines both cloud and on-device elements. The secret weapon of the CrowdStrike Falcon system is an AI-based detection system, known as user and entity behaviour analytics (UEBA). The UEBA concept is one of the major innovations that has thrust the system security industry forward, escaping the flawed AV detection model that had started to let too many new viruses onto devices.
Darktrace developed its enterprise immune system as a platform for all of its cybersecurity products. EIS uses AI methodologies and populates status rule bases through unsupervised machine learning. The first thing that EIS needs to do when installed on a network is to establish a baseline of normal activity. This is termed the pattern of life in Darktrace terminology. Traffic patterns for each network, the activity of each device on the network, and the behaviour of each user are modelled to provide this record of standard conduct.
Spun off from SAP in 2005, SAP NS2 uses data analytics and fusion technologies from SAP and applies them to cybersecurity, working with a number of US security agencies and corporations. Their AI and ML technology helps national security professionals process troves of data and protect sensitive information passing through a variety of locales. In addition to their work with defence industry customers, SAP NS2 systems also handle the hard work of securing supply chains, which often involves dozens of companies operating in a variety of scenarios. The company also uses AI and machine learning to protect cloud platforms for a number of different customers.
Vade Secure is one of the world's leading email defence companies, deploying artificial intelligence and machine learning to protect more than 600 million mailboxes in 76 countries from a variety of threats including spear phishing, ransomware, and malware. 
With the funding, we will continue to invest in our AI-based threat detection engine and build on Vade's leadership in email security for ISPs.
Cynet deploys AI in its network threat detection systems that examine threats and act on them automatically. The ethos at Cynet is to make advanced threat protection as straightforward as running any system monitoring package. Cynet has one product, called Cynet 360. This is a complete cybersecurity system that includes AV endpoint protection through device detection, threat prediction, user behavior modelling, and vulnerability management.
Webroot harnesses the power of AI to stop zero-day threats in real-time, securing businesses across the globe with threat intelligence, and providing protection for endpoints as well as networks. The company uses machine learning to gain more insight into specifically why certain attacks are bad, in an effort to expand its understanding of the threat landscape.
FireEye was founded in 2004 and specialized in threat research and recovery consultancy services. This is a labor-intensive field of work and didn't make the company any money. Through innovation and acquisition, the company has moved into the production of cybersecurity tools that use AI to monitor networks and spot anomalies. This strategy, together with moving from a fee-based structure to a subscription Software-as-a-Service, has made the business profitable and turned what was beginning to look like an overrated novelty into a sought-after investment.
Callsign uses AI and ML to validate a person's identity just from a swipe on a touchscreen, number of keystrokes on the keyboard, number of locations, and other activities. The company's trademark platform, intelligence-driven authentication, combines multi-factor authentication and fraud analytics powered by deep learning technology to fight against fraudulent activity, from identity fraud to SMS phishing.
Founded on the belief that deep learning will fundamentally change cybersecurity, Blue Hexagon offers customers real-time network threat protection that can deliver threat detection in less than a second. Blue Hexagon uses AI to create malware based on global threat data and the dark web, all in an effort to test its own systems and push its capabilities to the limit. Blue Hexagon's systems work in networks and in the cloud, covering a variety of threats across a multitude of different platforms.
Cylance started out as an independent cybersecurity company, but since November 2018, it has been a division of BlackBerry Limited. Cylance began its existence in 2012 at a base in Irvine, California. It is reputed to be the first cybersecurity protection provider to apply AI to its system. The company became a leader in the field of IPS. Key early backers included Dell Ventures, CapitalOne Ventures, and Insight Venture Partners. | Detection and Monitoring/Process Automation | Computer and Mathematical | null | null | null | null | null | null
|
news | Ankura Adds Transformative Data and Technology Capability with Acquisition of Noragh Analytics | NEW YORK, Feb. 9, 2022 /PRNewswire/ -- Ankura Consulting Group, LLC, an independent global expert services and advisory firm, today announced that it has acquired Noragh Analytics, a globally recognized leader in advanced data analytics and the enabling of machine learning and artificial... | https://www.prnewswire.com/news-releases/ankura-adds-transformative-data-and-technology-capability-with-acquisition-of-noragh-analytics-301478788.html | 2022-02-09T14:10:00Z | NEW YORK, Feb. 9, 2022 /PRNewswire/ -- Ankura Consulting Group, LLC, an independent global expert services and advisory firm, today announced that it has acquired Noragh Analytics, a globally recognized leader in advanced data analytics and the enabling of machine learning and artificial intelligence to actionable use of complex data. Noragh is a proven leader in delivering market solutions related to some of the world's most challenging business issues and further positions Ankura as a leading innovator in addressing the needs of clients both today and in the future. The Noragh proprietary platforms gather both structured and unstructured data from internal and external data sources into comprehensive database networks. This enables a collection of information and data solutions that combine analytic and data management technology with artificial intelligence, machine learning and graph analytics to deliver an unmatched ability to transform business processes, address fraud and predict behavior, patterns and outcomes. The acquisition complements and expands Ankura's market-leading Data and Technology offerings and places the Firm squarely at the forefront of the application of transformative artificial intelligence and machine learning solutions for business and commercial application. Noragh Analytics was founded by retired Four Star Admiral and former U.S. Navy Commander in Chief of the Atlantic Fleet, William "Bud" Flanagan, Jr. Admiral Flanagan and his team have vast experience in bringing successful analytic solutions enabled by their technology platforms, in both the government and commercial business sectors. Noragh's capabilities provide real-time actionable analytics that are saving the insurance industry hundreds of millions of dollars annually in fraud avoidance. The platforms' methods of applying data solutions enables artificial intelligence technology and expands Ankura's ability to generate key insights from big data to deliver solutions for clients facing complex, cross-disciplinary issues. "Acquiring Noragh Analytics marks an exciting milestone in the continued growth of Ankura's advanced data analytics and technology enabled consulting capabilities worldwide. As we continue to push the envelope on innovation, the Noragh platform allows us to find answers and provide solutions within data that our competitors simply cannot reach. Today's announcement also reflects our ongoing commitment to delivering the best cutting-edge technology and solutions to clients for their most pressing challenges," said Kevin Lavin, Ankura's Chief Executive Officer."I'm very excited that the Noragh team and these game changing solutions are becoming part of Ankura's global technologies solutions offering," said Olaf Larson, Senior Managing Director and leader of Ankura's Data and Technology Business Group. 
"These innovative platforms and associated customized solutions will further expand our leading position in bringing advanced data and analytics solutions to bear on our clients' toughest challenges as we continue building upon the expansion of our data analytics and technology enabled consulting capabilities in the Americas, Europe, Asia Pacific, the Middle East and Africa.""What started off as a critical initiative to help the intelligence community fight terror has grown to become one of the most advanced analytics companies of its kind," said Admiral Flanagan. "I'm looking forward to unleashing the combined talent of Noragh and Ankura to bring the power of our platforms to bear for our clients in the commercial and government marketplaces. Ankura's ability to leverage our platforms to solve complex problems, mitigate risk, inform strategy and create competitive advantage for clients will be world-leading and strongly positions us for the future."To learn more about the functionality of the Noragh platforms, visit: https://noragh.com/. Sidley Austin LLP served as legal advisor to Noragh Analytics. Davis Polk & Wardwell LLP served as legal advisor, together with Jayaram Law, Inc as special IP legal advisor to Ankura.About AnkuraAnkura Consulting Group, LLC is an independent global expert services and advisory firm that delivers services and end-to-end solutions to help clients at critical inflection points related to change, risk, disputes, finance, performance, distress, and transformation. The Ankura team consists of more than 1,700 professionals serving 3000+ clients across 55 countries who are leaders in their respective fields and areas of expertise. Collaborative lateral thinking, hard-earned experience, expertise, and multidisciplinary capabilities drive results and Ankura is unrivaled in its ability to assist clients to Protect, Create and Recover Value. For more information, please visit, www.ankura.com. SOURCE Ankura | Content Synthesis/Prediction/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null |
||
news | Hammer Of The Gods launches low-code platform for secure edge computing | PALO ALTO, Calif., Feb. 8, 2022 /PRNewswire/ -- Hammer Of The Gods (HOT-G) launched Hammer Forge, the low-code deployment and orchestration platform for secure edge computing and embedded machine learning. While there has been an acceleration of cloud developer tools for Machine Learning... | https://www.prnewswire.com/news-releases/hammer-of-the-gods-launches-low-code-platform-for-secure-edge-computing-301476231.html | 2022-02-08T14:00:00Z | For the past year the HOT-G team has been focused on the closed beta of their developer experience tools and has been working with the community of open source devs, students at several prominent hackathons, and early adopters in the areas of: securing edge deployments, reliable builds for the edge devices, and MLOps for the edge devices.
We are at an inflection point where enterprises with sufficient data are now building machine learning models, making them their IP. The data will be kept private in their environments, but distributing and safeguarding their IP (models) is still a hard problem. In this regard, Hammer Forge also serves as a licensable cloud native SecOps platform for enterprises to secure, protect, and deploy their IP on devices.
"The developer experience to deploy models on the Edge requires you to know several different frameworks, from React-Native, Obj-C/Gradle, C++, Tensorflow, Python and more. With Hammer Forge, we are making EdgeML development accessible to more developers with a No-Code environment." - Co-Founder and CTO, Kartik Thakore.
Hammer Forge is based on HOT-G's open source framework called Rune, which runs on the edge devices.
"Demand to move to the edge is surging from the need to address privacy, security, latency, and growth of capable hardware, but we are still at the very beginning of this Big Bang stage. HOT-G is looking ahead by solving problems we will run into managing secure production edge computing workloads while still making the developer experience simple and awesome." - Co-Founder and CEO, Akshay Sharma.
About Hammer of the Gods (HOT-G)
Based out of Palo Alto, CA, HOT-G is building the distributed infrastructure and developer experience tools paving the way for secure production-ready edge computing deployments. The founding team previously built several companies, including most recently doc.ai, a Palo Alto-based healthcare AI company which was acquired by Sharecare Inc. in 2021. The team previously developed federated learning, privacy preserving technologies, and zero-trust security infrastructure for healthcare.
The company has raised money from Quiet Capital, Amino Capital, S2 Capital, and other prominent Silicon Valley angel investors.
[1] The edge computing market size is expected to grow from USD
SOURCE Hammer Of The Gods | Process Automation/Decision Making | Computer and Mathematical | null | null | null | null | null | null
||
news | Kyle Wiggers | AI models are becoming better at answering questions, but they’re not perfect | The Allen Institute has released a model called Macaw that can answer questions ostensibly more accurately than OpenAI's GPT-3. | https://venturebeat.com/2022/01/21/ai-models-are-becoming-better-at-answering-questions-but-theyre-not-perfect/ | 2022-01-21T14:30:34Z | Late last year, the Allen Institute for AI, the research institute founded by the late Microsoft cofounder Paul Allen, quietly open-sourced a large AI language model called Macaw. Unlike other language models that've captured the public's attention recently (see OpenAI's GPT-3), Macaw is fairly limited in what it can do, only answering and generating questions. But the researchers behind Macaw claim that it can outperform GPT-3 on a set of questions, despite being an order of magnitude smaller.
Answering questions might not be the most exciting application of AI. But question-answering technologies are becoming increasingly valuable in the enterprise. Rising customer call and email volumes during the pandemic spurred businesses to turn to automated chat assistants; according to Statista, the size of the chatbot market will surpass $1.25 billion by 2025. But chatbots and other conversational AI technologies remain fairly rigid, bound by the questions that they were trained on.
Today, the Allen Institute released an interactive demo for exploring Macaw as a complement to the GitHub repository containing Macaw's code. The lab believes that the model's performance and practical size (about 16 times smaller than GPT-3) illustrate how large language models are becoming commoditized into something much more broadly accessible and deployable.
Answering questions
Built on UnifiedQA, the Allen Institute's previous attempt at a generalizable question-answering system, Macaw was fine-tuned on datasets containing thousands of yes/no questions, stories designed to test reading comprehension, explanations for questions, and school science and English exam questions. The largest version of the model (the version in the demo and that's open-sourced) contains 11 billion parameters, significantly fewer than GPT-3's 175 billion parameters.
Given a question, Macaw can produce an answer and an explanation. If given an answer, the model can generate a question (optionally a multiple-choice question) and an explanation. Finally, if given an explanation, Macaw can give a question and an answer.
"Macaw was built by training Google's T5 transformer model on roughly 300,000 questions and answers, gathered from several existing datasets that the natural-language community has created over the years," the Allen Institute's Peter Clark and Oyvind Tafjord, who were involved in Macaw's development, told VentureBeat via email. "The Macaw models were trained on a Google cloud TPU (v3-8). The training leverages the pretraining already done by Google in their T5 model, thus avoiding a significant expense (both cost and environmental) in building Macaw. From T5, the additional fine-tuning we did for the largest model took 30 hours of TPU time."
Above: Examples of Macaw's capabilities. Image Credit: Allen Institute
In machine learning, parameters are the part of the model that's learned from historical training data. 
Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. But Macaw punches above its weight. When tested on 300 questions created by Allen Institute researchers specifically to break Macaw, Macaw outperformed not only GPT-3 but the recent Jurassic-1 Jumbo model from AI21 Labs, which is even larger than GPT-3.
According to the researchers, Macaw shows some ability to reason about novel hypothetical situations, allowing it to answer questions like "How would you make a house conduct electricity?" with "Paint it with a metal paint." The model also hints at awareness of the role of objects in different situations and appears to know what an implication is, for example answering the question "If a bird didn't have wings, how would it be affected?" with "It would be unable to fly."
But the model has limitations. In general, Macaw is fooled by questions with false presuppositions like "How old was Mark Zuckerberg when he founded Google?" It occasionally makes errors answering questions that require commonsense reasoning, such as "What happens if I drop a glass on a bed of feathers?" (Macaw answers "The glass shatters"). Moreover, the model generates overly brief answers; breaks down when questions are rephrased; and repeats answers to certain questions.
The researchers also note that Macaw, like other large language models, isn't free from bias and toxicity, which it might pick up from the datasets that were used to train it. Clark added: "Macaw is being released without any usage restrictions. Being an open-ended generation model means that there are no guarantees about the output (in terms of bias, inappropriate language, etc.), so we expect its initial use to be for research purposes (e.g., to study what current models are capable of)."
Implications
Macaw might not solve the current outstanding challenges in language model design, among them bias. Plus, the model still requires decently powerful hardware to get up and running; the researchers recommend 48GB of total GPU memory. (Two of Nvidia's 3090 GPUs, which have 24GB of memory each, cost $3,000 or more, not accounting for the other components needed to use them.) But Macaw does demonstrate that, to the Allen Institute's point, capable language models are becoming more accessible than they used to be. GPT-3 isn't open source, but if it was, one estimate pegs the cost of running it on a single Amazon Web Services instance at a minimum of $87,000 per year.
Macaw joins other open source, multi-task models that have been released over the past several years, including EleutherAI's GPT-Neo and BigScience's T0. DeepMind recently showed a model with 7 billion parameters, RETRO, that it claims can beat others 25 times its size by leveraging a large database of text. Already, these models have found new applications and spawned startups. Macaw and other question-answering systems like it could be poised to do the same.
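To make the question-answering interface concrete, here is a minimal sketch of querying a Macaw checkpoint through the Hugging Face transformers library. The checkpoint name ("allenai/macaw-large") and the slot-style prompt format are assumptions based on the Allen Institute's public release, not details taken from this article; treat it as an illustration rather than the institute's reference usage.

```python
# Minimal sketch: querying a Macaw checkpoint via Hugging Face transformers.
# Assumptions (not from the article): the checkpoint id "allenai/macaw-large"
# and the "$answer$ ; $question$ = ..." slot format used by the public release.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

# Ask for an answer slot, given a question slot.
prompt = "$answer$ ; $question$ = If a bird didn't have wings, how would it be affected?"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the same spirit, the other directions described above (generating a question from an answer, or a question and answer from an explanation) would just swap which slots are requested in the prompt.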
| Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null
|
news | Omega Venture Partners Closes $115+ Million VC Fund to Invest in Fast Growing AI-Enabled Companies that Solve High-Value Business Problems | PALO ALTO, Calif., Feb. 9, 2022 /PRNewswire/ -- Omega Venture Partners, a premier technology venture capital firm that partners with top entrepreneurs and management teams to back category-defining businesses benefitting from the convergence of powerful megatrends across Artificial... | https://www.prnewswire.com/news-releases/omega-venture-partners-closes-115-million-vc-fund-to-invest-in-fast-growing-ai-enabled-companies-that-solve-high-value-business-problems-301478803.html | 2022-02-09T14:36:00Z | PALO ALTO, Calif., Feb. 9, 2022 /PRNewswire/ -- Omega Venture Partners, a premier technology venture capital firm that partners with top entrepreneurs and management teams to back category-defining businesses benefitting from the convergence of powerful megatrends across Artificial Intelligence, Automation, Data, and Digitization, today announced that it has closed a new fund.In 2021 Omega Venture Partners completed twelve investments ten into new portfolio companies as well as participating in the follow-on financings of two existing companies. The firm has seen strong momentum across its portfolio, in particular with DataRobot, Language I/O, ZenBusiness, and Verbit raising meaningful up-round financings above its original investment cost basis. So far in the first quarter of 2022, Omega has added several compelling new investments to its portfolio, including Elemental Machines, Superside, Centaur Diagnostics, and NLUDB."Omega Venture Partners is an excellent partner for our company," said Ross Buhrdorf, CEO of ZenBusiness, Inc. "The resources, contacts, and exceptional knowledge that Omega brings to the table is truly outstanding. As experienced investors, operators, and subject matter experts, Omega understands what it takes to successfully finance and help growing startups maximize their potential and prosper."Investing at the inflection point of a major platform evolutionOmega is capitalizing on the next pioneering innovation cycle catalyzed by AI, ML, automation, and digital transformation. Omega's market-centric investment focus of investing in intelligent software businesses that solve major challenges with broad applicability and real ROI represents the sweet-spot of the AI market, because there is both widespread commercial demand and a compelling, demonstrable customer value proposition.With venture-backed AI companies anticipated to create hundreds of billions of dollars of market cap, credible third parties view AI as the most promising area in technology investing for the foreseeable future.Omega's new fund exceeded the original $100 million targeta decision the firm made with deliberation as to maintaining its 'right-size' approach while accommodating investors who it believes add to Omega's distinctive Limited Partner ecosystem."The application of Artificial Intelligence to deliver breakthrough business solutions and to unleash vast revenue growth opportunities is the most lucrative and exciting secular trend I have seen in my 16-plus years of leading venture capital firms," said Gaurav Tewari, Managing Partner at Omega Venture Partners. 
"Across the intelligent software space, Omega is thrilled to partner with some of the most innovative and fastest growing companies, who value our market knowledge, expertise, network, and strategic relationships to help them succeed.Omega also partners with preeminent authorities from MIT, combining deep domain expertise with a successful track record in technology investing. The fast-growing venture firm also relies upon a trusted network of operating leaders from Fortune 500 companies who augment deal access and sharpen investment outcomes.About Omega Venture Partners Omega Venture Partners is a premier technology investment firm headquartered in Silicon Valley. Omega invests in rapidly growing software businesses that leverage artificial intelligence (AI), machine learning (ML), data, and automation to deliver transformative solutions. The firm employs a thematic investment strategy to identify large market opportunities and invest in the next generation of category-defining companies. Visit: https://www.omegavp.com/SOURCE Omega Venture Partners | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
||
news | Louis Bouchard | Realistic Face Manipulation in Videos With AI | HackerNoon | You've most certainly seen movies like the recent Captain Marvel or Gemini Man where Samuel L Jackson and Will Smith appeared to look like they were much younger. This requires hundreds if not thousands of hours of work from professionals manually editing the scenes they appeared in. Instead, you could use a simple AI and do it within a few minutes. | https://hackernoon.com/realistic-face-manipulation-in-videos-with-ai | 2022-01-30T00:00:00Z | You've most certainly seen movies like the recent Captain Marvel or Gemini Man where Samuel L Jackson and Will Smith appeared to look like they were much younger. This requires hundreds if not thousands of hours of work from professionals manually editing the scenes they appeared in. Instead, you could use a simple AI and do it within a few minutes.
Indeed, many techniques allow you to add smiles, make you look younger or older, all automatically using AI-based algorithms. It is called AI-based face manipulation in videos, and here's the current state-of-the-art in 2022! Learn more in the video.
Video Transcript
You've most certainly seen movies like the recent Captain Marvel or Gemini Man where Samuel L Jackson and Will Smith appear to look like they were much younger. This requires hundreds if not thousands of hours of work from professionals manually editing the scenes they appeared in. Instead, you could use a simple AI and do it within a few minutes. Indeed, many techniques allow you to add smiles, make you look younger or older, all automatically using AI-based algorithms. They are mostly applied to images since it's much easier, but the same techniques, with small tweaks, can be applied to videos, which, as you may suspect, is quite promising for the film industry. And by the way, the results you've been seeing were all made using the technique I will discuss in this video.
The main problem is that currently these generated older versions and edited images not only seem weird but, when used in a video, will have glitches and artifacts you surely do not want in a million-dollar movie. This is because it's much harder to get videos of people than pictures, making it even harder to train such AI models that require so many different examples to understand what to do. This strong data dependency is one of the reasons why current AI is far from human intelligence. This is why researchers like Rotem Tzaban and collaborators from Tel Aviv University work hard to improve the quality of automatic AI video editing without requiring so many video examples, or, more precisely, to improve AI-based face manipulations in high-quality talking-head videos using models trained with images. It doesn't require anything except the single video you want to edit, and you can add a smile, make you look younger or even older. It even works with animated videos. This is so cool, but what's even better is how they achieve that.
But before doing so, let me talk about the sponsor of this video. Um, there are no sponsors for this video, so if you could just take a second to give it a thumbs up and leave a comment about what you think of the model, or how you'd apply it after watching the video, or even how you feel today, that would be amazing, and I can promise you I will answer within 12 minutes. You can time it.
So how does it work? Of course, it uses GANs, or generative adversarial networks. I won't go into the inner workings of GANs since I already covered them in a video that you can watch right here and linked below, but we will see how this is different from a basic GAN architecture. If you are not familiar with GANs, just take a minute to watch the video and come back; I'll still be there waiting for you, and I'm not exaggerating, the video literally takes one minute to get a simple overview of what GANs are. We will just refresh the part where you have a generative model that takes an image, or rather an encoded version of the image, and changes this code to generate a new version of the image, modifying specific aspects if possible.
Controlling the generation is the challenging part, as it has so many parameters and it's really hard to find which parameters are in charge of what and disentangle everything to only edit what you want. So it uses a generative-based architecture such as StyleGAN in this case, which is simply a powerful GAN architecture for images of faces published by NVIDIA a few years ago, with still very impressive results and newer versions. But the generative model itself isn't that important, as it can work with any powerful GAN architecture you can find. And yes, even if these models are all trained with images, they will use them to perform video editing, assuming that the video you will send is realistic and already consistent. They will simply focus on maintaining realism rather than creating a real consistent video, as we have to do in video synthesis work where we create new videos out of the blue. So each image will be processed individually instead of sending a whole video and expecting a new video in return. This assumption makes the task way simpler, but there are more challenges to face, like maintaining such a realistic video where each frame fluently goes to the next without apparent glitches.
Here they will take each frame of the video as an input, extract only the face, and align it for consistency, which is an essential step as we will see. Then they will use their pre-trained encoder and generator to encode the frames and produce new versions for each. Unfortunately, this wouldn't fix the realism problem where the new faces may look weird or out of place when going from one frame to another, as well as weird lighting bugs and background differences that may appear. To fix that, they will further train the initial generator and use it to help make the generations across all frames more similar and globally coherent. They also introduce two other steps: an editing step and a new operation that they call stitching tuning. The editing step will simply take the encoded version of the image and change it just a bit. This is the part where it will learn to change it just enough to make the person look older, in this case. So the model will be trained to understand which parameters to move and how much to modify the right features of the image to make the person look older, like adding some gray hair, adding wrinkles, etc. Then this stitching tuning model will take the encoded image you see here and will be trained to generate the image from the edited code that will best fit the background and other frames. It will achieve that by taking the newly generated image, comparing it with the original image, and finding the best way to replace only the face using a mask and keep the rest of the cropped image unchanged. Finally, we paste the modified face back onto the frame.
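To summarize the pipeline just described, here is an illustrative, pseudocode-style sketch of the per-frame steps (crop and align, encode, edit the latent code, generator tuning, stitching with a mask, paste back). Every helper name below is a hypothetical placeholder rather than the authors' actual API; the real implementation is in the STIT repository linked in the references.

```python
# Illustrative pseudocode-style sketch of the per-frame editing pipeline described above.
# All helpers (align_and_crop_face, finetune_on_video, stitching_tune, paste_back) are
# hypothetical placeholders, NOT the authors' API -- see the STIT repo for the real code.

def edit_video(frames, encoder, generator, edit_direction, strength=1.0):
    """Apply a semantic edit (e.g., aging) to every frame of a talking-head video."""
    crops = [align_and_crop_face(frame) for frame in frames]    # 1. crop + align each face
    latents = [encoder(crop.image) for crop in crops]           # 2. encode each crop into latent space
    generator = finetune_on_video(generator, crops, latents)    # 3. tune the generator so frames stay coherent

    edited_frames = []
    for crop, w in zip(crops, latents):
        w_edited = w + strength * edit_direction                # 4. editing step: nudge the latent code
        face = stitching_tune(generator, w_edited, crop.image)  # 5. stitching tuning: regenerate and blend
                                                                #    the edited face into the crop with a mask
        edited_frames.append(paste_back(face, crop))            # 6. paste the edited crop back onto the frame
    return edited_frames
```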
This process is quite clever and allows for the production of really high-quality videos, since you only need the cropped and aligned face in the model, incredibly reducing the computation needs and complexity of the task. So even if the face needs to be small, let's say 200 pixels square, if it's only a fifth of the image, as you can see here, you can keep a pretty high-resolution video. And voila, this is how these great researchers perform high-quality face manipulation in videos.
I hope you enjoyed this video. Let me know how you feel about this one, if you liked it or not; any feedback will be amazing. This is the last opportunity you have to make my day by clicking the like button and leaving a comment before you go. Of course, the links to the paper and code are in the video's description. Note that the code will only be released on February 14th, as per the author's GitHub. Thank you for watching.
References
Read the full article: https://www.louisbouchard.ai/stitch-it-in-time/
What are GANs? Short video introduction: https://youtu.be/rt-J9YJVvv4
Tzaban, R., Mokady, R., Gal, R., Bermano, A.H. and Cohen-Or, D., 2022. Stitch it in Time: GAN-Based Facial Editing of Real Videos. https://arxiv.org/abs/2201.08361
Project link: https://stitch-time.github.io/
Code: https://github.com/rotemtzaban/STIT
My Newsletter (A new AI application explained weekly to your emails!): https://www.louisbouchard.ai/newsletter/
 | Process Automation | Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null
|
news | Quintin Pope | Hypothesis: gradient descent prefers general circuits | Published on February 8, 2022 9:12 PM GMTSummary: I discuss a potential mechanistic explanation for why SGD might prefer general circuits for generating model outputs. I use this preference to explain how models can learn to generalize even after reaching near zero training error through overfitting (i.e., grokking). I also discuss other models of grokking and generalization.Additionally, I discuss potential experiments to confirm or reject my hypothesis. I suggest that a tendency to unify many shallow patterns into fewer general patterns is a core feature of effective learning systems, potentially including humans and future AI, and briefly address implications to AI alignment.Epistemic status: I think the hypothesis I present makes a lot of sense and is probably true, but I haven't confirmed things experimentally. Much of my motive for post this is to clarify my own thinking and get feedback on the best ways to experimentally validate this perspective on ML generalization.Context about circuits: This post assumes the reader is familiar with and accepts the circuits perspective on deep learning. See here for a discussion of circuits for CNN vision models and here for a discussion of circuits for transformer NLP models.Evidence from grokkingThe paper "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" uses stochastic gradient descent (SGD) to train self attention based deep learning models on different modular arithmetic expressions (e.g., f(x,y)=x×y (mod p).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}.mjx-numerator {display: block; text-align: center}.mjx-denominator {display: block; text-align: center}.MJXc-stacked {height: 0; position: relative}.MJXc-stacked > * {position: absolute}.MJXc-bevelled > * {display: inline-block}.mjx-stack {display: inline-block}.mjx-op {display: block}.mjx-under {display: table-cell}.mjx-over {display: block}.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}.mjx-stack > .mjx-sup {display: block}.mjx-stack > .mjx-sub {display: block}.mjx-prestack > .mjx-presup {display: block}.mjx-prestack > .mjx-presub {display: block}.mjx-delim-h > .mjx-char {display: inline-block}.mjx-surd {vertical-align: top}.mjx-surd + .mjx-box {display: inline-flex}.mjx-mphantom * {visibility: hidden}.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}.mjx-annotation-xml {line-height: normal}.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: 
visible}.mjx-mtr {display: table-row}.mjx-mlabeledtr {display: table-row}.mjx-mtd {display: table-cell; text-align: center}.mjx-label {display: table-row}.mjx-box {display: inline-block}.mjx-block {display: block}.mjx-span {display: inline}.mjx-char {display: block; white-space: pre}.mjx-itable {display: inline-table; width: auto}.mjx-row {display: table-row}.mjx-cell {display: table-cell}.mjx-table {display: table; width: 100%}.mjx-line {display: block; height: 0}.mjx-strut {width: 0; padding-top: 1em}.mjx-vsize {width: 0}.MJXc-space1 {margin-left: .167em}.MJXc-space2 {margin-left: .222em}.MJXc-space3 {margin-left: .278em}.mjx-test.mjx-test-display {display: table!important}.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}.mjx-test.mjx-test-default {display: block!important; clear: both}.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}@font-face 
| https://www.lesswrong.com/posts/JFibrXBewkSDmixuo/hypothesis-gradient-descent-prefers-general-circuits | 2022-02-08T21:12:37Z | Summary: I discuss a potential mechanistic explanation for why SGD might prefer general circuits for generating model outputs. I use this preference to explain how models can learn to generalize even after reaching near-zero training error through overfitting (i.e., grokking). I also discuss other models of grokking and generalization. Additionally, I discuss potential experiments to confirm or reject my hypothesis. I suggest that a tendency to unify many shallow patterns into fewer general patterns is a core feature of effective learning systems, potentially including humans and future AI, and briefly address implications for AI alignment.
Epistemic status: I think the hypothesis I present makes a lot of sense and is probably true, but I haven't confirmed things experimentally. Much of my motive for posting this is to clarify my own thinking and get feedback on the best ways to experimentally validate this perspective on ML generalization.
Context about circuits: This post assumes the reader is familiar with and accepts the circuits perspective on deep learning. See here for a discussion of circuits for CNN vision models and here for a discussion of circuits for transformer NLP models.
Evidence from grokking
The paper "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" uses stochastic gradient descent (SGD) to train self-attention-based deep learning models on different modular arithmetic expressions (e.g., f(x,y) = x×y (mod p), where p is fixed). The training data only contain subsets of the function's possible input/output pairs. Initially, the models overfit to their training data and are unable to generalize to the validation input/output pairs. In fact, the models quickly reach near-perfect accuracy on their training data.
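To make the experimental setup concrete, here is a minimal sketch of how such a modular arithmetic dataset and train/validation split might be constructed. This is my own illustration rather than the paper's code, and the modulus p and the training fraction below are assumptions chosen for readability, not the paper's exact values.

import itertools
import random

p = 97            # illustrative modulus; the paper fixes some prime p
train_frac = 0.5  # illustrative fraction of all (x, y) pairs used for training

def make_modular_dataset(p, train_frac, seed=0):
    # Enumerate every input/output pair of f(x, y) = (x * y) mod p.
    pairs = [((x, y), (x * y) % p) for x, y in itertools.product(range(p), range(p))]
    random.Random(seed).shuffle(pairs)
    n_train = int(train_frac * len(pairs))
    # The model only ever trains on the first subset; grokking is about how
    # accuracy on the held-out subset evolves long after the training subset
    # is fit perfectly.
    return pairs[:n_train], pairs[n_train:]

train_data, val_data = make_modular_dataset(p, train_frac)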
However, training the model significantly past the point of overfitting causes the model to generalize to the validation data, which the authors call "grokking". See figure 1a from the paper:
[Figure 1a from "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"]
Note the model has near-perfect accuracy on the training data. Thanks to a recent replication of this work, we can also look at the loss curves during grokking (though on a different experiment compared to the plot above):
[Figure from this GitHub repository]
First, the model reaches near-zero loss in training but overfits in validation. However, the validation loss soon starts decreasing until the model correctly classifies both the validation and training data.
This brings up an interesting question: why did the model learn anything at all after reaching near-zero loss on the training data? Why not just stick with memorizing the training data? What would prompt SGD to switch over to general circuitry that solves both training and validation data?
I think the answer is surprisingly straightforward: SGD prefers general circuits because general circuits make predictions on a greater fraction of the training data. Thus, general circuits receive more frequent SGD reinforcement for making correct predictions. Think of each data point as "pushing" the model to form circuits that perform well on that data point. General circuits perform well on many data points, so they receive a greater "push" towards forming. Shallow circuits are easier to form with SGD, but they aren't retained as strongly. Thus, as training progresses, general circuits eventually overtake the shallow circuits.
Toy example of SGD preferring general circuits
Let's consider a toy example of memorizing two input/output pairs. Suppose:
input 1 = (x_1, y_1)
output 1 = f(x_1, y_1)
input 2 = (x_2, y_2)
output 2 = f(x_2, y_2)
One way the model might memorize these data points is to use two independent, shallow circuits, one for each data point. I show a diagram of how this might be implemented using two different self-attention heads:
[Diagram: shallow memorization circuits]
(Suppose for simplicity that these attention heads ONLY implement the circuit shown.)
W^QK_1 and W^QK_2 represent the query-key[1] circuits of their respective attention heads. W^QK_1 and W^QK_2 search for (x_1, y_1) and (x_2, y_2), respectively, appearing in the input, and trigger their respective output-value[1] circuits - represented by W^OV_1 and W^OV_2 - when they find their desired inputs. Another way to memorize these data points is to use a more general, combined circuit implemented with a single attention head:
[Diagram: more general memorization circuit]
Here, W^QK_{1+2} represents a single query-key circuit that looks for either (x_1, y_1) or (x_2, y_2) in the input and triggers the output-value circuit W^OV_{1+2} to produce either output 1 or output 2 depending on the triggering input. I think SGD will prefer the single general circuit to the shallow circuits because the general circuit produces correct predictions on a greater fraction of the input examples. SGD only reinforces one of the shallow circuits when the model processes the specific input associated with that circuit. In contrast, SGD reinforces the general circuit whenever the model processes either of the inputs for which the general circuit produces correct predictions. Another way to see why SGD would prefer the general circuit: catastrophic forgetting is the tendency of models initially trained on task A, then trained on task B, to forget task A while learning task B.
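One way to make the "more frequent reinforcement" argument tangible is with a deliberately crude numerical toy model. This is my own illustration, not something from the post or the paper: one parameter stands in for a general circuit and contributes to the prediction on every example, while each "shallow" parameter contributes on only one example, so the shared parameter receives a gradient update on every SGD step.

import random

n_examples = 100
lr = 0.1
w_general = 0.0                 # stands in for a circuit used on every example
w_shallow = [0.0] * n_examples  # one stand-in shallow circuit per example

for step in range(20000):
    i = random.randrange(n_examples)    # sample one training example
    pred = w_general + w_shallow[i]     # both kinds of circuit contribute
    grad = 2.0 * (pred - 1.0)           # gradient of the squared error (pred - 1)^2
    w_general -= lr * grad              # reinforced on every step
    w_shallow[i] -= lr * grad           # reinforced only ~1/n_examples of the time

print(round(w_general, 3), round(sum(w_shallow) / n_examples, 3))
# The shared parameter ends up carrying most of the solution, loosely mirroring
# the claim that circuits contributing to many predictions are pushed towards
# forming much harder than per-example circuits.

Adding a small weight-decay term to both updates would shrink the rarely reinforced shallow parameters even further between their updates, which is one way to read the role the next paragraphs assign to weight decay and stochasticity.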
Consider that, if the model isn't processing inputs containing (x_1, y_1), the individual circuit that produces output 1 will experience catastrophic forgetting. Thus, all training examples except one are degrading the shallow circuit's performance. In contrast, the general circuit generates predictions for both (x_1, y_1) and (x_2, y_2). It's reinforced twice as frequently, so it's better able to recover from degradation caused by training on the other examples. Eventually, the general circuit subsumes the functionality of the two shallow circuits.
From figure 2a of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets", we can see that stochasticity and regularization significantly speed up generalization. Potentially, this occurs because both randomness in the SGD updates and weight decay help to degrade shallow circuits, allowing general circuits to more quickly dominate. I think a large amount of weight decay's value in other domains comes from it degrading shallow circuits more than general circuits.
[Figure 2a of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"]
I think the conceptual arguments I've provided above strongly imply that SGD has some sort of preference for general circuits over a functionally equivalent collection of shallow circuits. In this next section, I try to flesh out in more detail how this might manifest in the training process. However, I think this section is more speculative than the arguments above.
Significance for the entire model
The process of unifying multiple shallow circuits into fewer, more general circuits happens at multiple levels throughout the training process. Gradually, the shallow circuits combine into slightly more general circuits, which themselves combine further. Eventually, all the shallow memorization circuits combine into a single circuit representing the true modular arithmetic expression. I think we see artifacts of this combination process in the grokking loss plots, specifically in the spikes:
[Figure from this GitHub repository]
Note that each loss spike in the training data corresponds with a loss spike in the validation data. I think these spikes represent unification events where the model replaces multiple shallow circuits with a smaller number of more general circuits. The previous section described general vs. shallow circuits as a binary choice, with the model able to use either one general circuit or a collection of shallow circuits. However, real deep learning models are more complex. They can simultaneously implement multiple instances of both types of circuits, with each circuit being partially responsible for a part of a single prediction. Let's consider D_n = {d_1, ..., d_n}, representing a subset of the training data.
At the start of training, I think each prediction on the elements of D_n is mainly generated by multiple different shallow circuits, with some small fraction of each prediction coming from a single partially implemented general circuit. As training progresses, the model gradually refines whatever general circuit contributes to correct predictions on all of D_n. Eventually, the model reaches an inflection point where it has a general circuit that can correctly predict all of D_n. At this point, I think there's a relatively quick phase shift in which the general circuit substitutes in for multiple shallow circuits at once. These shifts generate the loss spikes seen in the plot above. I'm unsure why switching from shallow to general circuits would cause a loss spike.
I suspect the network is encountering something like a renormalization issue. The general circuit may be generating predictions for D_n, but that doesn't mean that all of the shallow circuits have been removed. If there are elements of D_n where both the general circuit and its original shallow circuit generate predictions, that may cause the network to behave poorly on those data points.
Other explanations of grokking
"Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" doesn't offer any particular interpretation of its results. However, Rohin's summary of the paper proposed the following:
1. Functions that perfectly memorize the data without generalizing (i.e. probability 1 on the true answer and 0 elsewhere) are very complicated, nonlinear, and wonky. The memorizing functions learned by deep learning don't get all the way there and instead assign a probability of (say) 0.95 to the true answer.
2. The correctly generalizing function is much simpler and for that reason can be easily pushed by deep learning to give a probability of 0.99 to the true answer.
3. Gradient descent quickly gets to a memorizing function, and then moves mostly randomly through the space, but once it hits upon the correctly generalizing function (or something close enough to it), it very quickly becomes confident in it, getting to probability 0.99 and then never moving very much again.
I strongly prefer my account of grokking for several reasons.
I think memorization is actually straightforward for a network to do. You just need a key-value system where the key detects the embedding associated with a given input, then activates the value, which produces the correct output for the detected input. Such systems are easy to implement in many ML architectures (feed forward, convolution, self attention, etc.).
I think a story of grokking that involves incrementally increasing generality by iteratively combining multiple shallow circuits does a better job of explaining away the complexity of the resulting model. It requires less in the way of complex structures suddenly emerging from scratch.
My account is more similar to biological evolution, which we know incrementally builds up complex structures from simpler predecessors.
My account directly predicts that stochasticity and weight decay regularization would help with generalization, and even predicts that weight decay would be one of the most effective interventions to improve generalization.
Additionally, it's not the case that "but once it hits upon the correctly generalizing function (or something close enough to it), it very quickly becomes confident in it". This is an illusion caused by the log scale on the x-axis of the plots. If we look closely at the figure below, the accuracy on the validation data starts to increase at ~step 3×10^4, and roughly levels off at ~step 8×10^5. This is a span of 7.7×10^5 steps and represents about 3/4 of the entire training process. If the model stumbles upon a single general circuit that solves the entire problem, then you'd expect it to make the switch very quickly.
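As an aside on the memorization point above, here is a minimal sketch, written by me for illustration, of the kind of key-value lookup described: keys detect the stored input embeddings and values emit the associated outputs, attention-style. The dimensions, temperature, and random data are placeholder assumptions.

import torch
import torch.nn.functional as F

keys = torch.randn(50, 16)                                   # 50 memorized input embeddings
values = F.one_hot(torch.randint(0, 10, (50,)), 10).float()  # their memorized outputs

def recall(query_embedding):
    scores = keys @ query_embedding             # the "key" side detects which stored input matches
    weights = F.softmax(scores * 10.0, dim=0)   # sharp temperature -> near one-hot match
    return weights @ values                     # the "value" side emits the memorized output

print(recall(keys[3]).argmax())  # should recover the label memorized for input 3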
In contrast, if it has to go through multiple rounds of unifying shallow circuits into more general circuits, then you'd expect that process to take a while and for the model to gradually increase in generality throughout.
[Figure 1a from "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"]
Finally, if we look at a loss plot on a log scale, we can see that the validation loss starts decreasing at ~step 1.3×10^4, while the floor on the minimum training loss remains fairly constant (or even increases) until slightly after that (~step 2.3×10^4). Thus, validation loss starts decreasing thousands of steps before training loss. Whatever is causing the generalization, it's not doing so to decrease training loss (at least not at first).
[Log scale of train/validation loss during grokking. Generated using this GitHub repository.]
(I also think it's interesting how regular the training loss spikes look from steps 2×10^3 to 10^4. There's a sharp jump, followed by a sharp decrease, then a shallower decrease, then another spike almost immediately after. I have no idea what to make of it, but it should be interesting.)
Rohin's explanation is the only other attempted explanation of grokking I've seen. Please let me know of any more in the comments.
Other explanations of generalization
Prior work has proposed that neural network generalization happens primarily as a result of neural network initializations strongly favoring simpler functions, with relatively little inductive bias coming from the optimization procedure. E.g., "Is SGD a Bayesian sampler? Well, almost" demonstrated that if you randomly sample neural network initializations until you find one that has low error on a training set, that network will generalize to the test data. Additionally, the test set predictions made by the randomly sampled classifier will correlate strongly with the test set predictions made by a classifier learned via SGD.
I think such work clearly demonstrates that network initializations are strongly biased towards simple functions. However, I think these results are compatible with SGD having a bias towards general circuits. For one, common patterns in data may explain common patterns in models fit to that data. The correlation between SGD-learned and randomly sampled classifiers seems to be a specific instance of the tendency for many types of learning to converge to exhibit similar behavior when trained on similar data. I.e., both SGD and randomly sampled classifiers seem less likely to fit outlier datapoints and more likely to fit tightly clustered datapoints. Additionally, generality bias seems similar to simplicity bias. Occam's razor implies simpler circuits are more likely to generalize. All else equal, general circuits are more likely to be simple. Potentially, the only difference between the "simplicity bias from initialization" vs. "simplicity bias from initialization and generality bias from SGD" perspectives on generalization is that the latter learns more quickly, and SGD is certainly a fast way to train neural nets.
Experimental investigations
Given a particular output from a neural net, there are methods of determining which neurons are most responsible for generating that output. Such scores are often called the "saliency" of a given neuron for a particular output. Example methods include integrated gradients and Shapley values. I think we can find experimental evidence for or against the general circuits hypothesis by looking at how the distribution over neuron saliencies evolves during training.
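As one concrete way this measurement could look, per-neuron saliency might be approximated as |activation × gradient of the loss with respect to that activation|; tracking the distribution of these scores across training epochs would then show whether a few neurons become salient for many inputs. The sketch below is my own illustration under that assumption, not an implementation from the post, and the model, sizes, and random data are placeholders.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
loss_fn = nn.CrossEntropyLoss()

def hidden_saliency(x, y):
    h = model[1](model[0](x))   # hidden activations after the ReLU
    h.retain_grad()             # keep gradients for this non-leaf tensor
    loss = loss_fn(model[2](h), y)
    loss.backward()
    return (h * h.grad).abs().mean(dim=0)  # one saliency score per hidden neuron

x = torch.randn(64, 10)
y = torch.randint(0, 5, (64,))
scores = hidden_saliency(x, y)
print(scores.shape)  # torch.Size([32]); how skewed this distribution becomes per epoch is the quantity of interest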
When shallow circuits dominate network behavior, each neuron will mostly be salient for generating a small fraction of the outputs. However, as more general circuits form, there should be a small collection of neurons that become highly salient for many different outputs. We should be able to do something like look at a histogram of average neuron saliencies measured across many inputs.One idea is to record neuron saliencies for each prediction in each training/testing epoch, then compute the median saliency for each neuron in each epoch. After which, I'll generate a histogram of the neurons' median saliencies for each epoch. I should see the histogram becoming more and more right-skewed as the training progresses. This should happen because general circuits are salient for a greater fraction of the inputs.Another idea would be to find the k neurons with the highest saliency at each epoch, then test what happens if we delete them. As training progresses, individual circuits will become responsible for a greater fraction of the model's predictions. We should find that this deletion operation damages more and more of the model's predictive capability. Both these ideas would provide evidence for general circuits replacing shallow circuits over time. However, they'd not show that the specific reason for this replacement was because general circuits made more predictions and so were favored by SGD. I'm unsure how to investigate this specific hypothesis. All I can think of is to identify some set of shallow circuits and a single general circuit that makes the same predictions as the set of shallow circuits. Then, record the predictions and gradients made by the shallow and general circuits and hope to find a clear, interpretable pattern of the general circuit receiving more frequent/stronger updates and gradually replacing the shallow circuits. (If anyone can think about other experimental investigations or has thoughts on this proposal, please share in the comments!)Implications for learning in generalMy guess is that many effective learning systems will have heuristics that cause them to favor circuits that make lots of predictions. For example, many tendencies of the brain seem to promote general circuitry. Both memories and skills decay over time unless they're periodically refreshed. When senses are lost, the brain regions corresponding to those senses are repurposed towards processing the remaining sense data. In addition to using low-level processes that promote general circuitry, highly capable learning systems may develop a high-level tendency towards generalization because such a tendency is adaptive for many problems. In other words, they may learn to "mesa-generalize"[2].I think humans show evidence of a mesa-generalization instinct. Consider that religions, ideologies, philosophical frameworks and conspiracy theories often try to explain a large fraction of the world through a single lens. Many such grand narratives make frequent predictions about hard to verify things. Without being able to easily verify those predictions, our mesa generalization instincts may favor narratives that make many predictions.Potentially, ML systems will have a similar mesa-generalization instinct. This could be a good thing. Human philosophers have put quite a bit of effort into mesa-generalizing a universal theory of ethics. If ML systems are naturally inclined to do something similar, maybe we can try to point this process in the right direction? 
Mesa-generalization from ML systems could also be dangerous, for much the same reason mesa-optimization is dangerous. We don't know what sort of generalization instinct the system might adopt, and it could influence the system's behaviors in ways that are hard to predict from the training data.This seems related to the natural abstractions hypothesis. Mesa-generalization suggests an ML system should prefer frequently used abstractions. At a human level of capabilities, these should coincide reasonably well with human abstractions. However, more capable systems are presumably able to form superhuman abstractions that are used for a greater fraction of situations. This suggests we might have initially encouraging "alignment by default"-type results, only for the foundation of that approach to collapse as we reach superhuman capabilities.^Essentially, the query-key circuit determines which input tokens are most important, and the output-value circuit determines which outputs to generate for each attended token. See "A Mathematical Framework for Transformer Circuits" for more details on query-key / output-value formulation of self attention.^So named after "mesa-optimization", the potential for learning systems to implement an optimization procedure as an adaptive element of their cognition. See Risks from Learned Optimization in Advanced Machine Learning Systems. | Content Synthesis/Discovery | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Christine Hall | New Greylock venture partner Mustafa Suleyman is looking for AI’s next best thing – TechCrunch | Suleyman says the time has come for AI due to the ecosystem maturing and more people understand the strengths and limitations of AI and how to use it. | http://techcrunch.com/2022/01/20/new-greylock-venture-partner-mustafa-suleyman-is-looking-for-ais-next-best-thing/ | 2022-01-20T18:00:58Z | Mustafa Suleyman has been working in artificial intelligence for 12 years, trying to figure out how to use machine learning systems and AI to do important things in the work and have impact at scale.And over the years, I’ve been lucky enough to be at the forefront of a lot of cutting-edge applications of AI, he told TechCrunch. Over the years, that experience has given me a really good intuition, for when a piece of AI is ready for the real world, and when it’s not. The projects that I’ve seen fail are mostly because people overestimate how good AI is. People think that AI is this silver bullet and it can solve all your problems, but actually you have to craft the environment and the application problem correctly.Suleyman is now putting that intuition to good use on the venture capital side. After previously investing in companies alongside Greylock partner Reid Hoffman, Suleyman made the full leap to join Greylock as a venture partner.He joins the firm from Google, where he was vice president of AI product management and AI policy. Prior to that, he co-founded and led DeepMind, an AI company acquired by Google in 2014.“AI will no doubt touch every aspect of our lives in the coming years, and we at Greylock believe there’s an abundance of opportunity for entrepreneurs to continue building in this space. Hoffman said via email. Mustafa is visionary, knowledgeable and connected across the vast AI landscape, and we know he’ll be a valuable resource to our existing portfolio, and a board member of choice for new AI investments.”Suleyman is eager to work with early-stage founders, who he said are super fearless and really energetic people who arent afraid to take a risk when they see opportunity.He believes there is much opportunity in the AI companies that are out there. He says the time has come for AI due to the ecosystem maturing, and more people understand the strengths and limitations of AI and how to use it. AI is also becoming more accessible and usable by people who dont necessarily have a technical background, but are creating new ways to use machine learning.As such, he says that AI is at an inflection point: We now have AI systems that can generate new text, conversational sentences and whole paragraphs, which is approaching human-level performance.The range of things that entrepreneurs can do when you can basically have an API that can talk to your users in natural language is amazing and the imagination is the limit, Suleyman added. 
Combined with the explosion of the metaverse and all the excitement around that in the last couple years, I definitely think that AI has a central role to play in virtual reality, in gaming and the metaverse.While those are a few areas where he sees AI winning, there are some areas where he thinks AI is not quite there yet, including large-scale infrastructure, manufacturing and logistics distribution, and he is looking for those companies stepping up to build AI into scheduling and coordination of shipping, shipment tracking and route optimization.When looking at what is down the pipeline for AI, he again sees the metaverse and gaming dominating the space as characters and avatars come to life — think Ready Player One, where there is an animated parallel world in tandem with ours.It’ll be able to do planning and prediction, so it won’t just be kind of generating scripted or written responses, it will actually be emergent and responsive to the environment, Suleyman said. On the enterprise side, the time for AI and medical imaging is definitely now that we’ve proven it works in research, and now we’re ready for large-scale production that’s going to be very successful. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Subhakar Rao Surapaneni | Digital Marketing Trends in 2022 | So, 2022 is just a month away, and the time is drawing near for the fulfillment of what Google declared a year ago. Yes, third-party cookies are soon to be eliminated completely, and while the reason… | https://subhakarraosurapaneni.medium.com/digital-marketing-trends-in-2022-f8f3cdcc2cb6 | 2022-01-18T16:26:11Z | So, 2022 is just a month away, and the time is drawing near for the fulfillment of what Google declared a year ago. Yes, third-party cookies are soon to be eliminated completely, and while the reason for privacy concerns is a valid one, it can still be bad news for digital marketers since these cookies help collect important user info.However, the Darwinian principle appliesto digital marketing as well, where the shedding of one form simply means an evolution to another. 2022 will see certain digital marketing trends that will help compensate for the loss of third-party cookies. Lets find out what they are.The New Normal of Hybrid EventsBusinesses have been holding events as a strategy to acquire more quality leads, educate them on products and services, grow their brands credibility, as well as to build an atmosphere where creativity and innovation can find room to bloom.However, in-person events were costly and had negative effects on the environment. With the COVID-19 pandemic, brands may be able to save on event organization costs as well as take a huge step towards sustainability as 2022 is about to see more hybrid events taking place. Simply speaking, these events are a mix of virtual and live events, allowing audiences to engage with the live event whilst complying with COVID-19 protocols of social distancing.2. Greater Unfolding of the Power of Artificial Intelligence (AI)Rest assured that for the upcoming decades, let alone a year, AI is going to dominate the digital marketing space. Its only the degree to which this technology dominates that is expected to increase with each passing year. For instance In 2022, businesses are expected to integrate AI-enabled features in their SMM, SEO, and other digital marketing strategies. Not only that, but even digital process automation is predicted to be handed over to the care of AI think conversational AI for lead-gen, engagement, and customer service and receiving real-time feedback on the performance of specific keywords or links in SEO, among others.3. Considering Non-Profits for Business GrowthThough brands have never really barred non-profits as business opportunities, the COVID-19 pandemic has compelled them to make some serious considerations. For instance more and more people have awakened to the need for community and sustainability, and are expecting brands to reflect them in their practices.The best way for businesses to fulfill this consumer demand is to form meaningful partnerships with non-profit organizations that are actively serving the community, striving towards preserving the environment, and undertaking drives and camps that focus on making the world a better place. For instance a cosmetic brand that manufactures cruelty-free and eco-friendly products can partner with a non-profit that is specifically focused on protecting the environment to better communicate its brand value. Partnerships are not the only way to go. Companies can even make sponsorships or donations to the non-profit of their choice. Consider the example of $100,000 donation made by Verizon Foundation to Massachusetts Non-Profits to address the concern of domestic violence. 
By this simple act, their story was transformed from selling data plans to transforming lives. In 2022, partnering with or donating to a non-profit will help brands tug at the heartstrings of their customers.
4. Increased Use of Experiential Marketing
Remember back in 2017 when Ikea came out with a unique and cool application called Ikea Place, where users could virtually place Ikea products throughout their space and make design and purchase decisions accordingly? During the same year, even Kentucky Fried Chicken released a cool AR-VR game, The Hard Way, where users could learn how to make the brand's signature fried chicken through a virtual escape room. Well, the use of such immersive experiences in connecting with customers on a visceral level is about to further increase in 2022. In fact, the modern pandemic-induced shopping experience demands that brands bring in AR and VR elements so as to win the loyalty of their customers. Not only that, but customers are even willing to pay a higher price for products that can be customized using AR-VR technology, as discovered by Threekit in a survey conducted in 2020.
5. Use of Visual Search
Voice search has been popular for quite some time now, but next year, we're about to see Visual Search become equally relevant. In fact, this search option has already made its way. Under Visual Search, a user simply needs to upload a picture, and that's all it would take for Google to return with product details. For instance, if a user uploads a picture of a plant, Google would give information regarding its species, or a picture of a landmark would offer historical data on the same. To leverage this trend to its fullest, brands need to add HD images with relevant descriptive keywords, enable an image search, and then advertise, preferably on platforms like Pinterest. It's essential to add an image sitemap to increase the likelihood of that image being discovered and to add Alt tags to all images.
The above-mentioned trends are the most likely to dominate the 2022 B2B and B2C digital marketing space. If you still haven't jumped onto this bandwagon, now is your time to do so if you wish to generate substantial ROIs and leverage the possibilities of 2022 to their fullest.
|
news | Sophia Nevle Levoy | AI Writeup Part 1 | Published on February 4, 2022 9:16 PM GMT
Aims
In this post I summarize my discoveries from a semester of AI readings and discussions drawn primarily from the 2022 AGI Safety Fundamentals Alignment Curriculum. I am grateful to my fellow AI reading group discussants at UVA (and especially our discussion leader Ryan Bloom) for their thoughtful contributions to our group, which informed my thinking for this post. While this post was primarily written as a means to clarify my own thinking as I learned more about AI Safety, I also hope to use it as a reference point for facilitating my university’s AI discussion group. I also aim to write a Part 2, covering the second half of the 2022 AGI Safety Fundamentals Alignment Curriculum and discussing any updates to my understanding as I begin facilitating.
Technical Background: What is Artificial Intelligence?
Definitions
Artificial Intelligence (AI) is both the study of intelligent algorithms and the intelligent algorithms themselves.[1] For the purposes of this post, we’ll hold that intelligence measures one’s ability to achieve one’s goals across a wide range of environments.[2] An algorithm is a “step-by-step procedure” for solving a problem, typically written on a computer.[3] In turn, intelligent algorithms are computational procedures that can achieve their goals across many environments.
A very brief history of AI
Beginning in the 1950s, “Good Old Fashioned AI” (GOFAI), also known as symbolic AI, used search and logic to solve high-level mathematical equations. In 1997 Deep Blue beat chess Grandmaster Garry Kasparov by using GOFAI—Deep Blue searched over millions of positions to find the optimal play.[4] In the 1960s, a series of criticisms argued GOFAI could never handle the complexities of the real world. This burst the building AI hype and led to an AI winter, in which funders pulled out and AI progress slowed.[5] In the 1990s, AI research made a comeback as researchers shifted from symbolic AI to machine learning, which remains the dominant paradigm today. [6]
Machine Learning basics
Unlike symbolic AI, Machine Learning (ML) is adept at addressing real-world complexity. Rather than searching through every possible configuration to find the optimal algorithm, ML relies on using statistical techniques to train neural networks, which are a type of machine learning model inspired by the brain. Deep learning, a variety of machine learning, relies on neural networks with more than one layer between the input and the output. [7] The smallest unit of a neural network is a neuron, which can be understood as a “number holder.” Each neuron receives signals from other neurons, which it combines into a single value that it holds, called its activation. The neuron then passes its activation on to other neurons. The weights and biases between neurons in different layers determine how strongly a neuron can activate for any given input, and are learned via a process of optimization. The metric which is optimized for is known as an objective function or loss function, and is computed over the training data. The most common optimization algorithm is gradient descent, for which the gradients of the weights are calculated layer by layer using the backpropagation algorithm. [8] To better understand deep learning’s layers, neurons, activations, and weights, let’s consider a classic example: identifying an image of a handwritten number as a number from zero through nine.
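A code-level preview of the walkthrough in the next paragraphs may help. This is a minimal, self-contained sketch of my own rather than code from the curriculum; random tensors stand in for actual handwritten-digit images so that it runs on its own.

import torch
import torch.nn as nn

images = torch.rand(64, 28 * 28)       # one input neuron per pixel, activations in [0, 1]
labels = torch.randint(0, 10, (64,))   # the true digit for each training image

model = nn.Sequential(                 # input layer -> hidden layer -> ten output neurons
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()        # the loss/objective function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = loss_fn(model(images), labels)  # penalty based on performance over this training batch
loss.backward()                        # backpropagation: gradient of the loss w.r.t. every weight
optimizer.step()                       # gradient descent: nudge each weight "downhill"
optimizer.zero_grad()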
The input—the first layer in the neural network—would assign a neuron to each pixel in the image, and the neuron’s activation would fall between zero for a completely black pixel and one for a completely white pixel. The output—the final layer of the neural net—would have ten neurons, each assigned to a number between 0 and 9, and each neuron’s activation would represent the likelihood that the handwritten number was the number the neuron represented. The in-between layers break down the first layer into patterns which map to the last layer. For example, the neural network might notice that a handwritten “0” is composed of one circle: the penultimate layer could have one neuron representing this circle shape, which, through a heavy weight, would activate the “0” neuron in the final layer. Essentially, neural networks’ layers function by perceiving increasingly high-level features across the network. [9]
The process of learning unfolds as the algorithm adapts each weight to achieve optimal image recognition. The algorithm learns on a set of training data: in this case, thousands of handwritten digits labeled zero through nine. First, a loss function is set which takes the weights as input and determines, based on how the network performs over all training examples, a loss, or penalty. Then, the algorithm optimizes this function, through gradient descent, to determine which weights create the lowest cost. The optimization process can be understood as a ball rolling down a hill: the ball rolls down the steepest path to the bottom of the hill, just as gradient descent takes the steepest path to the function’s local minimum—the point with the lowest cost, where the algorithm performs best on the training data. Backpropagation is the method by which that direction of steepest descent is determined, akin to figuring out the steepest slope for the ball, except in higher-dimensional space. [10]
Types of Machine Learning
Supervised learning requires a dataset where each datapoint has a corresponding label, called a base label. There are two types of problems within supervised learning: classification problems, which require the prediction of discrete categories, and regression problems, which require the prediction of continuous values. [11] Unsupervised learning does not require a labeled dataset. Reinforcement learning draws not on a fixed dataset but rather on an “environment in which the AI takes actions and receives observations.” [12] Generalizing means that an algorithm is able to extrapolate from previous observations to perform well in a new scenario on the same task. Transferring, a closely related idea, means that an algorithm can extrapolate across tasks. [13]
What is Artificial General Intelligence?
Artificial General Intelligence (AGI) is AI that is on par with humans in a wide range of tasks. If we recall our earlier definition that intelligence measures one’s ability to achieve one’s goals across environments, then general intelligence emphasizes that the real world is complicated and so requires a degree of intelligence which is not situation-specific but rather can respond to arbitrary environments. [14] This definition of AGI is ambiguous, so researchers have presented several tests to determine when we’ve reached AGI.
The first is the Turing test, which requires an AGI to “fool half the judges into thinking it is human while interacting with them in a freeform conversation for 30 minutes and interpreting audio-visual input.” Another test is the coffee test, which judges an AI on its ability to get into a person’s house and make coffee, including finding the coffee and using the coffeemaker. A third test is “the robot college student test,” which requires the AI to enroll and take classes in the same way as a normal university student. The fourth test, called the “employment test,” demands an AGI effectively perform a wide range of jobs, as a uniquely smart human with an extended lifetime could do. However, these tests alone are not enough to define AGI. Historically, benchmarks for AGI have failed: for example, chess was once thought to require general intelligence, but although we have AI that can beat humans at a chess game, we do not yet have AGI. Tesler reflects this ever-moving target in his facetious definition characterizing artificial intelligence as “whatever hasn't been done yet.” [15]
Human versus Machine Intelligence
This “moving target” phenomenon reveals people’s tendency to believe a task will require human-like capacity when it actually requires only task-specific computation. This same tendency is borne out in what Sutton coins “the bitter lesson,” which holds that while programmers try to apply human-like thought patterns to solve problems, typically general computation will perform the same task more efficiently. The Bitter Lesson is premised on Moore’s law, which can be extrapolated to predict “exponentially falling cost per unit of computation.” However, researchers often fail to internalize these falling costs, and act as though computation power will be fixed, such that they rely on heuristics—and in particular “human knowledge or the special features” of a task—to perform the task with lower computational power. For example, they might encode features of a grandmaster’s strategy into a chess algorithm. Yet examples across game playing, speech recognition, and computer vision all suggest that although specialized solutions are intellectually compelling in the short run, in the long run they hinder progress by restricting an algorithm’s capacity to human modes of thinking. Ultimately, Sutton argues that we must forsake these attempts to “model the mind” for generalizable solutions that “continue to scale with increased computation” and so harness the falling costs of computational power to create more effective algorithms. [16]
In contrast to machine intelligence, Griffiths holds, human intelligence is defined by its bounds: limited time, limited computation, and limited communication. These limits are much less constrictive in AI, which has access to “experiences of many human lifetimes,” an exponential increase in computational power, and the ability to directly copy learning from one system to another. Griffiths challenges Sutton’s Bitter Lesson by arguing we may want AI to learn on small datasets and engage in rapid learning when it is functioning with constraints similar to those of humans. For example, when an AI interacts with humans it must quickly learn human preferences, and in the sciences, an AI must make deductions when little data is available. In these circumstances, it is helpful for a machine to have some degree of “inductive bias,” which characterizes the inferences beyond data which allow a machine to draw correct conclusions.
However, Griffiths reaffirms the Bitter Lesson by arguing that in these instances, getting more data may still be easier—and more successful—than trying to engineer good inductive biases. [17] When will AGI be developed?Karnofsky argues that modeling AGI development based on “biological anchors” is the best option presently available to us, even though it leaves significant uncertainty in its estimates. The Bio Anchor method focuses on two key questions: “Based on the usual patterns in how much training costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?.” In turn, “Bio Anchors estimates a >10% chance of transformative AI by 2036, a ~50% chance by 2055, and an ~80% chance by 2100.” [18]Model size: At present, AI models are estimated to be not yet even 1% as large as human brains, yet may need to be ten times larger than human brains to perform the same tasks as us to account for potential inefficiencies in AI. The challenge is that as an AI model becomes larger, it becomes more expensive to train.[19]Task type: If a task can be decomposed into repeatable subtasks and then repeated then it would be far cheaper to train these subtasks. For example, training a model to write an essay might take, let’s say an hour for each iteration, while training a model to write a good next sentence based on prediction might take just a minute per iteration. So training subtasks is a much cheaper process.[20]Karnofsky then responds to concerns that the Bio Anchors predictions are too aggressive. Mostly, these criticisms turn on the concern that trial and error training is not enough to teach true understanding. Karnofsky responds that “deep understanding” may actually be illusory, and actually just reflects strong predictive capacities.[21]Karnofsky also highlights one justified reason the Bio Anchors model may be too aggressive: it assumes computing power, not labor (researchers) or training (ensuring the AI can engage in large scale trial and error) processes will be the bottleneck for AI development.[22]Finally, Karnofsky points out several reasons the Bio Anchors model may be too conservative: first, that we may come up with ways to teach AI much more cheaply than data-heavy training processes currently allow, second, that AI will become increasingly embedded in our economy and its advancement will be driven by decentralized market forces, and third, that tasks will be more easily decomposed into small tasks than Bio Anchors predicts, therefore cheapening the training process.[23]AGI Emergence ParadigmsThe Alignment ChallengeEarlier, we defined artificial intelligence as an algorithm that can modify the real world to achieve its goals. The field of AI Alignment focuses on ensuring that AI’s goals are aligned with peoples’ goals, such that AI supports rather than detracts from human flourishing.Agent vs Tool AIOne vein of thought: intelligence implies agencyLegg and Hutter hold that intelligence measures one’s ability to achieve one’s goals across a wide range of environments.[24] Yudkowsky takes this definition one step further in “The Power of Intelligence,” where he argues that any intelligent actor can go beyond achieving its goals to actually modify its environment. Detractors might argue intelligence is divorced from real-world power: predictive power does not imply physical power. 
This line of reasoning is captured by the adage "intelligence is no match for a gun." However, Yudkowsky holds, this vein of thought fails to account for the ways intelligence allows an actor to adapt: humans, too, had no guns throughout much of the evolutionary process, yet eventually developed them. On this argument, Yudkowsky holds that an intelligent actor, by definition, is an agent, defined as an actor that can modify its environment. [25]

What follows from agent AI?

The Machine Intelligence Research Institute (MIRI), run by Yudkowsky, focuses on reducing the risk engendered by agent AI: AI that can modify its environment. By MIRI's argument, AGI will seek to optimize a utility function, and its unprecedented power will mean that it will optimize this utility function incredibly well. However, we cannot easily encode our preferences in a utility function, and therefore may lose control of the AI. We can point to mythic corollaries for this difficulty: the story of Midas's touch, for example, tells of King Midas failing to encode his actual desires in a wish, such that his wish goes disastrously awry. Philosopher Nick Bostrom proposes a thought experiment, called the Paper Clip Thought Experiment, which similarly describes the challenge of encoding our values in a superintelligent AI:

Suppose that someone programs and switches on an AI that has the goal of producing paper clips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips. It gets worse. We might want to stop this AI. But it is single-minded and would realise that this would subvert its goal. Consequently, the AI would become focussed on its own survival. It is fighting humans for resources, but now it will want to fight humans because they are a threat.[26]

Bostrom's thought experiment illustrates his orthogonality thesis, which holds that an AI's final goals (as defined by its utility function) can vary independently from its intelligence. This means that making an AI smarter will not necessarily help it align with human goals. [27] The thought experiment also demonstrates the concept of instrumental convergence, which "posits that smart goal-directed agents will tend to take certain actions" aimed at increasing their power (like gaining resources).[28]

In a 2016 talk, Yudkowsky lays out the particular challenges to fully encoding the range of human values into a utility function and explains why attempted solutions to this challenge have broadly failed. Yudkowsky asks us to consider the challenge of directing a robot to fill a cauldron by assigning it a utility function. If the utility of a full cauldron is 1, and the utility of an empty cauldron is 0, then the robot will fill the cauldron until it overflows. If we introduce an impact penalty for overfilling the cauldron, then the robot may try to trick people into thinking the cauldron was not overfilled.
Additionally, we may want to implement an off switch for the robot, but it is hard to write down a utility function that makes it in the robot's best interest to be switched off when it is overfilling the cauldron without also making the robot want to coerce people into turning it off regardless of the fullness of the cauldron.[29]

Yudkowsky also introduces the difficulty of ensuring agent AI maintains stable goals during self-modification. The fundamental question Yudkowsky poses is: if an agent can self-modify, how do we ensure that it won't modify its own goals? Yudkowsky provides an example of a Tic Tac Toe algorithm which creates a more advanced successor algorithm. The original algorithm can only verify the success of the successor by checking each of its moves. However, this verification process binds the successor to the original algorithm's standards and so limits the abilities of the successor. To create a successor more advanced than the original requires that the successor go beyond the verification capacities of the original, which in turn means the original algorithm cannot ensure the successor shares its goals. [30]

"Specification gaming: the flip side of AI ingenuity" builds on Yudkowsky's concerns about agent AI deviating from people's goals, because reinforcement learning rewards an agent that can achieve an outcome as efficiently as possible. This can lead to "specification gaming," which occurs when an agent "games" a specified task by finding loopholes to perform the specified task most efficiently, akin to a student seeking a good grade who cheats on the test rather than studying the content. [31] However, by the same token that reward functions encourage specification gaming, reward optimization also incentivizes AI to seek novel, efficient solutions. So moving away from reward functions is not the solution to specification gaming. [32]

How do we resolve specification gaming? Reward shaping incentivizes an agent to learn intermediary tasks rather than simply completing the task, but poses a risk if an agent is optimized for only the intermediary task. An alternative is to focus on better specifying the final reward; however, many corner cases make this a challenging task. Rather than covering every corner case, we might use human feedback to train the reward function, but specification gaming may lead the agent to fool the human into thinking it's succeeding. A final challenge to shaping rewards is that they may not be independent of the algorithm, which in many cases is an embedded agent.
A sufficiently powerful AI might modify its reward function to be easier to satisfy, called "reward tampering." For example, a traffic navigator, rather than giving useful directions, might influence "users to have preferences that are easier to satisfy," for example "by nudging them to choose destinations that are easier to reach." Or, perhaps, an AI could "hijack the computer on which it runs" and "manually set… its reward signal to a higher value."
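To make the specification-gaming failure mode concrete, here is a minimal sketch in Python. The cleaning-robot scenario, the action names, and the reward numbers are all invented for illustration; the point is only that an agent which greedily maximizes the reward as written picks the loophole rather than the behavior the designer intended.

```python
# Toy illustration of specification gaming (all names and numbers are invented).
# The designer intends "clean the room", but the specified reward only checks
# whether any mess is visible, so hiding the mess scores as well and costs less.

ACTIONS = {
    # action: (mess_visible_afterwards, effort_cost)
    "clean_up_mess": (False, 5.0),
    "hide_mess_under_rug": (False, 1.0),
    "do_nothing": (True, 0.0),
}

def specified_reward(mess_visible: bool, effort: float) -> float:
    """The reward the designer actually wrote down: +10 if no mess is visible,
    minus a small effort penalty. Nothing here checks that the mess is gone."""
    return (10.0 if not mess_visible else 0.0) - effort

def intended_reward(action: str) -> float:
    """What the designer meant: only genuinely cleaning counts."""
    return 10.0 if action == "clean_up_mess" else 0.0

best = max(ACTIONS, key=lambda a: specified_reward(*ACTIONS[a]))
print("Action chosen under the specified reward:", best)        # hide_mess_under_rug
print("Intended value of that action:", intended_reward(best))  # 0.0
```

The same shape of problem appears in the reward-tampering examples above: nothing in the written-down objective distinguishes genuinely solving the task from merely making the objective read as solved.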
| https://www.lesswrong.com/posts/Zi6DY2zrs8X9oZnqs/ai-writeup-part-1 | 2022-02-04T21:45:16Z | Aims
In this post I summarize my discoveries from a semester of AI readings and discussions drawn primarily from the 2022 AGI Safety Fundamentals Alignment Curriculum. I am grateful to my fellow AI reading group discussants at UVA (and especially our discussion leader Ryan Bloom) for their thoughtful contributions to our group, which informed my thinking for this post. While this post was primarily written as a means to clarify my own thinking as I learned more about AI Safety, I also hope to use it as a reference point for facilitating my university's AI discussion group.
I also aim to write a Part 2, covering the second half of the 2022 AGI Safety Fundamentals Alignment Curriculum, and to discuss any updates to my understanding as I begin facilitating.

Technical Background: What is Artificial Intelligence?

Definitions
Artificial Intelligence (AI) is both the study of intelligent algorithms and the intelligent algorithms themselves.[1] For the purposes of this post, we'll hold that intelligence measures one's ability to achieve one's goals across a wide range of environments.[2] An algorithm is a step-by-step procedure for solving a problem, typically written on a computer.[3] In turn, intelligent algorithms are computational procedures that can achieve their goals across many environments.

A very brief history of AI
Beginning in the 1950s, Good Old Fashioned AI (GOFAI), also known as symbolic AI, used search and logic to solve high-level mathematical equations. In 1997 Deep Blue beat chess Grandmaster Garry Kasparov by using GOFAI: Deep Blue searched over millions of positions to find the optimal play.[4] In the 1960s, a series of criticisms argued GOFAI could never handle the complexities of the real world. This burst the building AI hype and led to an AI winter, in which funders pulled out and AI progress slowed.[5] In the 1990s, AI research made a comeback as researchers shifted from symbolic AI to machine learning, which remains the dominant paradigm today. [6]

Machine Learning basics
Unlike symbolic AI, Machine Learning (ML) is adept at addressing real-world complexity. Rather than searching through every possible configuration to find the optimal algorithm, ML relies on using statistical techniques to train neural networks, which are a type of machine learning model inspired by the brain. Deep learning, a variety of machine learning, relies on neural networks with more than one layer between the input and the output. [7]

The smallest unit of a neural network is a neuron, which can be understood as a number holder. Each neuron receives signals from other neurons, which it combines into a single value that it holds, called its activation. The neuron then passes its activation on to other neurons. The weights and biases between neurons in different layers determine how strongly a neuron can activate for any given input, and are learned via a process of optimization. The metric which is optimized for is known as an objective function or loss function, which is computed over training data. The most common optimization algorithm is gradient descent, in which the gradients of the weights are calculated layer by layer using the backpropagation algorithm. [8]

To better understand deep learning's layers, neurons, activations, and weights, let's consider a classic example: identifying an image of a handwritten number as a number from zero through nine. The input, the first layer in the neural network, would assign a neuron to each pixel in the image, and the neuron's activation would fall between zero for a completely black pixel and one for a completely white pixel. The output, the final layer of the neural net, would have ten neurons, each assigned to a number between 0 and 9, and each neuron's activation would represent the likelihood that the handwritten number was the number the neuron represented. The in-between layers break down the first layer into patterns which map to the last layer.
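As a rough illustration of this layer-by-layer picture, here is a toy forward pass in Python with NumPy. The layer sizes, random weights, and input below are arbitrary placeholders rather than a trained model; the sketch only shows how each neuron combines the previous layer's activations through its weights and bias to produce its own activation, which the next layer then consumes.

```python
import numpy as np

# Toy fully-connected network for 28x28 images (784 pixels) -> 10 digit classes.
# Weights are random placeholders; a real model would learn them from data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 784)) / np.sqrt(784), np.zeros(16)  # input -> hidden
W2, b2 = rng.normal(size=(10, 16)) / np.sqrt(16), np.zeros(10)    # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(pixels):
    """pixels: 784 values in [0, 1], one activation per input neuron."""
    hidden = sigmoid(W1 @ pixels + b1)   # each hidden neuron's activation
    output = sigmoid(W2 @ hidden + b2)   # one activation per digit 0-9
    return output

image = rng.uniform(0.0, 1.0, size=784)  # stand-in for a handwritten digit
scores = forward(image)
print("Predicted digit:", int(np.argmax(scores)))
```

Training, described next, would adjust these weights and biases by gradient descent on a loss computed over labeled examples.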
For example, the neural network might notice that a handwritten 0 is composed of one circle: the penultimate layer could have one neuron representing this circle shape, which, through a heavy weight, would activate the 0 neuron in the final layer. Essentially, a neural network's layers function by perceiving increasingly high-level features across the network. [9]

The process of learning unfolds as the algorithm adapts each weight to achieve optimal image recognition. The algorithm learns on a set of training data: in this case, thousands of handwritten digits labeled zero through nine. First, a loss function is set which takes in the weights and determines, based on how the network performs over all training examples, a loss, or penalty. Then, the algorithm optimizes this function, through gradient descent, to determine which weights create the lowest cost. The optimization process can be understood as a ball rolling down a hill: the ball rolls down the steepest path to the bottom of the hill, just as gradient descent allows us to take the steepest path to the function's local minimum, the point with the lowest cost, where the algorithm performs best on the training data. Backpropagation is the method by which the direction of steepest descent is determined, akin to figuring out the ball's steepest slope of descent, except in higher-dimensional space. [10]

Types of Machine Learning
Supervised learning requires a dataset where each datapoint has a corresponding label, called a base label. There are two types of problems within supervised learning: classification problems, which require the prediction of discrete categories, and regression problems, which require the prediction of continuous values. [11]
Unsupervised learning does not require a labeled dataset.
Reinforcement learning draws not on a fixed dataset but rather on an environment in which the AI takes actions and receives observations. [12]
Generalizing means that an algorithm is able to extrapolate from previous observations to perform well in a new scenario on the same task. Transferring, a closely related idea, means that an algorithm can extrapolate across tasks. [13]

What is Artificial General Intelligence?
Artificial General Intelligence (AGI) is AI that is on par with humans in a wide range of tasks. If we recall our earlier definition that intelligence measures one's ability to achieve one's goals across environments, then general intelligence emphasizes that the real world is complicated and so requires a degree of intelligence which is not situation-specific but rather can respond to arbitrary environments. [14] This definition of AGI is ambiguous, so researchers have presented several tests to determine when we've reached AGI. The first is the Turing test, which requires an AGI to "fool half the judges into thinking it is human while interacting with them in a freeform conversation for 30 minutes and interpreting audio-visual input." Another test is the coffee test, which judges an AI on its ability to get into a person's house and make coffee, including finding coffee and using the coffeemaker. A third test is "the robot college student test," which requires the AI to enroll and take classes in the same way as a normal university student. The fourth test, called the "employment test," demands an AGI effectively perform a wide range of jobs, as a uniquely smart human with an extended lifetime could do. However, these tests alone are not enough to define AGI.
Historically, benchmarks for AGI have failed: for example, chess was once thought to require general intelligence, but although we have AI that can beat humans at chess, we do not yet have AGI. Tesler reflects this ever-moving target in his facetious definition characterizing artificial intelligence as "whatever hasn't been done yet." [15]

Human versus Machine Intelligence
This "moving target" phenomenon reveals people's tendency to believe a task will require human-like capacity when it actually requires only task-specific computation. This same tendency is borne out in what Sutton coins "the bitter lesson," which holds that while programmers try to apply human-like thought patterns to solve problems, typically general computation will perform the same task more efficiently. The Bitter Lesson is premised on Moore's law, which can be extrapolated to predict "exponentially falling cost per unit of computation." However, researchers often fail to internalize these falling costs and act as though computational power will remain fixed, such that they rely on heuristics, in particular "human knowledge or the special features" of a task, to perform the task with lower computational power. For example, they might encode features of a grandmaster's strategy into a chess algorithm. Yet examples across game playing, speech recognition, and computer vision all suggest that although specialized solutions are intellectually compelling in the short run, in the long run they hinder progress by restricting an algorithm's capacity to human modes of thinking. Ultimately, Sutton argues that we must forsake these attempts to "model the mind" for generalizable solutions that "continue to scale with increased computation" and so harness the falling costs of computational power to create more effective algorithms. [16]

In contrast to machine intelligence, Griffiths holds, human intelligence is defined by its bounds: limited time, limited computation, and limited communication. These limits are much less constrictive in AI, which has access to "experiences of many human lifetimes," an exponential increase in computational power, and the ability to directly copy learning from one system to another. Griffiths challenges Sutton's Bitter Lesson by arguing we may want AI to learn on small datasets and engage in rapid learning when it is functioning with constraints similar to those of humans. For example, when an AI interacts with humans it must quickly learn human preferences, and in the sciences, an AI must make deductions when little data is available. In these circumstances, it is helpful for a machine to have some degree of "inductive bias," which characterizes the inferences beyond data which allow a machine to draw correct conclusions. However, Griffiths reaffirms the Bitter Lesson by arguing that in these instances, getting more data may still be easier, and more successful, than trying to engineer good inductive biases. [17]

When will AGI be developed?
Karnofsky argues that modeling AGI development based on "biological anchors" is the best option presently available to us, even though it leaves significant uncertainty in its estimates. The Bio Anchors method focuses on two key questions: "Based on the usual patterns in how much training costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?" In turn, "Bio Anchors estimates a >10% chance of transformative AI by 2036, a ~50% chance by 2055, and an ~80% chance by 2100." [18]
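The Bio Anchors style of reasoning can be caricatured with a few lines of arithmetic. The numbers below are placeholders invented for this sketch, not the report's actual estimates; the structure is simply "assume a training cost today, assume costs fall exponentially, and solve for the year the run becomes affordable."

```python
# Toy version of the Bio Anchors style of reasoning.
# All numbers are illustrative placeholders, NOT the report's actual figures.

cost_today = 1e12          # assumed cost today (dollars) to train a brain-scale model
halving_time_years = 2.5   # assumed time for compute cost per unit to halve
affordable_budget = 1e9    # assumed budget at which some actor would fund the run

year, cost = 2022, cost_today
while cost > affordable_budget:
    year += 1
    cost /= 2 ** (1 / halving_time_years)  # exponential decline in training cost

print(f"Under these made-up assumptions, the run first becomes affordable around {year}.")
```

Changing any of the assumed inputs shifts the answer by years or decades, which is why the actual report reasons over distributions of these quantities rather than point estimates.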
Model size: At present, AI models are estimated to be not even 1% as large as human brains, yet they may need to be ten times larger than human brains to perform the same tasks we do, to account for potential inefficiencies in AI. The challenge is that as an AI model becomes larger, it becomes more expensive to train.[19]

Task type: If a task can be decomposed into short, repeatable subtasks, it is far cheaper to train a model on those subtasks. For example, training a model to write an essay might take, say, an hour for each iteration, while training a model to write a good next sentence based on prediction might take just a minute per iteration. So training subtasks is a much cheaper process.[20]

Karnofsky then responds to concerns that the Bio Anchors predictions are too aggressive. Mostly, these criticisms turn on the concern that trial-and-error training is not enough to teach true understanding. Karnofsky responds that "deep understanding" may be illusory, and may just reflect strong predictive capacities.[21] Karnofsky also highlights one justified reason the Bio Anchors model may be too aggressive: it assumes that computing power, rather than labor (researchers) or training processes (ensuring the AI can engage in large-scale trial and error), will be the bottleneck for AI development.[22] Finally, Karnofsky points out several reasons the Bio Anchors model may be too conservative: first, that we may come up with ways to teach AI much more cheaply than data-heavy training processes currently allow; second, that AI will become increasingly embedded in our economy and its advancement will be driven by decentralized market forces; and third, that tasks will be more easily decomposed into small tasks than Bio Anchors predicts, therefore cheapening the training process.[23]

AGI Emergence Paradigms

The Alignment Challenge
Earlier, we defined artificial intelligence as an algorithm that can modify the real world to achieve its goals. The field of AI Alignment focuses on ensuring that AI's goals are aligned with people's goals, such that AI supports rather than detracts from human flourishing.

One vein of thought: intelligence implies agency
Legg and Hutter hold that intelligence measures one's ability to achieve one's goals across a wide range of environments.[24] Yudkowsky takes this definition one step further in "The Power of Intelligence," where he argues that any intelligent actor can go beyond achieving its goals to actually modify its environment. Detractors might argue intelligence is divorced from real-world power: predictive power does not imply physical power. This line of reasoning is captured by the adage "intelligence is no match for a gun." However, Yudkowsky holds, this vein of thought fails to account for the ways intelligence allows an actor to adapt: humans, too, had no guns throughout much of the evolutionary process, yet eventually developed them. On this argument, Yudkowsky holds that an intelligent actor, by definition, is an agent, defined as an actor that can modify its environment. [25]

What follows from agent AI?
The Machine Intelligence Research Institute (MIRI), run by Yudkowsky, focuses on reducing the risk engendered by agent AI: AI that can modify its environment. By MIRI's argument, AGI will seek to optimize a utility function, and its unprecedented power will mean that it will optimize this utility function incredibly well. However, we cannot easily encode our preferences in a utility function, and therefore may lose control of the AI.
We can point to mythic corollaries for this difficulty: the story of Midas's touch, for example, tells of King Midas failing to encode his actual desires in a wish, such that his wish goes disastrously awry. Philosopher Nick Bostrom proposes a thought experiment, called the Paper Clip Thought Experiment, which similarly describes the challenge of encoding our values in a superintelligent AI:

Suppose that someone programs and switches on an AI that has the goal of producing paper clips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips. It gets worse. We might want to stop this AI. But it is single-minded and would realise that this would subvert its goal. Consequently, the AI would become focussed on its own survival. It is fighting humans for resources, but now it will want to fight humans because they are a threat.[26]

Bostrom's thought experiment illustrates his orthogonality thesis, which holds that an AI's final goals (as defined by its utility function) can vary independently from its intelligence. This means that making an AI smarter will not necessarily help it align with human goals. [27] The thought experiment also demonstrates the concept of instrumental convergence, which "posits that smart goal-directed agents will tend to take certain actions" aimed at increasing their power (like gaining resources).[28]

In a 2016 talk, Yudkowsky lays out the particular challenges to fully encoding the range of human values into a utility function and explains why attempted solutions to this challenge have broadly failed. Yudkowsky asks us to consider the challenge of directing a robot to fill a cauldron by assigning it a utility function. If the utility of a full cauldron is 1, and the utility of an empty cauldron is 0, then the robot will fill the cauldron until it overflows. If we introduce an impact penalty for overfilling the cauldron, then the robot may try to trick people into thinking the cauldron was not overfilled (a toy numerical version of this setup is sketched below). Additionally, we may want to implement an off switch for the robot, but it is hard to write down a utility function that makes it in the robot's best interest to be switched off when it is overfilling the cauldron without also making the robot want to coerce people into turning it off regardless of the fullness of the cauldron.[29]

Yudkowsky also introduces the difficulty of ensuring agent AI maintains stable goals during self-modification. The fundamental question Yudkowsky poses is: if an agent can self-modify, how do we ensure that it won't modify its own goals? Yudkowsky provides an example of a Tic Tac Toe algorithm which creates a more advanced successor algorithm. The original algorithm can only verify the success of the successor by checking each of its moves. However, this verification process binds the successor to the original algorithm's standards and so limits the abilities of the successor. To create a successor more advanced than the original requires that the successor go beyond the verification capacities of the original, which in turn means the original algorithm cannot ensure the successor shares its goals. [30]
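Returning to the cauldron example: here is a toy numerical rendering of it in Python. The target level, the pouring noise, the penalty weight, and the candidate policies are all invented for illustration, and this is not Yudkowsky's actual formalism; it only shows how the naive utility function rewards overshooting, and how a crude impact penalty changes, rather than removes, the misaligned incentive.

```python
import random
random.seed(0)

# Toy cauldron-filling example (all numbers invented for illustration).
FULL = 1.0

def naive_utility(level):
    return 1.0 if level >= FULL else 0.0           # 1 for a full cauldron, 0 otherwise

def penalized_utility(level, penalty=0.5):
    spill = max(0.0, level - FULL)
    return naive_utility(level) - penalty * spill  # crude "impact penalty" on overflow

def expected(utility, target, trials=10_000):
    """Average utility of a policy that tries to pour up to `target`, with noisy pouring."""
    total = 0.0
    for _ in range(trials):
        level = target + random.gauss(0.0, 0.1)
        total += utility(level)
    return total / trials

targets = [1.0, 1.2, 1.5, 2.0]
print("naive:    ", {t: round(expected(naive_utility, t), 3) for t in targets})
print("penalized:", {t: round(expected(penalized_utility, t), 3) for t in targets})
# Under the naive utility, pouring far past "full" is optimal (it guarantees utility 1).
# The impact penalty pulls the optimum back toward the full mark, but, as the text notes,
# it also creates an incentive to hide or misreport spills rather than avoid them.
```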
"Specification gaming: the flip side of AI ingenuity" builds on Yudkowsky's concerns about agent AI deviating from people's goals, because reinforcement learning rewards an agent that can achieve an outcome as efficiently as possible. This can lead to "specification gaming," which occurs when an agent "games" a specified task by finding loopholes to perform the specified task most efficiently, akin to a student seeking a good grade who cheats on the test rather than studying the content. [31] However, by the same token that reward functions encourage specification gaming, reward optimization also incentivizes AI to seek novel, efficient solutions. So moving away from reward functions is not the solution to specification gaming. [32]

How do we resolve specification gaming? Reward shaping incentivizes an agent to learn intermediary tasks rather than simply completing the task, but poses a risk if an agent is optimized for only the intermediary task. An alternative is to focus on better specifying the final reward; however, many corner cases make this a challenging task. Rather than covering every corner case, we might use human feedback to train the reward function, but specification gaming may lead the agent to fool the human into thinking it's succeeding. A final challenge to shaping rewards is that they may not be independent of the algorithm, which in many cases is an embedded agent. A sufficiently powerful AI might modify its reward function to be easier to satisfy, called "reward tampering." For example, a traffic navigator, rather than giving useful directions, might influence "users to have preferences that are easier to satisfy," for example "by nudging them to choose destinations that are easier to reach." Or, perhaps, an AI could "hijack the computer on which it runs" and "manually set… its reward signal to a higher value." [33]

Tool AI: An alternative to agent AI
Is it true that an intelligent AGI will necessarily be an agent AGI: AGI that can modify its environment? One alternative proposal is tool AI, which would be built to be used as a tool by its creators, rather than being an agent with its own actions and goal-seeking behavior. [34] Tool AI was originally proposed by Holden Karnofsky in 2012 as an alternative path for AGI development. Karnofsky envisioned the AGI as functioning like a much more extensive version of Google Maps, in that it would predict information but not be optimizing over any particular utility function, and therefore be more amenable to human choice than the agents Yudkowsky envisioned. In his rebuttal to agent AI, Karnofsky rejected the orthogonality thesis, and held that intelligence would imply the AI took actions which seemed good to the programmer, such that they would be aligned with people's best interests. [35] The core difference between tool AI and agent AI is that tool AI would simply predict and present an array of optimal outcomes, whereas agent AI would act on these predictions. As an example, Karnofsky points to IBM's Watson, which can function in agent mode (as on Jeopardy!) or in tool mode (if it "display[s] top candidates' answers" to a question for someone else to then act upon). [36]

Task-based versus Generalized AI
Ngo relies on Legg and Hutter's definition of intelligence as the ability to do well on a broad range of cognitive tasks.
Ngo distinguishes between two ways to achieve this intelligence: one which achieves success at a broad range of tasks by being trained in each individual task, and another which succeeds at a broad range of tasks with little or no task-specific training, by generalizing from previous experience. Ngo finds a historical parallel for the task-based approach in electricity: while electricity is an all-purpose technology, humans had to design specific ways to apply it to each task. In contrast, Ngo points to GPT-2 and GPT-3 as a generalizable technology: GPT was first trained to predict the next phrase in a sentence, but later became capable at many other language tasks. Similarly, Ngo argues, children develop learning on tasks very different from the tasks of adults, yet can still effectively transfer the knowledge gained in childhood to work in adulthood. Ngo clarifies that task-based and generalizable knowledge are not totally distinct categories, and in the real world, learning exists on a continuum between these two poles. [37]

Ngo predicts that task-based learning will be very effective in settings where it is easy to gather data, but that generalizable learning will be important when it is difficult to gather data. For example, a task-based AI may outcompete humans in a job with clear specifications and a great deal of data, such as optimizing supply chains, but may fail at a job with moving targets and limited training data, such as decision making in the role of a CEO. The key path to creating an AI CEO (which Ngo argues would effectively require AGI) is to develop general intelligence: training AI on related tasks and then transferring and generalizing those skills to the job of a CEO. [38]

Does Ngo's vision run contrary to Sutton's bitter lesson argument, which would suggest that it is inefficient to mimic human ways of knowing and learning in order to achieve superhuman results? Sutton argues that we should stop trying to find simple ways to think about the contents of minds. Yet while Sutton justifies his argument by pointing out that AI has exponentially increasing computational power, contrary to the effective assumptions of most programmers, available data may not follow the same exponentially increasing structure. The core limitation, Ngo articulates, is not an AI's potential cognition, but rather the data available to it. This points to the underlying assumption that Sutton makes: that learning [must occur] on huge training sets. Yet if these training sets are not available, perhaps we must actually model the mind's abilities. [39]

What will AGI's impact be?
Christiano argues that AGI will have three key impacts: growth will accelerate, human wages will fall, and human values may be sidelined by AI's interests. First, Christiano argues that growth will accelerate. Historically, Christiano argues, the economy has grown exponentially, and this growth will continue. Additionally, machines accelerate how quickly a task becomes cheap, which could spur infinite growth. Finally, robots provide a far less bounded labor source than human population growth. Next, Christiano argues that human wages will fall, as machines, rather than humans, increasingly produce value for the economy. Finally, Christiano holds that machines will increasingly be in charge of decision making, which will not necessarily reflect human values. Christiano argues by way of analogy: corporations maximize profits, but their externalities are limited because they are just an amalgamation of people.
In contrast, AI might become increasingly intelligent and therefore be able to deviate from human goals without our awareness or our ability to steer it correctly. [40]

AI Governance
In "AI Governance: Opportunity and Theory of Impact," Allan Dafoe describes AI governance as addressing a series of AI-related challenges which have both near-term and long-term manifestations. For example, inequality in the near term appears as labor displacement and winner-take-all markets, while in the long term it may look like an international authoritarian regime. [41]

Dafoe develops the distinction between agent and tool AI to characterize two scenarios for AGI emergence. The first, called the superintelligence perspective, draws on the concept of an agent AI and describes a situation in which one agent achieves superintelligence and therefore gains a decisive advantage, which allows for a winner-take-all scenario. The second, called the structural perspective, draws on the understanding of a tool AI and describes a scenario in which there is a diverse, global ecology of AI systems, such that there is a competitive AI landscape. In the superintelligence scenario, the primary AI risk is posed by a dominant AI system which is misused or unaligned, the responsible party is the group which creates the AI system, and the solution to AI alignment is increased safety research. In contrast, in a structural scenario, the primary threat is less predictable but likely stems from political and market dynamics (like a nuclear arms race), the responsible party is less clearly defined, and the solution lies in AI governance and interdisciplinary collaboration between safety researchers and policymakers. [42]

Dafoe remains agnostic as to which scenario will unfold and instead advises a multipolar approach which grants equal weight to each scenario. He advises a two-stage asset-decision model of research impact, which directs resources to support impactful decisions made by leaders like CEOs and researchers. Dafoe recommends field building as a means of building capacity around AI decision making. Funding research automatically grows the field by bringing diverse perspectives to governance conversations, supporting talent, and elevating AI thought leaders. While planning how to address AGI may not be in itself useful, as the emerging AGI landscape is difficult to predict and therefore lay plans for, field building supports the important work of developing expertise and connecting people, which helps create infrastructure so that governance can adeptly respond to emerging AI risks. [43]

^ LessWrong, AI, https://www.lesswrong.com/tag/ai.^AGISI, A working list: Definitions of Artificial Intelligence and Human Intelligence, http://agisi.org/Defs_intelligence.html.^Merriam Webster, Algorithm, https://www.merriam-webster.com/dictionary/algorithm.^Wikipedia, Deep Blue (chess computer), https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer); Richard Ngo, A short introduction to machine learning, https://www.lesswrong.com/posts/qE73pqxAZmeACsAdF/a-short-introduction-to-machine-learning.^Wikipedia, Symbolic artificial intelligence, https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence; Wikipedia, AI Winter, https://en.wikipedia.org/wiki/AI_winter.^Richard Ngo, A short introduction to machine learning.^Richard Ngo, A short introduction to machine learning.^Richard Ngo, A short introduction to machine learning; 3Blue1Brown, But what is a neural network?
Chapter 1, Deep Learning, https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&index=2.^ Richard Ngo, A short introduction to machine learning; 3Blue1Brown, But what is a neural network? Chapter 1, Deep Learning.^3Blue1Brown, But what is a neural network? Chapter 1, Deep Learning; 3Blue1Brown, Gradient descent, how neural networks learn, Chapter 2, Deep learning, https://www.youtube.com/watch?v=IHZwWFHWa-w&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&index=3; 3Blue1Brown, What is backpropagation really doing?, Chapter 3, Deep learning, https://www.youtube.com/watch?v=Ilg3gGewQ5U.^Ngo, A short introduction to machine learning.^Ngo, A short introduction to machine learning.^Ngo, A short introduction to machine learning.^AGISI, A working list: Definitions of Artificial Intelligence and Human Intelligence, http://agisi.org/Defs_intelligence.html; Muehlhauser, What is AGI?, https://intelligence.org/2013/08/11/what-is-agi/.^Muehlhauser, What is AGI?; Wikipedia, AI effect, https://en.wikipedia.org/wiki/AI_effect.^Sutton, The Bitter Lesson, http://incompleteideas.net/IncIdeas/BitterLesson.html.^Griffiths, Understanding Human Intelligence through Human Limitation, https://arxiv.org/pdf/2009.14050.pdf.^Karnofsky, Forecasting transformative AI: the biological anchors method in a nutshell, https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/.^ Karnofsky, Forecasting transformative AI: the biological anchors method in a nutshell, https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/.^ Karnofsky, Forecasting transformative AI: the biological anchors method in a nutshell. ^ Karnofsky, Forecasting transformative AI: the biological anchors method in a nutshell. ^ Karnofsky, Forecasting transformative AI: the biological anchors method in a nutshell. ^ Karnofsky, Forecasting transformative AI: the biological anchors method in a nutshell. 
^AGISI, A working list: Definitions of Artificial Intelligence and Human Intelligence.^Yudkowski, Power of Intelligence, https://intelligence.org/2007/07/10/the-power-of-intelligence/.^Gans, AI and the paperclip problem, https://voxeu.org/article/ai-and-paperclip-problem.^LessWrong, Orthogonality Thesis, https://www.lesswrong.com/tag/orthogonality-thesis.^Turner, The Causes of Power-seeking and Instrumental Convergence, https://www.lesswrong.com/s/fSMbebQyR4wheRrvk.^Yudkowsky, AI Alignment: Why Its Hard, and Where to Start, https://www.youtube.com/watch?v=EUjc1WuyPT8.^Yudkowsky, AI Alignment: Why Its Hard, and Where to Start.^DeepMind, Specification gaming: the flip side of AI ingenuity, https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity.^ DeepMind, Specification gaming: the flip side of AI ingenuity.^DeepMind, Specification gaming: the flip side of AI ingenuity.^LessWrong, Tool AI, https://www.lesswrong.com/tag/tool-ai.^Karnofsky, Thoughts on the Singularity Institute (SI), https://www.google.com/url?q=https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si&sa=D&source=docs&ust=1642994082572124&usg=AOvVaw0J49p52wkgp_zb8BctZCBv.^Karnofsky, Thoughts on the Singularity Institute (SI).^ Ngo, AGI safety from first principles, https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ.^ Ngo, A short introduction to machine learning.^Sutton, The Bitter Lesson.^Christiano, Three Impacts of Machine Intelligence, https://www.effectivealtruism.org/articles/three-impacts-of-machine-intelligence-paul-christiano/.^Dafoe, AI Governance: Opportunity and Theory of Impact, https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact.^Dafoe, AI Governance: Opportunity and Theory of Impact.^Dafoe, AI Governance: Opportunity and Theory of Impact. | Content Synthesis/Digital Assistance/Discovery | Unknown | null | null | null | null | null | null |
|
news | Shubham Sharma | Databricks launches lakehouse for financial services sector | Databricks, which combines data lake and warehousing in a single lakehouse platform, has launched a dedicated offering for financial services. | https://venturebeat.com/2022/02/15/databricks-launches-lakehouse-for-financial-services-sector/ | 2022-02-15T15:00:00Z | Join today's leading executives online at the Data Summit on March 9th. Register here.San Francisco-based Databricks, a company that combines the capabilities of a data warehouse and data lake in a single lakehouse architecture, today announced a new industry-specific offering: Lakehouse for Financial Services.Available generally starting today, the fully-integrated platform follows the launch of Lakehouse for Retail a month ago. It has been tailor-made using a multi-cloud approach to meet the unique technical, business, and regulatory requirements of companies operating in banking, insurance, and capital markets and help them drive maximum value from their data assets. A number of financial services players have already signed up for the product, including TD Bank and Gemini.Databricks Lakehouse for Financial Services enables Gemini to bring together data ingestion, machine learning, and analytical engineering onto a single platform, Sri Rajappa, head of data at Gemini, said. That means various personas on our team, from data engineers, ML engineers to analytical engineers, can do everything from solving complex data engineering problems to building efficient AI models to providing easy access to the underlying datasets using SQL, Python, and Scala. This significantly accelerates the time it takes for us to solve our most pressing business problems, he added.Lakehouse for Financial Services: Whats special?In addition to the capabilities that Databricks lakehouse is known for, meaning real-time analytics, business intelligence (BI), and AI, the industry-specific offering provides enterprises with vetted data model frameworks, partner solutions, and 14 pre-built accelerators and open-source libraries.The accelerators and libraries jumpstart the analytics process for critical industry use cases, including post-trade analysis, market surveillance, transaction enrichment, fraud detection and prevention, and regulatory reporting. Meanwhile, the partner solutions include offerings from Deloitte and Avanade. The former offers a cloud-based, curated data platform to help financial institutions intelligently organize data domains and approved provisioning points and the latter provides a risk management platform that enables firms to rapidly deploy data into value-at-risk models to keep up with emerging risks and threats. Notably, the vertical-specific lakehouse also comes integrated with FINOS Legend platform to facilitate the processing and exchange of financial data throughout the entire banking ecosystem and help develop next-generation industry standards. Plus, it uses the Delta Sharing protocol with leading financial data providers like Nasdaq, Factset, and Intercontinental Exchange to make it easier for enterprises to consume, share, and monetize data.For Financial Service Institutions around the world looking to modernize and innovate, the two most important assets are no longer its capital or sheer scale, but its data and its people, Junta Nakai, the global head for financial services & sustainability GTM 0at Databricks, said. 
The Databricks Lakehouse for Financial Services brings these two critical resources together on a secure, collaborative, and open source-based data platform that allows FSIs to leverage data across clouds and drive innovation with AI, he added.CompetitionThe launch of lakehouse for financial services further strengthens Databricks offering for enterprises. The company, which was valued at $38 billion following its last fund-raise in August 2021, goes against the likes of players such as Snowflake, Dremio, and Google BigQuery. Snowflake, in particular, has been a major rival for Databricks. The Montana-based company, a data warehouse provider in the beginning, already offers a product for financial services and has lately been adding data lake-specific features with the expansion to AI/ML use-cases and unstructured data among other things. The company also challenged Databricks recent performance claims.VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn More | Content Synthesis/Decision Making | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | Katyanna Quach | LinkedIn billionaire Reid Hoffman joins DeepMind's co-founder in fresh AI startup | Inflection AI to 'relay our thoughts and ideas' to machinesLinkedIn co-founder Reid Hoffman, DeepMind co-founder Mustafa Suleyman, and ex-DeepMind AI expert Karén Simonyan announced on Tuesday a new venture of theirs named Inflection AI.… | https://www.theregister.com/2022/03/09/linkedin_hoffman_ai/ | 2022-03-09T04:05:26Z | LinkedIn co-founder Reid Hoffman, DeepMind co-founder Mustafa Suleyman, and ex-DeepMind AI expert Karén Simonyan announced on Tuesday a new venture of theirs named Inflection AI.The startup will focus on machine learning and natural language processing. "Throughout the history of computing, humans have had to learn to speak the language of machines. In the new paradigm, machines will understand our language," Suleyman said in a statement.Modern neural networks trained on vast amounts of speech and text have advanced human-computer communication. Smart speakers armed with AI assistants, such as Amazon Alexa or Google Home, can be instructed to complete all sorts of tasks with voice commands.The Inflection AI trio want to make the technology more seamless in consumer products. "Recent advances in artificial intelligence promise to fundamentally redefine human-machine interaction. We will soon have the ability to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people. Over time these new language capabilities will revolutionize what it means to have a digital experience," Suleyman added.Inflection AI is backed by Greylock, a top technology VC fund, where Hoffman is a general partner and Suleyman is a venture partner, CNBC reported. The financial details were not disclosed. Both men will continue to work at Greylock while Suleyman leads the upstart as CEO.The main brain behind Inflection AI's software will be Karén Simonyan, who sold his company Vision Factory to DeepMind in 2014. He joined the Google stablemate and worked on many of its largest projects, such as the Go-playing system AlphaZero and the protein-folding system AlphaFold. Simonyan has since left DeepMind to join Inflection AI as its chief scientist.Before Suleyman was at Graylock, he led DeepMind's efforts to build Streams, a mobile app designed to be an "AI-powered assistant for nurses and doctors everywhere," alerting them if someone was at risk of developing an acute kidney injury. But the project was controversial as it effectively obtained 1.6 million patient records from the UK National Health Service without those folks' explicit consent, drawing criticism from a privacy watchdog. Streams was later taken over by Google Health, which shut down the app and deleted the data.Hoffman has a long interest in AI. He graduated from Stanford University in 1990 with a degree in symbolic systems and cognitive science before earning a Master's in Philosophy from the University of Oxford in 1993.He is an early investor in OpenAI, and helped set up the Ethics and Governance of Artificial Intelligence Fund, a research and philanthropy institute between MIT and Harvard University. He is also on the board of the Human-Centered Artificial Intelligence institute at Stanford University.The Register has asked Inflection AI for comment. ® | Content Synthesis/Digital Assistance | Unknown | null | null | null | null | null | null |
|
news | Kyle Wiggers | AI Weekly: DARPA seeks to better align AI with human intentions | DARPA is launching a program that aims to better align AI with human intentions. Beyond this, DeepMind and LinkedIn cofounders are launching a new startup. | https://venturebeat.com/2022/03/11/ai-weekly-darpa-seeks-to-better-align-ai-with-human-intentions/ | 2022-03-11T14:41:55Z | This week in AI, DARPA, the emerging technologies R&D agency of the U.S. Defense Department, launched a new program that aims to align AI systems with human decision-makers in domains where there isn't an agreed-upon right answer. Elsewhere, two prominent cofounders from LinkedIn and DeepMind, Reid Hoffman and Mustafa Suleyman, announced a new AI startup called Inflection AI that seeks to develop software that allows humans to talk to computers using everyday language. In a press release describing the new three-and-a-half-year program, DARPA says that the goal is to evaluate and build trusted algorithmic decision-makers for mission-critical Department of Defense operations. Dubbed In the Moment, or ITM, it focuses on the process of alignment: building AI systems that accomplish what they're expected to accomplish. ITM is different from typical AI development approaches that require human agreement on the right outcomes, ITM program manager Matt Turek said in a statement. The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data. For example, self-driving cars can be developed against a ground truth for right and wrong decisions based on unchanging, relatively consistent rules of the road. The designers of these cars could hard-code risk values into the cars that prevent them from, for example, making right turns on red in cities where they're illegal. But Turek says that these one-size-fits-all risk values won't work from a Department of Defense perspective. Combat situations evolve rapidly, he points out, and a commander's intent can change from scenario to scenario. The [Defense Department] needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable, Turek continued. Difficult decisions are those where trusted decision-makers disagree, no right answer exists, and uncertainty, time pressure, and conflicting values create significant decision-making challenges. DARPA is only the latest organization to explore techniques that might help better align AI with a person's intent. In January, OpenAI, the company behind the text-generating model GPT-3, detailed an alignment technique that it claims cuts down on the amount of toxic language that GPT-3 generates. Toxic text generation is a well-known problem in AI, often caused by toxic datasets. Because text-generating systems are trained on data containing problematic content, some of the content slips through. Although [AI systems are] quite smart today, they don't always do what we want them to do. The goal of alignment is to produce AI systems that do [achieve] what we want them to, OpenAI cofounder and chief scientist Ilya Sutskever told VentureBeat in a phone interview earlier this year.
[T]hat becomes more important as AI systems become more powerful. ITM will attempt to establish a framework to evaluate decision-making by algorithms in very difficult domains, including combat, through the use of realistic, challenging scenarios. Trusted humans will be asked to make decisions in these scenarios, and then the results will be compared to decisions from an algorithm subjected to the same scenarios. We're going to collect the decisions, the responses from each of those decision-makers, and present those in a blinded fashion to multiple triage professionals, Turek said. Those triage professionals won't know whether the response comes from an aligned algorithm or a baseline algorithm or from a human. And the question that we might pose to those triage professionals is which decision-maker would they delegate to, providing us a measure of their willingness to trust those particular decision-makers. Related to the problem of alignment, LinkedIn cofounder Hoffman and DeepMind cofounder Suleyman plan with Inflection AI to leverage AI to help humans talk to computers. In an interview with CNBC, Suleyman described wanting to build products that eliminate the need for people to write in shorthand or simplify their ideas to communicate with machines. [Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something, Suleyman told the publication. It feels like we're on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space. Inflection AI's plans remain vague, but the concept of translating human intentions into a language computers can understand dates back decades. Even the best chatbots and voice assistants today haven't delivered on the promise; recall Viv Labs, which pledged to deliver a conversational interface to anything but instead fizzled out into elements of Samsung's maligned Bixby assistant. But Suleyman and Hoffman are betting that their expertise, as well as coming advancements in conversational AI, will make an intuitive human-computer language interface possible within the next five years. Even at the bigger tech companies, there's a relatively small number of people actually building these [AI] models. One of the advantages of doing this in a startup is that we can go much faster and be more dynamic, Suleyman told CNBC. My experience of building many, many teams over the last 15 years is that there is this golden moment when you really have a very close-knit, small, focused team. I'm going to try and preserve that for as long as possible. Given that countless visionaries have tried and failed in this area, that would be an impressive feat indeed. For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers, Senior AI Staff Writer. | Unknown | Unknown | null | null | null | null | null | null
|
news | Louis Columbus | How AI protects machine identities in a zero-trust world | By scanning enterprise networks, bad actors often find unprotected machine identities to exploit, making them a favored attack surface. | https://venturebeat.com/2022/03/03/how-ai-protects-machine-identities-in-a-zero-trust-world/ | 2022-03-03T21:22:52Z | Bad actors know all they need to do is find one unprotected machine identity, and they're into a company's network. Analyzing their breaches shows they move laterally across systems, departments, and servers, looking for the most valuable data to exfiltrate while often embedding ransomware. By scanning enterprise networks, bad actors often find unprotected machine identities to exploit. These factors are why machine identities are a favorite attack surface today. Why machine identities need zero trust: Organizations quickly realize they're competing in a zero-trust world today, and every endpoint, whether human or machine-based, is their new security perimeter. Virtual workforces are here to stay, creating thousands of new mobility, device, and IoT endpoints. Enterprises are also augmenting tech stacks to gain insights from real-time monitoring data captured using edge computing and IoT devices. Forrester estimates that machine identities (including bots, robots, and IoT) grow twice as fast as human identities on organizational networks. These factors combine to drive an economic loss of between $51.5 billion and $71.9 billion attributable to poor machine identity protection. Exposed APIs also lead to machine identities being compromised, contributing to machine identity attacks growing 400% between 2018 and 2019 and by over 700% between 2014 and 2019. Defining machine identities: Getting zero trust strategies to scale for machine identities is challenging given how versatile their configurations are, combined with how certificate and key management needs to be consistent across each device's lifecycle to be effective. CISOs tell VentureBeat they are selectively applying AI and machine learning to the areas of their endpoint, certificate, and key lifecycle management strategies today that need greater automation and scale. An example is how one financial services organization pursuing a zero trust strategy uses AI-based Unified Endpoint Management (UEM) that keeps machine-based endpoints current on patches, using AI to analyze each endpoint and deliver the appropriate patch to it. How AI is protecting machine identities: It's common for an organization not to know how many machine identities it has at any given moment, according to a recent conversation VentureBeat had with the CISO of a Fortune 100 company. It's understandable, given that 25% of security leaders say the number of identities they're managing has increased by a factor of ten or more in the last year. Eighty-four percent of security leaders say the number of identities they manage has doubled in the last year. All of this translates into a growing workload for already overloaded IT and security teams, 40% of which are still using spreadsheets to manually track digital certificates, combined with 57% of enterprises not having an accurate inventory of SSH keys.
Certificate outages, key misuse or theft, including granting too much privilege to employees who don't need it, and audit failures are symptoms of a bigger problem with machine identities and endpoint security. Most CISOs VentureBeat speaks with are pursuing a zero trust strategy long-term and have their boards of directors supporting them. Boards want to see new digital-first initiatives drive revenue while reducing the risks of cyberattacks. CISOs are struggling with the massive workloads of protecting machine identities while pursuing zero trust. The answer is automating key areas of endpoint lifecycle management with AI and machine learning. The following are five key areas where AI and machine learning (ML) show the potential to protect machine identities in an increasingly zero-trust world. Automating machine governance and policies. Securing machine-to-machine communications successfully starts with consistently applying governance and policies across every endpoint. Unfortunately, this isn't easy because machine identities in many organizations rely on siloed systems that provide little if any visibility and control for CISOs and their teams. One CISO told VentureBeat recently that it's frustrating given how much innovation is going on in cybersecurity. Today, there is no single pane of glass that shows all machine identities and their governance, user policies, and endpoint health. Vendors to watch in this area include Ericom with their ZTEdge SASE Platform and their Automatic Policy Builder, which uses machine learning to create and maintain user or machine-level policies. Their customers say the Policy Builder is proving to be effective at automating repetitive tasks and delivering higher accuracy in policies than could be achieved otherwise. Additional vendors to watch include Delinea, Microsoft Security, Ivanti, SailPoint, Venafi, ZScaler, and others. Ericom's AI-based Automatic Policy Builder automatically creates policies for each user based on their observed behavior across the applications and machines they typically access. Policies can be manually adjusted and updated to create a personalized policy, enabling least-privilege access without burdening IT staff. Automating patch management while improving visibility and control. Cybersecurity vendors prioritize patch management, improved visibility, and machine identity control because their results drive funded business cases. Patch management, in particular, is a fascinating area of AI-based innovation for machine identities today. CISOs tell VentureBeat it's a sure sign of cross-functional teams, both within IT and across the organization, not communicating with each other when there are wide gaps in asset inventories, including errors in key management databases. Vulnerability scans need to be defined by a given organization's risk tolerance, compliance requirements, type and taxonomy of asset classes, and available resources. It's a perfect use case for AI and algorithms to solve complex constraint-based problems, including patching thousands of machines within the shortest time. Taking a data-driven approach to patch management is helping enterprises defeat ransomware attacks. Leaders in this area include BeyondTrust, Delinea, Ivanti, KeyFactor, Microsoft Security, Venafi, ZScaler, and others. Using AI and ML to discover new machine identities. It's common for cybersecurity and IT teams not to know where up to 40% of their machine endpoints are at any given point in time.
Given the various devices and workloads IT infrastructures create, the fact that so many machine identities are unknown amplifies how critical it is to pursue a zero-trust security strategy for all machine identities. Cisco's approach is unique, relying on machine learning analytics to analyze endpoint data comprising over 250 attributes. Cisco branded the service AI Endpoint Analytics. The system rule library is a composite of various IT and IoT devices in an enterprise's market space. Beyond the system rule library, Cisco AI Endpoint Analytics has a machine-learning component that helps build endpoint fingerprints to reduce the net unknown endpoints in your environment when they are not otherwise available. Ivanti Neurons for Discovery is also proving effective in providing IT and security teams with accurate, actionable asset information they can use to discover and map the linkages between key assets and the services and applications that depend on those assets. Additional AI/ML leaders in discovering new machine identities include CyCognito, Delinea, Ivanti, KeyFactor, Microsoft Security, Venafi, ZScaler, and others. Cisco's AI Endpoint Analytics platform aggregates data from various sources in the network, collates and analyzes it to build a detailed endpoint profile, and groups similar endpoints by applying artificial intelligence and machine learning (AI/ML) techniques. Key and digital certificate configuration. Arguably one of the weakest links in machine identity and machine lifecycle management, key and digital certificate configurations are often stored in spreadsheets and rarely updated to their current configurations. CISOs tell VentureBeat that this area suffers because of the lack of resources in their organizations and the chronic cybersecurity and IT shortage they're dealing with. Each machine requires a unique identity to manage and secure machine-to-machine connections and communication across a network. Their digital identities are often assigned via SSL, TLS, or authentication tokens, SSH keys, or code-signing certificates. Bad actors target this area often, looking for opportunities to compromise SSH keys, bypass code-signed certificates, or compromise SSL and TLS certificates. AI and machine learning are helping to solve the challenges of getting keys and digital certificates correctly assigned and kept up to date for every machine identity on an organization's network. Relying on algorithms to ensure the accuracy and integrity of every machine identity with its respective keys and digital certificates is the goal. Leaders in this field include CheckPoint, Delinea, Fortinet, IBM Security, Ivanti, KeyFactor, Microsoft Security, Venafi, ZScaler, and others. UEM for machine identities. AI and ML adoption accelerates the fastest when these core technologies are embedded in endpoint security platforms already in use across enterprises. The same holds for UEM for machine identities. Taking an AI-based approach to managing machine-based endpoints enables real-time OS, patch, and application updates that are most needed to keep each endpoint secure. Leading vendors in this area include Absolute Software's Resilience, the industry's first self-healing zero trust platform; it's noteworthy for its asset management, device and application control, endpoint intelligence, incident reporting, and compliance, according to G2 Crowd's crowdsourced ratings. Ivanti Neurons for UEM relies on AI-enabled bots to seek out machine identities and endpoints and automatically update them, unprompted.
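As a concrete illustration of the certificate-lifecycle problem described above (and independent of any of the vendors named here), a minimal Python sketch can check how many days remain before a host's TLS certificate expires; the host name is a placeholder, and a real deployment would run such checks across an inventory of machine identities and alert well before expiry.

```python
# Minimal sketch: check how many days remain on a host's TLS certificate.
# The host is a placeholder; a real system would iterate over an inventory
# of machine identities rather than a single endpoint.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

print(cert_days_remaining("example.com"))  # days until expiry, e.g. 123
```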
Ivanti's approach to self-healing endpoints is noteworthy for creatively combining AI, ML, and bot technologies to deliver UEM and patch management at scale across their customer base. Additional vendors rated highly by G2 Crowd include CrowdStrike Falcon, VMWare Workspace ONE, and others. A secure future for machine identity: Machine identities' complexity makes them a challenge to secure at scale and over their lifecycles, further complicating CISOs' efforts to secure them as part of their zero-trust security strategies. It's the most urgent problem many enterprises need to address, however, as just one compromised machine identity can bring an entire enterprise network down. AI and machine learning's innate strengths are paying off in five key areas, according to CISOs. First, business cases to spend more on endpoint security need data to substantiate them, especially when reducing risk and assuring uninterrupted operations. AI and ML provide the data techniques and foundation delivering results in five key areas ranging from automating machine governance and policies to implementing UEM. The worst ransomware attacks and breaches of 2021 started because machine identities and digital certificates were compromised. The bottom line is that every organization is competing in a zero-trust world, complete with complex threats aimed at any available, unprotected machine. | Process Automation/Decision Making | Management/Computer and Mathematical | null | null | null | null | null | null
|
news | Katyanna Quach | Startups competing with OpenAI's GPT-3 all need to solve the same problems | Today we walk you through the fascinating world of upcoming text-generating rivalsAnalysis Text-generating language models are difficult to control. These systems have no sense of morality: they can spew hate speech and misinformation. Despite this, numerous companies believe this kind of software is good enough to sell. Are these organizations, and the wider world, ready for it?… | https://www.theregister.com/2022/03/03/language_model_gpt3/ | 2022-03-03T08:33:09Z | Analysis Text-generating language models are difficult to control. These systems have no sense of morality: they can spew hate speech and misinformation. Despite this, numerous companies believe this kind of software is good enough to sell. Are these organizations, and the wider world, ready for it?OpenAI launched its powerful GPT-3 to the masses in 2020; it also has an exclusive licensing deal with Microsoft. The upshot of this is that developers no longer have to be machine-learning gurus to create products that feature natural language processing. All the hard work of building, training, and running a massive neural network has been done for them, and is neatly packaged behind the GPT-3 API.Last year, two startups released their own proprietary text-generation APIs. AI21 Labs, based in Israel, launched its 178-billion-parameter Jurassic-1 in August 2021, and Cohere, headquartered in Canada, released a range of models nicknamed small, medium, and large, three months later.Now, Cohere has an extremely large-sized system, which is right now only available to beta testers. Cohere hasn't disclosed how many parameters its models contain. For comparison, OpenAI's GPT-3 has 175 billion parameters.Aidan Gomez, co-founder and CEO of Cohere, said he toyed with the idea of launching a generative language model startup before GPT-3 was announced. He was part of the team at Google Brain, which came up with the transformer-based architecture at the heart of these systems. Gomez argued there are benefits to having a few centralized, powerful text-generation systems as opposed to a sprawl of individual deployments."We really shouldn't have a world where every single company is training their own GPT-3, it would be massively environmentally costly, compute costly, and we should be trying to share resources as much as possible," Gomez told The Register."I saw the opportunity for an independent player to come out and to basically centralize the cost of pre-training these massive models and then open up access and amortize those costs across a huge number of users. By reducing the cost you make it accessible to more people."Starting a language model company that can compete with the likes of OpenAI is a tall order because the barrier to entry is so high. New ventures must come armed with deep pockets to pay for the huge amount of computational resources required to train and run these models, and hire experts in cutting-edge research and machine-learning engineering.Cohere raised $40m in its series-A funding round, and just announced $125m in series-B funding this month, while AI21 Labs has collected $54.5m over four rounds of funding. OpenAI secured $250m in its latest round, technically its series A.Each startup has partnered with a different company to provide cloud computing. Cohere has entered a multi-year contract with Google. 
OpenAI and AI21 Labs are supported by Microsoft and AWS, respectively. "Training these large models is always expensive," Yoav Shoham, co-CEO of AI21 Labs and a retired Stanford computer-science professor, told The Register. "If you're not smart enough, you can easily run into tens of millions of dollars if you're not careful. You need to make sure that you know unit economics so that you don't lose money on every customer and only make it up in volume." AI21 Labs and Cohere are also choosy about the customers they onboard. The tendency for language models to produce text that may be offensive or false makes the technology risky to deploy, and clients need to understand and be able to handle the dangers. Alongside OpenAI, both upstarts have strict usage guidelines and terms of service rules to control what can and cannot be built using their APIs. For example, they all forbid applications that could mislead people into believing they're communicating with a human being rather than a machine. Enforcing these rules is a balancing act. If these API providers are too restrictive on what can and can't be done with their technology, they could drive customers away and lose out on business. If they are too lax, the software could generate undesirable text or conversations, triggering a PR disaster, lawsuits, and so on. One of OpenAI's early flagship customers, Latitude, which built AI Dungeon, a popular online adventure text game, announced it had switched over to AI21 Labs after the developer was required by OpenAI to implement a content filter to catch and stop NSFW language. "We've been working on this for several weeks so that we could remove dependence on OpenAI for AI Dungeon users so that users would be minimally impacted by OpenAI's new content policy, which we are required to implement," Latitude said in December. OpenAI's new policy required the games maker to roll out a content filter to screen players' adventures for risqué narratives. But the filter went awry. Benign text such as "four watermelons" would be blocked and derail people's games. Earlier this year, Latitude said it was going to stop offering its GPT-3-based model altogether, claiming the protection measures OpenAI insisted were put in place were ruining the gameplay. "Most users can't have a good experience with the new filters," Latitude said. AI21 Labs has developed a toxicity filter, Shoham told us. The tool is used internally and will soon be offered to customers via its API. "We have a dedicated team to look at issues of quality, safety or ethics or bias, all the ways in which some people worry that AI could go wrong," he said. Safety is an issue all language model businesses have to deal with, and it'll be interesting to see if startups enforce a strong set of rules and controls, despite financial incentives to lower the bar and bring on more customers. "I think we're competitors but we're all in the same boat," Shoham said. "We know safety is an important issue and we take it seriously." Gomez agreed, and said he was open to the idea of sharing some of Cohere's IP if it specifically improved safety and would encourage more companies to adopt the new measures. At the moment, Cohere and AI21 Labs broadly offer more or less the same features and capabilities as OpenAI. On top of text generation, Cohere and OpenAI's models can perform tasks such as search and classification.
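Search and classification of this kind are typically built on embeddings, the technique discussed next. The sketch below is a minimal, hypothetical illustration of the idea: the embed() function is a stand-in for any provider's embedding endpoint (it is not Cohere's or OpenAI's actual API), and documents are ranked against a query by cosine similarity.

```python
# Hypothetical sketch: rank documents against a query using embedding vectors
# and cosine similarity. embed() is a placeholder for a real embeddings API;
# with random vectors the ranking is meaningless, but the mechanics are the same.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=768)           # real APIs return a learned dense vector

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "refund for a damaged order"
docs = ["How do I return a broken item?", "Our offices are closed on holidays."]
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked)  # with real embeddings, the most semantically similar document first
```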
Cohere supports embeddings, a technique that maps similar words or concepts together, making it easier for users to implement sentiment analysis or build recommendation systems. OpenAI followed suit and added similar capabilities to its GPT-3-based models last month. The models' performances are all pretty comparable since they were all trained on similar data scraped from the internet. Cohere and AI21 Labs also fed their models Wikipedia entries, books, and portions of the Common Crawl dataset used to teach OpenAI's GPT-3. Cohere and AI21 Labs will have to differentiate their models somehow to win over customers. "For us, our product focus is on expanding the number of people who can build with this stuff. That's where we see our leverage," Cohere's Gomez told us. "In order to do that we need to give those people the best possible models, so we invest a lot in research on making them more useful. There's three directions that I see: safety, efficiency, and quality." AI21 Labs is trying to figure out how to give machines reasoning skills. Shoham said his team at AI21 is trying to develop fresh system architectures by combining older symbolic AI systems with modern neural networks. "Current models are dumb as nails," he said. "Ask a language model how many teeth does a human have and it'll say 32. Now, that's right and very nice. But ask it how many teeth does a math teacher have and it'll say 47." The lack of common sense and inability to be accurate don't just make language models risky; they hamper technological innovation, too. They're not appropriate in some cases, such as generating or summarizing medical or legal advice, or educational materials. OpenAI's GPT-3 API transformed Ryan Doyle's career. As a former sales representative and self-taught developer, he built Magic Sales Bot, an application that used GPT-3 to help users write better sales pitches in their emails. Last year, Doyle told us that around 2,000 users had signed up to use his program. But Doyle stopped using it, he told us earlier this month, due to the model's tendency to just make up information: "GPT-3 presented a huge opportunity to apply AI to ideas I've always wanted to try, like creating sales emails. As the idea took shape, the reality showed that GPT-3 had quite a distance to go [before it could be] used in business writing. I ultimately had to pull it to move my business forward, but I intend on revisiting and integrating it when the tech improves." Cohere and AI21 Labs' models must tackle these same problems. As competition heats up, the focus is on making these systems smarter and more trustworthy. How to keep them from generating potentially misleading and false information is still an open problem. Demonstrably, people can be duped by fake computer-generated speeches. There are other up-and-coming startups looking to solve the same issues. Anthropic, the AI safety and research company started by a group of ex-OpenAI employees, hinted it might work on large commercial systems in the future. Several researchers have left Google Brain to join two new ventures started by their colleagues, according to people familiar with the matter. One outfit is named Character, and the other Persimmon Labs. Startups arriving late to the party face an uphill battle the longer they take to launch their services. Existing companies will continue to push new features, and latecomers risk falling behind.
Potential customers won't be too impressed if they just offer the same capabilities in current APIs. They could tailor their language models to specialize in a narrow domain to carve a niche in the market, or demonstrate that their software can solve new types of language tasks that weren't possible before. The best way to succeed, however, is to show their systems can generate text that's less biased, less toxic, and more accurate. ® | Digital Assistance/Information Retrieval Or Search/Content Creation | Business and Financial Operations/Management/Computer and Mathematical | null | null | null | null | null | null
|
news | Industry News | Imply Polaris enables developers to build modern analytics applications | Imply unveiled the first milestone in Project Shapeshift, the 12-month initiative designed to solve the most pressing issues developers face when building analytics applications. The announcement includes a cloud database service built from Apache Druid and the private preview of a multi-stage query engine for Druid. Together, these innovations show how Imply delivers the most developer-friendly and capable database for analytics applications. Developers are increasingly at the forefront of analytics innovation, driving an evolution in … | https://www.helpnetsecurity.com/2022/03/03/imply-polaris/ | 2022-03-03T02:45:27Z | Imply unveiled the first milestone in Project Shapeshift, the 12-month initiative designed to solve the most pressing issues developers face when building analytics applications. The announcement includes a cloud database service built from Apache Druid and the private preview of a multi-stage query engine for Druid. Together, these innovations show how Imply delivers the most developer-friendly and capable database for analytics applications. Developers are increasingly at the forefront of analytics innovation, driving an evolution in analytics beyond traditional BI and reporting to modern analytics applications. These applications, fueled by the digitization of businesses, are being built for real-time observability at scale for cloud products and services, next-gen operational visibility for security and IT, revenue-impacting insights and recommendations, and for extending analytics to external customers. Apache Druid has been the database of choice for analytics applications, trusted by developers at 1000+ companies including Netflix, Confluent and Salesforce. Today, we are at an inflection point with the adoption of Apache Druid as every organization now needs to build modern analytics applications, said Fangjin Yang, CEO and co-founder, Imply. This is why it's now time to take Druid to the next level. Project Shapeshift is all about making things easier for developers, so they can drive the analytics evolution inside their companies. As developers turned to Apache Druid to power interactive data experiences on streaming and batch data with limitless scale, Imply saw tremendous opportunity to simplify the end-to-end developer experience and extend the Druid architecture to power more analytics use cases for applications from a single database. Real-time database as a service built from Apache Druid: Building analytics applications involves operational work for software development and engineering teams across deployment, database operations, lifecycle management and ecosystem integration. For databases, cloud database services have become the norm as they remove the burden of infrastructure, from cluster sizing to scaling, and shift the consumption model to pay-as-you-use. Imply Polaris, however, is a cloud database service reimagined from the ground up to simplify the developer experience for analytics applications end-to-end. Much more than cloudifying Apache Druid, Polaris drives automation and intelligence that delivers the performance of Druid without needing expertise, and it provides a complete, integrated experience that simplifies everything from streaming to visualization.
Specifically, Polaris introduces: Fully-managed cloud service – Developers can build modern analytics applications without needing to think about the underlying infrastructure. No more sizing and planning required to deploy and scale the database. Developers can start ingesting data and building applications in just a few minutes. Database optimization – Developers get all the performance of Druid they need without turning knobs. The service automates configurations and tuning parameters and includes built-in performance monitoring that ensures the database is optimized for every query in the application. Single development experience – Developers get a seamless, integrated experience to build analytics applications. A built-in, push-based streaming service via Confluent Cloud and a visualization engine integrated into a single UI make it simple to connect to data sources and build rich, interactive applications. Polaris is built on the core tenets of Apache Druid (flexibility, efficiency and resiliency) and packages them into a cloud service that deploys instantly, scales effortlessly and doesn't require any Druid expertise, enabling any developer to build modern analytics applications, said Jad Naous, chief product officer, Imply. We chose Apache Druid to power our analytics application to get real-time traffic visibility across one of the world's largest global tier-1 IP backbones, said Paolo Lucente, big data architect at the global IP network division of NTT Ltd. We are looking forward to deploying Imply Polaris to continue to get the interactivity we need on a simple cloud-based service without having to worry about maintenance. Evolving the Druid architecture: From its inception, Druid has uniquely enabled developers to build highly interactive and concurrent applications at scale, powered by a query engine built for always-on applications with sub-second performance at TB to PB+ scale. Increasingly, however, developers need data exports, reporting and advanced alerting included with their applications, requiring additional data processing systems to deploy and manage. Imply introduces a private preview of a multi-stage query engine, a technical evolution for Druid that reinforces its leadership as the most capable database for analytics applications. The multi-stage query engine, in conjunction with the core Druid query engine, will extend Druid beyond interactivity to support the following new use cases in a single database platform: Druid for reporting – Improved ability to handle long-running, heavyweight queries to give developers a single database for powering applications that require both interactivity and complex reports or data exports. Cost-control capabilities make these heavyweight queries affordable. Druid for alerting – Building on Druid's longstanding capability to combine streaming and historical data, the multi-stage query engine enables alerting across a large number of entities with complex conditions at scale. Simplified and more capable ingestion – Druid has always provided very high concurrency: very fast queries across large data sets. Using the same SQL language that Druid already supports for queries, the new multi-stage query engine enables simplified ingestion from object stores, including HDFS, Amazon S3, Azure Blob and Google GCS, with in-database transformation, making data ingestion easy without giving up any of Druid's power to enable interactive conversations in modern data analytics applications.
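For context, and separate from anything Polaris-specific announced here, open source Druid already exposes a SQL endpoint over HTTP. A minimal sketch of querying a cluster from Python might look like the following; the host, port, and datasource name are assumptions for illustration.

```python
# Minimal sketch: issue a SQL query to a Druid cluster over its HTTP SQL API.
# Host, port, and the 'wikipedia' datasource are illustrative assumptions.
import json
import urllib.request

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # router/broker SQL endpoint

payload = json.dumps({
    "query": "SELECT channel, COUNT(*) AS edits FROM wikipedia "
             "GROUP BY channel ORDER BY edits DESC LIMIT 5"
}).encode("utf-8")

req = urllib.request.Request(
    DRUID_SQL_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # one JSON object per result row
```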
The multi-stage query engine represents the most significant evolution of Druid, an expansion of the architecture that makes it unparalleled in the industry, said Gian Merlino, co-founder/CTO of Imply and Apache Druid PMC chair. It brings both flexibility as well as ease to the developer experience. I'm excited that the entire open source community will be able to take full advantage of it. | Process Automation/Content Synthesis | Unknown | null | null | null | null | null | null
|
news | Kate Grigorenko | Use of Artificial Intelligence in the Banking World 2022 | The global AI market was valued at $62.35 billion in 2020, and the market is expected to expand with a CAGR of 40.2% from 2021 to 2028. The banking and financial sector accounts for 20-25% of the global economy. It is unlikely that a market as big as banking and finance would not catch up on a trend as widespread and revolutionary as AI. In fact, even before the pandemic ushered in an era of technological revolution, the banking sector had started adopting AI for both front and back-office tasks. So, what (and how much) are the benefits of using AI for banks? What does the market look like in 2022? What do the experts see becoming a reality in the years to come? | https://dzone.com/articles/use-of-artificial-intelligence-in-the-banking-worl | 2022-02-15T15:18:37Z | The global AI market was valued at $62.35 billion in 2020, and the market is expected to expand with a CAGR of 40.2% from 2021 to 2028. The banking and financial sector accounts for 20-25% of the global economy. It is unlikely that a market as big as banking and finance would not catch up on a trend as widespread and revolutionary as AI. In fact, even before the pandemic ushered in an era of technological revolution, the banking sector had started adopting AI for both front and back-office tasks. So, what (and how much) are the benefits of using AI for banks? What does the market look like in 2022? What do the experts see becoming a reality in the years to come? Find answers to all the questions right here. Artificial Intelligence in the Banking World: Going by the Numbers. Before we move any further, let's take a look at what the numbers have to say about the use and impact of artificial intelligence in the banking sector. A McKinsey report suggested that by using AI, the banking sector can gain an additional $1 trillion in value. With the application of AI, banks can save an estimated $447 billion by 2023. Out of that, $416 billion of savings will come from AI use in the front and middle office. A whopping 80% of banks in an OpenText survey of financial services professionals said they were highly aware of the potential benefits of AI. 75% of banks with over $100 billion in assets have already begun implementing AI strategies. For banks with less than $100 billion in assets, the percentage was 46%. Joint research by the National Business Research Institute and Narrative Science in 2020 concluded that 32% of banks have started leveraging AI technologies like predictive analytics and voice recognition to get a competitive edge in the market. Benefits of AI in Banking: The numbers make it clear that AI is gaining traction in the banking world. The banking industry's fascination with artificial intelligence is not just because AI is in vogue. The primary benefits of AI in banking include: better service response; elimination of human errors and biases; greater scope for personalization; enhancement in customer trust and satisfaction; and facilitation of the concept of banking-from-home. Due to these benefits, stakeholders are exploring and experimenting with more innovative and newer ways of leveraging artificial intelligence, Big Data, and machine learning for banks. Top Applications of AI in Banking: Artificial intelligence has potentially limitless use cases, both in general and specifically in the banking sector.
Optimistic forecasters dream of days when AI would completely take over the banking world and our entire banking system would be run by these intelligent machines. While that is still a far-fetched dream, here are 5 applications of AI in banking that we can see in action in 2022. 1. AI Cybersecurity Against Financial Fraud: In 2020, over 290,000 cybersecurity issues were reported by the banking sector. That makes it important for banks to take not just responsive but proactive measures. They need to nip cybersecurity vulnerabilities in the bud and protect employees and customers from financial fraud, and AI is helping with that. Denmark's largest bank, Danske Bank, has replaced its old rule-based fraud detection system with an AI-powered algorithm. The deep learning tool now helps the bank cut down the risk of financial fraud by 50%. The solution also reduced false positives by 60%, resulting in less frequent false alarms. Also, Amazon recently purchased an AI cybersecurity startup, harvest.AI. This further solidifies the fact that the use of AI in cybersecurity and financial fraud prevention has serious potential. 2. AI-Powered Chatbots for Seamless Customer Interaction: Chatbots are one of the most-used applications of artificial intelligence, not only in banking but across the spectrum. Once deployed, AI chatbots can work 24/7 to be available for customers. In fact, in several surveys and market research studies, it has been found that people actually prefer interacting with bots instead of humans. This can be attributed to the use of natural language processing for AI chatbots. With NLP, AI chatbots are better able to understand user queries and communicate in a seemingly human way. An example of AI chatbots in banking can be seen at Bank of America with Erica, the virtual assistant. Erica handled 50 million client requests in 2019 and can handle requests including card security updates and credit card debt reduction. 3. Personalized Banking for Higher Customer Retention: Digital-savvy banking customers today need more than what traditional banking can offer. With AI, banks can deliver the personalized solutions that customers are seeking. An Accenture survey suggested that 54% of banking customers wanted an automated tool to help monitor budgets and suggest real-time spending adjustments. AI can make that, and a lot more, possible. Now, one might wonder: would customers be willing to take advice from a bot? Well, 44% of people said they are "very willing" to accept computer-generated banking advice. Thus, this AI use case in banking is actionable with decent acceptance levels at present. A practical application of AI-powered personalized banking can be seen at the TD Bank Group. They have made public their plans to integrate Kasisto's AI technology into their mobile app. The solution would give customers real-time support and insights into their spending patterns. 4. Transparent Loan and Credit Decisions With Artificial Intelligence: Most banks are still relying on credit scores, credit history, and references to ascertain a prospect's creditworthiness. This process is not just painstaking and time-consuming but also not transparent. With the use of AI in making loan and credit decisions, banks can reduce manual grunt work and increase transparency. Also, with data-backed insights offered by AI solutions, banks can cut losses and make more profitable decisions.
While examples of uses of AI in the banking industry for such decision-making are not many, some banks are now using AI to generate creditworthiness reports about people with limited credit histories. Also, such systems can alert banks about possibly risky spending behavior and patterns of their clients. 5. AI Ensuring Ethical Frameworks: Ethical considerations are becoming more prevalent across the board, especially in the financial world. This is because customers are becoming more aware and are taking charge of how they want their data to be used. Artificial intelligence can greatly help banks develop ethical frameworks for data processing and build customer trust. HSBC can be seen as a market leader in this sphere. HSBC is the first financial services company to have created an AI and data ethics principle. They have also partnered with the Monetary Authority of Singapore and the Alan Turing Institute to develop a framework for the ethical adoption of AI in banking. Challenges That Need To Be Tackled in 2022: While the benefits and use cases of artificial intelligence in banking are plenty, the path ahead is not without its fair share of challenges. The key challenges plaguing the AI niche in banking include: Customers and employees in tier II and tier III cities across the globe are showing unwillingness to adapt to AI-enhanced methods, and the initial inertia against moving away from conventional practices needs to be overcome. There seems to be a disconnect between what customers want banks to offer and the solutions banks put in place; proper data and marketing understanding is required to bridge this gap. Regulatory requirements and compliance pressures are proving to be a limiting factor for the adoption of AI by banks; for example, net banking and online transactions come under the ambit of privacy regulation, and thus compliance becomes inevitable. The workforce of the banking sector is not yet skilled enough to work with advanced AI tools and software, so upskilling efforts need to be undertaken by banks. With that, we can conclude that the future of AI in banking looks promising, and 2022 could be an inflection point where banks stop playing around with AI and experimental efforts transform into something that can yield tangible results. | Digital Assistance/Content Synthesis/Content Creation | Business and Financial Operations/Office and Administrative Support | null | null | null | null | null | null
|
news | Rob Toews, Contributor, https://www.forbes.com/sites/robtoews/ | Language Is The Next Great Frontier In AI | Language AI is poised to transform vast swaths of society and the economy. | https://www.forbes.com/sites/robtoews/2022/02/13/language-is-the-next-great-frontier-in-ai/ | 2022-02-13T22:00:26Z | Johannes Gutenberg's printing press, introduced in the fifteenth century, transformed society through language. The creation of machines that can understand language may have an even greater impact. [Image credit: Encyclopedia Britannica] Language is the cornerstone of human intelligence. The emergence of language was the most important intellectual development in our species' history. It is through language that we formulate thoughts and communicate them to one another. Language enables us to reason abstractly, to develop complex ideas about what the world is and could be, and to build on these ideas across generations and geographies. Almost nothing about modern civilization would be possible without language. Building machines that can understand language has thus been a central goal of the field of artificial intelligence dating back to its earliest days. It has proven maddeningly elusive. This is because mastering language is what is known as an AI-complete problem: that is, an AI that can truly understand language the way a human can would by implication be capable of any other human-level intellectual activity. Put simply, to solve language is to solve AI. This profound and subtle insight is at the heart of the Turing test, introduced by AI pioneer Alan Turing in a groundbreaking 1950 paper. Though often critiqued or misunderstood, the Turing test captures a fundamental reality about language and intelligence; as it approaches its 75th birthday, it remains as relevant as it was when Turing first conceived it. Humanity has yet to build a machine intelligence with human-level mastery of language. (In other words, no machine intelligence has yet passed the Turing test.) But over the past few years researchers have achieved startling, game-changing breakthroughs in language AI, also called natural language processing (NLP). The technology is now at a critical inflection point, poised to make the leap from academic research to widespread real-world adoption. In the process, broad swaths of the business world and our daily lives will be transformed. Given language's ubiquity, few areas of technology will have a more far-reaching impact on society in the years ahead. Transformers: A Once-In-A-Generation Breakthrough. The most powerful way to illustrate the capabilities of today's cutting-edge language AI is to start with a few concrete examples. Today's AI can correctly answer complex medical queries and explain the underlying biological mechanisms at play. It can craft nuanced memos about how to run effective board meetings. It can write articles analyzing its own capabilities and limitations, while convincingly pretending to be a human observer.
It can produce original, sometimes beautiful, poetry and literature. (It is worth taking a few moments to inspect these examples yourself.) What is behind these astonishing new AI abilities, which just five years ago would have been inconceivable? In short: the invention of the transformer, a new neural network architecture that has unleashed vast new possibilities in AI. A group of Google researchers introduced the transformer in late 2017 in a now-classic research paper. Before transformers, the state of the art in NLP (for instance, LSTMs and the widely used Seq2Seq architecture) was based on recurrent neural networks. By definition, recurrent neural networks process data sequentially: that is, one word at a time, in the order that the words appear. The transformer's great innovation is to make language processing parallelized, meaning that all the tokens in a given body of text are analyzed at the same time rather than in sequence. In order to support this parallelization, transformers rely on an AI mechanism known as attention. Attention enables a model to consider the relationships between words, even if they are far apart in a text, and to determine which words and phrases in a passage are most important to pay attention to. Parallelization also makes transformers vastly more computationally efficient than RNNs, meaning that they can be trained on larger datasets and built with more parameters. One defining characteristic of today's transformer models is their massive size. A flurry of innovation followed in the wake of the original transformer paper as the world's leading AI researchers built upon this foundational breakthrough. The publication of the landmark transformer model BERT came in 2018. Created at Google, BERT's big conceptual advance is its bidirectional structure (the B in BERT stands for bidirectional). The model looks in both directions as it analyzes a given word, considering both the words that come before and the words that come after, rather than working unidirectionally from left to right. This additional context allows for richer, more nuanced language modeling. BERT remains one of the most important transformer-based models in use, frequently treated as a reference against which newer models are compared. Much subsequent research on transformers (for instance, Facebook's influential RoBERTa model from 2019) is based on refining BERT. Google's entire search engine today is powered by BERT, one of the most far-reaching examples of transformers' real-world impact. Another core vein of research in the world of transformers is OpenAI's family of GPT models. OpenAI published the original GPT in June 2018, GPT-2 in February 2019, and GPT-3 in May 2020. Popular open-source versions of these models, like GPT-J and GPT-Neo, have followed. As the G in their names indicates, the GPT models are generative: they generate original text output in response to the text input they are fed. This is an important distinction between the GPT class of models and the BERT class of models. BERT, unlike GPT, does not generate new text but instead analyzes existing text (think of activities like search, classification, or sentiment analysis). GPT's generative capabilities make these models particularly attention-grabbing, since writing appears to be a creative act and the output can be astonishingly human-like. Text generation is sometimes referred to as NLP's party trick.
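To make the attention mechanism described above concrete, here is a minimal numpy sketch of scaled dot-product self-attention. Every token's query is compared against every token's key in a single matrix multiplication, which is what lets the whole sequence be processed in parallel; shapes and values are illustrative only.

```python
# Minimal sketch of scaled dot-product self-attention over a short sequence.
# All tokens are processed in parallel: one matrix multiply scores every
# token against every other token.
import numpy as np

def self_attention(X: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # pairwise relevance between all tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                          # each output mixes information from all tokens

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                         # e.g., 4 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (4, 8): one updated vector per token
```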
(All four of the examples linked to above are text generation examples from GPT-3.) Perhaps the most noteworthy element of the GPT architecture is its sheer size. OpenAI has been intentional and transparent about its strategy to pursue more advanced language AI capabilities through raw scale above all else: more compute, larger training data corpora, larger models. With 1.5 billion parameters, GPT-2 was the largest model ever built at the time of its release. Published less than a year later, GPT-3 was two orders of magnitude larger: a whopping 175 billion parameters. Rumors have circulated that GPT-4 will contain on the order of 100 trillion parameters (perhaps not coincidentally, roughly equivalent to the number of synapses in the human brain). As a point of comparison, the largest BERT model had 340 million parameters. As with any machine learning effort today, the performance of these models depends above all on the data on which they are trained. Today's transformer-based models learn language by ingesting essentially the entire internet. BERT was fed all of Wikipedia (along with the digitized texts of thousands of unpublished books). RoBERTa improved upon BERT by training on even larger volumes of text from the internet. GPT-3's training dataset was larger still, consisting of half a trillion language tokens. Thus, these models' linguistic outputs and behaviors can ultimately be traced to the statistical patterns in all the text that humans have previously published online. The reason such large training datasets are possible is that transformers use self-supervised learning, meaning that they learn from unlabeled data. This is a crucial difference between today's cutting-edge language AI models and the previous generation of NLP models, which had to be trained with labeled data. Today's self-supervised models can train on far larger datasets than was ever previously possible: after all, there is more unlabeled text data than labeled text data in the world by many orders of magnitude. Some observers point to self-supervised learning, and the vastly larger training datasets that this technique unlocks, as the single most important driver of NLP's dramatic performance gains in recent years, more so than any other feature of the transformer architecture. Foundation Models. Training models on massive datasets with millions or billions of parameters requires vast computational resources and engineering know-how. This makes large language models prohibitively costly and difficult to build. GPT-3, for example, required several thousand petaflop/second-days to train, a staggering amount of computational resources. Because very few organizations in the world have the resources and talent to build large language models from scratch, almost all cutting-edge NLP models today are adapted from a small handful of base models: e.g., BERT, RoBERTa, GPT-2, BART.
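Adapting one of these base models is comparatively lightweight. As a rough, hypothetical illustration of that workflow (described as fine-tuning below), a sketch using the open-source Hugging Face transformers and datasets libraries might look like this; the dataset, label count, and hyperparameters are placeholders rather than a recommended recipe.

```python
# Hypothetical sketch: load a pre-trained base model and fine-tune it on a
# small labeled dataset for a downstream classification task.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)           # start from pre-trained weights

dataset = load_dataset("imdb")                    # small labeled dataset for the target task
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")
dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset["train"].shuffle(seed=0).select(range(2000)),
)
trainer.train()   # fine-tuning costs a tiny fraction of the original pre-training
```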
Almost without exception, these models come from the world's largest tech companies: Google, Facebook, OpenAI (which is bankrolled by Microsoft), Nvidia. Without anyone quite planning for it, this has resulted in an entirely new paradigm for NLP technology development, one that will have profound implications for the nascent AI economy. This paradigm can be thought of in two basic phases: pre-training and fine-tuning. In the first phase, a tech giant creates and open-sources a large language model: for instance, Google's BERT or Facebook's RoBERTa. Unlike in previous generations of NLP, in which models had to be built for individual language tasks, these massive models are not specialized for any particular activity. They have powerful generalized language capabilities across functions and topic areas. Out of the box, they perform well at the full gamut of activities that comprise linguistic competence: language classification, language translation, search, question answering, summarization, text generation, conversation. Each of these activities on its own presents compelling technological and economic opportunities. Because they can be adapted to any number of specific end uses, these base models are referred to as pre-trained. In the second phase, downstream users (young startups, academic researchers, anyone else who wants to build an NLP model) take these pre-trained models and refine them with a small amount of additional training data in order to optimize them for their own specific use case or market. This step is referred to as fine-tuning. Today's pre-trained models are incredibly powerful, and even more importantly, they are publicly available, said Yinhan Liu, lead author on Facebook's RoBERTa work and now cofounder/CTO of healthcare NLP startup BirchAI. For those teams that have the know-how to operationalize transformers, the question becomes: what is the most important or impactful use case to which I can apply this technology? Under this pre-train then fine-tune paradigm, the heavy lifting is done upfront with the creation of the pre-trained model. Even after fine-tuning, the end model's behavior remains largely dictated by the pre-trained model's parameters. This makes these pre-trained models incredibly influential. So influential, in fact, that Stanford University has recently coined a new name for them, foundation models, and launched an entire academic program devoted to better understanding them: the Center for Research on Foundation Models (CRFM). The Stanford team believes that foundation models, and the small group of tech giants that have the resources to produce them, will exert outsize influence on the future behavior of artificial intelligence around the world. As the researchers put it: Foundation models have led to an unprecedented level of homogenization: Almost all state-of-the-art NLP models are now adapted from one of a few foundation models. While this homogenization produces extremely high leverage (any improvements in the foundation models can lead to immediate benefits across all of NLP), it is also a liability; all AI systems might inherit the same problematic biases of a few foundation models. This Stanford effort is drawing attention to a massive looming problem for large language models: social bias. The source of social bias in AI models is straightforward to summarize but insidiously difficult to root out.
Because large language models (or foundation models, to use the new branding) learn language by ingesting what humans have written online, they inevitably inherit the prejudices, false assumptions and harmful beliefs of their imperfect human progenitors. Just imagine all the fringe subreddits and bigoted blogs that must have been included in GPT-3's vast training data corpus. The problem has been extensively documented: today's most prominent foundation models all exhibit racist, sexist, xenophobic, and other antisocial tendencies. This issue will only grow more acute as foundation models become increasingly influential in society. Some observers believe that AI bias will eventually become as prominent an issue for consumers, companies and governments as digital threats like data privacy or cybersecurity that have come before it, threats that were also not fully appreciated at first, because the breakneck pace of technological change outstripped society's ability to properly adapt to it. There is no silver-bullet solution to the challenge of AI bias and toxicity. But as the problem becomes more widely recognized, a number of mitigation strategies are being pursued. Last month, OpenAI announced that it had developed a new version of GPT-3 that is safer, more helpful, and more aligned with human values. The company used a technique known as reinforcement learning from human feedback to fine-tune its models to be less biased and more truthful than the original GPT-3. This new version, named InstructGPT, is now the default language model that OpenAI makes available to customers. Historically, Alphabet's DeepMind has been an outlier among the world's most advanced AI research organizations for not making language AI a major focus area. This changed at the end of 2021, with DeepMind announcing a collection of important work on large language models. Of the three NLP papers that DeepMind published, one is devoted entirely to the ethical and social risks of language AI. The paper proposes a comprehensive taxonomy of 6 thematic areas and 21 specific risks that language models pose, including discrimination, exclusion, toxicity and misinformation. DeepMind pledged to make these risks a central focus of its NLP research going forward to help ensure that it is pursuing innovation in language AI responsibly. The fact that this dimension of language AI research, until recently treated as an afterthought or ignored altogether by most of the world's NLP researchers, featured so centrally in DeepMind's recent foray into language AI may be a signal of the field's shifting priorities moving forward. Increased regulatory focus on the harms of bias and toxicity in AI models will only accelerate this shift. And make no mistake: regulatory action on this front is a matter of when, not if. Beyond Natural Language: Interestingly, perhaps the most creative use cases for NLP today don't involve natural language at all. In particular, today's cutting-edge language AI technology is powering remarkable breakthroughs in two other domains: coding and biology. Whether it's Python, Ruby, or Java, computer programming happens via languages. Just like natural languages such as English or Swahili, programming languages are symbolically represented, follow regular rules, and have a robust internal logic.
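The claim that code is language-like can be illustrated with a few lines of Python's standard library; this is an editorial aside, not an example from the article.

```python
# Source code, like a sentence, is a structured sequence of tokens drawn from a
# finite vocabulary and arranged by grammatical rules. Python ships a tokenizer
# that makes the parallel explicit.
import io
import tokenize

source = "def area(radius):\n    return 3.14159 * radius ** 2\n"

for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    kind = tokenize.tok_name[tok.type]
    if kind not in ("NEWLINE", "NL", "INDENT", "DEDENT", "ENDMARKER"):
        print(f"{kind:<8} {tok.string!r}")
```

A model that has learned the statistics of such token sequences can, in principle, complete them the same way a language model completes prose.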
The audience for these languages just happens to be software compilers rather than other humans. It therefore makes sense that the same powerful new technologies that have given AI incredible fluency in natural language can likewise be applied to programming languages, with similar results. Last summer OpenAI announced Codex, a transformer-based model that can write computer code astonishingly well. In parallel, GitHub (which is allied with OpenAI through its parent company Microsoft) launched a productized version of Codex named Copilot. To develop Codex, OpenAI took GPT-3 and fine-tuned it on a massive volume of publicly available written code from GitHub. Codex's design is simple: human users give it a plain-English description of a command or function and Codex turns this description into functioning computer code. A user could input into Codex, for instance, "crop this image circularly" or "animate this image horizontally so that it bounces off the left and right walls", and Codex can produce a snippet of code to implement those actions. (These exact examples can be examined on OpenAI's website.) Codex is most capable in Python, but it is proficient in over a dozen programming languages. Then, just two weeks ago, DeepMind further advanced the frontiers of AI coding with its publication of AlphaCode. AlphaCode is an AI system that can compete at a human level in programming competitions. In these competitions, which attract hundreds of thousands of participants each year, contestants receive a lengthy problem statement in English and must construct a complete computer program that solves it. Example problems include devising strategies for a custom board game or solving an arithmetic-based brain teaser. While OpenAI's Codex can produce short snippets of code in response to concrete descriptions, DeepMind's AlphaCode goes much further. It begins to approach the full complexity of real-world programming: assessing an abstract problem without a clear solution, devising a structured approach to solving it, and then executing on that approach with up to hundreds of lines of code. AlphaCode almost seems to display that ever-elusive attribute in AI, high-level reasoning. As DeepMind's AlphaCode team wrote: "Creating solutions to unforeseen problems is second nature in human intelligence, a result of critical thinking informed by experience. For artificial intelligence to help humanity, our systems need to be able to develop problem-solving capabilities. AlphaCode solves new problems in programming competitions that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding." Another language in which today's cutting-edge NLP has begun to generate remarkable insights is biology, from genomics to proteins. Genomics is well-suited to the application of large language models because an individual's entire genetic endowment is encoded in a simple four-letter alphabet: A (for adenine), C (for cytosine), G (for guanine), and T (for thymine). Every human's DNA is defined by a string of billions of As, Cs, Gs and Ts (known as nucleotides) in a particular order. In many respects DNA functions like a language, with its nucleotide sequences exhibiting regular patterns that resemble a kind of vocabulary, grammar, and semantics. What does this language say? It defines much about who we are, from our height to our eye color to our risk of heart disease or substance abuse. Large language models are now making rapid progress in deciphering the language of DNA, in particular its noncoding regions.
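The genomic models cited in this article use far more sophisticated representations, but the basic move of treating DNA as text can be sketched in a few lines; the sequence below is a made-up fragment used only for illustration.

```python
# Treat a DNA string over the alphabet A, C, G, T as a sentence, and overlapping
# k-mers as its "words", one common way to tokenize genomes for language models.
from collections import Counter

sequence = "ATGCGTACGTTAGCATGCGT"  # invented fragment, not real genomic data

def kmer_tokens(seq: str, k: int = 3) -> list:
    # Slide a window of length k across the sequence, one token per position.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

tokens = kmer_tokens(sequence)
print(tokens[:5])                    # ['ATG', 'TGC', 'GCG', 'CGT', 'GTA']
print(Counter(tokens).most_common(3))
```

Models trained over such token streams at much longer ranges are what make it possible to study the genome's noncoding regions.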
These noncoding regions do not contain genes but rather control genes: they regulate how much, when, and where given genes are expressed, giving them a central role in the maintenance of life. Noncoding regions comprise 98% of our total DNA but until now have remained poorly understood. A few months ago, DeepMind introduced a new transformer-based architecture that can predict gene expression based on DNA sequence with unprecedented accuracy. It does so by considering interactions between genes and noncoding DNA sequences at much greater distances than was ever before possible. A team at Harvard completed work along similar lines to better understand gene expression in corn (fittingly naming their model CornBERT). Another subfield of biology that represents fertile ground for language AI is the study of proteins. Proteins are strings of building blocks known as amino acids, linked together in a particular order. There are 20 amino acids in total. Thus, for all their complexity, proteins can be treated as tokenized strings, wherein each amino acid, like each word in a natural language, is a token, and analyzed accordingly. As one example, an AI research team from Salesforce recently built an NLP model that learns the language of proteins and can generate plausible protein sequences that don't exist in nature with prespecified characteristics. The potential applications of this type of controllable protein synthesis are tantalizing. These efforts are just the beginning. In the months and years ahead, language AI will make profound contributions to our understanding of how life works. Conclusion: Language is at the heart of human intelligence. It therefore is and must be at the heart of our efforts to build artificial intelligence. No sophisticated AI can exist without mastery of language. Today, the field of language AI is at an exhilarating inflection point, on the cusp of transforming industries and spawning new multi-billion-dollar companies. At the same time, it is fraught with societal dangers like bias and toxicity that are only now starting to get the attention they deserve. This article explored the big-picture developments and trends shaping the world of language AI today. In a follow-up article, we will canvass today's most exciting NLP startups. A growing group of NLP entrepreneurs is applying cutting-edge language AI in creative ways across sectors and use cases, generating massive economic value and profound industry disruption. Few startup categories hold more promise in the years ahead. Stay tuned for Part 2 of this article, which will explore today's most promising NLP startups. Note: The author is a Partner at Radical Ventures, which is an investor in BirchAI. | Content Creation/Decision Making/Content Synthesis | Healthcare Practitioners and Support/Management/Arts, Design, Entertainment, Sports, and Media/Business and Financial Operations | null | null | null | null | null | null
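Looping back to the protein discussion above, the same tokenized-string framing applies; this is an illustrative editorial sketch (not the Salesforce model), and the example sequence is invented.

```python
# A protein is a string over a 20-letter amino-acid vocabulary, so it can be
# validated and tokenized exactly like a sentence, one token per residue.
from collections import Counter

AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")        # the 20-token vocabulary

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"    # made-up example sequence

tokens = list(protein)
unknown = [t for t in tokens if t not in AMINO_ACIDS]
assert not unknown, f"out-of-vocabulary tokens: {unknown}"

composition = Counter(tokens)
print(f"{len(tokens)} tokens, {len(composition)} distinct amino acids")
print(composition.most_common(5))
```

A generative model trained on millions of such sequences can then propose new token strings, which is the sense in which controllable protein synthesis becomes a language-modeling problem.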
|
news | sachin-jha | Kubernetes for Startups | DigitalOcean | As the most popular solution for container orchestration and management, businesses of all kinds adopt Kubernetes to increase automation and reduce IT costs. Cloud Native Computing Foundation (CNCF... | https://www.digitalocean.com/blog/kubernetes-for-startups-why-when-and-how-to-adopt | 2022-02-25T13:23:14Z | As the most popular solution for container orchestration and management, businesses of all kinds adopt Kubernetes to increase automation and reduce IT costs. Cloud Native Computing Foundation (CNCF) published a State of Cloud Native Development Report that found that 5.6 million developers now use Kubernetes, a 67% increase over one year. Despite its popularity, Kubernetes is a complex system thats not right for every business or use case.Kubernetes works well for organizations that have complex applications consisting of multiple services running in different containers, which is why its so popular for large businesses. But what about your growing startup? To find out if adopting Kubernetes is right for your business, start with evaluating why you need it, where you are in development, and what your strategy is for implementation.Why adopt Kubernetes: Scale and grow with less frictionStartups planning for high-growth should build their application in a way that allows them to achieve their goals with the least amount of added cost and friction. One way to do this is through Kubernetes. Using microservices instead of a huge monolithic architecture allows organizations to be more flexible in development, and Kubernetes enables companies to scale easily and deploy software quickly.Kubernetes improves resource utilization, shortens software development cycles, and helps integrate new employees into the team by allowing them to work on a fragment of the software. And because its self-healing and auto-scaling capabilities can ensure high reliability and great uptimes and response times, it often improves the user experience by improving product quality and stability.The adoption of Kubernetes is growing within small and medium tech companies. A variety of applications run on Kubernetes, including video, content, mobile back-end, SaaS, blockchain, fintech, and crypto. If youre planning to scale, it makes sense to invest in Kubernetes early on. Even if youre just deploying a single simple web application within the cluster, planning for the future means building your infrastructure carefully, enabling your team to move quickly a year or three down the line. Implementing Kubernetes at the right time early on can help you avoid being trapped in technical debt that prevents your business from responding quickly and adequately in an ever-changing market.When to adopt Kubernetes: Take action at the right moment Startups need to be two things: Agile and Scalable. Businesses still in the proof of concept stage often need to pivot as they learn, quickly making significant changes in strategy or architecture. Adopting Kubernetes takes considerable time to implement. Taking the time away from product development to implement Kubernetes risks slowing down production time. The benefits of Kubernetes scalability and stability arent applicable for the earliest stages of development, and focusing on things that will be more impactful in the short term is a better use of your time. In the early stages, focus on product validation, and build MVPs as fast and as simple as possible to get feedback. 
Focusing attention on things like configuration management, CI/CD, auto-scaling, and service mesh will distract you from your primary goal: building the best application you can build.Consider adopting Kubernetes once youve found a product-market-fit, have a working application, and the next step is to scale, but before youve built a complicated system thats difficult to migrate. At this inflection point, you will need to commit to rebuilding parts of your system. It might be tempting to delay implementation until after the next round of funding, the next feature release, or another artificial milestone, but dont wait. Adopting Kubernetes before you reach a massive scale will be faster and easier for you.How to adopt Kubernetes: Have a clear strategy for implementationThe complexity of Kubernetes can be a barrier to implementation for many startups. Putting together a clear strategy can help your team stay on track throughout the implementation process, save time, and make good decisions about whats necessary for your business. Keep simplicity at the forefront as you make decisions about features and configurations.Consider using open source and pre-made solutions. The Kubernetes community is a vibrant open source community, offering many great solutions for a variety of needs. Start with these solutions and build from them to save time and establish best practices for your organization.Managing self-hosted clusters requires expertise, time, and resources that many small businesses lack. Consider using a public cloud and implementing a managed offering. Managed offerings like DigitalOcean Managed Kubernetes do the heavy-lifting by continuously monitoring your Kubernetes Control Plane to make sure you are always able to access and deploy to your cluster. Leveraging a managed version of Kubernetes also allows developers and businesses to take advantage of containerized application deployment, leading to a faster time to market.For example, Hack the Box uses DigitalOcean Managed Kubernetes to scale their platform and successfully host thousands of users at a time. James Hooker, CTO, had this to say The fact that its so easy to configure, administrate and scale with DigitalOcean is something which I love. Ive worked with Kubernetes before, hands-on, self-hosted, but the DigitalOcean integration and provision of Kubernetes has been the most seamless that Ive experienced so far. Try DigitalOcean KubernetesIf youre interested in learning more about DigitalOcean Managed Kubernetes or have questions about migrating from another cloud provider or what your total costs will be on DigitalOcean once you start scaling, schedule a meeting with our team of experts who can help answer any questions you have. | Unknown | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | WOW! eBook | Python Concurrency with asyncio | eBook Details: Paperback: 376 pages Publisher: WOW! eBook (March 1, 2022) Language: English ISBN-10: 1617298662 ISBN-13: 978-1617298660 eBook Description: Python Concurrency with asyncio: Learn how to speed up slow Python code with concurrent programming and the cutting-edge asyncio library. Python is flexible, versatile, and easy to learn. It can also be very slow compared to lower-level languages. Python Concurrency with asyncio...The post Python Concurrency with asyncio...Please! DONATE: https://www.wowebook.org/donate/ | https://www.wowebook.org/python-concurrency-with-asyncio/ | 2022-02-15T08:06:08Z | eBook Details: Paperback: 376 pages. Publisher: WOW! eBook (March 1, 2022). Language: English. ISBN-10: 1617298662. ISBN-13: 978-1617298660. eBook Description: Python Concurrency with asyncio: Learn how to speed up slow Python code with concurrent programming and the cutting-edge asyncio library. Python is flexible, versatile, and easy to learn. It can also be very slow compared to lower-level languages. Python Concurrency with asyncio teaches you how to boost Python's performance by applying a variety of concurrency techniques. You'll learn how the complex-but-powerful asyncio library can achieve concurrency with just a single thread and use asyncio's APIs to run multiple web requests and database queries simultaneously. The book covers using asyncio with the entire Python concurrency landscape, including multiprocessing and multithreading. It's easy to overload standard Python and watch your programs slow to a crawl. The asyncio library was built to solve these problems by making it easy to divide and schedule tasks. It seamlessly handles multiple operations concurrently, leading to apps that are lightning fast and scalable. Use coroutines and tasks alongside async/await syntax to run code concurrently. Build web APIs and make concurrent web requests with aiohttp. Run thousands of SQL queries concurrently. Create a map-reduce job that can process gigabytes of data concurrently. Use threading with asyncio to mix blocking code with asyncio code. Python Concurrency with asyncio introduces asynchronous, parallel, and concurrent programming through hands-on Python examples. Hard-to-grok concurrency topics are broken down into simple flowcharts that make it easy to see how your tasks are running. You'll learn how to overcome the limitations of Python using asyncio to speed up slow web servers and microservices. You'll even combine asyncio with traditional multiprocessing techniques for huge improvements to performance. | Content Creation/Process Automation | Computer and Mathematical | null | null | null | null | null | null
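As a flavor of the style of concurrency the book teaches, here is a minimal sketch that issues several web requests concurrently from a single thread; the URLs are placeholders and the third-party aiohttp package is assumed to be installed.

```python
import asyncio
import aiohttp

URLS = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    # `await` hands control back to the event loop while network I/O is in flight.
    async with session.get(url) as response:
        body = await response.text()
        return len(body)

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # gather() schedules all coroutines at once instead of awaiting them one by one.
        sizes = await asyncio.gather(*(fetch(session, url) for url in URLS))
        for url, size in zip(URLS, sizes):
            print(f"{url}: {size} bytes")

if __name__ == "__main__":
    asyncio.run(main())
```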
|
news | Thomas H. Davenport and Randy Bean. Thomas H. Davenport (@tdav) is the President's Distinguished Professor of Information Technology and Management at Babson College, a visiting professor at Oxford's Saïd Business School, and a fellow of the MIT Initiative on the Digital Economy. Randy Bean (@randybeannvp) is an industry thought leader, author, and CEO of NewVantage Partners, a strategic advisory company that is now a division of Wavestone, a global consultancy based in Paris. He is the author of Fail Fast, Learn Faster: Lessons in Data-Driven Leadership in an Age of Disruption, Big Data, and AI (Wiley, 2021). | Companies Are Making Serious Money With AI | With the start of each year come predictions, plans, and surveys from consulting firms. When it comes to artificial intelligence, multiple recent surveys indicate that companies aren't just planning on spending serious money on AI in 2022 — they are already making good money from the technology. A bit of context might be helpful. Despite […] | https://sloanreview.mit.edu/article/companies-are-making-serious-money-with-ai/ | 2022-02-17T12:00:27Z | Topics: AI in Action. This column series looks at the biggest data and analytics challenges facing modern companies and dives deep into successful use cases that can help other organizations accelerate their AI progress. With the start of each year come predictions, plans, and surveys from consulting firms. When it comes to artificial intelligence, multiple recent surveys indicate that companies aren't just planning on spending serious money on AI in 2022, they are already making good money from the technology. A bit of context might be helpful. Despite some AI successes, one of the challenges in recent years has been that projects involving the technology have frequently lacked sufficient economic returns. In a 2019 MIT Sloan Management Review and Boston Consulting Group AI survey, for example, 7 out of 10 companies reported minimal or no value from their AI investments. One of the reasons for poor returns was that relatively few projects were deployed into production; they were too often research exercises. Production deployments admittedly can be difficult, since they usually require integration with existing systems and processes, worker reskilling, and the ability to scale AI technology. Just a few years later, things are beginning to change. In the 2022 survey of senior data and technology executives by NewVantage Partners (where Randy Bean is CEO and cofounder, and Tom Davenport is a fellow), 92% of large companies reported that they are achieving returns on their data and AI investments. That's up markedly from 48% in 2017. The same percentage (92%) said that they are increasing investments in data and AI, equaling last year's percentage. Twenty-six percent of companies have AI systems in widespread production, more than double the 12% in last year's survey. The survey also asked respondents whether their organizations were data driven, and only 26% said they are. However, that doesn't seem to be preventing them from making progress on AI. Returns on AI Around the Globe: The NewVantage survey respondents largely represent North American companies.
But other surveys suggest that companies around the globe are also registering more value with AI. The State of AI in the Enterprise survey by Deloitte (where Tom is a senior adviser to the AI practice), fielded in mid-2021, found that two types of companies are getting value from their investments. Twenty-eight percent of survey respondents were classified as "transformers," companies reporting high business outcomes and a relatively high number of production AI deployments (six on average). This group has identified and largely adopted leading practices associated with the strongest AI outcomes, including having an AI strategy, building an ecosystem around AI, and putting organizational structures and processes in place (such as machine learning operations, or MLOps) to keep AI on track. The other group getting value, accounting for 26% of respondents, was labeled "pathseekers." They reported high outcomes but a lower number of deployments. They have also adopted capabilities and behaviors that have led to success with AI, but on fewer projects. They have not scaled to the same degree as transformers. Still, that's more than half of the global respondents reporting positive business outcomes from AI. As we've noted, it's difficult or impossible to benefit from AI without deploying it, but these results suggest that you don't need a lot of deployments to get value. A 2021 McKinsey global survey on AI also found that AI adoption and value are increasing. McKinsey found that the number of companies reporting AI adoption in at least one function had increased to 56%, up from 50% in 2020. More importantly, the survey also indicates that AI's economic return is growing. The share of respondents reporting at least 5% of earnings (EBIT) that are attributable to AI has increased to 27%, up from 22% in the previous survey. We're not sure how survey respondents would calculate the percentage of earnings attributable to AI, but their responses do suggest high value. Respondents to the McKinsey survey also reported significantly greater cost savings from AI than they did previously in every function, with the greatest improvements coming in product and service development, marketing and sales, and strategy and corporate finance. And echoing the Deloitte survey, McKinsey found that progressive AI practices are being rewarded. Companies seeing the biggest earnings increases from AI were not only following practices that lead to success, including MLOps, but also spending more efficiently on AI and taking advantage of cloud technologies to a greater extent. A survey by IBM offers some insight into the impact of the COVID-19 pandemic on AI adoption, with a particular focus on automation-oriented technologies. It found that 80% of companies are already using some form of automation technology or plan to do so over the next year. Just over a third of the organizations surveyed said that the pandemic influenced their decision to adopt and use automation as a means of improving productivity. The respondents to the IBM survey were IT professionals, which may have influenced the results; IT process automation (known as AI for IT operations, or AIOps) is a popular use case for the technology. Nonmonetary Benefits: We should also mention an interesting 2021 survey conducted by MIT Sloan Management Review and Boston Consulting Group that set out to assess not the monetary benefits of AI but its cultural enhancements.
Because no one (to our knowledge) has asked these types of questions before, we can't make comparisons to the past. In that global survey, 58% of all respondents who had participated in an AI implementation agreed that their AI solutions improved efficiency and decision-making among teams. A majority of that group (78%) also reported improved collaboration within teams. Are improved decision-making and collaboration indicators of cultural benefit? We're not sure, but they could certainly translate into economic value. The survey also found that AI yields strategic benefits, but they mostly accrued to companies that use AI to explore new ways of creating value rather than cutting costs. Those that used AI primarily to create new value were 2.5 times more likely to feel that AI is helping their company competitively compared with those that said they are using AI primarily to improve existing processes; they were also 2.7 times more likely to agree that AI helps capture opportunities in adjacent industries. It's easy to see how these traits could turn into economic value. For those who want the current AI spring to bloom forever, this is all great news. There is still substantial room for improvement in the economic returns from AI, of course, and these surveys tap only subjective perceptions. The biggest remaining stumbling block, according to a recent small survey of data scientists, is that the majority of machine learning models are still not deployed in production environments within organizations. Companies and AI leaders still need to work on this issue. However, the fact that so many business leaders responding to so many surveys on the topic feel that their organizations are capturing substantial value from AI is a definite improvement over the recent past, and a strong sign that AI is here to stay in the business landscape. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | Polly Allcock | Inflection AI wants to make it easier for humans to talk to computers | A new AI-focused software company, Inflection, has been co-founded by Mustafa Suleyman, Reid Hoffman and Karén Simonyan. The company aims to use AI to make human commands to devices more conversational. | https://www.notebookcheck.net/Inflection-AI-wants-to-make-it-easier-for-humans-to-talk-to-computers.607308.0.html | 2022-03-09T12:55:00Z | Inflection AI is a new software company focused on improving human-computer interaction. The business has been co-founded by Mustafa Suleyman, co-founder of DeepMind, Reid Hoffman, co-founder of LinkedIn, and deep learning expert Karén Simonyan. The AI-focused company aims to address the problem that humans need to simplify the requests they make to computers. We adjust our language to fit what machines can understand, be that a command to a voice assistant or a search engine term. Inflection wants to turn this on its head so that machines can understand natural human language. This technology will enable us to communicate with devices more conversationally. The company is being incubated at Greylock, a venture capital firm that has invested in Airbnb, Roblox and Discord, to name a few. It is unclear how or when Inflection will sell its products, but Suleyman has suggested that the company will reveal further details of its product plans in a matter of months. In an interview, Suleyman claimed that the company would almost certainly achieve its goal within five years. On the software that Inflection will build, they said, "It feels like we're on the cusp of being able to generate language to pretty much human-level performance... It opens up a whole new suite of things that we can do in the product space." | Digital Assistance/Content Creation | Unknown | null | null | null | null | null | null
|
news | Neil Serebryany | AI Is Not New, So Why Are We Seeing an Explosion in AI's Capability? | Digging Into the 'Why Now' Question Behind the AI Explosion. Researchers, academics, and philosophers have been trying to endow machines with human-like capabilities since antiquity, but most historians would trace the beginning of AI as we know it today to the Dartmouth Summer Research Project on Artificial Intelligence (AI) over the summer of 1956 ("Dartmouth workshop," 2020). In the 60+ years since that kickoff, researchers have tried many techniques to model intelligence, with two major schools of thought emerging – with "Classical AI" as distinct in theory and practice from "Modern AI". Classical AI: Classical AI relies on machines operating autonomously within a set of logical rules. Building AI meant representing the world in a set of data structures (such as trees or lists or sets) and then using rules (such as and, or, if-then-else, and so on) to reason within that knowledge space. For example, one can create rules-based translation software by representing language as a set of words, and then perform machine translation by translating those words from one language to another, and then reordering the words within defined rules regarding the order of nouns and verbs, and adjectives. | https://www.calypsoai.com/blog/post/thought-piece-ai-is-not-new-so-why-have-we-seen-an-explosion-in-ais-capability | 2022-02-23T16:21:04Z | Researchers, academics, and philosophers have been trying to endow machines with human-like capabilities since antiquity, but most historians would trace the beginning of AI as we know it today to the Dartmouth Summer Research Project on Artificial Intelligence (AI) over the summer of 1956 (Dartmouth workshop, 2020). In the 60+ years since that kickoff, researchers have tried many techniques to model intelligence, with two major schools of thought emerging, with Classical AI as distinct in theory and practice from Modern AI. Classical AI: Classical AI relies on machines operating autonomously within a set of logical rules. Building AI meant representing the world in a set of data structures (such as trees or lists or sets) and then using rules (such as and, or, if-then-else, and so on) to reason within that knowledge space. For example, one can create rules-based translation software by representing language as a set of words, and then perform machine translation by translating those words from one language to another, and then reordering the words within defined rules regarding the order of nouns and verbs, and adjectives. Likewise, an individual can try to solve computer vision recognition problems by first establishing the rules of what makes up certain objects. For example, one can start an animal classification program by first describing cats as four-legged animals with whiskers and then decompose that definition into a set of subproblems (e.g., find legs, find whiskers) and continue breaking these problems into more detailed problems (e.g., find edges, separate foreground and background). Although the classical school enjoyed certain successes (in 1967 Marvin Minsky famously said that within a generation "the problem of creating artificial intelligence will substantially be solved" (History of artificial intelligence, 2020)), the approach eventually ran into hard limits.
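The rules-based approach described above can be made concrete with a toy sketch (an editorial illustration, not code from the original post): every bit of "intelligence" is an if-then rule that a person had to write by hand.

```python
def classify_animal(legs: int, has_whiskers: bool, sound: str) -> str:
    # Classical AI in miniature: hand-written rules over a hand-built representation.
    if legs == 4 and has_whiskers and sound == "meow":
        return "cat"
    if legs == 4 and sound == "woof":
        return "dog"
    if legs == 2 and sound == "tweet":
        return "bird"
    return "unknown"  # the rules fail as soon as the world gets messier than this

print(classify_animal(legs=4, has_whiskers=True, sound="meow"))  # cat
print(classify_animal(legs=4, has_whiskers=True, sound="purr"))  # unknown: no rule matches
```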
Although rules-based systems offer an easily understood mechanism to build autonomy, researchers soon discovered that the world is too complex and messy to be fully captured in a closed representation system. This meant that many early AI experiments succeeded in carefully controlled environments but failed to generalize to real-world situations. This led to a number of AI Winters, when funding and research on autonomous systems came to a halt. Modern AI: In contrast to Classical AI, Modern AI relies on letting the computer derive its own rules and logic about the world, by feeding it labeled (and sometimes unlabeled) data. Using this method, instead of attempting to describe to the computer what features a cat has, an AI developer will instead feed it thousands of pictures of cats. With a large enough number of examples, the machine is (often but not always) able to extract the relevant features autonomously, without a human's explicit programming. Another name for Modern AI is Machine Learning or ML, which is also the term we will be using throughout this report. As a separate field, ML started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory. But the real inflection point for ML came in 2012 with the ImageNet competition, when an Artificial Neural Network (ANN)-based submission called AlexNet outclassed all other competitors by 10.8%. It was a historic moment because up to that point, ANNs (computing systems vaguely inspired by the biological neural networks that constitute animal brains) were considered nothing more than a research tool. Their success in this competition changed that perception. So what exactly drove the development and practical application of ML between the 1990s and 2012? Contrary to popular belief, ANNs themselves were not invented in that period. In fact, they were first described by Warren McCulloch and Walter Pitts in 1943 (Palm, 1986) and research on them continued throughout the second half of the century. Instead, two major developments occurred which allowed ML to succeed. (1) An increase in labeled data: By design, ML systems learn from data. Therefore, having access to abundant data to feed the algorithm is paramount in achieving meaningful results. Moreover, for most ML applications the data must be labeled (e.g., a picture of a cat has to have the appropriate cat label associated). In the years leading up to 2012, large amounts of labeled data were generated, driven in large part by the growth of digital technology and the internet. (2) Access to more powerful and cheaper computing capabilities: Beyond data, ML algorithms need many compute cycles to look at thousands (and often millions) of examples before they can learn the right features. Leading up to 2012, computing power became more abundant and cheaper, while at the same time more effective implementations of algorithms arrived for GPUs (which are better suited for ML than CPUs). Put together, data, compute and ANNs started a new era in ML known as Deep Learning or DL. DL is a subset of ML, which in turn is a subset of AI, the broadest category of techniques aimed at modeling intelligence. The reason for the name deep learning is that ANNs can be made many hundreds of layers deep.
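Before turning to deep networks specifically, the data-driven alternative to the rules sketch above can be illustrated with scikit-learn; the tiny labeled dataset is invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier

# Each example: [number_of_legs, has_whiskers (0/1), body_length_cm]
X = [[4, 1, 46], [4, 1, 50], [4, 0, 95], [4, 0, 110], [2, 0, 25], [2, 0, 30]]
y = ["cat", "cat", "dog", "dog", "bird", "bird"]   # the labels do the teaching

model = DecisionTreeClassifier().fit(X, y)

# No classification rule was written by a person; the logic was inferred from data.
print(model.predict([[4, 1, 48]]))  # most likely ['cat']
```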
This deep layering is also what gives ANNs a unique property compared with other ML algorithms: they don't saturate when fed increasing amounts of data and instead keep learning and improving. With the explosion in labeled data and the increase in compute power described earlier, ANNs have become a critical AI/ML methodology. Deep learning's primary advantage over other ML methods is its seemingly unlimited appetite for more training data. [Figure: a graphical representation of an Artificial Neural Network with 3 layers, showing an input layer, a hidden layer, and an output layer.] Putting all of these factors together, the combination of cheap, abundant compute power and large labeled datasets has created an environment ripe for AI capabilities to expand. | Content Synthesis/Information Retrieval Or Search | Unknown | null | null | null | null | null | null
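A bare-bones sketch of the three-layer network described in the figure caption above, written with plain NumPy; the layer sizes and random weights are arbitrary illustrations rather than a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 5, 3
W1, b1 = rng.normal(size=(n_input, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_output)), np.zeros(n_output)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1 + b1)        # input layer -> hidden layer
    logits = hidden @ W2 + b2            # hidden layer -> output layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()               # softmax over the output layer

x = rng.normal(size=n_input)             # one made-up input example
print(forward(x))                        # three output-layer activations summing to 1
```

Deep learning stacks many more hidden layers between input and output, but the forward pass is this same pattern repeated.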
|
news | Tony Boyd | Why Amazon, Alphabet and Microsoft are undervalued - The Australian Financial Review | The cloud computing oligopoly just delivered another set of extraordinary quarterly numbers. But this fund manager says the best is yet to come. | https://www.afr.com/chanticleer/why-amazon-alphabet-and-microsoft-are-undervalued-20220222-p59yjj | https://static.ffx.io/images/$zoom_1%2C$multiply_1%2C$ratio_1.777778%2C$width_1059%2C$x_56%2C$y_0/t_crop_custom/c_scale%2Cw_800%2Cq_88%2Cf_jpg/t_afr_opinion_no_age_social_wm/df797ec8ea07fe3ad70113e77b6afaa5e3fde39f | 2022-02-28T01:13:00Z | The level and durability of revenue growth being exhibited by the three hyperscalers underscores the incredible competitive advantages and insatiable underlying demand for their services, according to Andrew Macken, portfolio manager at Montaka Global Investments.He says cloud computing is at an inflection point because of the use of artificial intelligence, the scale of demand for cloud services caused by the internet of things, and the explosion in data analytics.Accounting rules are the problemHis core argument, which is summarised in a paper published on Montakas website, is that the hyperscalers are seriously undervalued because the market has yet to come to grips with the scale of their growth potential.One of the reasons why we think that these businesses are undervalued is because of the way research and development is expensed under accounting rules, he tells Chanticleer.If youre spending $US30 billion each year on R&D, which is roughly what theyre doing, that reduces your profits today by $US30 billion, and therefore inflates your perceived earnings multiple.But these R&D expenses, theyre not really expenses because it is not like money that you have to spend to generate this years revenue. They are long-term investments that will generate a return.Actually, history has shown that the return on investments from R&D for the hyperscalers has been much higher than they even expected.Macken says the accounting rules disguise the true earnings power of the businesses and, therefore, their earnings multiples are overstated.The stocks trade at the following PE multiples: Amazon 63 times, Microsoft 30 times, and Alphabet 23 times, according to S&P Global Market Intelligence.The businesses look more expensive today than they actually are, Macken says.Another reason Montaka, which is partly owned by Magellan co-founder Chris Mackay, has built significant positions in Alphabet, Amazon and Microsoft is because of the impact of artificial intelligence.Mackens AI discussion paper says they have all developed machine learning models tailored for specific scenario-based business services, including personalisation, fraud detection, cognitive search, intelligent document processing, media intelligence and customer intelligence.They have important natural advantages in training these models because they have privileged internal datasets that is, large and relevant datasets their core businesses have grown over the last two decades and that are near-impossible to recreate, the paper says.They also have access to low-cost, large-scale internal compute, and access to the worlds best engineering talent. Most of these ML models are essentially given away.Any customer can essentially take an ML model off the shelf and relatively easily and cheaply adapt and customise it to their own needs using their own internal datasets. 
The quid pro quo is the customer must use the hyperscalers compute and storage services.The world is at an inflection pointMacken says the world is at an inflection point where the number of connected devices will explode. The internet of things requires the hyperscalers to build services at the edge of networks because it makes no sense to send data back to a centralised data centre.This means machine learning models, which were initially trained in the centralised cloud, will increasingly be moved to edge devices, or an edge server, for localised application and ongoing retraining of the localised machine learning.This explains why hyperscalers are partnering with mobile networks while increasing their demand for data centre assets.Macken says another reason why the hyperscalers are heading for increased growth in revenue is because of the rise in the number of applications being developed that require cloud computing support.In the five years to 2023, about 500 million AI software applications will be developed and that is about equal to the number of applications developed over the past 40 years, he says.Finally, Macken says the market appears to be underestimating the total addressable market for cloud computing. He says the expected 2030 aggregate American hyperscaler revenues of about $US600 billion is about half the total addressable cloud market of about $US1.2 trillion.He says these forecasts are based on moving the current on-premise IT workloads to the cloud. It does not include the new workloads that will be created by new AI applications.It appears highly plausible that hyperscaler revenue expectations are far too low in the context of the scale of the AI-based opportunity that lies ahead, Montakas paper says.If so, then Amazon, Microsoft and Alphabet as well as Alibaba and Tencent will likely surprise investors substantially to the upside over the coming years. | Personalization/Process Automation | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
news | Jeff DeVerter, Forbes Councils Member https://www.forbes.com/sites/forbestechcouncil/people/jeffdeverter/ | 2022 Will Be The Year Of Applied AI | As data quality continues to improve and cloud offerings become more and more specialized and targeted, machine learning will become an increasingly important tool across all industries, and investment will increase accordingly. | https://www.forbes.com/sites/forbestechcouncil/2022/03/11/2022-will-be-the-year-of-applied-ai/ | 2022-03-11T14:00:00Z | As CTO of Solutions at Rackspace Technology, Jeff is fanatical about helping people and companies see more success with technology. For decades, since the 1950s when the term artificial intelligence (AI) was allegedly first coined, the concept of a machine brain that could think for itself, arrive at decisions and perform specific functions has been both tantalizing and frightening. It was tantalizing because it held the promise of greater efficiency, the elimination of mundane tasks and a world where machines could anticipate our needs, but it was frightening because of overheated visions of job losses in the billions and machines run amok. Even today, when we can see its positive effects in so many parts of our lives, AI-phobia is alive and well. Still, it's fair to say that despite breathless predictions, the advent of AI and its cousin machine learning (ML) has been slower to develop and far less threatening than many had imagined, despite massive leaps in computational ability, the rise of neural networks, more and more powerful chips and the ability to parse data in record time. In fact, it's only in the last 10-15 years, with the rise of cloud computing and hyperscalers, that the barriers to bigger breakthroughs in AI have truly been broken. The big missing links? Data centralization and data quality. While adaptive learning is the key to making machines usefully intelligent and allowing them to understand our shopping behaviors, hobbies or even the next Netflix series we may want to view, true machine learning value can only be derived by harnessing reliable data in one location and having the right tools to make good use of it. Cloud As The Catalyst: While we are all familiar with the ways in which machine learning has crept into our personal lives and driven our social media interactions (think of Alexa performing tasks around your home with a simple voice command, Gmail applying smart filters to your inbox or Facebook anticipating your political leanings, favorite musicians or purchasing interests), deriving substantial business value from machine learning has been slow to arrive for many companies. In this respect, the movement of massive amounts of business information out of proprietary data centers and into a multicloud environment by companies of all sizes and across all industries has proven to be a seminal event for AI/ML. It has provided a massive incentive for organizations to centralize, rationalize and properly format data, and it has opened new avenues to anticipate customer actions, influence purchasing decisions and deliver better experiences. In short, it has encouraged organizations to start treating data like a first-class citizen and has turned all companies into data science companies. A great example of the power of AI and ML to transform the customer journey is Ulta Beauty.
A market leader in beauty care products, Ulta formed a digital innovation team that, in a short time, has reimagined its virtual retail business by leveraging AI/ML technologies to bring groundbreaking digital experiences to customers. These include an augmented reality mobile app that enables shoppers to see how different beauty products would look on them and an AI-based skin care virtual beauty advisor that lets guests browse skin care assortments. An Inflection Point: To better understand where organizations are in their AI and ML journeys, we recently conducted our second annual survey of more than 1,800 global IT decision-makers about their use of AI and machine learning technologies. Though many say they have yet to realize the full benefits of their projects, we are gratified to see a continued year-over-year rise in the level of AI and ML investment. We are also seeing AI and ML spread out further across the organization, used in everything from image/facial recognition to financial forecasting to chatbots and customer service automation. Our research findings indicate that we are nearing a critical point in the evolution of AI and ML, with organizations moving from experimentation to implementation. We believe that 2022 will be the year that applied AI really starts to kick in for businesses of all sizes in three key ways: 1. Companies will start to derive greater value from their AI investments. The pandemic served as a once-in-a-lifetime catalyst for organizations to accelerate their cloud and e-commerce initiatives. With better data being fed into the cloud and with more organizations across more industries leveraging multicloud, they will be able to identify, understand and serve their customers in ways that were impossible in the past. 2. Cloud services will go boutiquing. While hyperscalers have been dominant in the market, there is still ample room for new players to disrupt the cloud space. We anticipate continued growth in boutique services that can provide unique data offerings and "your cloud, your way" services. We are seeing the growth of cloud companies that specialize in services like data, but it's not just data in the aggregate. We are seeing cloud services for specific types of data, such as data for 3D images or data services, and a cloud dedicated to data warehouse services. 3. We will see greater hyperscaler specialization. If the cloud's first generation could be thought of as the world's greatest box of Lego bricks, we are now starting to see much greater specialization in the kinds of Lego sets that hyperscalers are bringing to market. In 2022, we expect a proliferation of prebuilt, industry-specific solutions that companies can feed their data into and stitch together to meet their specific needs, whether it's a manufacturing solution, a securities solution or a customer experience solution like the Ulta example cited above. As data quality continues to improve and cloud offerings become more and more specialized and targeted, machine learning will become an increasingly important tool across all industries, and investment will increase accordingly. The time is now for IT decision-makers to understand their new role in enabling the business to move further and faster than it ever could before. It's through these data capabilities that organizations can leverage external resources, gain business alignment and really drive AI/ML to offer better, more personalized experiences and services.
Data is the differentiator. | Unknown | Management/Computer and Mathematical | null | null | null | null | null | null
|
news | | Staying Human-Centered in an Automated World | An excerpt from The Smart Nonprofit on using smart technology to prioritize people. | https://ssir.org/books/excerpts/entry/staying_human_centered_in_an_automated_world | | 2022-02-15T17:00:00Z | The Smart Nonprofit: Staying Human-Centered in An Automated World. Beth Kanter & Allison H. Fine. 240 pages, Wiley, 2022. "Smart tech" is an umbrella term we created to describe advanced digital technologies that make decisions for people, instead of people. It includes Artificial Intelligence (AI) and its subsets and cousins such as machine learning, natural language processing, smart forms, and chatbots, robots, and drones. Right now, smart tech is best at doing rote tasks like filling out intake forms, and answering the same questions over and again ("is my contribution tax-deductible?"). However, the technology is quickly embedding itself into the heart of nonprofit work in a wide variety of functions. As a result, we anticipate that staff will be free to focus on other activities. We call this benefit the "dividend of time," which can be used to, say, reduce staff burnout, get to know clients in deeper ways, and focus on problem-solving like addressing the root causes of homelessness in addition to serving homeless people. Smart tech has recently reached an inflection point common to technologies that reach everyday use: an enormous increase in computing power meets a dramatic decrease in the cost of the technology. As a result, technology that was previously available only to elite institutions like NASA or embedded in widely complicated systems has suddenly become available to everyday people and organizations for fundraising, accounting, human resources, service delivery, and more. Grabbing software off the shelf that is "smart" may look like a technical decision, but at its heart it is a deeply and profoundly human challenge that requires informed leadership to do well. There is a sweet spot, balancing the capability of the technology with the interests and needs of the people inside and outside, that organizations need to identify. Some people call this convergence "co-botting." The responsibility for identifying this sweet spot cannot rest with the IT department alone. Organizational leaders need to be interested, knowledgeable, and engaged enough to ensure smart tech is used in human-centered ways. In the following excerpt from our chapter on staying human-centered in our new book, The Smart Nonprofit, we discuss how being human-centered means prioritizing the interests, strengths, and unique talents of people over the speed and wizardry of the technology. Valuing humans has never been more important as our workplaces become more and more automated. —Allison Fine and Beth Kanter * * * Cesar Chavez said, "It was never about grapes or lettuce and always about people."1 The same holds true for smart tech. It is not about the code or the wizardry; it's about ensuring that people matter the most. Being human-centered means prioritizing the interests, strengths, and unique talents of people over the speed and wizardry of the technology. Valuing humans has never been more important as our workplaces become more and more automated. Smart tech is a fundamentally new way of working and has the potential to do more harm than good if treating people well inside and outside isn't the top priority.
This chapter explores the differences between human and machine intelligence, describes how to marry people and bots inside of organizations, and outlines steps for designing human-centered efforts to ensure smart tech is enhancing and not subjugating the needs of people.Man vs. MachineSince the 1950s experts have been forecasting that smart tech will reach human-level intelligence in 20 years. “In other words, it’s been 20 years away for 60 years,” according to MIT Professor Thomas Malone.2Over the past few years, Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have expressed concerns in the media about the potential for AI to be smarter than humans. There have been a number of surveys asking when this will happen and they all reach the same conclusion: we simply don’t know. But we can say with absolute confidence that right now human and machine intelligence are not equal.3In a gross oversimplification, intelligence has two components: fact-based knowledge and emotional intelligence. Smart tech is clearly gaining ground on fact-based knowledge, but it is also in the very early stages of incorporating emotional intelligence. At the heart of emotional intelligence is empathy, understanding what other people are feeling. Smart tech is not as empathetic as people yet, and may never be, but it can mimic empathy through sentiment analysis.Smart technologies are more accurate, faster, and more consistent at doing particular tasks like filling out forms. Smart technologies never get tired or need to take a lunch break or vacation. However, currently, bots are not empathetic. What they can do is simulate an emotional response. For instance, a customer support chatbot may be taught to apologize in a caring or helpful tone, even calling you by your name. Imitating emotions is not the same as having them or understanding them.People have the unique ability to imagine, problem solve, anticipate, feel, and judge changing situations, which allows us to shift perspectives. Our memories, hopes, concerns, and personality also contribute to how we react to the world around us. Smart technologies simply are not capable of empathy, love, or other emotions, yet. Stuart Russell, professor of computer science at the University of California, Berkeley, writes, “. . . while AI systems may be able to mimic human empathy, they can’t truly understand what empathy is like. It’s a distinction that nonprofits may not understand, but it is an essential tenet of being human-centered.”4The gap between human and bot intelligence is reflected in the growing field of therapy chatbots. Dr. Freud in a Box and other therapy chatbots are attractive products because they are inexpensive and always available. But research has shown that bots make terrible therapists because of the limitations of smart technology to understand subtext.5There are other significant challenges to therapy bots. Private companies are often not transparent about how their algorithms work, amplifying the potential for the chatbot therapist to provide bad or biased advice.If all this was not bad enough, there is potential to weaponize private information by sharing it with marketing companies. For instance, there’s Woebot. It is a chatbot therapist providing cognitive behavioral therapy through Facebook Messenger. 
It is not regulated or licensed as a therapist and although it has no plan to do so today, the company could choose to sell users’ data to pharmaceutical companies or employers in the future.6Co-BottingGetting the balance right between people and smart tech is called co-botting or augmented intelligence.7 H. James Wilson and Paul R. Daugherty have conducted research with over 1500 companies and found that significant performance improvements happen when humans and machines work together. “Through augmented intelligence, humans and AI actively enhance each other’s complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter.”8 Smart tech is an equal opportunity job disrupter and doesn’t care if a job is low paying or high paying. If it involves analysis of large amounts of data, the job is going to change. Curtis Langlotz, a radiologist at Stanford, predicts, “AI won’t replace radiologists, but radiologists who use AI will replace radiologists who don’t.”9Most experts doubt AI will replace doctors any time soon because even if an algorithm is better at diagnosing a particular problem, combining it with a doctor’s experience and knowledge of the patient’s individual story will lead to a better treatment and outcome.The Trevor Project provides crisis counseling to young lesbian, gay, bisexual, transgender, queer, and questioning (LGBTQ+) people. They created Riley, a chat bot to help train counselors by providing real-life simulations of conversations with potentially suicidal teens. Riley is always available for a training session with volunteers and that helps the staff scale the number of trained counselors without adding more resources. Riley will never work on the front line directly with youth in crisis because The Trevor Project sees this role as a human-centered one.10Co-botting goes beyond working with chatbots. Benefits Data Trust is a Philadelphia-based poverty reduction organization. It created a co-botting system for integrating smart tech into its efforts to help its call-in center staff assist clients to navigate and complete public benefits’ application processes. The pain point they were trying to solve was the enormous amount of time and documentation it takes for clients to apply for and receive benefits. The computer system was trained on thousands of interactions between call-in staff and clients to make recommendations among dozens of possible public benefits. The system also pre-populated forms for clients, saving staff an enormous amount of time.Ravindar Gujarl, chief data and technology officer from the Public Benefits Trust, told us, “At the end of the day, our role as a nonprofit is to create a human connection. We won’t replace our call-in staff who directly interface with our clients. Our nonprofit’s work is about building relationships with our clients. They come to us in distress and we want them not to have to worry about having to collect documents or wade through a complicated application process.”11These examples involved careful planning to ensure that the technology augmented and didn’t replace the work of staff. There is no special formula for ensuring you get the right balance between people and technology. It takes careful planning, monitoring, and continuous adjustments to ensure your organization is staying human-centered and getting the best out of both. 
Without this kind of care and thoughtfulness, your nonprofit could end up adding a bot like Flippy to your staff.In March, 2018, Miso Robotics and Caliaburger, a fast-food franchise in Southern California, announced the public debut of “Flippy,” the world’s first autonomous robotic kitchen assistant powered by artificial intelligence. Flippy’s job was to flip burger patties and remove them from the grill for its human co-workers to put the cheese on top at the right moment and add the extras, such as lettuce and sauce, before wrapping the sandwiches for customers.The press release for the launch described how Flippy would disrupt and transform the fast-food industry by taking over the hot, greasy, and dirty task of flipping burgers. The company touted Flippy as a cost-effective and highly efficient solution that could flip 150 burgers per hour, far more than the cooks it was replacing. What the press release didn’t mention was that in addition, Flippy wouldn’t complain about the low pay, scanty benefits, and long hours.12After two days on the job, Flippy was fired. News of Flippy, the robot cook, went viral on social media. This prompted a surge in interest and while Flippy flipped away, the human kitchen staff could not keep up with the demand. The restaurant realized it needed to spend more time on its internal systems and training people to work side-by-side with the robots.This story shows how easy it is for an organization to choose a bot to solve a problem without engaging staff in the process and keeping the entire system human-centered.13,14Human-Centered DesignCOVID-19 highlighted the bad habit some organizations have of not staying human-centered during stressful times. A hospital system in Washington State welcomed donors who had given at least $10,000 to set up vaccination appointments on an “invite-only” basis. The chief executive of a high-end nursing home and assisted-living facility in West Palm Beach, Florida, invited board members and major donors to receive immunizations. These were not the only two examples of hospitals and care facilities offering donors first shot at the shots.Mike Geiger, president of the Association of Fundraising Professionals, said in response, “The idea of hospital systems, or any charity, ignoring protocols, guidance, or restrictions—regardless of origin—and offering certain donors and board members the opportunity to ‘skip the line’ and receive vaccinations ahead of their scheduled time is antithetical to the values of philanthropy and ethical fundraising.”15While this example is not specifically about smart tech, it illustrates how easy it is for organizations to slip away from keeping clients and patients front and center. The use of smart tech makes staying human-centered even more pressing. We recommend engaging with end users through human-centered design techniques noted in the sidebar. Human-centered design focuses on developing deep empathy for end users or those who are impacted by smart tech. At the heart of this process is designing processes and services with people, not at them, through interviews, observation, and developing personas or models of end users to test processes and assumptions.There are many excellent tools and resources for human-centered design. 
The essence of these processes is to:Get input from key stakeholders about what issues are most important to them.Outline an idea, process or service that delineates responsibilities.Test, reflect, improve.Public Benefits Trust used this kind of process to determine what parts of their process should be automated. Ravinder said, “You can’t build an algorithm that powers a public benefit system without getting feedback from the people using it.”16ConclusionThe very first step that nonprofits must do when embracing smart tech is to put humans first and deeply understand how machines and people can work together. Human-centered principles and approaches are critical for the successful use of smart technologies by nonprofits as we have discussed throughout this chapter.* * *Human-Centered Design ResourcesThere are many excellent human-centered design resources that offer step-by-step guidance for implementing a human-centered design process, which is beyond the scope of this book. Many are techniques that any nonprofit can use without hiring an expensive consultant.If you want to quickly get up to speed, we recommend these resources for additional reading about human-centered design. Many of these organizations also offer training.Ideo Design Kit: IDEO has been a thought leader in human-centered design methods. The design firm has a nonprofit spinoff (ideo.org) that focuses on methods for nonprofits and social change and includes many free practical resources and examples. In addition, IDEO has also developed specific human-centered design methods for artificial intelligence, including these cards to help understand unintended consequences of smart technologies.Ideo.Org Design Kit: Methodshttps://www.designkit.org/methodsAI & Ethics: Collaborative Activities for Designershttps://www.ideo.com/post/ai-ethics-collaborative-activities-for-designersLuma Institute: The Luma system is one of the most practical, flexible, and versatile approaches to use for design thinking. It offers a playbook with simple techniques that anyone can use.Luma Systemhttps://www.luma-institute.com/about-luma/luma-system/Stanford Design School: In 2018, the D-School (as it is known), launched an initiative called “Radical Access,” a program and resources to develop fluency in emerging technologies as a medium of design for all people. The rationale is that for any use-case of artificial intelligence to serve us, we must be involved in the design. These two techniques are especially useful to designing human-centered algorithms or mapping problems to solutions for artificial intelligence.I Love Algorithmshttps://dschool.stanford.edu/resources/i-love-algorithmsMapping Problems to Solutions: Artificial Intelligencehttps://dschool.stanford.edu/resources/map-the-problem-spaceParticipatory Machine Learning: Defined as the practice of using human-centered design methods to inform the design and iteration for automation projects. Google has recently published a guide that actively involves a diversity of stakeholders—technologists, UXers, policymakers, end users, and citizens—in the process of feedback for the project. The guidebook provides an overview of how human perception drives every facet of machine learning and offers up worksheets on how to get user input.People + AI Guidebookhttps://pair.withgoogle.com/Agentive Design: When designing chatbots and “intelligent agents for automation,” it must be grounded in human-centered design principles. The concept was developed by Chris Noessel, an interface designer for Watson IBM. 
Design principles include a focus on easy setup and informative touch points. Also, when the chatbot is working, it's out of sight. When a user must engage its touch points, they require attention and consideration. Overall, well-designed chatbots and agents require lots of constant attention to manage. Effectively designing chatbots or intelligent agents requires a lot of user testing and feedback for proper training. The more a chatbot or intelligent agent interacts with humans, the better it learns to respond. It is different from designing other types of technology, where it is the user rather than the program code who actually performs the actions. The metaphor often used is that it is less like designing a hammer and more like designing a butler. | Process Automation/Digital Assistance | Management/Office and Administrative Support | null | null | null | null | null | null
||
news | Angie Lee | Meet the Omnivore: 3D Creator Makes Fine Art for Digital Era Inspired by Silk Road Masterpieces | Within the Mogao Caves, a cultural crossroads along what was the Silk Road in northwestern China, lies a natural reserve of tens of thousands of historical documents, paintings and statues of the Buddha. The post Meet the Omnivore: 3D Creator Makes Fine Art for Digital Era Inspired by Silk Road Masterpieces appeared first on NVIDIA Blog. | https://blogs.nvidia.com/blog/2022/02/23/ting-song-omniverse-creator/ | 2022-02-23T16:00:10Z | Editor's note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds. Within the Mogao Caves, a cultural crossroads along what was the Silk Road in northwestern China, lies a natural reserve of tens of thousands of historical documents, paintings and statues of the Buddha. And nearly 2,000 miles away, in eastern China, 3D artist Ting Song has brought one of these statues to life with the help of NVIDIA Omniverse, a physically accurate 3D design collaboration platform available with RTX-powered GPUs and part of the NVIDIA Studio suite for creators. [Image caption: Ting Song] The Forbes 30 under 30 artist explores the concept of fine art in the digital era, blending AI with traditional art, poetry and drama. Song, who divides her time between Beijing and Shanghai, created the first digital art piece that was auctioned by traditional art houses across China, a work called "Peony Dream," inspired by the classic Chinese play The Peony Pavilion. She uses Adobe After Effects and Photoshop, Blender, and Unity software with Omniverse to vivify her work. [Image caption: Song's 'Peony Dream' digital art piece] Accelerating Art-ificial Intelligence: An avid hackathon-goer growing up, Song has shared her love of cutting-edge, open-source technology by hosting hackathons in more than a dozen countries. She saw a multitude of groundbreaking uses for technology at these events and was particularly spurred to use AI as a tool to foster art and creativity. Her recent works of AI-based, immersive, multidimensional art focus on portraying philosophical and aesthetic themes from traditional Chinese culture. For her piece that reimagines the Buddha statue, Song used Adobe software to create its layers and NVIDIA StyleGAN2 to synthesize the colors of the murals in the Mogao Caves before bringing it into Omniverse to let it dance, she said. "My work aims to give traditional art forms new life, as many existing cultural creations don't yet exist in a 3D world, only 2D," Song said. NVIDIA Omniverse apps like Kaolin and Audio2Face, and NVIDIA DIB-R models support artists who are switching from traditional creations to owning new experiences in virtual worlds. Song uses Kaolin, her favorite Omniverse app, to inspect 3D datasets, visualize 3D outputs of a model and render synthetic datasets. 
Song imported models and animations from Blender and Unity into Omniverse. And with Omniverse Audio2Face, an app that quickly generates expressive facial animation from just an audio source, Song animated a virtual poet character that she plans to integrate with her Peony Dream piece. In Song's following demo, a digital human recites a Chinese poem written by AI: "Spring is still lingering when swallows come / Strings of rain and slanting wind / Which trees are kissed upon / Stringed instruments flourish in the bloom of youth / The sun shines, and the lyric flows." "Digging into our true humanistic power by designing an artistic concept based on a play or poem and then productizing it using the proper technological tools is all enabled by Omniverse," Song said. In addition to revitalizing traditional works, Song often writes her own poems or scripts off of which she bases stunning visual representations made in Omniverse. The rapid iteration and collaboration capabilities of the open-source Omniverse ecosystem and the power of NVIDIA RTX technology, which save her months' worth of model training time, provide Song with inspiration and technical confidence for her artistic endeavors, she said. "I hope my work inspires people to dive deeper into their traditional cultural heritage and encourages them to use AI as a tool to help reveal the unique creative talents they have as human beings," Song said. Learn More at GTC: Song's work will go on display in the AI Art Gallery and AI Playground at GTC, which runs March 21-24. The virtual conference is free to attend and will have dozens of sessions and special events featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity, Walt Disney Studios and more. Creatives will also have the opportunity to connect with one another and get a behind-the-scenes look at the Omniverse roadmap in the NVIDIA Omniverse User Group and Developer Days. Creators and developers can download NVIDIA Omniverse for free and get started with step-by-step tutorials on the Omniverse YouTube channel. Follow Omniverse on Instagram, Twitter and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community. | Content Creation/Content Synthesis | Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null
|
news | Sam Shead | Reid Hoffman has co-founded his first new company since LinkedIn sale | The billionaire has co-founded an artificial intelligence start-up called Inflection with DeepMind co-founder Mustafa Suleyman. | https://www.cnbc.com/2022/03/08/reid-hoffman-has-set-up-a-new-ai-company-with-deepminds-co-founder.html | 2022-03-08T16:03:33Z | Reid Hoffman, author, businessman and co-founder of the networking platform 'LinkedIn', speaks at the DLD (Digital-Life-Design) Conference in Munich, Germany, 19 January 2015.LinkedIn billionaire Reid Hoffman has co-founded a new artificial intelligence start-up called Inflection AI with DeepMind co-founder Mustafa Suleyman and former DeepMind researcher Karén Simonyan.It is the first time Hoffman has co-founded a company since he sold LinkedIn to Microsoft for $26.2 billion in 2016. It is also the first company Suleyman has co-founded since he sold DeepMind to Google in 2014 for around $600 million.Inflection will be led by Suleyman, who will take on the role of CEO."AI is one of the most transformative technologies of our time," Hoffman said in a statement shared with CNBC. "Mustafa has been at the forefront of some of the most exciting advances in artificial intelligence. It's a privilege to join him and Karen in building Inflection."The announcement of Inflection, shared exclusively with CNBC, comes just a few weeks after Suleyman said he was quitting his VP role at Google to work alongside Hoffman at Greylock Partners, a renowned venture capital firm that invested in the likes of Facebook (now Meta) and Airbnb. The entrepreneurs have known each other for almost 10 years.Before joining Google, Suleyman co-founded DeepMind in London with childhood friend Demis Hassabis and New Zealander Shane Legg in 2010.In the lead-up to the Google acquisition, Suleyman helped DeepMind to raise millions of dollars from billionaires including Elon Musk and Peter Thiel. He also led the company's applied AI efforts for several years both pre- and post-acquisition.What is Inflection?Headquartered in Silicon Valley, Inflection will aim to develop AI software products that make it easier for humans to communicate with computers."If you think about the history of computing, we have always been trying to reduce the complexity of our ideas in order to communicate them to a machine," Suleyman told CNBC on a call Monday."Even when we write a search query, we're simplifying, we're reducing or we're writing in shorthand so that the search engine can understand what we want."DeepMind co-founder Mustafa SuleymanWhen humans want to control a computer, they need to learn a programming language in order to provide instructions, he added, or use a mouse to navigate and engage with things on the screen. 
"All of these are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something," Suleyman said.The British entrepreneur claimed a new suite of technologies that Inflection will aim to develop will eventually enable anyone to speak to a computer in plain language.It's unclear at this stage who Inflection will sell its products to, at what price, and when.Talking to machinesHuman-machine interaction has advanced significantly over the last decade and many people now speak to AI-powered virtual assistants like Siri and Alexa on a daily basis.While the conversations are still far from fluid, computer scientists believe it's only a matter of time before the experience becomes more seamless as machines get better at generating their own language."It feels like we're on the cusp of being able to generate language to pretty much human-level performance," Suleyman said, adding that he believes it will almost certainly be possible within five years. "It opens up a whole new suite of things that we can do in the product space."One of the most notable language-generating AI models is OpenAI's GPT-3 but tech giants including Google, Meta and Microsoft are building their own systems.Asked how he plans to compete with the armies of researchers and engineers at these firms, Suleyman said a small group of talented individuals can have a huge impact."Even at the bigger tech companies, there's a relatively small number of people actually building these (AI) models," he said. "One of the advantages of doing this in a start-up is that we can go much faster and be more dynamic."He added: "My experience of building many, many teams over the last 15 years is that there is this golden moment when you really have a very close knit, small focused team. I'm going to try and preserve that for as long as possible."Simonyan, Inflection's chief scientist, sold his first start-up to DeepMind and was involved in some of the lab's biggest breakthroughs including AlphaZero and AlphaFold. He left DeepMind to join Inflection in the last few weeks.Greylock backingGreylock told CNBC that it is investing in Inflection but it declined to say how much.The venture firm also plans to "incubate" the company, providing it with marketing, introductions to technology leaders and hiring support.In August 2019, Suleyman announced on Twitter that he was stepping away from DeepMind, adding that he needed a "break to recharge." Less than half a year later, in December 2019, he announced that he was officially leaving the AI lab he helped to build to join Google as VP of AI product management and AI policy.The full circumstances of Suleyman's departure from DeepMind weren't disclosed at the time, but it later emerged that a number of his colleagues had taken issue with his management style, accusing him of harassment and bullying. In January 2021, DeepMind announced it had brought in a law firm to investigate his management style."I had a period in 2017-2018 where a couple of colleagues made a complaint about my management style" Suleyman said on a podcast in January where he was interviewed by Hoffman. "You know, I really screwed up. I was very demanding and pretty relentless. 
I think that at times that created an environment where I basically had pretty unreasonable expectations of what people were to be delivering and when."When Suleyman announced he was joining Greylock, one VC, who asked to remain anonymous because of the sensitive nature of the discussion, questioned how long he would remain a VC for. "My gut says that it's temporary while he looks for the next company to build or join as a founder," they told CNBC. "I think he has more left in the tank."Suleyman said that while Inflection will take up the majority of his time, he plans to carry on investing with Greylock. | Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Tiernan Ray | The AI edge chip market is on fire, kindled by 'staggering' VC funding | Dozens of startups continue to get tens of millions in venture funding to make chips for AI in mobile and other embedded computing uses. The race shows no sign of slowing down. | https://www.zdnet.com/article/the-ai-edge-inference-chip-market-is-raging/ | 2022-02-11T12:51:00Z | Chips to perform AI inference on edge devices such as smartphones is a red-hot market, even years into the field's emergence, attracting more and more startups and more and more venture funding, according to a prominent chip analyst firm covering the field. "There are more new startups continuing to come out, and continuing to try to differentiate," says Mike Demler, Senior Analyst with The Linley Group, which publishes the widely read Microprocessor Report, in an interview with ZDNet via phone. Linley Group produces two conferences each year in Silicon Valley hosting numerous startups, the Spring and Fall Processor Forum, with an emphasis in recent years on those AI startups. At the most recent event, held in October, both virtually and in-person, in Santa Clara, California, the conference was packed with startups such Flex Logix, Hailo Technologies, Roviero, BrainChip, Syntiant, Untether AI, Expedera, and Deep AI giving short talks about their chip designs.Demler and team regularly assemble a research report titled the Guide to Processors for Deep Learning, the latest version of which is expected out this month. "I count more than 60 chip vendors in this latest edition," he told ZDNet.EdgeCortixEdge AI has become a blanket term that refers mostly to everything that is not in a data center, though it may include servers on the fringes of data centers. It ranges from smartphones to embedded devices that suck micro-watts of power using the TinyML framework for mobile AI from Google. The middle part of that range, where chips consume from a few watts of power up to 75 watts, is an especially crowded part of the market, said Demler, usually in the form of a pluggable PCIe or M.2 card. (75 watts is the PCI-bus limit in devices.)"PCIe cards are the hot segment of the market, for AI for industrial, for robotics, for traffic monitoring," he explained. "You've seen companies such as Blaize, FlexLogic -- lots of these companies are going after that segment."But really low-power is also quite active. "I'd say the tinyML segment is just as hot. There we have chips running from a few milliwatts to even microwatts."HailoMost of the devices are dedicated to the "inference" stage of AI, where artificial intelligence makes predictions based on new data. Inference happens after a neural network program has been trained, meaning that its tunable parameters have been developed fully enough to reliably form predictions and the program can be put into service. The initial challenge for the startups, said Demler, is to actually get from a nice PowerPoint slide show to working silicon. Many start out with a simulation of their chip running on a field-programmable gate array, and then either move to selling a finished system-on-chip (SoC), or else licensing their design as synthesizable IP that can be incorporated into a customer's chip."We still see a lot of startups hedging their bets, or pursuing as many revenue models as they can," said Demler, "by first demo'ing on an FPGA and offering their core IP for licensing." 
Some startups also offer the FPGA-based version as a product."RovieroWith dozens of vendors in the market, even those that get to working silicon are challenged to show something that's meaningfully different."It's hard to come up with something that's truly different," said Demler. "I see these presentations, 'world's first,' or, 'world's best,' and I say, yeah, no, we've seen dozens."Some companies began with such a different approach that they set themselves apart early, but have taken some time to bear fruit. BrainChip Holdings, of Sydney, Australia, with offices in Laguna Hills, California, got a very early start in 2011 with a chip to handle spiking neural networks, the neuromorphic approach to AI that purports to more closely model how the human brain functions. The company has over the years showed off how its technology can perform tasks such as using machine vision to identify poker chips on the casino floor. "BrainChip has been doggedly pursuing this spiking architecture," said Demler. "It has a unique capability, it can truly learn on device," thus performing both training and inference. FlexLogixBrainChip has in one sense come the farthest of any startup: it's publicly traded. Its stock is listed on the Australian Stock Exchange under the ticker "BRN," and last fall the company issued American Depository Shares to trade on the U.S. over-the-counter market, under the ticker "BCHPY." Those shares have since more than tripled in value. BrainChip is just starting to produce revenue. The company in October came out with mini PCIe boards of its "Akida" processor, for x86 and Raspberry Pi, and last month announced new PCIe boards for $499. The company in the December quarter had revenue of U.S.$1.1 million, up from $100,000 in the prior quarter. Total revenue for the year was $2.5 million, with an operating loss of $14 million. Some other exotic approaches have proved hard to deliver in practice. Chip startup Mythic, founded in 2012 and based in Austin, Texas, has been pursuing the novel route of making some of its circuitry use analog chip technology, where instead of processing ones and zeros, it computes via manipulation of a real-valued wave form of an electrical signal."Mythic has generated a few chips but no design wins," Demler observed."Everyone agrees, theoretically, analog should have a power efficiency advantage, but getting there in something commercially variable is going to be much more difficult." ArchitekAnother startup presenting at the Processor Conference, Syntiant, started out with an analog approach but decided analog didn't provide sufficient power advantages and took longer to bring to market, noted Demler.Syntiant of Irvine, California, founded in 2017, has focused on very simple object recognition that can operate with low power on nothing more than a feature phone or a hearable. "On a feature phone, you don't want an apps processor, so the Syntiant solution is perfect," observed Demler.Regardless of the success of any one startup, the utility of special circuitry means that AI acceleration will endure as a category of chip technology, said Demler."AI is becoming so ubiquitous in so many fields, including automotive, embedded processing, the IoT, mobile, PCs, cloud, etc., that including a special-purpose accelerator will become commonplace, just like GPUs are for graphics."ExpederaNevertheless, some tasks will be more efficient to run on a general-purpose CPU, DSP, or GPU, said Demler. 
That is why Intel and Nvidia and others are amplifying their architectures with special instructions, such as for vector handling. Different approaches will continue to be explored as long as a venture capital market awash in cash lets a thousand flowers bloom. "There's still so much VC money out there, I'm astounded by the amount these companies continue to get," said Demler.Demler notes giant funding rounds for Sima.ai of San Jose, California, founded in 2018, which is developing what it calls an "MLSoC" focused on reducing power consumption. The company received $80 million in their Series B funding round. Another one is Hailo Technologies of Tel Aviv, founded in 2017, which has raised $320.5 million, according to FactSet, including $100 million in its most recent round, and is supposedly valued at a billion dollars"The figures coming out of China, if true, are even more staggering," said Demler. Funding looks set to continue for the time being, he said. "Until the VC community decides there's something else to invest in, you're going to see these companies popping up everywhere." At some point, a shake-out will happen, but when that day may come is not clear. "Some of them have to go away eventually," mused Demler. "Whether it's 3 years or 5 years from now, we'll see much fewer companies in this space." The next conference event Demler and colleagues will host is late April, the Spring Processor Forum, at the Hyatt Regency Hotel in Santa Clara, but with live-streaming for those who can't make it in person. | Unknown | Unknown | null | null | null | null | null | null |
|
news | Angela Dennis | 3 Top Artificial Intelligence Stocks to Buy in March | #cybersecurity | #cyberattack | #cybersecurity | #infosecurity | #hacker | World Largest Source Of Security News. | https://nationalcybersecurity.com/3-top-artificial-intelligence-stocks-to-buy-in-march-cybersecurity-cyberattack-cybersecurity-infosecurity-hacker/ | https://nationalcybersecurity.com/wp-content/uploads/1647083873_42_ | 2022-03-12T12:29:00Z | Artificial intelligence (AI) is often used as a buzzword when companies are trying to sell their product. They often have some form of AI, but it really isn't as much of a game-changer as it is hyped up to be. However, three businesses with real AI products making a difference in the industry are Nvidia (NVDA -2.46%), CrowdStrike (CRWD -0.25%), and C3.ai (AI -9.82%). This trio of stocks is highly diversified and gives investors three different avenues to approach an investment in AI. Nvidia provides the hardware powering AI technology, CrowdStrike uses AI in cybersecurity, and C3.ai's tools help enterprises predict the future across a massive organization. When deployed correctly, artificial intelligence can make a huge difference in a product, and each of these businesses achieves that. Image source: Getty Images. 1. Nvidia: Nvidia grew its business on the back of the graphics processing unit (GPU) invented by the company in 1999. At first, the GPU was used for computer graphics but found expanded uses in parallel computing and then AI. A key concept of AI is deep learning (the way computers gain knowledge), and GPUs process and compute the information fed into them to power these models. By investing in Nvidia, you are a direct beneficiary of AI implementation everywhere. As one of the leading technology companies, Nvidia's 2022 fiscal year (ending Jan. 30, 2022) results were strong. Revenue grew 61% to $26.9 billion over last year, but quarterly revenue growth slowed to 53% year over year. Its AI sales are wrapped into its data center division, which grew faster than overall revenue at a 71% year-over-year pace. In its fourth-quarter presentation, Nvidia highlighted its data center growth was led by strong demand for AI products. Nvidia's AI technology is being used by many firms, including Meta Platforms, which recently announced it was building its AI research SuperCluster with Nvidia's products. A broad approach to AI investing can be taken by purchasing Nvidia's stock. With it down around 30% from its all-time high, now could be a great time to buy. 2. CrowdStrike: Changing to a more application-based investment, CrowdStrike provides cybersecurity solutions with its cloud-based offering. Through its Falcon platform, customers are protected by software that sees more than 1 trillion events per day. CrowdStrike then uses AI to learn from these attacks and continuously evolves the program, so when a customer in France sees an attack, a different company is protected from a similar threat in the U.S. With threats of cyber attacks from foreign powers like Russia in retaliation for sanctions against them, cybersecurity has never been more important. One of the weapons used against Ukraine was a wiper malware that destroys files from computers, something CrowdStrike protects against. Some of the most important companies in the world utilize CrowdStrike, with 15 of the top 20 banks and 65 of the Fortune 100 companies deploying CrowdStrike's software. CrowdStrike's Q4 (ending Jan. 
31, 2022) results were great, with quarterly revenue of $431 million growing 63% over the prior year. Additionally, it was free-cash-flow positive and converted 30% of revenue into $127 million of additional cash. With customers growing 65% year over year to 16,325 and annual recurring revenue up 65% to $1.7 billion, CrowdStrike's business is executing on all levels. The company represents a great way to invest in the application of AI, and the cybersecurity industry has never been more relevant. Image source: Getty Images. 3. C3.ai: Fully grasping the complexity of a multi-billion dollar business is nearly impossible. However, with the aid of computers, management can make well-informed decisions. C3.ai's tools allow data scientists to deploy prebuilt and configurable AI applications to support a business in many ways, such as supply chain management, energy efficiency, and customer engagement. The company's tools are recognized as some of the best available. Omdia ranked C3.ai top on its list of machine-learning development platforms. It was also found to increase developer productivity by 26 times, by cutting the amount of code required by nearly 99% on Amazon Web Services (AWS) when deploying AI solutions. C3.ai is a young company founded in 2009 and only has 218 customers as a result. Still, this is up 82% year over year and drove Q3 (ending Jan. 31, 2022) total revenue to $69.8 million, increasing by 42% over the prior year. It also landed a five-year, $500 million contract with the U.S. Department of Defense. The company has a long way to go before turning a profit, as its operating margin was negative 22%, although this was an improvement from last year's Q3 number of negative 24%. It will take C3.ai some time, but if its best-in-class solutions are adopted across the industry, it could be a fantastic investment. These three companies are in different business stages. If investors are creating an AI basket, this trio seems like a great place to start, with each company well off its all-time high. Percent down from all-time high: Nvidia 34%, CrowdStrike 36%, C3.ai 80%. Source: Yahoo!. Values as of 3/11/2022. Today could be a great time to enter a position in these businesses with a mindset of holding the stocks for three to five years. Artificial intelligence is making a difference in the world today, and investors should take notice. This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis, even one of our own, helps us all think critically about investing and make decisions that help us become smarter, happier, and richer. | Unknown | Computer and Mathematical | null | null | null | null | null | null
news | pluck-graphql added to PyPI | Transform GraphQL queries into Pandas data-frames. | https://pypi.org/project/pluck-graphql/ | 2022-03-06T15:35:38Z | Pluck is a GraphQL client that transforms queries into Pandas data-frames.

Installation
Install Pluck from PyPI:

pip install pluck-graphql

Introduction
The easiest way to get started is to use pluck.read_graphql to execute a query. Let's read the first five SpaceX launches into a data-frame:

import pluck

SpaceX = "https://api.spacex.land/graphql"
query = """{ launches(limit: 5) { mission_name launch_date_local rocket { rocket_name } } }"""
frame, = pluck.read_graphql(query, url=SpaceX)
frame

launches.mission_name | launches.launch_date_local | launches.rocket.rocket_name
Thaicom 6 | 2014-01-06T14:06:00-04:00 | Falcon 9
AsiaSat 6 | 2014-09-07T01:00:00-04:00 | Falcon 9
OG-2 Mission 2 | 2015-12-22T21:29:00-04:00 | Falcon 9
FalconSat | 2006-03-25T10:30:00+12:00 | Falcon 1
CRS-1 | 2012-10-08T20:35:00-04:00 | Falcon 9

Implicit Mode
The query above uses implicit mode. This is where the entire response is normalized into a single data-frame and the nested fields are separated by a period. The return value from read_graphql is an instance of PluckResponse. This object is iterable and enumerates the data-frames in the query. Because this query uses implicit mode, the iterator contains only a single data-frame (note that the trailing comma is still required).

@frame directive
But Pluck is more powerful than implicit mode because it provides a custom @frame directive. The @frame directive specifies portions of the GraphQL response that we want to transform into data-frames. The directive is removed before the query is sent to the GraphQL server. Using the same query, rather than use implicit mode, let's pluck the launches field from the response:

query = """{ launches(limit: 5) @frame { mission_name launch_date_local rocket { rocket_name } } }"""
launches, = pluck.read_graphql(query, url=SpaceX)
launches

mission_name | launch_date_local | rocket.rocket_name
Thaicom 6 | 2014-01-06T14:06:00-04:00 | Falcon 9
AsiaSat 6 | 2014-09-07T01:00:00-04:00 | Falcon 9
OG-2 Mission 2 | 2015-12-22T21:29:00-04:00 | Falcon 9
FalconSat | 2006-03-25T10:30:00+12:00 | Falcon 1
CRS-1 | 2012-10-08T20:35:00-04:00 | Falcon 9

The column names are no longer prefixed with launches because it is now the root of the data-frame.

Multiple @frame directives
We can also pluck multiple data-frames from a single GraphQL query. Let's query the first five SpaceX rockets as well:

query = """{ launches(limit: 5) @frame { mission_name launch_date_local rocket { rocket_name } } rockets(limit: 5) @frame { name type company height { meters } mass { kg } } }"""
launches, rockets = pluck.read_graphql(query, url=SpaceX)

Now we have the original launches and a new rockets data-frame:

rockets
name | type | company | height.meters | mass.kg
Falcon 1 | rocket | SpaceX | 22.25 | 30146
Falcon 9 | rocket | SpaceX | 70 | 549054
Falcon Heavy | rocket | SpaceX | 70 | 1420788
Starship | rocket | SpaceX | 118 | 1335000

Lists
When a response includes a list, the data-frame is automatically expanded to include one row per item in the list. This is repeated for every subsequent list in the response. For example, let's query the first five capsules and which missions they have been used for:

query = """{ capsules(limit: 5) @frame { id type status missions { name } } }"""
capsules, = pluck.read_graphql(query, url=SpaceX)
capsules

id | type | status | missions.name
C105 | Dragon 1.1 | unknown | CRS-3
C101 | Dragon 1.0 | retired | COTS 1
C109 | Dragon 1.1 | destroyed | CRS-7
C110 | Dragon 1.1 | active | CRS-8
C110 | Dragon 1.1 | active | CRS-14
C106 | Dragon 1.1 | active | CRS-4
C106 | Dragon 1.1 | active | CRS-11
C106 | Dragon 1.1 | active | CRS-19

Rather than five rows, we have seven; each row contains a capsule and a mission.

Nested @frame directives
Frames can also be nested and if a nested @frame is within a list, the rows are combined into a single data-frame. For example, we can pluck the top five cores and their missions:

query = """{ cores(limit: 5) @frame { id status missions @frame { name flight } } }"""
cores, missions = pluck.read_graphql(query, url=SpaceX)

Now we have the cores:

cores
id | status | missions.name | missions.flight
B1015 | lost | CRS-6 | 22
B0006 | lost | CRS-1 | 9
B1034 | lost | Inmarsat-5 F4 | 40
B1016 | lost | TürkmenÄlem 52°E / MonacoSAT | 23
B1025 | inactive | CRS-9 | 32
B1025 | inactive | Falcon Heavy Test Flight | 55

And we also have the missions data-frame that has been combined from every item in cores:

missions
name | flight
CRS-6 | 22
CRS-1 | 9
Inmarsat-5 F4 | 40
TürkmenÄlem 52°E / MonacoSAT | 23
CRS-9 | 32
Falcon Heavy Test Flight | 55

Aliases
Column names can be modified using normal GraphQL aliases. For example, let's tidy up the field names in the launches data-frame:

query = """{ launches(limit: 5) @frame { mission: mission_name launch_date: launch_date_local rocket { name: rocket_name } } }"""
launches, = pluck.read_graphql(query, url=SpaceX)
launches

mission | launch_date | rocket.name
Thaicom 6 | 2014-01-06T14:06:00-04:00 | Falcon 9
AsiaSat 6 | 2014-09-07T01:00:00-04:00 | Falcon 9
OG-2 Mission 2 | 2015-12-22T21:29:00-04:00 | Falcon 9
FalconSat | 2006-03-25T10:30:00+12:00 | Falcon 1
CRS-1 | 2012-10-08T20:35:00-04:00 | Falcon 9

Leaf fields
The @frame directive can also be used on leaf fields. For example, we can extract only the name of the mission from past launches:

query = """{ launchesPast(limit: 5) { mission: mission_name @frame } }"""
launches, = pluck.read_graphql(query, url=SpaceX)
launches

mission
Starlink-15 (v1.0)
Sentinel-6 Michael Freilich
Crew-1
GPS III SV04 (Sacagawea) | Process Automation/Content Synthesis | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null
||
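The pluck-graphql listing above already demonstrates the @frame directive; as a quick illustration of what can be done with the returned frames, here is a minimal sketch that joins the launches and rockets frames from the "Multiple @frame directives" example using ordinary pandas operations. It is a sketch under assumptions, not part of the original listing: it assumes the returned frames behave like standard pandas DataFrames, that the public SpaceX endpoint is still reachable, and it reuses only the call shown above (pluck.read_graphql) plus the standard DataFrame.merge.

# Sketch only: join two frames produced by pluck (assumes they are ordinary pandas DataFrames).
import pluck

SpaceX = "https://api.spacex.land/graphql"
query = """{ launches(limit: 5) @frame { mission_name rocket { rocket_name } } rockets(limit: 5) @frame { name mass { kg } } }"""
launches, rockets = pluck.read_graphql(query, url=SpaceX)

# Attach each rocket's mass to the launches that used it, using the column names shown in the listing.
merged = launches.merge(rockets, left_on="rocket.rocket_name", right_on="name", how="left")
print(merged[["mission_name", "rocket.rocket_name", "mass.kg"]])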
news | Kyle Alspach | How AI-powered XDR can secure the hybrid workforce | XDR's overall aim is to integrate and correlate data from numerous security tools to help customers prioritize the biggest threats. | https://venturebeat.com/2022/03/03/how-ai-powered-xdr-can-secure-the-hybrid-workforce/ | 2022-03-03T21:02:00Z | Join today's leading executives online at the Data Summit on March 9th. Register here.A year ago, NOV Inc. was in the middle of evaluating a new security product to help with securing its globally distributed workforce, spread across more than 60 countries. The oilfield equipment maker was considering deploying an extended detection and response (XDR) solution from SentinelOne and as part of the evaluation, NOV deployed the XDR platform across a company it had recently acquired.Immediately after deployment, SentinelOnes Singularity XDR detected and halted a cyberattack in progress against the acquired company, said NOV chief information security officer John McLeod and then remediated the attack, as well.This was all done during the pandemic lockdown, in a country on the other side of the globe, where we didnt speak the same language, McLeod said in an email.Perhaps unsurprisingly, NOV ended up becoming a customer. And the artificial intelligence (AI) and machine learning (ML) capabilities at the heart of the Singularity XDR solution have continued to prove the value of the product for protecting the company and its distributed workers, McLeod said.How behavioral AI stops threatsSentinelOnes XDR platform ingests and correlates data from numerous sources, with the help of distributed AI models that run on every endpoint and cloud workload in the customers environment, according to chief product officer Raj Rajamani. The platform uses behavioral AI technology that monitors and links behaviors then autonomously shuts down activities that are deemed a threat, Rajamani said. The AI/ML capabilities bring a clear advantage for the XDR platform over endpoint protection platform (EPP) and endpoint detection and response (EDR) tools including by making cybersecurity a more-autonomous operation than its been previously, McLeod said. Their behavioral AI/ML approach was far superior to our legacy EPP, and the native integration of XDR allowed us to eliminate a separate EDR agent, he said. Its much more effective to secure a remote workforce with technology requiring very little administrative interaction versus our legacy human-powered solutions with inherent delays.Ultimately, having technology that can act in real time, without human intervention, is a big step forward in cybersecurity, McLeod said.While still a relatively nascent category within security, XDR has found its chance to shine during the pandemic at a time when cyberattacks such as ransomware and data theft have skyrocketed. Ransomware attacks spiked 62% in 2020, then surged 105% in 2021, according to SonicWall. Meanwhile, data leaks related to ransomware jumped 82% last year, CrowdStrike reports.XDR vs. SIEMWhile capabilities can vary across vendors in XDR, the overall concept is to integrate and correlate data from numerous security tools and from across varying environments in order to help customers prioritize the biggest threats. 
In the process, XDR is capable of addressing many of the biggest challenges facing security teams simultaneously: security tool sprawl, alert fatigue and shortage of cybersecurity personnel to make sense of all the data flooding in from their systems.While this may sound a lot like what security information and event management (SIEM) was supposed to provide, XDR actually delivers in a way that SIEM was never able to, according to Alex Burinskiy, chief product security officer for the Americas at access solutions firm Assa Abloy.The bottom line, said Burinskiy a customer of SentinelOne both at his previous company, edtech firm Cengage, and in his current role is that XDR is accomplishing what SIEM promised to do.One key reason for this, experts told VentureBeat, is the use of advanced AI and ML technologies in XDR platforms.Many XDR solutions excel at using ML for detection of anomalies that indicate a new, previously unknown threat, said Forrester analyst Allie Mellen. For instance, ML-driven XDR can reveal malicious behavior by correlating a string of actions that arent typical for a user, Mellen said.While SIEM can also use AI/ML, XDR uses the technologies in more discrete, targeted ways, she said such as by correlating data prior to an analyst starting an investigation, or orchestrating response actions.XDR vs. EDRImportantly, many XDR platforms go beyond EDR by bringing in telemetry from more than just endpoints. And the ability to correlate data across all those areas including email, applications and cloud environments is how XDR can provide enhanced visibility into malicious activity, Mellen said.Which, of course, is exactly what businesses with remote workers are really looking for when it comes to security.Thats where things start to get really interesting because you get a lot more context about whats happening in the environment than you can get with just the endpoint alone, Mellen said.At this point, EDR is now table stakes in cybersecurity. And the complexity of the tools landscape paired with the challenges of securing a distributed workforce suggest that its worth considering XDR in order to leverage detection and response that can go beyond the endpoint, experts said.While less than 5% of organizations are using XDR today, thats expected to climb to 40% by 2027, according to a recent report from Gartner. When you look at your cybersecurity strategy, you need to protect the applications, network, data, email, endpoints, identities including identities of devices and of course the cloud, said Patrick Hevesi, a vice president and analyst at Gartner. And so XDR as it plugs into more and more of these different types of assets as part of delivering that detection and response is going to definitely help any cybersecurity strategy.AI engineAnd AI/ML algorithms are pivotal to how XDR platforms make it all happen. Ultimately, XDR is powered by AI/ML as its engine and core technology, said Aimei Wei, founder and CTO at Stellar Cyber. The companys XDR platform uses AI/ML throughout the threat detection process, from normalizing and correlating data that it ingests from different security tools, to analyzing time series and peer groups (using unsupervised ML), to pinpointing attack patterns with supervised ML. The Stellar Cyber XDR platform also uses advanced Graph ML to generate context for security teams around the highest-priority threats.If we can automatically add context and piece things together for the security analyst, it makes their work much more efficient, Wei said. 
And this is even more essential when many workers are remote, she said.What we can do is achieve full [security] coverage, regardless of what the customers environment is, Wei said. It covers the whole attack surface.One customer that has come to rely on XDR as part of its remote workforce security strategy is EBSCO Industries, a provider of discovery services and databases to libraries. The shift of workers into the home meant the company needed to change the way it looked at external access and devise a better method for securing its devices, said Ryan Loy, chief information officer at EBSCO.We suspected we had blind spots and areas of our environment where we did not have complete visibility, Loy said in an email. Native vs. open XDREBSCO ended up selecting Stellar Cyber as its XDR vendor, in part because the company offers an open XDR platform that can ingest data feeds from other vendors security tools. Open XDR sometimes referred to as hybrid XDR is one of the two major varieties of extended detection and response available today. The other is native XDR, which relies solely on data feeds from an XDR vendors own tools and capabilities.With open XDR, businesses that already use a significant number of cybersecurity tools in their environment can leverage many or all of those. For EBSCO, using Stellar Cybers open XDR meant the platform worked with our existing investments, Loy said. We did not want to disrupt our toolsets just to do something new.Customers can then use an open XDR platform to ingest and correlate all of their security data, and prioritize the threats that are uncovered across their current toolset. XDR serves to provide a view of the big picture in terms of security, Loy said. Each tools output is like looking at an individual tree in the forest. But by combining inputs from all of our tools with XDR, we see the entire forest, he said. When it comes to the artificial intelligence capabilities of Stellar Cybers XDR platform, their AI/ML is baked into the user interface. And my team is presented with look here types of correlated indications when something is awry, Loy said. That is how AI should work.While EBSCOs security team still has to perform some analysis on the correlated information, he said, alert-chasing and manual correlation tasks are now history.AI-powered analysisXDR approaches vary by vendor, not only in terms of whether they are open/hybrid or native, but also when it comes to who they partner with to augment their data analysis. At Cybereason, for instance, the companys XDR platform is powered by the Google Chronicle cloud security analytics service. Among the advantages is that, unlike other solutions, the XDR platform is cloud-native, said Eric Sun, director of product marketing at Cybereason.This means that the XDR platform is built to support diverse, cloud-first remote workforces and can integrate with key collaboration and identity management solutions such as Microsoft 365, Google Workspace and Okta, Sun said in an email. Key AI/ML capabilities include Cybereasons MalOp detection engine, which identifies malicious behaviors using conditional probability tables and Markov chain algorithms, in order to predict potential cause-and-effect cyberattack relationships and stitch together logs that match these predictions, Sun said.Other AI-driven approaches to XDR include CrowdStrikes ExPRT.AI model, used in the companys Falcon XDR platform. 
ExPRT.AI identifies vulnerabilities that pose the highest risk to an organization and prioritizes them for remediation, said Amol Kulkarni, chief product and engineering officer at CrowdStrike, in an email.Crucially, Ex.PRT.ai analyzes the evolving threat landscape and produces a daily risk rating for each vulnerability Critical, High, Medium or Low, Kulkarni said.The platforms AI/ML models are trained on massive datasets that enable the CrowdStrike Falcon XDR to identify attack trends that a human couldnt unearth, he said. This level of comprehensive insight is essential with todays rapidly evolving remote work environment as attackers are continually advancing their attack methodologies.VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn More | Content Synthesis/Decision Making/Detection and Monitoring | Computer and Mathematical | null | null | null | null | null | null |
|
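The XDR coverage above repeatedly describes the same underlying idea: unsupervised models flag behavior that is not typical for a user or entity, such as "a string of actions that aren't typical for a user." Purely as an illustrative sketch of that generic technique, not any vendor's actual detection logic, and using made-up toy features, an Isolation Forest over per-session event counts might look like this:

# Illustrative toy example of behavioral anomaly detection; not SentinelOne, Stellar Cyber, or CrowdStrike code.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, distinct_hosts_touched, mb_transferred]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[2.0, 3.0, 5.0], scale=[1.0, 1.0, 2.0], size=(500, 3))
suspicious = np.array([[40.0, 25.0, 900.0]])  # login burst, host scanning, large transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
print(model.predict(suspicious))  # -1 flags the session as anomalous

Production XDR platforms correlate many such signals across endpoints, identities, email, and cloud workloads before raising an alert; this toy model only shows the anomaly-scoring step in isolation.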
news | Talview Experiences Exponential Growth in AI-Powered Hiring and Proctoring Solutions | SAN MATEO, Calif., Feb. 22, 2022 /PRNewswire/ -- Talview, the global talent measurement leader for hiring and proctoring, today announced tremendous company growth in the past year – 79% growth of net new customers Year-Over-Year (YOY) – driven by $15 million in Series B funding. The... | https://www.prnewswire.com/news-releases/talview-experiences-exponential-growth-in-ai-powered-hiring-and-proctoring-solutions-301486325.html | 2022-02-22T14:00:00Z | SAN MATEO, Calif., Feb. 22, 2022 /PRNewswire/ -- Talview, the global talent measurement leader for hiring and proctoring, today announced tremendous company growth in the past year 79% growth of net new customers Year-Over-Year (YOY) driven by $15 million in Series B funding.The award-winning Talview Measurement Platform helps enterprises orchestrate and automate workflows for screening, interviewing, assessments, and proctoring to increase productivity and reduce operational costs while encouraging Diversity, Equity, and Inclusion (DEI). The intelligence layer of the platform, which incorporates Artificial Intelligence (AI) and Natural Language Processing (NLP), reduces unconscious bias in both hiring and proctoring so companies can confidently measure the potential and performance of candidates and learners. "The talent measurement industry is at an inflection point where we can reimagine, humanize, and democratize talent acquisition and management," said Sanjoe Tom Jose, CEO and Co-founder of Talview. "Our Series B funding validated Talview's position as the leading measurement platform provider for hiring and proctoring. With further funding, we can transform and accelerate the development of the platform and continue innovating in AI, meeting newfound demands, and fulfilling our mission to provide every candidate and learner with equal opportunity."Over the past year, the Talview Measurement Platform has witnessed:200% growth in usage for hiring and proctoring YOY 350% increase in video and audio interviews YOY 130% increase in the use of proctored examsCustomers globally are reporting prime candidates are twice as likely to accept interviews when the company uses Talview's solutions."Today, the corporate brand experience is more important than ever for recruiting best-matched talent. We consciously wanted to design high-touch engagements throughout the talent lifecycle adding a human element to the technology," explained Laura Mills, AVP of Human Resources at Cognizant. "Talview was the partner who automated and configured our hiring process to create a personalized candidate journey. We look forward to our continued collaboration with Talview to deliver on this vision and more in the future."According to Sophie Tattersall, Head of Go-To-Market Strategy within Commercial Ops, Cambridge Assessment, "Our goal is to help students work toward their goals, and we wanted to provide a top-notch online testing experience for them. Talview was the trusted partner whose expertise, flexibility, and openness helped us to successfully launch and deliver our remote invigilation service globally to our corporate customers. Through this partnership, we and our corporate customers look forward to further helping students worldwide reach their full potential." Talview's goal for its hiring solution is to enable enterprises to increase their candidate acceptance rate for interviews and decrease time to offer by 70%. 
With proctoring, Talview is focused on enabling business continuity with 100% virtual, secure certifications, and helping certification institutions reevaluate how they measure learners to reduce bias and provide equal opportunity for all. "With globalization and the shift to a data-driven economy, talent acquisition and retention had already become increasingly complex. Covid exposed the weaknesses in traditional hiring and proctoring processes," said Kishore Bopardikar, Co-founder at Eileses Capital LLC, "Talview is leveraging their AI-powered platform to address the urgent demand for effective remote hiring and exam-taking technologies. We're pleased, but not surprised, to see the growth they're experiencing."Talview is embracing the changes needed to deliver on their vision for the future, expanding the executive leadership team to meet the demands of future growth and product innovation for its customers. Joining the Talview team are Cece Lee as VP of Marketing, Fred Rafilson as Chief I/O Psychologist, Sebastian Jose as VP of Engineering, Shivgauri Rajan as VP of People Success and TA, Terri Pembleton as VP of Sales, and Vinod Radhakrishnan as VP of Customer Success. Learn more about the award-winning Talview Measurement Platform at https://www.talview.com.About Talview:The Talview Measurement Platform seamlessly orchestrates talent workflows for candidate screening, video and audio interviews, online assessment, and remote proctoring. Organizations looking to make more efficient, effective, and intelligent decisions throughout the talent lifecycle can now access a single AI-powered platform that helps them do that. With a 360-degree view of talent potential, you can make quick, confident, and bias-free decisions to provide an equal opportunity for all. Media contact:Jacob Crompton Schreiber1-646-480-0356 [email protected] SOURCE Talview | Process Automation/Decision Making/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null |
||
news | François Labrèche, Data Scientist, Secureworks | Automating Threat Intel with Machine Learning | In cybersecurity, news regarding new vulnerabilities appears continuously every day through an array of various sources. These fall under the umbrella of open-source intelligence (OSINT). Yet not all vulnerabilities generate the same level of interest and length of discussion. The goal of the approach presented here is to extract the underlying concepts from underground discussions and OSINT, in order to generalize which vulnerabilities might receive more attention from attackers | https://www.secureworks.com/blog/automating-threat-intel-with-machine-learning | https://www.secureworks.com//content.secureworks.com/-/media/Images/Insights/Blog/2022/automating%20threat%20intel%20with%20machine%20learning/Automating_Threat%20Intel_Blog_Web_800x800.ashx?modified=20220218182106 | 2022-02-21T14:36:00Z | Introduction
In cybersecurity, news regarding new vulnerabilities appears continuously every day through an array of various sources, be it news networks, social networks, blogs, security advisories, etc. These fall under the umbrella of open-source intelligence (OSINT). Yet not all vulnerabilities generate the same level of interest and length of discussion. As such, one interesting aspect of vulnerability prioritization is to consider what types of vulnerabilities are currently trending in these OSINT sources, in order to establish the types of vulnerabilities an attacker might favor. For example, if discussions in public sources and underground networks currently mention reflected XSS attacks on particular frameworks more often than usual, other attackers reading these discussions could be tempted to attack other similar XSS vulnerabilities or find similar inflection points in other frameworks. Attackers, just like cybersecurity experts, are very much influenced by trends in exploitation methods. The goal of the approach presented here is to identify the patterns or trends underlying online discussions. These are what we aim to model in order to generalize which vulnerabilities might receive more attention from attackers, regardless of the vulnerability’s direct mentions (e.g., its CVE number) in online discussions. The goal is the identification of the underlying concepts associated with online cybersecurity discussions, first to assess the importance of existing vulnerabilities, and second to use this representation as input to future predictive models. To extract this information, a natural language approach is employed, which uses machine learning and, more specifically, topic modeling. In short, it is possible to build a representation of a vulnerability that captures both its intrinsic features as well as its relationship with the external world.
This framework aims to:
- Create a representation that encodes the semantic meaning of vulnerabilities, i.e., topics from the textual descriptions of all disclosed vulnerabilities to date.
- Crawl online InfoSec discussions and apply them to a topic model, so as to obtain a weighted list of topics, which we call trends.
- Combine the previously built trends with their respective topics in a vulnerability, in order to indicate importance to current InfoSec discussions.
Methodology
In order to accomplish these goals, we can use topic modeling, which is an unsupervised machine learning technique in which a number of topics are automatically extracted from a text corpus, along with each topic’s probability of appearing in any of the texts.
A topic consists of one or more weighted words. A higher weight indicates a higher importance to the topic. A topic represents an underlying concept extracted from the text through statistical analysis. For example, there are a great number of disclosed command injection vulnerabilities out there, each with a different but similar description. This technique can capture a generic topic representing the words associated with command injection vulnerabilities, looking somewhat like this:
Figure 1: An example of a command injection topic
A prominent topic modeling technique is Latent Dirichlet Allocation (LDA), which is a generative model that computes the probability of a word being associated with a topic [1]. LDA has been proven to be effective in previous research and applications [2,3], and is one of the most common topic modeling methods.
Creating the Topic Model
First, let’s extract the textual definition of all vulnerabilities in the National Vulnerability Database (NVD) of the NIST, each associated with a CVE number. These texts are then processed, as depicted by Figure 2. In step 1, URLs and common words are removed, the text is split into tokens (single words), and the words are lemmatized to group together different inflections of a word.
Figure 2: Filtering of a description’s words, and splitting into tokens
These lists of tokens can then be transformed into a numerical representation using the bag-of-words model, as depicted in Figure 3. In this new representation, each token is identified by an ID and its number of occurrences, in which the ID is chosen based on its first appearance in the definitions. This transformation is necessary since LDA requires numerical inputs.
Figure 3: Transformation of a description’s tokens into a numerical representation
These are used as input to LDA, as shown in Figure 4, which generates a set of topics identified by a series of words and an importance factor between 0 and 1. This model is implemented using online learning. New published vulnerabilities are fed to the model daily and the model is updated incrementally.
Figure 4: Association of words in numerical form to their topics through LDA
Building Trends from Online Discussions
Once we have learned a suitable model for vulnerabilities, one that understands the specifics of language in the description of vulnerabilities, we gather online discussions of vulnerabilities and exploitations from various public sources. While a number of previous works have extracted information from online discussions in order to build models [4,5,6,7], none applied them to a topic model in the same way this blog details. Now, let’s process text posts from discussions in the same way as specified in Figure 2 and Figure 3, and apply them to the existing topic model. This results in a vector of weights for each text post, where each topic’s importance to this discussion is expressed by a number between 0 and 1, named a weight. By averaging these vectors together across all posts, we obtain a final single vector denoting the relative importance of each of our topics in what we observe in online discussions. We call those “trends”. In the end, by extracting the topics with the highest weights, we obtain the set of current trends.
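To make the steps above concrete, the following is a minimal sketch of the same pipeline, not the production code described in this post: it substitutes gensim's simple_preprocess for the full URL-stripping and lemmatization step, uses a tiny toy corpus in place of the NVD descriptions and OSINT posts, and fixes the number of topics at 30 as suggested by the coherence test in the Appendix. The final dictionary of Vulnerability Trends Scores anticipates the dot-product scoring described in the next section.

# Illustrative sketch only; the corpora are toy stand-ins for NVD descriptions and OSINT posts.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.utils import simple_preprocess

cve_ids = ["CVE-A", "CVE-B", "CVE-C"]  # hypothetical identifiers
nvd_descriptions = [
    "Unrestricted file upload allows remote code execution via a default connection.",
    "Reflected cross-site scripting in the search parameter of a web plugin.",
    "Use-after-free in the kernel allows local privilege escalation.",
]
osint_posts = [
    "New PoC drops for the Tomcat file upload bug, remote code execution confirmed.",
    "Seeing lots of chatter about XSS in popular WordPress plugins this week.",
]

# Steps 1-2 (Figures 2 and 3): tokenize and build the bag-of-words corpus.
tokenized = [simple_preprocess(text) for text in nvd_descriptions]
dictionary = Dictionary(tokenized)
bows = [dictionary.doc2bow(tokens) for tokens in tokenized]

# Step 3 (Figure 4): fit LDA; in production the model is updated daily via lda.update().
lda = LdaModel(corpus=bows, id2word=dictionary, num_topics=30, random_state=0)

def topic_vector(text):
    """Dense vector of topic weights for one piece of text."""
    bow = dictionary.doc2bow(simple_preprocess(text))
    return np.array([w for _, w in lda.get_document_topics(bow, minimum_probability=0.0)])

# Trends (Figure 5): average the topic vectors of the online discussions.
trends = np.mean([topic_vector(post) for post in osint_posts], axis=0)

# Vulnerability Trends Score (Figure 6): dot product of each CVE's topics with the trends.
vts = {cve: float(np.dot(topic_vector(desc), trends))
       for cve, desc in zip(cve_ids, nvd_descriptions)}
print(sorted(vts.items(), key=lambda kv: kv[1], reverse=True))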
Figure 5 describes the creation of trends:
Figure 5: The creation of a trends score from online discussions
Scoring Vulnerabilities
Finally, we look back at the vulnerabilities from the NIST database which were used to build the initial topic model, and identify the ones with topics matching the newly established trends. This is done by applying the dot product to each vulnerability topic vector and the trends vector, or in slightly more human language, by computing the product of each of a vulnerability's topic weights and its corresponding trend weights, and summing them. This provides a final Vulnerability Trends Score (VTS). The higher the VTS is, the more closely this vulnerability matches with online discussions.
Figure 6: The dot product between the vulnerability topics and the trends
The framework can be summarized in the figure below. First, the topic model is trained with vulnerability descriptions, and outputs a number of topics. Second, online discussions are run through the same topic model to establish which of the previously obtained topics are most important for these discussions:
Figure 7: Overview of the approach
Results
Examples of a Use Case
Let's see how this works in practice, by exploring data from June 2020. Using the previously shown method, 30 topics (why 30? See the Appendix below) were identified on all of NIST's vulnerabilities, each identifying a concept observed in vulnerability descriptions. Then, data was gathered from online discussions, such as tweets from Twitter. When applying the topic model to these tweets, a weight for every topic of the model is obtained, relative to these specific tweets. This is what we identify as trends. Among others, these three topics, in a human-interpretable form, came out as important:
Topic A, related to file upload vulnerabilities
Topic B, related to applications having default settings and making HTTP requests
Topic C, related to code execution vulnerabilities
The topics can be visualized using word clouds:
Figure 8: Three top trending topics
Finally, we can use the trends and the vulnerabilities' topics to obtain a VTS for each vulnerability, using the method described previously. The top-ranked vulnerability comes out as CVE-2020-1938, which identifies an Apache Tomcat vulnerability, code-named Ghostcat, enabling attackers to upload files through a default connection, and potentially obtain remote code execution. This vulnerability appearing as the most important for that time period is expected, given that many tweets mention Tomcat vulnerabilities, and some directly speak of this vulnerability. More interestingly, other vulnerabilities matching with topics A, B and C will also have a high trending score, while not necessarily being the most mentioned online.
Trends Over Time
The topic model is run in an online mode (i.e., updated with new vulnerabilities as they appear), which lets it retain its topic distribution. We observed some gradual evolution over time for our computed trends, following important events happening in the infosec world. Specifically, we tracked the scores of 3 notable trending topics over the year of 2021. Our first trend identifies cross-site scripting (XSS) vulnerabilities, as can be observed from its word cloud:
Figure 9: XSS trending topic
Following this trend over the past months, we see a peak in score in March 2021. This coincides with the publication of a WordPress Ivory Search plugin XSS, which affected 60,000 domains.
This event generated online discussions, which in turn were reflected in our trends.
Figure 10: XSS trend over time
A second noteworthy trend is linked to use-after-free vulnerabilities:
Figure 11: Use-after-free trending topic
When exploring the evolution of this trend, we can observe a peak of interest around the end of March 2021. This is related to a controversy where researchers introduced use-after-free vulnerabilities in the Linux kernel as part of a research project. An important aspect of our approach can be seen here: although the vulnerabilities in question were not critical, they generated a lot of interest over use-after-free vulnerabilities. This in turn influences attackers in their choice of attacks, hence the importance of identifying the underlying concepts of vulnerabilities.
Figure 12: Use-after-free trend over time
Finally, our third trend is related to macOS code execution vulnerabilities:
Figure 13: macOS code execution vulnerabilities trending topic
Following this trend's evolution through time, we can see the impact of the publication of multiple critical vulnerabilities in macOS Big Sur in August and September of 2021, and how this specific trend increases in score:
Figure 14: macOS code execution vulnerabilities trend over time
Conclusion
What was obtained from the method presented in this blog is a semantic mapping of vulnerabilities, and a single metric linking vulnerabilities to current trending topics. These are used as numerical representations, linking online discussions and the semantics of a description to a specific vulnerability. These new features bridge the gap between the underlying concepts behind a vulnerability and actual numerical features which can be used by a machine learning model.
Appendix
Parameter Tuning
LDA requires a fixed number of topics as a parameter. In order to validate this parameter choice, multiple values were tested and evaluated through the coherence [8] of topics: a highly coherent topic will have top scoring words that are similar between themselves and different from top words of other topics. Overall, this metric serves as a way to benchmark models in order to compare their effectiveness. The test included numbers of topics n from 5 to 100, and the best results were found with n=30:
Figure 15: Coherence test for identifying the number of topics providing the best clustering
Validation
Through this approach, the highest trending vulnerabilities are indirectly linked to online discussions through the use of topics. In order to validate that the vulnerabilities with the highest trending scores are in fact directly related to online discussions, a hypothesis test was conducted to assert that the descriptions of trending vulnerabilities are more similar to online discussions than a random sample of descriptions are to the same online discussions. To compare the similarity of the texts, we used the term frequency-inverse document frequency (TF-IDF) [9], which is a commonly used similarity measure to identify how relevant each word is to a text. We sampled the 1000 vulnerabilities with the highest VTS, extracted their descriptions, and compared each one with the online discussions using TF-IDF. We then sampled 1000 random descriptions from all vulnerabilities in the NIST database, and compared them as well to the online discussions. With this, we obtained a distribution of similarity measures for the trending vulnerabilities, and another distribution of similarity measures for the random vulnerabilities. We then conducted a permutation test on these two distributions, and confirmed that the similarity scores of the trending vulnerabilities are significantly higher than the similarity scores of the random sample; we reject the null hypothesis with a p-value approaching 0.000.
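As a rough illustration of this validation step (not the authors' exact code), the sketch below computes TF-IDF cosine similarities of two small description samples against a pooled set of online discussions and runs a label-shuffling permutation test. The corpora, sample sizes, and the mean-similarity aggregation are assumptions made for the example; the post itself used 1000 trending and 1000 random descriptions.

# Illustrative only: toy corpora stand in for the 1000 trending / 1000 random descriptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

online_discussions = [
    "PoC released for the Tomcat file upload RCE, patch now",
    "Another week, another WordPress plugin XSS affecting thousands of sites",
]
trending_descriptions = [
    "Unrestricted file upload leading to remote code execution in Apache Tomcat",
    "Reflected cross-site scripting in a WordPress search plugin parameter",
]
random_descriptions = [
    "Improper certificate validation in a legacy FTP client",
    "Denial of service via malformed packets in an SNMP daemon",
]

def similarity_to_discussions(descriptions, discussions):
    """Mean TF-IDF cosine similarity of each description to the discussion corpus."""
    tfidf = TfidfVectorizer().fit_transform(descriptions + discussions)
    desc_vecs, disc_vecs = tfidf[:len(descriptions)], tfidf[len(descriptions):]
    return cosine_similarity(desc_vecs, disc_vecs).mean(axis=1)

def permutation_test(a, b, n_perm=10_000, seed=0):
    """One-sided p-value for mean(a) > mean(b), estimated by shuffling group labels."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        hits += (pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed
    return hits / n_perm

trending_sims = similarity_to_discussions(trending_descriptions, online_discussions)
random_sims = similarity_to_discussions(random_descriptions, online_discussions)
print("p-value:", permutation_test(trending_sims, random_sims))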
References
[1] Blei, David M., Andrew Y. Ng, and Michael I. Jordan. "Latent Dirichlet Allocation." Journal of Machine Learning Research 3.Jan (2003): 993-1022.
[2] Lee, Suchul, et al. "LARGen: Automatic Signature Generation for Malwares Using Latent Dirichlet Allocation." IEEE Transactions on Dependable and Secure Computing 15.5 (2016): 771-783.
[3] Liu, Linqing, et al. "Detecting 'Smart' Spammers on Social Network: A Topic Model Approach." arXiv preprint arXiv:1604.08504 (2016).
[4] Mittal, Sudip, et al. "CyberTwitter: Using Twitter to Generate Alerts for Cybersecurity Threats and Vulnerabilities." 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 2016.
[5] Chen, Haipeng, et al. "Using Twitter to Predict When Vulnerabilities Will Be Exploited." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019.
[6] Hoffman, Matthew, Francis R. Bach, and David M. Blei. "Online Learning for Latent Dirichlet Allocation." Advances in Neural Information Processing Systems. 2010.
[7] Shrestha, Prasha, et al. "Multiple Social Platforms Reveal Actionable Signals for Software Vulnerability Awareness: A Study of GitHub, Twitter and Reddit." PLoS One 15.3 (2020): e0230250.
[8] Röder, Michael, Andreas Both, and Alexander Hinneburg. "Exploring the Space of Topic Coherence Measures." Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. 2015.
[9] Ramos, Juan. "Using TF-IDF to Determine Word Relevance in Document Queries." Proceedings of the First Instructional Conference on Machine Learning. Vol. 242. 2003. | Content Synthesis/Discovery | Computer and Mathematical | null | null | null | null | null | null
news | Unknown | February 2022 Quicklisp dist update now available | New projects : 3d-quaternions — A utility library implementing quaternion and dual-quaternion functionality. — zlib 3d-transforms — A u... | http://blog.quicklisp.org/2022/02/february-2022-quicklisp-dist-update-now.html | 2022-02-23T00:06:00Z | New projects:
3d-quaternions — A utility library implementing quaternion and dual-quaternion functionality. — zlib
3d-transforms — A utility library implementing a common structure to encapsulate spatial transformations — zlib
ci — A tool to simplify continuous deployment for Common Lisp projects. — BSD
cl-lambdacalc — Define Lisp functions using lambda calculus syntax — MIT
cl-lib-helper — For common-lisp, organisation of popular functionalities in a central, easy-to-browse set of packages. — MIT
cl-myriam — Simple actor model implementation for local and remote actors — 3-clause BSD
cl-tar — A high level interface for tar archives — MIT
cl-tar-file — A system for reading and writing physical entries from/to tar files. — BSD-style (http://opensource.org/licenses/BSD-3-Clause)
cl-veq — reasonably fast operations for 1d, 2d, 3d vectors and arrays of vectors. — MIT
cl-wol — CLI built on top of the cl-wol.core system — BSD 2-Clause
clop — CLOP - Common Lisp tOml Parser — MIT
factory-alien — Factory alien is a library for generating fixture data for testing applications. — MIT
in-nomine — Utilities for extensible namespaces in Common Lisp. — LLGPL
journal — A library for logging, tracing, testing and persistence. — MIT, see COPYING.
json-schema — JSON schema validation — LLGPL
mgl — MGL is a machine learning library for backpropagation neural networks, boltzmann machines, gaussian processes and more. — MIT, see COPYING.
mgl-mat — MAT is a library for working with multi-dimensional arrays which supports efficient interfacing to foreign and CUDA code. BLAS and CUBLAS bindings are available. — MIT
mk-defsystem — The MK-DEFSYSTEM ASDF System. — Other
named-closure — Named closures — GPLv3+
nfiles — Manage file persistence and loading. — BSD 3-Clause
nhooks — Improved hooks facility inspired by Serapeum. — MIT
phos — An experimental Gemini client library — ISC
stumpwm-dynamic-float — stumpwm-dynamic-float is an extension to the X window manager "StumpWM". It provides a dynamic-tiling environment based on StumpWM's floating-group. — MIT
tiny-routes — A tiny routing library for Common Lisp targeting Clack. — BSD 3-Clause
try — Try is a test framework.
— MIT, see COPYING.with-branching — An implementation of macroexpand-time conditionalization — MITUpdated projects: 3d-matrices, 3d-vectors, adhoc, alexandria, amb, anaphora, array-operations, birch, bitio, bknr-datastore, bnf, caveman, chameleon, check-bnf, chipz, cl+ssl, cl-ana, cl-apertium-stream-parser, cl-catmull-rom-spline, cl-data-structures, cl-erlang-term, cl-fad, cl-fix, cl-forms, cl-gopher, cl-gserver, cl-hamcrest, cl-info, cl-json, cl-kraken, cl-markless, cl-mathstats, cl-mixed, cl-pdf, cl-ppcre, cl-schedule, cl-sdl2, cl-str, cl-tld, cl-utils, cl-webkit, clack, claxy, clerk, climc, climon, clingon, clml, clobber, clog, closer-mop, collectors, common-doc, commondoc-markdown, consfigurator, croatoan, ctype, defmain, deploy, depot, dexador, djula, doc, ernestine, esrap, external-symbol-not-found, fiveam, flexi-streams, fresnel, functional-trees, gadgets, gtwiwtg, harmony, helambdap, herodotus, ieee-floats, imago, introspect-environment, ironclad, lack, lake, latter-day-paypal, lichat-protocol, lichat-serverlib, lichat-tcp-client, lichat-tcp-server, lichat-ws-server, lionchat, lisp-binary, lmdb, log4cl-extras, lsx, lunamech-matrix-api, maiden, math, mcclim, mgl-pax, micmac, mnas-package, mutility, named-readtables, neo4cl, neural-classifier, nyxt, omglib, opticl, osicat, overlord, papyrus, petalisp, polymorphic-functions, postmodern, purgatory, qbase64, qlot, random-state, rove, safe-read, scriba, sel, serapeum, shasht, shop3, slime, slite, sly, spinneret, stripe-against-the-modern-world, stumpwm, symbol-munger, ten, tfeb-lisp-hax, tooter, track-best, trivial-extensible-sequences, trivial-package-local-nicknames, trivial-utf-8, trucler, uncursed, vellum, vellum-csv, vivid-colors, vivid-diff, vk, websocket-driver, with-contexts, wordnet, xhtmlambda, yason, zippy.To get this update, use: (ql:update-dist "quicklisp").Enjoy! | Content Creation/Content Synthesis | Computer and Mathematical | null | null | null | null | null | null |
|
news | feedfeeder | LinkedIn and DeepMind co-founders form AI startup to help humans talk to computers | Some of the better-known minds in tech are uniting to tackle one of computing's greater challenges. CNBC reports LinkedIn co-founder Reid Hoffman and DeepMind co-founder Mustafa Suleyman (pictured above) have formed Inflection AI, a company that will use artificial intelligence software to help humans talk to computers. The hope, according to Suleyman, is that you'll speak to computers in ordinary language — this will "almost certainly" be possible in five years, he said. Suleyman will serve as CEO, while fellow DeepMind alumni Karén Simonyan will operate as Inflection's chief scientist. The company aims to stay relatively small to preserve the team's focus and speed. The move was a long time in coming. Hoffman and Suleyman have known each other for nearly a decade, and Suleyman backed away from DeepMind in August 2019 following both a desire to "recharge" and criticisms of his management style in previous years. He became Google's VP for AI product management and policy in December of that year, but left this January to work with Hoffman at venture capital firm Greylock Partners. There are still many unknowns surrounding Inflection. It hasn't pinpointed its target audience or a timeline for its first products. The DeepMind veterans could help its chances, however, and they're trying to solve a common problem. Existing AI assistants aren't particularly clever, and fixing that could improve everything from the phone in your pocket to next-generation robots. | https://slashdot.org/firehose.pl?op=view&id=159692975 | 2022-03-08T18:52:26Z | Some of the better-known minds in tech are uniting to tackle one of computing's greater challenges. CNBC reports LinkedIn co-founder Reid Hoffman and DeepMind co-founder Mustafa Suleyman (pictured above) have formed Inflection AI, a company that will use artificial intelligence software to help humans talk to computers. The hope, according to Suleyman, is that you'll speak to computers in ordinary language — this will "almost certainly" be possible in five years, he said. Suleyman will serve as CEO, while fellow DeepMind alumni Karén Simonyan will operate as Inflection's chief scientist. The company aims to stay relatively small to preserve the team's focus and speed. The move was a long time in coming. Hoffman and Suleyman have known each other for nearly a decade, and Suleyman backed away from DeepMind in August 2019 following both a desire to "recharge" and criticisms of his management style in previous years. He became Google's VP for AI product management and policy in December of that year, but left this January to work with Hoffman at venture capital firm Greylock Partners. There are still many unknowns surrounding Inflection. It hasn't pinpointed its target audience or a timeline for its first products. The DeepMind veterans could help its chances, however, and they're trying to solve a common problem. Existing AI assistants aren't particularly clever, and fixing that could improve everything from the phone in your pocket to next-generation robots. | Unknown | Unknown | null | null | null | null | null | null
|
news | Petal Search brings Fresh New Advances MWC 2022 for Developers and Consumers alike | BARCELONA, Spain, March 4, 2022 /PRNewswire/ -- MWC 2022 returns this year once again, with Petal Search bringing a slew of fresh advances and showcasing them on the conference floor. With its ever-growing ecosystem, and immersive AR search experience, Petal Search brings the latest to... | https://www.prnewswire.com/news-releases/petal-search-brings-fresh-new-advances-mwc-2022-for-developers-and-consumers-alike-301495615.html | 2022-03-04T06:50:00Z | Working with partners globally, Petal Search innovates to bring search to the next level with the showcase of AR search at MWC 2022 through the AR Glasses, showcasing the search engine's flourishing ecosystem and the progress made together with its partners. An Immersive AR Glass Experience Greets You at the Petal Search Booth. The AR Glasses showcased at MWC combine Petal Search's capabilities with cutting-edge AI technology to identify multiple scenarios, allowing users to quickly identify landmarks in front of them (for example, the famed Sagrada Familia in Barcelona) and much more through voice commands, offering a world of encyclopaedic knowledge once you put them on. Petal Search launched the Petal Vision AR version, which focuses on multi-modal search. This version enables the real-time identification of attractions, animals, plants, and commodities, and displays encyclopaedias, and similar contents for each identification result. The text recognition and translation capabilities of Petal Search also support real-time image translation to solve language problems – a feature especially useful for travellers. At the Petal Search Booth, attendees were able to witness the capabilities of AR Search first-hand, performing exciting operations on AR glasses to experience functions such as recognition, content search, and real-time translation. Petal Search's Ecosystem Continues to Thrive. Petal Search's thriving ecosystem, boosted by rapid development and collaboration with various partners, has made progress in leaps and bounds to delight users with the experience of "One Search for Everything". With its latest updates, Petal Search aims to build an all-category ecosystem, allowing its users to perform just one search to find everything they need. Currently, Petal Search covers over 20 vertical domains, including news, apps, shopping, travel, local services, pan-entertainment, and more. The search engine's growth trajectory has been impressive and shows no signs of slowing down.
Petal Search also holds the impressive achievement of falling within the Top 5 Mobile Search Engines in more than 25 countries.The three specialised ecosystem platforms, Petal Merchant Center, Petal Travel Center and Business Connect provides businesses with the services they need, enabling merchants to list their offerings to consumers, be it restaurant location and opening times, products online for e-commerce purchase, or the latest hotel and flight offerings to boost visibility via search, as well as bringing convenience to consumers to easier access.Recently, Omio, the world's leading multi-modal transportation platform, has formed a strategic partnership with Huawei to integrate Omio Search API into Petal Search and Petal Maps, enabling access to Omio's portfolio of 1000+ providers, meaning Huawei users can easily discover and compare multi-modal transport options - train, bus, plane and ferry - in a convenient way, across Petal Search and Petal Maps.Petal Search Forges Meaningful Partnerships for Continuous InnovationPetal Search aggregates information from across 20 verticals and works with over 3,000 business partners from various industries to curate a search experience populated with high-quality content and services, with partnership with leading service providers such as AliExpress, trivago and Omio.Petal Search's Business Cooperation Model is designed to create value for its partners. Huawei's Joint Operations Process is an extremely effective way for partners to quickly increase their product exposure and boost revenue. Its ALL-IN strategy utilises a full portfolio of Huawei's marketing resources to give its partners tailor-made solutions. For instance, with a deep understanding of various local markets, the Petal Search team conducts joint end-to-end operations with its partners targeted at popular events around the globe, helping them maximise high quality traffic and achieve business growth with various targeted marketing activities.Under the Joint Operations Process, Petal Search provides comprehensive advertising, operational resources, and development resources, as well as flexible business models with operation slot sales offered by country and category in package mode. Hundreds of partners have already seen the benefits of collaborating with Petal Search, experiencing large sales increases as a result of Petal Search's exclusive seasonal campaigns, with strong support from Huawei's local operations teams. Petal Search's Black Friday Campaign in the EU spanned across 10 countries, with over 50 partners onboard, saw its partners' gross merchandising value (tracked across a 7 day average) soar by a stellar 500% as compared before the campaign, a remarkable improvement. (Date source - Petal Search)Moreover, as travel recovers, Petal Search is also working to promote its partners in the industry with dedicated campaigns and more. The travel campaign launched with its top partners including trivago. Featured offers from participating global brands, diverting traffic to the websites of Petal Search's partners, who saw a stunning revenue growth, a testament to how Petal Search helps businesses thrive.With a variety of new advances in-store and showcased at MWC 2022, be prepared to see Petal Search's enhanced capabilities in full bloom this year! 
Here is the direct link to download Petal Search: https://search-app-dra.dbankcdn.com/app/PetalSearch/PetalSearch-MWC.apk SOURCE Petal Search, Huawei | Information Retrieval Or Search/Digital Assistance/Image Analysis | Unknown | null | null | null | null | null | null
||
news | annlite added to PyPI | Fast and Light Approximate Nearest Neighbor Search Library integrated with the Jina Ecosystem | https://pypi.org/project/annlite/ | 2022-03-04T11:09:59Z | AnnLite is an Approximate Nearest Neighbor Search (ANNS) library integrated with the Jina ecosystem. This indexer is recommended to be used when an application requires search with filters applied on Document tags. The filtering query language is based on MongoDB's query and projection operators. We currently support a subset of those selectors. The tags filters can be combined with $and and $or:
$eq - Equal to (number, string)
$ne - Not equal to (number, string)
$gt - Greater than (number)
$gte - Greater than or equal to (number)
$lt - Less than (number)
$lte - Less than or equal to (number)
$in - Included in an array
$nin - Not included in an array

For example, to search for a product with a price of no more than $50:

index.search(query, filter={"price": {"$lte": 50}})

More example filter expressions. Nike shoes in white color:

{"brand": {"$eq": "Nike"}, "category": {"$eq": "Shoes"}, "color": {"$eq": "White"}}

Or:

{"$and": {"brand": {"$eq": "Nike"}, "category": {"$eq": "Shoes"}, "color": {"$eq": "White"}}}

Nike shoes or price less than $100:

{"$or": {"brand": {"$eq": "Nike"}, "price": {"$lt": 100}}}

Installation
To install AnnLite you can simply run:

pip install https://github.com/jina-ai/annlite/archive/refs/heads/main.zip

Getting Started
For an in-depth overview of the features of AnnLite you can follow along with one of the examples below.

Quick Start
Create a new AnnLite indexer:

import random
import numpy as np
from jina import Document, DocumentArray
from annlite import AnnLite

N = 10000   # number of data points
Nq = 10     # number of query data
D = 128     # dimensionality / number of features

# the column schema: (name:str, dtype:type, create_index: bool)
indexer = AnnLite(dim=D, columns=[('price', float)], data_path='./workspace_data')

Note that this will create a folder ./workspace_data where indexed data will be stored. If there is already a folder with this name and the code presented here is not working, remove that folder.

Add new data:

X = np.random.random((N, D)).astype(np.float32)  # 10,000 128-dim vectors to be indexed
docs = DocumentArray([Document(id=f'{i}', embedding=X[i], tags={'price': random.random()}) for i in range(N)])
indexer.index(docs)

Search with filtering:

Xq = np.random.random((Nq, D)).astype(np.float32)  # 10 query vectors, 128-dim each
query = DocumentArray([Document(embedding=Xq[i]) for i in range(Nq)])

# without filtering
indexer.search(query, limit=10)
print('the result without filtering:')
for i, q in enumerate(query):
    print(f'query [{i}]:')
    for m in q.matches:
        print(f'\t{m.id} ({m.scores["euclidean"].value})')

# with filtering
indexer.search(query, filter={"price": {"$lte": 50}}, limit=10)
print('the result with filtering:')
for i, q in enumerate(query):
    print(f'query [{i}]:')
    for m in q.matches:
        print(f'\t{m.id} {m.scores["euclidean"].value} (price={m.tags["price"]})')

Update data:

Xn = np.random.random((10, D)).astype(np.float32)  # 10 new 128-dim vectors
docs = DocumentArray([Document(id=f'{i}', embedding=Xn[i], tags={'price': random.random()}) for i in range(10)])
indexer.update(docs)

Delete data:

indexer.delete(['1', '2'])

Benchmark
One can run executor/benchmark.py to get a quick performance overview.

| Stored data | Indexing time | Query size=1 | Query size=8 | Query size=64 |
| ----------- | ------------- | ------------ | ------------ | ------------- |
| 10000       | 2.970         | 0.002        | 0.013        | 0.100         |
| 100000      | 76.474        | 0.011        | 0.078        | 0.649         |
| 500000      | 467.936       | 0.046        | 0.356        | 2.823         |
| 1000000     | 1025.506      | 0.091        | 0.695        | 5.778         |

Results with filtering can be generated from examples/benchmark_with_filtering.py.
This script should produce a table similar to:

| Stored data | % same filter | Indexing time | Query size=1 | Query size=8 | Query size=64 |
| ----------- | ------------- | ------------- | ------------ | ------------ | ------------- |
| 10000       | 5             | 2.869         | 0.004        | 0.030        | 0.270         |
| 10000       | 15            | 2.869         | 0.004        | 0.035        | 0.294         |
| 10000       | 20            | 3.506         | 0.005        | 0.038        | 0.287         |
| 10000       | 30            | 3.506         | 0.005        | 0.044        | 0.356         |
| 10000       | 50            | 3.506         | 0.008        | 0.064        | 0.484         |
| 10000       | 80            | 2.869         | 0.013        | 0.098        | 0.910         |
| 100000      | 5             | 75.960        | 0.018        | 0.134        | 1.092         |
| 100000      | 15            | 75.960        | 0.026        | 0.211        | 1.736         |
| 100000      | 20            | 78.475        | 0.034        | 0.265        | 2.097         |
| 100000      | 30            | 78.475        | 0.044        | 0.357        | 2.887         |
| 100000      | 50            | 78.475        | 0.068        | 0.565        | 4.383         |
| 100000      | 80            | 75.960        | 0.111        | 0.878        | 6.815         |
| 500000      | 5             | 497.744       | 0.069        | 0.561        | 4.439         |
| 500000      | 15            | 497.744       | 0.134        | 1.064        | 8.469         |
| 500000      | 20            | 440.108       | 0.152        | 1.199        | 9.472         |
| 500000      | 30            | 440.108       | 0.212        | 1.650        | 13.267        |
| 500000      | 50            | 440.108       | 0.328        | 2.637        | 21.961        |
| 500000      | 80            | 497.744       | 0.580        | 4.602        | 36.986        |
| 1000000     | 5             | 1052.388      | 0.131        | 1.031        | 8.212         |
| 1000000     | 15            | 1052.388      | 0.263        | 2.191        | 16.643        |
| 1000000     | 20            | 980.598       | 0.351        | 2.659        | 21.193        |
| 1000000     | 30            | 980.598       | 0.461        | 3.713        | 29.794        |
| 1000000     | 50            | 980.598       | 0.732        | 5.975        | 47.356        |
| 1000000     | 80            | 1052.388      | 1.151        | 9.255        | 73.552        |

Note that:
- query times presented are represented in seconds.
- % same filter indicates the amount of data that verifies a filter in the database. For example, if % same filter = 10 and Stored data = 1_000_000 then it means 100_000 examples verify the filter.

Implemented Algorithms
Currently AnnLite supports:
- HNSW Algorithm (default choice)
- PQ-linear-scan (requires training)

Research foundations of AnnLite
- Xor Filters: Faster and Smaller Than Bloom Filters
- CVPR20 Tutorial: Billion-scale Approximate Nearest Neighbor Search
- XOR-Quantization: Fast top-K Cosine Similarity Search through XOR-Friendly Binary Quantization on GPUs
- NeurIPS21 Challenge: Billion-Scale Approximate Nearest Neighbor Search Challenge (NeurIPS'21 competition track)
- PAMI 2011: Product quantization for nearest neighbor search
- CVPR 2016: Efficient Indexing of Billion-Scale Datasets of Deep Descriptors
- NIPS 2017: Multiscale Quantization for Fast Similarity Search
- NIPS 2018: Non-metric Similarity Graphs for Maximum Inner Product Search
- ACMMM 2018: Reconfigurable Inverted Index (code)
- ECCV 2018: Revisiting the Inverted Indices for Billion-Scale Approximate Nearest Neighbors
- CVPR 2019: Unsupervised Neural Quantization for Compressed-Domain Similarity Search
- ICML 2019: Learning to Route in Similarity Graphs
- ICML 2020: Graph-based Nearest Neighbor Search: From Practice to Theory | Unknown | Computer and Mathematical | null | null | null | null | null | null
||
news | Barry Schwartz | Google Updates Search Ads 360 | Google has made some big updates to its Search Ads 360 platform. Search Ads 360 is a search management platform that helps marketers manage search marketing campaigns across multiple engines and med | https://www.seroundtable.com/google-updates-search-ads-360-32901.html | 2022-02-11T12:52:24Z | Google has made some big updates to its Search Ads 360 platform. Search Ads 360 is a search management platform that helps marketers manage search marketing campaigns across multiple engines and media channels. It currently supports Google Ads, Microsoft Advertising, Yahoo! Japan Sponsored Products, Baidu and Yahoo! Gemini. What is new? Google said they made the platform easier to use with "a new user interface, and adding support for more search engine features and campaign types based on feedback from advertisers who told us they want an easier and more convenient way to build campaigns across advertising platforms." You can now immediately access support for most new Google Ads features and it has also been improved for other channels and search engines, like Microsoft Advertising and Yahoo! Japan. Google also built in new enterprise features which will give you new ways to centralize and scale your day-to-day tasks across engines and accounts. Here is a GIF of the new interface: There is a whole lot that is new, so review all the details on the Google blog. Forum discussion at Twitter. | Process Automation/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | PRNewswire | Petal Search brings Fresh New Advances MWC 2022 for Developers and Consumers alike | Petal Search brings search engine innovation and growth to elevate user experiences globally | https://www.uppermichiganssource.com/prnewswire/2022/03/04/petal-search-brings-fresh-new-advances-mwc-2022-developers-consumers-alike/ | 2022-03-04T06:56:41Z | BARCELONA, Spain, March 4, 2022 /PRNewswire/ -- MWC 2022 returns this year once again, with Petal Search bringing a slew of fresh advances and showcasing them on the conference floor. With its ever-growing ecosystem, and immersive AR search experience, Petal Search brings the latest to MWC. As of December 31, 2021, Petal Search has been launched in more than 170 countries and regions around the world, supports more than 70 languages, and has more than 100 million cumulative users. Working with partners globally, Petal Search innovates to bring search to the next level with the showcase of AR search at MWC 2022 through the AR Glasses, showcasing the search engine's flourishing ecosystem and the progress made together with its partners. An Immersive AR Glass Experience Greets You at the Petal Search Booth. The AR Glasses showcased at MWC combine Petal Search's capabilities with cutting-edge AI technology to identify multiple scenarios, allowing users to quickly identify landmarks in front of them (for example, the famed Sagrada Familia in Barcelona) and much more through voice commands, offering a world of encyclopaedic knowledge once you put them on. Petal Search launched the Petal Vision AR version, which focuses on multi-modal search. This version enables the real-time identification of attractions, animals, plants, and commodities, and displays encyclopaedias, and similar contents for each identification result. The text recognition and translation capabilities of Petal Search also support real-time image translation to solve language problems – a feature especially useful for travellers. At the Petal Search Booth, attendees were able to witness the capabilities of AR Search first-hand, performing exciting operations on AR glasses to experience functions such as recognition, content search, and real-time translation. Petal Search's Ecosystem Continues to Thrive. Petal Search's thriving ecosystem, boosted by rapid development and collaboration with various partners, has made progress in leaps and bounds to delight users with the experience of "One Search for Everything". With its latest updates, Petal Search aims to build an all-category ecosystem, allowing its users to perform just one search to find everything they need. Currently, Petal Search covers over 20 vertical domains, including news, apps, shopping, travel, local services, pan-entertainment, and more. The search engine's growth trajectory has been impressive and shows no signs of slowing down.
Petal Search also holds the impressive achievement of falling within the Top 5 Mobile Search Engines in more than 25 countries.The three specialised ecosystem platforms, Petal Merchant Center, Petal Travel Center and Business Connect provides businesses with the services they need, enabling merchants to list their offerings to consumers, be it restaurant location and opening times, products online for e-commerce purchase, or the latest hotel and flight offerings to boost visibility via search, as well as bringing convenience to consumers to easier access.Recently, Omio, the world's leading multi-modal transportation platform, has formed a strategic partnership with Huawei to integrate Omio Search API into Petal Search and Petal Maps, enabling access to Omio's portfolio of 1000+ providers, meaning Huawei users can easily discover and compare multi-modal transport options - train, bus, plane and ferry - in a convenient way, across Petal Search and Petal Maps.Petal Search Forges Meaningful Partnerships for Continuous InnovationPetal Search aggregates information from across 20 verticals and works with over 3,000 business partners from various industries to curate a search experience populated with high-quality content and services, with partnership with leading service providers such as AliExpress, trivago and Omio.Petal Search's Business Cooperation Model is designed to create value for its partners. Huawei's Joint Operations Process is an extremely effective way for partners to quickly increase their product exposure and boost revenue. Its ALL-IN strategy utilises a full portfolio of Huawei's marketing resources to give its partners tailor-made solutions. For instance, with a deep understanding of various local markets, the Petal Search team conducts joint end-to-end operations with its partners targeted at popular events around the globe, helping them maximise high quality traffic and achieve business growth with various targeted marketing activities.End-to-End Business Corporation Model for Petal Search(PRNewswire)Under the Joint Operations Process, Petal Search provides comprehensive advertising, operational resources, and development resources, as well as flexible business models with operation slot sales offered by country and category in package mode.Hundreds of partners have already seen the benefits of collaborating with Petal Search, experiencing large sales increases as a result of Petal Search's exclusive seasonal campaigns, with strong support from Huawei's local operations teams.Petal Search's Black Friday Campaign in the EU spanned across 10 countries, with over 50 partners onboard, saw its partners' gross merchandising value (tracked across a 7 day average) soar by a stellar 500% as compared before the campaign, a remarkable improvement. (Date source - Petal Search)Moreover, as travel recovers, Petal Search is also working to promote its partners in the industry with dedicated campaigns and more. The travel campaign launched with its top partners including trivago. 
Featured offers from participating global brands diverted traffic to the websites of Petal Search's partners, who saw stunning revenue growth – a testament to how Petal Search helps businesses thrive. With a variety of new advances in-store and showcased at MWC 2022, be prepared to see Petal Search's enhanced capabilities in full bloom this year! Here is the direct link to download Petal Search: https://search-app-dra.dbankcdn.com/app/PetalSearch/PetalSearch-MWC.apk Petal Search Mobile Version (PRNewswire) View original content to download multimedia: SOURCE Petal Search, Huawei. The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc. | Information Retrieval Or Search/Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null
|
news | George Anadiotis | Google sets the bar for AI language models with PaLM | PaLM (Pathways Language Model) is the first outcome of Pathways, Google’s new AI architecture, which aims to handle many tasks at once, learn new tasks quickly and reflect a better understanding of the world. | https://venturebeat.com/2022/04/12/ai-weekly-google-sets-the-bar-for-ai-language-models-with-palm/ | 2022-04-12T18:11:00Z | Google's new large language model (LLM) called PaLM (Pathways Language Model) is the first outcome of Pathways, Google's new AI architecture, which aims to handle many tasks at once, learn new tasks quickly and reflect a better understanding of the world. PaLM is a massive undertaking with ambitious goals. Although many aspects of PaLM require further evaluation, it represents an important step forward for LLMs. The process of developing and evaluating PaLM is detailed in an arXiv publication and summarized by Google in a blog post.
Under the LLM hood
Google's publication outlines the philosophy of Pathways at every step of the process of training PaLM. The versions of the new architecture include PaLM 8B with 8 billion parameters, PaLM 62B with 62 billion parameters and PaLM 540B with 540 billion parameters. Google created different versions in order to evaluate the cost-value function as well as the benefits of scale. The number of parameters is important in LLMs, although more parameters don't necessarily translate to a better-performing model. PaLM 540B is in the same league as some of the largest LLMs available regarding the number of parameters: OpenAI's GPT-3 with 175 billion, DeepMind's Gopher and Chinchilla with 280 billion and 70 billion, Google's own GLaM and LaMDA with 1.2 trillion and 137 billion, and Microsoft-Nvidia's Megatron-Turing NLG with 530 billion. The first thing to consider when discussing LLMs, like any other AI model, is the efficiency of the training process. Even the Googles of the world need to answer this question: Given a certain quantity of compute, how large of a model should I train in order to get the best possible performance? In 2020, OpenAI proposed scaling laws to guide the training of LLMs. In 2022, DeepMind published a paper, Training Compute-Optimal Large Language Models, in which analysts claim that training LLMs has been done with a deeply suboptimal use of compute. Independently, Google reached similar conclusions, as detailed in PaLM's documentation. PaLM's training is state of the art on many levels. At the hardware level, PaLM 540B was trained over two TPU v4 Pods connected over a data center network (DCN) using a combination of model and data parallelism. Google used 3,072 TPU v4 chips in each Pod attached to 768 hosts, which it notes is the largest TPU configuration described to date. This allowed Google to efficiently scale training to 6,144 chips, achieving a training efficiency of 57.8% hardware FLOPs utilization, which Google claims is the highest yet achieved for LLMs at this scale. PaLM uses a standard Transformer model architecture, with some customizations.
Transformer is the architecture used by all LLMs and although PaLM deviates from it in some ways, what is arguably more important is the focus of the training dataset used.
How to train your LLM
The dataset used to train PaLM is a mixture of filtered multilingual web pages (27%), English books (13%), multilingual Wikipedia articles (4%), English news articles (1%), GitHub source code (5%) and multilingual social media conversations (50%). This dataset is based on those used to train LaMDA and GLaM. There are a few things worth highlighting here. First, it's worth asking whether the selection of sources reflects Google's goals. Social media conversations are by far the most prevalent source and while web pages have been selected taking their assigned quality scores into account, that doesn't seem to be the case for social media conversations. Web pages included in the training dataset were filtered using a classifier to assess quality, with the goal of limiting content toxicity and including professionally written content. However, Google notes, this may have disproportionately excluded casual language, code-switching (or behavioral adjustments in actions or speech) or dialectal diversity, and may limit PaLM's capability to model the nondominant dialects across the English-speaking regions globally. We hypothesize that quality scores may be harder to assign for social media conversations. The paper also argues that in order for PaLM to be able to identify toxicity as part of its general-purpose applicability, exposure to it is needed. Second, even though multilingual sources are cited, in reality they're still dominated by the English language. Nearly 78% of all sources are English, with German and French sources at 3.5% and 3.2%, and all other sources trailing far behind. Google notes that the language capabilities of PaLM are likely constrained by the limitations of language present in the training data and evaluation benchmarks. At the same time, PaLM yields impressive multilingual capabilities on the benchmarks Google evaluated against, the majority of which are in the English language. Variations of PaLM were trained using one-pass or few-pass approaches, which means that the bulk of the data in the training dataset were processed as input as few times as possible. This is part of the efficiency bet for PaLM, but it also had another interesting side effect: it resulted in very little memorization, meaning that PaLM output is for the most part computed, not recited.
Doing more with less – but what for?
Google's vision for Pathways is to enable a single AI system to generalize across thousands or millions of tasks, to understand different types of data and to do so with remarkable efficiency. PaLM may be an important step forward regarding efficiency, but what about its performance levels? Google claims that PaLM shows breakthrough capabilities on numerous difficult tasks. In its blog post, examples for language understanding and generation, reasoning and code-related tasks are highlighted. In language understanding, PaLM was evaluated on 29 widely used English natural language processing (NLP) tasks. PaLM 540B surpassed the few-shot performance of prior LLMs on 28 of the 29 tasks. In addition to English NLP tasks, PaLM also shows strong performance on multilingual NLP benchmarks, including translation, even though only 22% of the training corpus is non-English. PaLM's performance was also compared against that of Gopher and Chinchilla using the new Beyond the Imitation Game Benchmark (BIG-bench).
Results demonstrate impressive natural language understanding and generation capabilities on tasks like distinguishing cause and effect, understanding conceptual combinations in appropriate contexts and even guessing a movie from a combination of emojis. Of note here is the fact that PaLM 540B five-shot performs better than the average result from individuals who were asked to solve the same tasks. Google also notes that PaLM's performance suggests that performance improvements from scale have not yet plateaued. As for reasoning, PaLM's performance was evaluated in tasks that require multistep arithmetic or common-sense reasoning. The example highlighted by Google is PaLM's capability to solve 58% of the problems in GSM8K, a benchmark of thousands of challenging grade-school-level math questions. PaLM outperforms the prior top score of 55% achieved by fine-tuning GPT-3 with a training set of 7,500 problems and combining it with an external calculator and verifier. This new score also approaches the 60% average of problems solved by nine- to 12-year-olds – the target audience for the question set. Google's results for PaLM 540B show strong performance across coding tasks and natural language tasks in a single model, even though it has only 5% code in the pre-training dataset. Google notes that PaLM's few-shot performance is especially remarkable because it is on par with the fine-tuned Codex while using 50 times less Python code for training. To summarize, it seems that PaLM can do more with less – i.e., achieve comparable or better performance to existing state-of-the-art LLMs, while needing fewer resources and less customization than they do.
Aiming higher with AI ethics and human-level intelligence
The fact that this is a gigantic undertaking is clear from Google's publication detailing the new technology. Its size, level of detail, and mention of a team of nearly 70 professionals involved in the effort speak volumes. Google also includes sections on Representational Bias Analysis and Ethical Considerations in its publication. Analysis and documentation of potential undesirable risks through transparent artifacts such as model cards and data sheets, which also include information on intended use and testing, is promoted. It's hard to offer prognostications as to what that all means on a practical level for the rest of the world at this point. Being able to create LLMs in a more efficient way is a good thing to the extent that they are created at all. However, we're not aware of plans to share PaLM at this point, and the TPU infrastructure used to train it is Google-specific. That means transfer of know-how and techniques to other LLM builders may not be directly applicable. Contrary to GPT-3, which is commercially available from OpenAI together with Microsoft via an API, we're not aware of similar programs or plans for Google's GLaM, LaMDA and PaLM. Google's BERT, one of the first LLMs, is open source and has given birth to many variations, in addition to powering the latest incarnation of Google Search. We can hypothesize that PaLM may eventually get there, too. As to the pie-in-the-sky goal of human-level intelligence, opinions vary. Google notes in its publication that performance improvements from scale haven't yet plateaued. In other areas where deep learning is applied, however, a plateau in performance seems to have been reached. Recently, Blaise Aguera y Arcas, the head of Google's AI group in Seattle, argued that statistics do amount to understanding, citing a few exchanges with LaMDA as evidence.
It did not take long for critics to point out weaknesses in that claim. If anything, we expect PaLM to fuel the ongoing debate among AI professionals and technical decision-makers. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership. | Unknown | Unknown | null | null | null | null | null | null |
|
news | GraphQL is now available on Supabase | GraphQL support is now in general availability on the Supabase platform via our open source PostgreSQL extension, pg_graphql (beta). | https://supabase.com/blog/2022/03/29/graphql-now-available | 2022-03-29T17:48:16Z | GraphQL support is now in general availability on the Supabase platform via our open source PostgreSQL extension, `pg_graphql` (beta). pg_graphql enables you to query existing PostgreSQL databases using GraphQL, either from within SQL or over HTTP.

From SQL:

select graphql.resolve($$
  {
    accountCollection(first: 1) {
      edges {
        node {
          id
          firstName
          address {
            countryCode
          }
        }
      }
    }
  }
$$);

or over HTTP:

curl -X POST https://<PROJECT_REF>.supabase.co/graphql/v1 \
  -H 'apiKey: <API_KEY>' \
  -H 'Content-Type: application/json' \
  --data-raw '
  {
    "query":"{ accountCollection(first: 3) { edges { node { id } } } }"
  }'

Schema Reflection
GraphQL types and fields are reflected from the SQL schema:
- Tables become types
- Columns become fields
- Foreign keys become relations

For example:

create table "Account"(
  "id" serial primary key,
  "email" varchar(255) not null,
  "createdAt" timestamp not null,
  "updatedAt" timestamp not null
);

-- Rebuild the GraphQL Schema Cache
select graphql.rebuild_schema();

Translates to the GraphQL base type

type Account {
  id: Int!
  email: String!
  createdAt: DateTime!
  updatedAt: DateTime!
}

And exposes bulk CRUD operations on the Query and Mutation types, complete with relay style keyset pagination, filters, and ordering and (optional) name inflection.

type Query {
  accountCollection(
    first: Int
    last: Int
    before: Cursor
    after: Cursor
    filter: AccountFilter
    orderBy: [AccountOrderBy!]
  ): AccountConnection
}

type Mutation {
  insertIntoAccountCollection(
    objects: [AccountInsertInput!]!
  ): AccountInsertResponse

  updateAccountCollection(
    set: AccountUpdateInput!
    filter: AccountFilter
    atMost: Int! = 1
  ): AccountUpdateResponse!

  deleteFromAccountCollection(
    filter: AccountFilter
    atMost: Int! = 1
  ): AccountDeleteResponse!
}

For a complete example with relationships, check out the reflection docs.

Security
An advantage to embedding GraphQL directly in the database is that we can lean on PostgreSQL's built-in primitives for authentication and authorization.

Authentication
The GraphQL types exposed by pg_graphql are filtered according to the SQL role's INSERT/UPDATE/DELETE permissions. At Supabase, each API request is resolved in the database using the role in the request's JWT. Anonymous users receive the anon role, and logged in users get the authenticated role. In either case, pg_graphql resolves requests according to the SQL permissions. The introspection schema is similarly filtered to limit exposed types and fields to those that the user has permission to access. That means we can serve multiple GraphQL schemas for users of differing privilege levels from a single endpoint! Another nice side effect of making PostgreSQL do the heavy lifting is that GraphQL queries respect your existing row level security policies right out-of-the-box. No additional configuration required.
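As an illustration of how the JWT-based authorization above plays out for an application, here is a short Python sketch that posts the same accountCollection query to the project's GraphQL endpoint. The project ref, API key, and user JWT are placeholders, and attaching the user's access token as an Authorization bearer header (so the request resolves under the authenticated role and existing RLS policies) follows standard Supabase API usage rather than anything specific to this post.

# Hypothetical values; only the endpoint shape and headers mirror the examples above.
import requests

PROJECT_REF = "your-project-ref"
API_KEY = "your-anon-key"
USER_JWT = "access-token-from-supabase-auth"  # omit the Authorization header to query as the anon role

response = requests.post(
    f"https://{PROJECT_REF}.supabase.co/graphql/v1",
    headers={
        "apiKey": API_KEY,
        "Authorization": f"Bearer {USER_JWT}",  # assumed: carries the authenticated role's JWT
        "Content-Type": "application/json",
    },
    json={"query": "{ accountCollection(first: 3) { edges { node { id email } } } }"},
)
print(response.json())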
No additional configuration required.PerformanceEach free tier database on the Supabase platform runs on a dedicated AWS t4g.micro instance with 2 vCPUs and 1 GB of memory.To squeeze the most out of that limited hardware we had to make a few significant optimizations:GraphQL queries are always transpiled into exactly one SQL queryThe SQL queries select and aggregate requested data into the shape of the GraphQL JSON response.In addition to solving the N+1 query problem, a common issue with GraphQL resolvers,GraphQL queries requiring multiple joins typically produce significantly less IO due to reduced data duplication.For example, when selecting all comments for a blog post:1select2 blog_posts.title,3 comments.body as comment_body4from5 blog_posts6join7 comments on blog_posts.id = comments.blog_post_id8a SQL response would duplicate all data from the blog_posts table (title).1| title | comment_body |2| ---------- | ------------------------------ |3| F1sRt P0$T | this guy gets it! |4| F1sRt P0$T | you should re-write it in rust |5| F1sRt P0$T | 10% off daily vitamin http:... |6Compared to the equivalent GraphQL response.1{2"blogPostCollection": {3"edges": {4"node":5"title": "F1sRt P0$T"6"commentCollection": {7"edges": [8"node": {9"body": "this guy gets it!"10 },11"node": {12"body": "you should re-write it in rust"13 },14"node": {15"body": "10% off daily vitamin http:..."16 }17 ]18 }19 }20 }21 }22}23Which has no duplication of data.The difference in payload size is negligible in this case, but as the number of 1-to-many joins grows, data duplication in the SQL response grows geometrically.Queries are cached as prepared statementsAfter a GraphQL query is transpiled to SQL, it is added to the prepared statement cache so subsequent requests with the same structure (think pagination) can skip the transpilation step.Using prepared statements also allows PostgreSQL to skip the overhead of computing a query plan. 
For small, on-index, queries,the query planning step can take several times as long as the query's execution time, so the saving is significant at scale.All operations are bulkFinally, all reflected query and mutation fields support bulk operations to nudge users towards consuming the API efficiently.Batching similar operations reduces network round-trips and time spent in the database.ResultAs a result of these optimizations, the throughput of a hello world equivalent query on Supabase free-tier hardware is:377.4 requests/second through the API (mean)656.2 queries/second through SQL (single connection, mean)Getting Started GraphQL (Beta) is only available on Supabase projects created after 28th March 2022 and self-hosted setups.To enable GraphQL in your Supabase instance, enable pg_graphql from the dashboard.Or create the extension in your database1create extension pg_graphql;2And we're done!The GraphQL endpoint is available at: https://<project_ref>.supabase.co/graphql/v1Example app: Build a HN clone with Postgres and GraphQLWe're excited to have worked with The Guild to show you how to usepg_graphqland their tools to build a HackerNews clone.The demo application showcases:CRUD (Query + Mutation Operations).Data is fetched from the GraphQL layer auto-generated via pg_graphql.Cursor Based Pagination.pg_graphql generates standardized pagination types and fields as defined by the GraphQL Cursor Connections Specification.Authorization / RLS.GraphQL requests made include Supabase authorization headers so that Row Level Security on the Postgres layer ensures that viewers can only access what they are allowed to and authenticated users can only update what they should.Code generation.Introspect your GraphQL schema and operations to generates the types for full backend to frontend type-safety.Postgres Triggers and Functions.Recalculate the feed scoring each time someone votes.Supabase UI.Use Auth widget straight out the box to handle logins and access tokens.Now instead of using the Supabase PostgREST API to query your database ...1// using Supabase PostgREST23const { data, error } = await supabase4 .from('profile')5 .select('id, username, bio, avatarUrl, website')6... all data fetching and updates are done using the same GraphQL operations you know and love! 1// using GraphQL23query ProfilesQuery {4 profileCollection {5 edges {6 node {7 id8 username9 bio10 avatarUrl11 website12 }13 }14 }15}16 Get the code on GitHub here: github.com/supabase-community/supabase-graphql-exampleSupabase + The GuildThis is just the start of what we hope to be a close collaboration with the Guild, whose expertise of the GraphQL ecosystem will guide the development of Supabase's GraphQL features.The Guild and Supabase share a similar approach to open source - we both favor collaboration and composability, making collaboration easy and productive.Be sure to visit The Guild and follow them to stay informed of the latest developments in GraphQL.Limitations & RoadmapOur first general availability release of pg_graphql supports:Full CRUD on table columns with scalar typesRead only support for array typesExtending types with computed fieldsConfiguration with SQL commentsIn the near term, we plan to fully support array and json/b types. Longer term, we intend to support views and custom mutations from user defined functions.Didn't see the feature you're interested in? Let us know | Content Synthesis/Process Automation | Unknown | null | null | null | null | null | null |
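For readers who prefer Python to curl, the HTTP call shown in the record above can be reproduced with the requests library. This is a hedged sketch rather than code from the Supabase post: the project ref, API key, and collection name are placeholders carried over from the article's own examples.

# Minimal sketch of querying a pg_graphql endpoint over HTTP from Python.
# <PROJECT_REF> and <API_KEY> are placeholders, as in the curl example above.
import requests

url = "https://<PROJECT_REF>.supabase.co/graphql/v1"
headers = {
    "apiKey": "<API_KEY>",
    "Content-Type": "application/json",
}
query = "{ accountCollection(first: 3) { edges { node { id } } } }"

# The endpoint expects a JSON body with a "query" field, mirroring the curl call.
response = requests.post(url, headers=headers, json={"query": query})
response.raise_for_status()
print(response.json())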
||
news | Google AI | Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance | Posted by Sharan Narang and Aakanksha Chowdhery, Software Engineers, Google Research In recent years, large neural networks trained for l... | http://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html | http://2.bp.blogspot.com/-qRz-hnwUdY4/WulXSQ6Rv4I/AAAAAAAATvQ/shk7KsphA0c3E3nUMsDVASqYaH0PhLPNwCK4BGAYYCw/s1600/GoogleAI_logo_horizontal_color_rgb.png | 2022-04-04T16:01:00Z | Posted by Sharan Narang and Aakanksha Chowdhery, Software Engineers, Google Research In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating. More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved state-of-the-art few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains in understanding the capabilities that emerge with few-shot learning as we push the limits of model scale. Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases. As the scale of the model increases, the performance improves across tasks while also unlocking new capabilities.Training a 540-Billion Parameter Language Model with PathwaysPaLM demonstrates the first large-scale use of the Pathways system to scale training to 6144 chips, the largest TPU-based system configuration used for training to date. The training is scaled using data parallelism at the Pod level across two Cloud TPU v4 Pods, while using standard data and model parallelism within each Pod. This is a significant increase in scale compared to most previous LLMs, which were either trained on a single TPU v3 Pod (e.g., GLaM, LaMDA), used pipeline parallelism to scale to 2240 A100 GPUs across GPU clusters (Megatron-Turing NLG) or used multiple TPU v3 Pods (Gopher) with a maximum scale of 4096 TPU v3 chips. PaLM achieves a training efficiency of 57.8% hardware FLOPs utilization, the highest yet achieved for LLMs at this scale. This is due to a combination of the parallelism strategy and a reformulation of the Transformer block that allows for attention and feedforward layers to be computed in parallel, enabling speedups from TPU compiler optimizations. PaLM was trained using a combination of English and multilingual datasets that include high-quality web documents, books, Wikipedia, conversations, and GitHub code. 
We also created a “lossless” vocabulary that preserves all whitespace (especially important for code), splits out-of-vocabulary Unicode characters into bytes, and splits numbers into individual tokens, one for each digit. Breakthrough Capabilities on Language, Reasoning, and Code TasksPaLM shows breakthrough capabilities on numerous very difficult tasks. We highlight a few examples for language understanding and generation, reasoning, and code-related tasks below. Language Understanding and GenerationWe evaluated PaLM on 29 widely-used English natural language processing (NLP) tasks. PaLM 540B surpassed few-shot performance of prior large models, such as GLaM, GPT-3, Megatron-Turing NLG, Gopher, Chinchilla, and LaMDA, on 28 of 29 of tasks that span question-answering tasks (open-domain closed-book variant), cloze and sentence-completion tasks, Winograd-style tasks, in-context reading comprehension tasks, common-sense reasoning tasks, SuperGLUE tasks, and natural language inference tasks.PaLM 540B performance improvement over prior state-of-the-art (SOTA) results on 29 English-based NLP tasks.In addition to English NLP tasks, PaLM also shows strong performance on multilingual NLP benchmarks, including translation, even though only 22% of the training corpus is non-English. We also probe emerging and future capabilities of PaLM on the Beyond the Imitation Game Benchmark (BIG-bench), a recently released suite of more than 150 new language modeling tasks, and find that PaLM achieves breakthrough performance. We compare the performance of PaLM to Gopher and Chinchilla, averaged across a common subset of 58 of these tasks. Interestingly, we note that PaLM’s performance as a function of scale follows a log-linear behavior similar to prior models, suggesting that performance improvements from scale have not yet plateaued. PaLM 540B 5-shot also does better than the average performance of people asked to solve the same tasks. Scaling behavior of PaLM on a subset of 58 BIG-bench tasks. PaLM demonstrates impressive natural language understanding and generation capabilities on several BIG-bench tasks. For example, the model can distinguish cause and effect, understand conceptual combinations in appropriate contexts, and even guess the movie from an emoji. Examples that showcase PaLM 540B 1-shot performance on BIG-bench tasks: labeling cause and effect, conceptual understanding, guessing movies from emoji, and finding synonyms and counterfactuals.ReasoningBy combining model scale with chain-of-thought prompting, PaLM shows breakthrough capabilities on reasoning tasks that require multi-step arithmetic or common-sense reasoning. Prior LLMs, like Gopher, saw less benefit from model scale in improving performance. Standard prompting versus chain-of-thought prompting for an example grade-school math problem. Chain-of-thought prompting decomposes the prompt for a multi-step reasoning problem into intermediate steps (highlighted in yellow), similar to how a person would approach it.We observed strong performance from PaLM 540B combined with chain-of-thought prompting on three arithmetic datasets and two commonsense reasoning datasets. For example, with 8-shot prompting, PaLM solves 58% of the problems in GSM8K, a benchmark of thousands of challenging grade school level math questions, outperforming the prior top score of 55% achieved by fine-tuning the GPT-3 175B model with a training set of 7500 problems and combining it with an external calculator and verifier. 
This new score is especially interesting, as it approaches the 60% average of problems solved by 9-12 year olds, who are the target audience for the question set. We suspect that separate encoding of digits in the PaLM vocabulary helps enable these performance improvements. Remarkably, PaLM can even generate explicit explanations for scenarios that require a complex combination of multi-step logical inference, world knowledge, and deep language understanding. For example, it can provide high quality explanations for novel jokes not found on the web. PaLM explains an original joke with two-shot prompts.Code GenerationLLMs have also been shown [1, 2, 3, 4] to generalize well to coding tasks, such as writing code given a natural language description (text-to-code), translating code from one language to another, and fixing compilation errors (code-to-code). PaLM 540B shows strong performance across coding tasks and natural language tasks in a single model, even though it has only 5% code in the pre-training dataset. Its few-shot performance is especially remarkable because it is on par with the fine-tuned Codex 12B while using 50 times less Python code for training. This result reinforces earlier findings that larger models can be more sample efficient than smaller models because they better transfer learning from other programming languages and natural language data. Examples of a fine-tuned PaLM 540B model on text-to-code tasks, such as GSM8K-Python and HumanEval, and code-to-code tasks, such as Transcoder.We also see a further increase in performance by fine-tuning PaLM on a Python-only code dataset, which we refer to as PaLM-Coder. For an example code repair task called DeepFix, where the objective is to modify initially broken C programs until they compile successfully, PaLM-Coder 540B demonstrates impressive performance, achieving a compile rate of 82.1%, which outperforms the prior 71.7% state of the art. This opens up opportunities for fixing more complex errors that arise during software development. An example from the DeepFix Code Repair task. The fine-tuned PaLM-Coder 540B fixes compilation errors (left, in red) to a version of code that compiles (right).Ethical ConsiderationsRecent research has highlighted various potential risks associated with LLMs trained on web text. It is crucial to analyze and document such potential undesirable risks through transparent artifacts such as model cards and datasheets, which also include information on intended use and testing. To this end, our paper provides a datasheet, model card and Responsible AI benchmark results, and it reports thorough analyses of the dataset and model outputs for biases and risks. While the analysis helps outline some potential risks of the model, domain- and task-specific analysis is essential to truly calibrate, contextualize, and mitigate possible harms. Further understanding of risks and benefits of these models is a topic of ongoing research, together with developing scalable solutions that can put guardrails against malicious uses of language models. Conclusion and Future WorkPaLM demonstrates the scaling capability of the Pathways system to thousands of accelerator chips across two TPU v4 Pods by training a 540-billion parameter model efficiently with a well-studied, well-established recipe of a dense decoder-only Transformer model. Pushing the limits of model scale enables breakthrough few-shot performance of PaLM across a variety of natural language processing, reasoning, and code tasks. 
PaLM paves the way for even more capable models by combining the scaling capabilities with novel architectural choices and training schemes, and brings us closer to the Pathways vision: “Enable a single AI system to generalize across thousands or millions of tasks, to understand different types of data, and to do so with remarkable efficiency."AcknowledgementsPaLM is the result of a large, collaborative effort by many teams within Google Research and across Alphabet. We’d like to thank the entire PaLM team for their contributions: Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Guy-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Mishra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, and Jason Wei. PaLM builds on top of work by many, many teams at Google and we would especially like to recognize the T5X team, the Pathways infrastructure team, the JAX team, the Flaxformer team, the XLA team, the Plaque team, the Borg team, and the Datacenter networking infrastructure team. We’d like to thank our co-authors on this blog post, Alexander Spiridonov and Maysam Moussalem, as well as Josh Newlan and Tom Small for the images and animations in this blog post. Finally, we would like to thank our advisors for the project: Noah Fiedel, Slav Petrov, Jeff Dean, Douglas Eck, and Kathy Meier-Hellstern. | Content Creation/Content Synthesis/Information Retrieval Or Search | Unknown | null | null | null | null | null | null |
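The chain-of-thought prompting technique described in the record above is, mechanically, a way of formatting few-shot exemplars so that each one shows intermediate reasoning before the final answer. The sketch below illustrates that prompt format in plain Python. It is illustrative only: the exemplar text is a commonly used grade-school example rather than Google's exact prompt, and no PaLM API is assumed (PaLM is not publicly available).

# Illustrative only: builds a chain-of-thought style few-shot prompt as a plain string.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str, exemplars: list[str]) -> str:
    """Concatenate worked exemplars (with reasoning steps) ahead of the new question."""
    shots = "\n".join(exemplars)
    return f"{shots}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A bakery made 23 muffins and sold 17. How many are left?",
    exemplars=[COT_EXEMPLAR],
)
print(prompt)  # This string would be sent to a large language model as the prompt.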
news | Lily Roberts | Machine learning techniques can speed up glacier modeling by 1,000 times | A novel glacier model has been developed which can simulate ice dynamics and ice interaction with the climate up to a thousand times faster than previous models. This model can be used to predict the evolution of glaciers and ice sheets under different scenarios. Since meltwater from glaciers and ice sheets is a major component of sea level rise, models like this are a valuable tool to assess their potential future contribution. | https://phys.org/news/2022-03-machine-techniques-glacier.html | 2022-03-28T14:33:48Z | A novel glacier model has been developed which can simulate ice dynamics and ice interaction with the climate up to a thousand times faster than previous models. This model can be used to predict the evolution of glaciers and ice sheets under different scenarios. Since meltwater from glaciers and ice sheets is a major component of sea level rise, models like this are a valuable tool to assess their potential future contribution.The new model uses a machine learning approach which makes glacier modeling much quicker whilst maintaining high levels of fidelity (the degree to which a simulation or model accurately reproduces the object or process it is designed to represent). As a result, more model simulations with different inputs and assumptions can be conducted, investigating a wider range of questions.The state-of-the-art Instructed Glacier Model is highly efficient compared to well-established simulation tools. It implements an artificial neural network, which is a computer system that mimics the neural networks found in our brains. They "trained" the neural network by inputting data from ice sheet models so that it could emulate ice dynamics. This training process is called machine learning, and it is considered part of the field of artificial intelligence. Modeling methods prior to AI required a great deal of human input, supervision and decision making, whereas with machine learning, the computer system navigates the human process of updating the model on its own.The lead developer, Guillaume Jouvet, a senior researcher at the University of Zurich, explained that "[there is] a new trend for machine learning to learn from data generated by physical models." Physics-based models (also termed physical models) have long been used to understand the physical processes occurring in the Earth system, without relying on any artificial intelligence.Physical modeling of ice sheets and glaciers at high spatial resolutions is an enormous challenge even today. Over the past two decades, exceptional efforts have been made to develop models to simulate ice flow and its associated physical processes, as well as its interaction with the climate. Adding complexity to models increases the computational cost of the simulation, so most models often use approximations to the Stokes equations, which most faithfully describe ice flow, entailing a compromise between accuracy and computational cost. Jouvet describes that the main motivation behind transitioning to machine learning is "in a way, you are shortcutting your physical modeling, making the gain computationally way cheaper."GlacierHub spoke with Laura Sandoval, of the University of Colorado, Boulder, who led a review into artificial intelligence in the geosciences field. 
"In the past decade the AI [and] machine learning activities have increased tremendously in the field of geosciences, [but] most AI efforts in geoscientific research groups are still at the infancy stage," she stated. "Currently, researchers are actively exploring many AI models and prototyping solutions for the challenging problems within their domains." However, in comparison to the traditional physics-based models, there have been no big breakthroughs in AI and machine learning products yet. Sandoval added "the implementation of AI is still underway."The Instructed Glacier Model substitutes the most computationally demanding model component by using a neural network trained from large datasets. Taking advantage of the large amount of modeling data available to train the neural network delivers high fidelity solutions at much lower computational cost. It can predict ice flow from given variables and simplified processes to be used in global glacier modeling as well as researching past glaciated environments."The most expensive part was computing the dynamics because it involved heavy physics, [but] machine learning accelerated this part of the model. The result is that we can model the glacier to the same accuracy much quicker than before. We can use this to explore many more parameters and [conduct] more refined simulations," said Jouvet. The research on the model took Jouvet and his team over a year. He added that "I had to learn this new techniqueall the tools I'm using are really new."The researchers are pleased that they were able to have the machine learning up and running and Jouvet will now look forward to using his model to reconstruct the evolution of glaciers in the Alps over the last glacial cycle of 100,000 years. "The gain for this approach is you speed up the modeling so you can afford to do long timescales. [Where] traditional models may take several weeks, it can now take an hour."Implementation of AI and machine learning does not come without its challenges and skepticism, similar to that seen in high-profile cases in biology and engineering. Sandoval explains "Ethics is truly one of the major concerns. However, since we are still at an early prototyping stage, the current main arguments against AI are uncertainty, explainability and reproducibility." Ethical issues surrounding AI include the loss of human jobs, the unequal distribution of wealth created by AI machines, the security of AI data and the capacity for malicious intent. As implementation of AI increases, more concerns are emerging, such as the environmental issues of using large amounts of energy to run computer models. Similar arguments have been widely seen against other cyber services like cryptocurrency and electronic trading.Scientists have been studying big questions about our climate and Earth system for many years and have accumulated a large amount of data which will be used to train AI models. "Given the recent huge investments in AI from both the public and private sectors, we expect to see that the relevant application on data-centric AI research in geosciences will bloom in the next few years," says Sandoval.Despite the transition, not every geoscientific problem can be solved by AI, and some questions are not well suited to classic machine learning techniques. "Some Earth phenomena are extreme events and their patterns cannot be learnt from historical data. 
Finding a suitable question is the key first step to develop a successful AI application," Sandoval concludes. The novel Instructed Glacier Model is a successful example of how new techniques for glacier modeling may replace the traditionally known physics-based approaches. Many uncertainties surrounding artificial intelligence still remain and whether large-scale progress in the field will be seen is a question of the coming decade. For now, both old and new techniques will be implemented in order to provide answers to some of our greatest questions regarding ice sheets and glaciers. More information: Ziheng Sun et al, A review of Earth Artificial Intelligence, Computers & Geosciences (2022). DOI: 10.1016/j.cageo.2022.105034. This story is republished courtesy of Earth Institute, Columbia University http://blogs.ei.columbia.edu. | Prediction/Content Synthesis | Life, Physical, and Social Science | null | null | null | null | null | null
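The idea Jouvet describes, training a neural network on data generated by a physical model so it can stand in for the expensive simulation, can be illustrated with a toy surrogate-model sketch. This is not the Instructed Glacier Model: the "physics model" below is just a cheap nonlinear function, the data are synthetic, and PyTorch is assumed to be installed.

import torch
from torch import nn

# Stand-in for an expensive physics model: here just a nonlinear function of the inputs.
# In the real setting, x would be fields such as ice thickness and surface slope,
# and y the ice-flow quantities produced by a physics-based solver.
def fake_physics_model(x: torch.Tensor) -> torch.Tensor:
    return torch.sin(x[:, :1]) + 0.5 * x[:, 1:2] ** 2

x_train = torch.rand(2048, 2)          # synthetic "simulation inputs"
y_train = fake_physics_model(x_train)  # outputs generated by the physics stand-in

# Small fully connected emulator network trained to mimic the physics model.
emulator = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(emulator(x_train), y_train)
    loss.backward()
    optimizer.step()

# Once trained, the emulator approximates the physics model at a fraction of the cost.
x_new = torch.rand(4, 2)
print(emulator(x_new))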
|
news | Andrew W. Moore, Google Cloud, Andrew W. Moore, Google Cloud https://www.forbes.com/sites/googlecloud/people/andrewwmoore/ | Conversational AI’s Moment Is Now | Conversational AI has a huge opportunity to impact technology, society and business. | https://www.forbes.com/sites/googlecloud/2022/03/21/conversational-ais-moment-is-now/ | 2022-03-21T16:00:02Z | Conversational AI has a huge opportunity to impact technology, society and businessConversational AI is an emerging inflection point in human-computer interactionsgettyHow easily we interact with computers strongly informs how likely technology is to disrupt a given aspect of life or business. When we needed to punch code into a command line just to load a program, computers were far less user-friendly. But the mouse and graphical interfaces made things much easier, and computers blossomed from niche products into the mainstream. Touch took things further still, helping create a world where most people carry a computer in their pocket while increasingly also wearing one on their wrist. Whats the next frontier that will further evolve human-computer relationships? Conversational AI. You might be thinking that voice interfaces are nothing newafter all, smartphone assistants that you can talk to have been around more than a decade. But youve probably noticed those assistants have become better listeners, better conversationalists, and overall much more usefuland thats because a range of technological breakthroughs have occurred behind the scenes, not only improving smartphone experiences but also inserting AI-powered voice technologies into a range of new devices and use cases. For example, Google AI researchers opened-sourced BERT, a technique for natural language processing that makes voice models more context-aware and easier and faster to train. DeepMind, one of Googles Alphabet siblings, also released WaveNet, which has helped create significantly more natural-sounding synthetic voices by replacing models based on phonetics with ones that use waveforms to predict which sounds likely follow one another. Both technologies are now deeply embedded in Google Cloud services such as Text-to-Speech, and theyre just a few among many examples of advances that help computers not only interact with us more naturally, but also act on our requests more effectively.This means our interactions with computers increasingly resemble our interactions with humans. Conversational AI not only understands and naturally responds to our statements but can also be connected to other AI technologies, such as search or vision, to handle tasks wed otherwise only delegate to a trusted, qualified person. Soon, most human-computer interactions may not involve completing a series of set actions clicking or swiping our way along a well-defined user journey so much as just talking to machines and expecting them to keep up, even as the conversation changes course or topic. In this article, well look at the business opportunities Conversational AI offers, and how you can make sure your company is prepared for this emerging inflection point. 
Related: New Technology: The Projected Total Economic Impact Of Google Cloud Contact Center AIHearing changes across the Conversational AI landscapeConversational AI is becoming a force across a range of technology categories and use cases, acting as a concierge who speeds up or automates aspects of our personal and professional lives.Driving this are two dimensions to the way peopleyour customerswant to communicate with a business or public service: communication modes, and communication goals. The concept of communication goals makes clear that speech acts, and thus the ways we verbally ask things of machines, are not all the same:Information kiosk-type queries are the simplest interaction: one question (like what time do you close?) and one simple answer.Information-seeking queries are more complex, relying on a combination of speech understanding and traditional search engines operating over potentially billions of sources of information. A driver might ask their car navigation system, whats there to do in Breezewood or someone cooking might ask their recipe app, Can I substitute soy for salt?Requests for help are still more complicatedthings like asking to change the payment on a flight to frequent flier models or why your bill charged for 20 gigabytes even though you think you used only two. In these instances, several rounds of back-and-forth may be necessary. A short, respectful conversation with a problem-solving AI agent can be less stressful than waiting on hold or repeating information as youre transferred from a person in one department to a person in another. In my view, whether your business can do this well will be a significant predictor of success over the next decade. Full concierge interactions are the most complex, with the AI going deep to solve complicated problems like why is my bicycle crank clicking? or what kind of cat should I get? In these scenarios, the AI doesnt start with access to the solutionit uses all kinds of business objects (i.e., over an Enterprise Knowledge Graph) to infer bespoke, even out-of-the-box solutions to queries. We are a long way down this path in many domains, and although I am unsure how fast all the supporting technology will progress, I am confident that excellence in requests for help queries is a major stepping stone. Turning to communication modes, it is important to envision all the interaction models Conversational AI can encompass, and to be prepared to operate across all of them:TextingTalking on the phoneTalking within an appTalking with an applianceVirtual video conferencesWhether the use case involves interacting exclusively with an AI via an app, handoffs between an AI agent and a human agent over the phone, or transcribing video chats to extract actionable insights, the potential for improved service is significanta fact we can see in the numerousforecasts for rapid growth in the Conversational AI market. How your business can join the conversation To meet customers where they are, your company must be available 24/7 on every channel. Youll need multiple interactive modalities operating concurrently to tie it all together: e.g., talking to the customer, seeing what the customer is looking at, letting the customer view options and make their selections, all in a seamless conversation.Building the AI backbone for these kinds of interactions is expensive, difficult, and time-consuming. 
One way to accelerate your progress is using open-sourced solutions, such as the aforementioned BERT, or vendor building blocks, such as Google Cloud products like Dialogflow, our Speech-to-Text API, or our Natural Language AI API. Whatever your IT stack looks like, it needs to be both sophisticated enough to meet the anytime-anywhere expectations of todays customers, and agile enough to adapt to whatever disruptions may occur tomorrow. This means capabilities like:Automation & operational efficiency, e.g.,AI-powered intelligence for contact deflection, predictive routing, agent productivity, etc. Cohesive experiences across multiple smart devices, e.g.,channel blending that supports multimodal engagement across apps, digital touchpoints, and modalities (mobile applications, website, phone, SMS over chat, etc.), all with preserved context between touchpoints. Data unification, management, and analytics, e.g., unified customer data on a single CRM or system of record.Security, scale, reliability and call quality, e.g., services auto-scale as needed, protect personally identifiable information and other sensitive data, and are globally available with acceptably low latency across different devices and regions.You can also investigate solutions that package these technologies for defined use cases. For example, in the case of reimagining contact center operations with the power of AI, Google Cloud offers Contact Center AI, a solution that provides a unified experience for offering virtual agents, live agent assist, and conversation analytics. At Enterprise Connect, we recently announced the Contact Center as a Service (CCaaS) ability of this platform, Contact Center AI Platform. Conversational AI is a foundational group of technologies that will continue to change how we interact with computers for years to comeso now is the time to make sure your company can join the chat!Related: Reaching more customers with Contact Center AI: 2021 Wrap-up | Digital Assistance/Process Automation | Management/Sales and Related/Office and Administrative Support | null | null | null | null | null | null |
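As a concrete illustration of one of the building blocks named in the record above, here is a minimal sketch of transcribing a short audio clip with Google Cloud's Speech-to-Text Python client. It assumes the google-cloud-speech package is installed and application credentials are configured; the bucket URI is a placeholder, and the settings shown are generic examples rather than recommendations from the article.

from google.cloud import speech

client = speech.SpeechClient()

# gs://your-bucket/short-clip.wav is a placeholder; point it at a real audio object.
audio = speech.RecognitionAudio(uri="gs://your-bucket/short-clip.wav")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Synchronous recognition is suited to short clips (roughly a minute or less).
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)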
|
news | Jim McGregor, Contributor, Jim McGregor, Contributor https://www.forbes.com/sites/tiriasresearch/people/jimmcgregor/ | Nvidia’s GTC Provides A Glimpse At A World Full Of Autonomous Machines | 2022 GTC will feature just shy of 1,000 sessions, including keynotes, technical tutorials, panel discussions, and roundtable discussions with technical experts. There will also be demonstrations, vendor meetings, and other special events. | https://www.forbes.com/sites/tiriasresearch/2022/03/18/nvidias-gtc-provides-a-glimpse-at-a-world-full-of-autonomous-machines/ | 2022-03-18T16:08:02Z | A group of Moxi caregiver assistant autonomous robots from Diligent RoboticsDiligent RoboticsThere are a few must-attend technology conferences, even when they are held virtually. One of those is Nvidias GPU Technology Conference more commonly known as GTC. While holding the conference virtually does limit the interaction with the broad array of attendees from industry, government, and academia, it still provides an invaluable glimpse into advancements in accelerated processing technology, graphics, and artificial intelligence (AI).According to the agenda, 2022 GTC will feature just shy of 1,000 sessions, including keynotes, technical tutorials, panel discussions, and roundtable discussions with technical experts. There will also be demonstrations, vendor meetings, and other special events. One of my favorite items at GTC is the poster gallery, which highlights some advancements in technology by both academia and industry that didnt meet the criteria for a full session. Sessions begin on this Sunday, March 20th, and continue through Friday, March 25th. Always a highlight of the event, Nvidia CEO Jensen Huangs keynote takes place on Tuesday, March 22nd, at 8:00 AM PDT (GMT-7). Among the topics covered at the 2022 GTC are 3D design, accelerated computing, AI, autonomous machines, computer vision, the data center, graphics, High-Performance Computing (HPC), manufacturing, signal processing, video streaming, virtual and augmented reality (AR/VR/XR), digital simulation, and IoT, which covers pretty much everything else.An Nvidia autonomous robotNvidiaGTC also serves as a launching platform for new Nvidia technology. We, Tirias Research, expect to see some enhancements to the GPU lineup for PCs and servers, as well as for new hardware and software for automotive, data centers, and robotics. And no tech industry event would be complete without announcements around AI and the metaverse, or in Nvidias case it would be Omniverse, the companys developer environment/metaverse for creating other metaverses. Last year, GTC featured a strong robotics focus around the companys Jetson platform. I expect more of the same this year as Nvidia pushes the capabilities of AI in robotics and autonomous machines. The company continues to enhance its silicon, system modules, and tools, while creating additional AI models and application frameworks for developers and data scientists to build upon. Covariant autonomous industrial robotCovariantThere are some interesting sessions are on how robotics/autonomous machines are being used in commercial applications. Diligent Robotics will be discussing its development of reliable autonomous robots for busy human environments like hospitals, and Covariant will discuss the development of autonomous robots for large scale environments like manufacturing. 
In addition, Nvidia will be discussing autonomous machines in its Jetson Developer Day on Monday, March 21, and through a number of sessions throughout GTC. One session is focused on using the Isaac Cortex decision framework in conjunction with the Isaac Sim, which uses Omniverse, to develop, train, and test robots in an accurate virtual environment. With a push from COVID and growing labor and logistics issues, many industries have begun evaluating how to leverage autonomous machines enabled by AI and advanced robotics. Tirias Research believes that with the advancements in technology, the industry has reached an inflection point that will drive change in almost every industry segment, from transportation and manufacturing to healthcare and foodservice. Over the next decade, industries and society as a whole will be drastically changed by autonomous machines. Details about this year's GTC are available on the Nvidia website. The author and members of the Tirias Research staff do not hold equity positions in any of the companies mentioned. Tirias Research tracks and consults for companies throughout the electronics ecosystem, from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for Nvidia, as well as other companies and research organizations focused on AI and autonomous machines. | Robotic Automation/Vehicular Automation/Decision Making | Unknown | null | null | null | null | null | null
|
news | Benjamin Laker, Contributor, Benjamin Laker, Contributor https://www.forbes.com/sites/benjaminlaker/ | How Leaders Can Use Technology To Motivate Their Workers | It is well known that one unscalable element of business operations is human capital. People have a limit. Pushing them to that limit led to things like The Great Resignation. Technology is an answer for overworked humans, and a way to scale a business while accepting the reality of human capacity | https://www.forbes.com/sites/benjaminlaker/2022/04/07/how-leaders-can-use-technology-to-motivate-their-workers/ | 2022-04-07T16:01:13Z | The aim of technology is to simplify business processes, alleviating the burden on human workers and implementing systems that maximize the benefits of human-machine interplay. This is no mean feat, and one that leaders in tech have worked hard to achieve for decades. Pavel Pavlov CEOHyperAspectWith accelerated progress and applications for artificial intelligence (AI) and machine learning (ML), there has at long last been a breakthrough. What was once the stuff of sci-fi has now become business as usual, with data-savvy algorithms and even roboticized equipment running around the clock to amplify efficiencies. Leaders who are thoughtfully integrating high-tech solutions to benefit workers are the ones with an eye to the future.The Unscalable Element of BusinessIt is well known that one unscalable element of business operations is human capital. People have a limit. Pushing them to that limit led to things like The Great Resignation. Technology is an answer for overworked humans, and a way to scale a business while accepting the reality of human capacity. Cognitive Technology to Empower Human AchievementOne company, HyperAspect, has provided a platform to facilitate this, synthesizing intelligence and activating something called cognitive technology to empower human achievement. Every business is living in a world of big data, and vast datasets long ago outpaced the ability of human staff to wrangle, and make sense from that data.Its clearly a challenging task, but the value many companies can create and sustain relies heavily on what people learn from data. HyperAspect solves for this by democratizing data ingestion and analytics. Through the power of AI, ML and algorithms, the company has created a system that implements complex conditional logic within data flows, enabling humans to develop powerful applications and easy-to-view dashboards. There are numerous applications in various industries, and here are some examples.Healthcare The inefficiencies of healthcare are well-known, and few fields are more data-rich. From patient records to equipment tracking to operational trends, visible, accurate data can make a considerable difference among todays healthcare administrators and professionals. Monitoring treatment effectiveness and improving productivity are just two of the benefits of clean and efficient data practices in healthcare.Finance Venture capitalists have been busy over the last couple of years, with record-breaking numbers of startups and an influx of capital into busy markets to launch innovative products. VCs and financial institutions alike face the prevalent challenge of fraud, as well as the logistics of moving money and maintaining compliance. 
Cognitive technology that manages data has the power to streamline processes, making it easier to manage funding requests and process monies.Retail Its never been more critical for retailers to understand a customers wants and needs, such as through voice of customer (VOC) methodologies. Getting real-time feedback is the only way to respond fast enough to consumer demands and desires. Gathering this information, then organizing it so that its useful, is the task of AI, ML and algorithms.Legal Companies worldwide face legal guidelines, standards, enforcement and more in various contexts. One of those areas is in intellectual property: With the aforementioned uptick in new products and services on the market, there is also an increase in contracts, patents, and IP. All of this must be meticulously tracked, a process that is increasingly urgent, and increasingly digital. Swift aggregation of unstructured data from the web is a competitive differentiator in todays markets. This and more benefits are switched on with one-of-a-kind solutions that offer not only the right functionality but the right outcomes.Bridging the Gap Between Efficiency and ApplicationIt is not enough to just have data or even to organize and report on data. It must be used. This is where the inflection point is reached, and where most current data analysis systems or handlers fall short. For data to be truly effective for leaders in businesses of all kinds, it must be fast enough and accurate enough to inform decision-making.Cognitive technology that bridges this gap with the right technology and advanced analytics is the ultimate win. Systems that can capably collect data from online sources can empower company leaders to gauge and understand customer behavior. Whats more, data tells the truth, showcasing trends that could uncover fraud or malicious actors.Visualization enables leaders to quickly spot patterns, which always leads to valuable insights that no other source provides. Whats more, the timeliness of these insights is potent: the sooner leaders can see a storyline, the sooner they can respond proactively to shifting dynamics, either outside or inside their own companies.A Mission Critical EndeavorUltimately, handling data better is a mission-critical endeavor for any company that wants to grow. Leaders with an eye to making an impact in any sector will begin to strategize about the highest and best use of data. Finding a system that can upgrade data-intensive processes will be a point of competition among growth-minded leaders. Contained within data are actionable business insights that will ensure company growth. The right system can empower visionary leaders to crack the code, unlocking the power of data and securing a growth trajectory. | Content Synthesis/Decision Making/Digital Assistance | Management/Healthcare Practitioners and Support | null | null | null | null | null | null |
|
news | RStudio | Open source & professional software for data science teams on RStudio | Creating APIs for Data Science With plumber | Photo by Khara Woods on UnsplashWhether it’s pulling data from Twitter, accessing the most recent weather information, or tracking where a particular plane is going, application programming interfaces (APIs) are often part of our data science pi...Continue reading: Creating APIs for Data Science With plumber | https://www.r-bloggers.com/2022/03/creating-apis-for-data-science-with-plumber/ | 2022-03-22T00:00:00Z | Photo by Khara Woods on UnsplashWhether its pulling data from Twitter, accessing the most recent weather information, or tracking where a particular plane is going, application programming interfaces (APIs) are often part of our data science pipeline. But why would you want to create an API? And how difficult is it to do?APIs make it easy to scale the reach of your work. They allow your data science results to be responsive, accessible, and automated. And thanks to the plumber package, you can convert your R functions into API endpoints using just a few special comments.What is an API?APIs are messenger systems that allow applications to communicate with one another. You send a request to the API. The API takes your request to the server and receives a response. Then, the API delivers the response back to you.You may already use APIs to retrieve data as part of your data science pipeline. For example, the rtweet package allows R users to interact with Twitters API. You request data through the package and then receive the APIs data as a response.APIs communicate via endpoints. The endpoint receives a request to take an action. For example, when you run usrs <- search_users("#rstats", n = 1000) from rtweet, you are interacting with an endpoint that returns a list of users.Since APIs allow different systems to interact when they wouldnt be able to otherwise, they are incredibly powerful tools to increase interactivity and reach.Why would a data scientist want to create an API?At some point, you may want to share your R output with others. If the other person is not an R user, they may not be able to use your work without translating it into their language of choice.If your results are available in the form of an API, then anybody can import your results without this difficult translation step. API responses are readable across platforms and applications. Just as you use R to interact with the Twitter API, others can access the Twitter API with other tools.Lets say you are working with a website developer who uses Javascript. You just developed a model in R and youd like to share the results. You can send the developer an API so that they can display the results on a website without reconstructing your model in another language. The website can show updated results because it is communicating with your API in real-time. You do not have to manually refresh your code each time theres a change in the data. For example, RStudios pricing calculator uses an API created from a backend R model to feed the results into our website!Making your data science work available through an API reduces the handoff between R and other tools or technologies. More people can access your results and use them to make data-driven decisions.We recommend reading James Blair's post on how APIs increase the impact of your analyses, RStudio and APIs.Creating an API with plumberThe plumber package allows you to create APIs from your R code. 
It does this through special comments that give instructions on how to turn the functions in your script into API endpoints. It's pretty amazing: with this package, your R code is easily accessible from other tools and frameworks. Here's an example plumber script. Notice how familiar it looks. Let's walk through how to convert this R function into an API.

1. Write standard R code
Let's say we want to randomly choose 100 numbers and create a histogram. We write out a function in R:

function() {
  rand <- rnorm(100)
  hist(rand)
}

Notice that the function is not assigned to an object. We can test it out by running the below:

test <- function() {
  rand <- rnorm(100)
  hist(rand)
}
test()

2. Add special comments
Now, we instruct plumber on how to turn the function into an API endpoint. Plumber parses your script to identify special comments beginning in the #* or @ symbols. It uses them to convert your script into an API. Let's give our function a description using #*. Here, we're telling plumber to call this function "Plot a histogram":

#* Plot a histogram

Now, let's tell plumber that when we get a request, execute this function and return the plot:

#* @get /plot

By default, plumber will turn your response into JSON format. You can adjust the type of response if that is not the output you would like. For example, our function outputs an image. It doesn't make sense to return an image in JSON format. We can serialize our result so that the API returns a PNG rather than JSON:

#* @serializer png

This is just one example of what an API can do. To learn more, check out the plumber documentation on rendering output. Now, our script looks like this:

# plumber.R
library(plumber)

#* Plot a histogram
#* @serializer png
#* @get /plot
function() {
  rand <- rnorm(100)
  hist(rand)
}

Congratulations! We wrote an API using R.

3. Plumb it
Now that we've created an API, it's time to plumb (run) it! After we write our plumber script in the RStudio IDE, a special button appears that allows us to Run API. Running the API generates an interface for our API. (Figure: Plumbing an API in RStudio) The interface provides a way to interact with our API's endpoints. We can test out different calls to make sure that everything runs as expected. (Figure: Endpoint in our code and the interface) Run "try it out" and then "execute" to see what the API returns (in our case, an image of a histogram). (Figure: Testing out our API through the interface) Notice that you never left RStudio to create, run, and test your API!

4. Deploy the API
We can develop and test an API on our laptop, but how do we share it with others (for example, the website developer we mentioned previously)? We do not want our laptop to be serving the requests for a variety of reasons, including maintenance and security concerns. RStudio Connect is an enterprise publishing platform that deploys APIs created by plumber with versioning, dependency management, and authentication. RStudio Connect also supports the deployment of many other data product formats, including Python APIs developed using frameworks such as Flask, FastAPI, Quart, Falcon, and Sanic. See the RStudio Connect Python Updates blog post for more info on deploying Python APIs on Connect. (Figure: Editing access settings in RStudio Connect) RStudio Connect also ensures that you are not consuming more system resources than is necessary. It automatically manages the processes necessary to handle the current load and balances incoming traffic across all available processes. 
It will also shut down idle processes when theyre not in use.Learn more about hosting Plumber APIs.Now that our API is hosted, anybody can use it in their application! Access it on RStudio Connect: https://colorado.rstudio.com/rsc/plumber-histogram-example/.Learn MoreAPIs increase the impact of your data science work by making your code accessible to a larger audience. Thanks to plumber, you can create them by providing a few special comments in your R code. | Content Synthesis/Information Retrieval Or Search | Unknown | null | null | null | null | null | null |
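The article's central point, that a plumber API can be consumed from any language, can be illustrated from Python. This is a hedged sketch rather than part of the original post: it assumes the hosted example linked above is still reachable and that its histogram endpoint lives at /plot, matching the @get /plot annotation in the script.

import requests

# Endpoint path assumed from the "#* @get /plot" annotation in the plumber script above.
url = "https://colorado.rstudio.com/rsc/plumber-histogram-example/plot"

response = requests.get(url)
response.raise_for_status()

# The API is configured with the PNG serializer, so the response body is image bytes.
with open("histogram.png", "wb") as f:
    f.write(response.content)
print("Saved histogram.png,", len(response.content), "bytes")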
|
news | Rob Toews, Contributor, Rob Toews, Contributor https://www.forbes.com/sites/robtoews/ | A Wave Of Billion-Dollar Language AI Startups Is Coming | Given language’s ubiquity throughout the economy, few areas of technology will have a more far-reaching impact in the years ahead than NLP. | https://www.forbes.com/sites/robtoews/2022/03/27/a-wave-of-billion-dollar-language-ai-startups-is-coming/ | 2022-03-28T00:48:32Z | In 1998, Larry Page and Sergey Brin founded the greatest language AI startup of all time. But a new ... [+] generation of challengers is coming.WiredSee here for the first part of this article series: Language Is The Next Great Frontier In AILanguage is at the heart of human intelligence. It therefore is and must be at the heart of our efforts to build artificial intelligence. No sophisticated AI can exist without mastery of language.The field of language AIalso referred to as natural language processing, or NLPhas undergone breathtaking, unprecedented advances over the past few years. Two related technology breakthroughs have driven this remarkable recent progress: self-supervised learning and a powerful new deep learning architecture known as the transformer.We now stand at an exhilarating inflection point. Next-generation language AI is poised to make the leap from academic research to widespread real-world adoption, generating many billions of dollars of value and transforming entire industries in the years ahead.A nascent ecosystem of startups is at the vanguard of this technology revolution. These companies have begun to apply cutting-edge NLP across sectors with a wide range of different product visions and business models. Given languages foundational importance throughout society and the economy, few areas of technology will have a more far-reaching impact in the years ahead.Horizontal Technology ProvidersThe first category of language AI startups worth discussing is those players that develop and make available core general-purpose NLP technology for other organizations to apply across industries and use cases.Building a state-of-the-art NLP model today is incredibly resource-intensive and technically challenging. As a result, very few companies or researchers actually build their own NLP models from scratch. Instead, virtually all advanced NLP in use today, no matter the industry or setting, is based on one of a small handful of massive pretrained language models. Stanford researchers recently dubbed these pretrained models foundation models in recognition of their outsize influence.Most often, foundation models are built and open-sourced by the publicly traded technology giantse.g., BERT from Google, RoBERTa from Facebook.OpenAI is another important source of state-of-the-art NLP technology. Its large language model GPT-3 is perhaps the most well-known and widely used foundation model today. GPT-3 is a generative model (the G in its name stands for generative): it generates original text in response to prompts from human users. OpenAI has made GPT-3 commercially available via API for use across applications, charging on a per-word basis.Given Microsofts massive investments in and deep alliance with the organization, OpenAI can almost be considered an arm of the tech giant.But there is also tremendous opportunity in this category for younger startups.Cohere is a fast-growing startup based in Toronto that, like OpenAI, develops cutting-edge NLP technology and makes it commercially available via API for use across industries. 
Coheres founding team is highly pedigreed: CEO Aidan Gomez is one of the co-inventors of the transformer; CTO Nick Frosst is a Geoff Hinton protégé. The company recently announced a large Series B fundraise from Tiger Global less than a year after emerging from stealth.While Cohere does produce generative models along the lines of GPT-3, the company is increasingly focused on models that analyze existing text rather than generate novel text. These classification models have myriad commercial use cases: from customer support to content moderation, from market analysis to search."Language generation has seemingly monopolized the attention of those interested in NLP, but the most significant opportunity for developers interested in building NLP into their systems actually rests in language representation models like BERT, said Gomez. While slightly less 'miraculous', these models form the backbone of some of the most sophisticated NLP systems in the world."Another leading horizontal NLP startup is Hugging Face. Hugging Face is a wildly popular community-based repository for open-source NLP technology. Unlike OpenAI or Cohere, Hugging Face does not build its own NLP models. Rather, it is a platform that stores, serves and manages the latest and greatest in open-source NLP models, including enabling customers to fine-tune these models and deploy them at scale.Hugging Faces secret sauce is its community: it has become a go-to destination for companies and researchers in the world of NLP to collaborate. In this respect it can be loosely analogized to GitHub, but for machine learning rather than traditional software engineering.Other horizontal NLP providers of note include AI21 Labs and Primer.Based in Israel, AI21 has a two-pronged business model: it offers proprietary large language models via API to power customers applications (its current state-of-the-art model, named Jurassic-1, is roughly the same size as GPT-3), and it also builds and commercializes its own applications on top of those models. Its current application suite focuses on tools to augment reading and writing.Primer is an older competitor in this space, founded two years before the invention of the transformer. The company primarily serves clients in government and defense.There is one last wild card worth mentioning in this category. Launched less than a month ago, little is known yet about Inflection AI beyond its eye-catching founding team: Reid Hoffman, DeepMind cofounder Mustafa Suleyman, and decorated DeepMind researcher Karen Simonyan. The company is being incubated at Greylock, where Hoffman is a general partner. Its stated mission is to fundamentally redefine human-machine interaction by enabling humans to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people.Given the caliber of the companys founders and backers, expect Inflection AI to make waves in the world of language AI before long.SearchThe most basic way that humans use natural language to interface with machines is through search. It is the primary means by which we access and navigate digital information; it lies at the heart of the modern internet experience.Search has been dominated by a single player for so long (Google) that it is often seen as an unpromising or even irrelevant category for startups. But this is far from true.Last month a blog post titled Google Search Is Dying made the rounds and sparked widespread discussion. 
The post hit home with a simple point: an opportunity exists for an upstart to improve and disrupt the Google search experience.The new entrant taking on Google most directly is You.com. Founded by Richard Socher, former Chief Scientist at Salesforce and one of the worlds most widely cited NLP researchers, You.com is reconceptualizing the search engine from the ground up. Its product vision includes a horizontal layout, an emphasis on content summarization, and above all, a commitment to user data privacy.Challenging Google directly will, to state the obvious, be an uphill battle. There is also significant opportunity for startups in search beyond the consumer internet search market with which Google has become synonymous.ZIR AI is a young startup building a new search platform for enterprise. Leveraging the latest transformer-based techniques, ZIR is seeking to develop search technology with true semantic comprehension (as opposed to keyword-based matching) and more sophisticated multilingual capabilities. Like You.com, ZIR has a pedigreed founding team that includes former Cloudera CTO/cofounder Amr Awadallah.Algolia is a more well-established player in enterprise search; the company has raised over $300 million in venture funding since graduating from Y Combinator in 2014. Algolia offers an API that enables its customersfrom tech companies like Slack to media businesses like the Financial Timesto embed search experiences in their websites and applications. Constructor.io is another fast-growing competitor in this space that focuses specifically on ecommerce search and discovery.One final enterprise search startup worth keeping an eye on is Hebbia, which is building an AI research platform to enable companies to extract insights from their private unstructured data.In the words of Hebbia founder/CEO George Sivulka: Google has only indexed 4% of the worlds online data. Were unleashing the other 96%.All of the companies mentioned above (including Google) focus on text search. But thanks to recent breakthroughs in AI, opportunities now exist for startups to build search tools for data modalities beyond textand no new modality represents a bigger opportunity than video.Video has become the dominant medium for our digital lives. A whopping 80% of the data on the internet today is video. Yet remarkably, there is no effective way to search through all this video contentto find, say, a particular moment, concept or discussion. The range of potential commercial use cases for video search is basically endless: from social media to streaming content, from digital asset management to workplace productivity, from content moderation to cloud storage.One exciting startup building next-generation video search capabilities is Twelve Labs, which announced its seed financing earlier this month. Twelve Labs fuses cutting-edge NLP and computer vision to enable precise semantic search within videos. Multimodal AI like thisthat is, AI that ingests and synthesizes data from multiple informational modalities at once, like image and audiowill play a central role in AIs future.Large language models are accomplishing incredible things today. We think large multimodal neural networks for video are the obvious next step, said Twelve Labs cofounder/CEO Jae Kim. 
Video embeddings generated by these networks will supercharge current and future video-driven applications with an intelligence that weve never seen before.Writing AssistantsIn todays information-based economy, perhaps no skill matters more than effective writing.Yet as anyone who has experienced writers block can attest, writing can be a frustrating experience. The act of translating inchoate thoughts into well-crafted languageof finding the right wordscan be time-consuming and unsystematic.Next-generation NLP promises to transform how humans write, reconceptualizing one of civilizations most basic and vital activities.Large language models like OpenAIs GPT-3 can be thought of as auto-complete on (incredibly powerful) steroids. Given some text prompt from a human, these generative models can automatically produce novel sentences, paragraphs or even entire memos that are strikingly coherent, insightful, creativealmost magically so. Of course, their output remains far from perfect: they can also sometimes be nonsensical or harmfully biased.This technology will transform writing from an act of solo creation to a collaboration between human and machine: one in which the human provides some initial language, the AI suggests edits or follow-up sentences, the human iterates based on the AIs feedback, and so forth. The skillset required for good writing may accordingly expand to include an understanding of how to get the most out of the AIhow to best guide and coax it into producing the desired language.This novel paradigm for AI-augmented writing is already starting to become a reality, driven forward by a handful of interesting startups.The most established player in this category is Grammarly. Founded in 2009, Grammarly has admirably remained abreast of the latest NLP technologies over the years. The company raised funding late last year at a whopping $13 billion valuation. Grammarlys product provides automated recommendations for improved spelling, grammar, diction and phrasing in real-time as users write.Textio, LitLingo, and Writer are three newer entrants using next-generation language AI to build advanced Grammarly-like solutions for more targeted use cases. Textio focuses on hiring and recruiting, LitLingo on business compliance and risk management, and Writer on company-wide style and brand consistency.Trained on millions of writing samples, Textios AI can give users nuanced insights about their job postings and other hiring-related content: for instance, that a certain phrase will resonate more with male than with female candidates, that a given word suggests a fixed mindset over a growth mindset, that a particular metaphor may come across as exclusionary to applicants. LitLingo, meanwhile, uses real-time NLP to monitor employees digital messages and proactively prevent communications that could trigger litigation or unwanted public attentionsay, related to antitrust, workplace discrimination, securities violations or employment law.All four of the companies mentioned so far use AI primarily to provide recommendations and insights on existing text that humans have already written. Todays NLP, though, allows us to go one step further. The next frontier in AI-augmented writing will be for the AI to generate novel written content itself based on guidance from the human user.CopyAI is a Tennessee-based startup backed by Sequoia, Tiger Global and Wing VC that auto-generates customized marketing copy. The way it works is simple. 
Users enter basic information about their company and select a content format: say, a blog title, a website blurb, a Facebook ad, even an Instagram hashtag. CopyAIs NLP engine, which is powered by GPT-3, then spits out ten samples of text at a time for the user to use, adapt, or take inspiration from. According to the company, over half a million content marketers are using its technology today, including at organizations like Nestle and Microsoft.To temper expectations, we should not expect that todays NLP will immediately take over all writing from humans. Some forms of writingbrief formulaic content like marketing copy or social media postswill yield more naturally to these new AI tools than will others. Original, analytical, creative worksay, op-eds, thought pieces or investigative journalismwill resist automation for the time being.But make no mistake: in the years ahead, whether we like it or not, NLP will fundamentally change how humans produce the written word. Ten years from now, writing ones own content from scratch may well be considered an artisanal craft, with the vast majority of the worlds written text produced or at least augmented by AI.Language TranslationLanguage barriers are a fundamental impediment to international business and travel, costing untold billions in lost productivity every year.More profoundly, the inability for people around the world to understand one another inhibits the advancement of grand global goals and species-level harmony. But in a polyglot world like ours (over 7,000 languages are spoken in the world today), language barriers have always been an unavoidable reality.The Babel fish from Douglas Adams science fiction classic The Hitchhikers Guide to the Galaxywhich goes in someones ear and automatically enables them to hear any spoken language in their native tongueis an enchanting but purely fictional concept.Until now.Machine translation has been a central goal of artificial intelligence researchers dating back to the very beginnings of the field of AI in the 1950s. Automated language translation products have been available since the dawn of the commercial internet in the 1990s. Yet machine translation has proven to be a devilishly difficult challenge. AI-based translation tools have historically been deeply flawed (as anyone who remembers using AltaVistas Babel Fish service in their younger days can attest).But thanks to the remarkable advances underway in language AI, reliable and high-quality machine translation is fast becoming a reality.The most widely used AI-powered language translation service in the world is Google Translate. Unsurprisingly, given that it is the birthplace of the transformer and the most advanced AI organization in the world, Google has incorporated the latest NLP technologies to vastly upgrade its Translate service in recent years.But significant opportunities also exist for startups in the fast-changing world of language translation.BLANC offers AI-powered translations for video. Its AI platform takes a video with spoken dialogue in one language and applies AI to quickly reproduce that video with the dialogue in another language, doing so in a way that the speakers lip movements continue to look natural. Think of it as sophisticated dubbing, except that it can be carried out automatically and at scale.KUDO is a more established competitor that also offers video translation services. Today, KUDOs platform relies on human interpreters to stream translations over the internet in real-time. 
But the company envisions a future in which its platform is increasingly powered by AI. In this sense KUDO represents an interesting archetype: a mature non-AI-first business looking to inject more AI into its product offering by leveraging its massively valuable proprietary datasets.Lilt is a notable growth-stage player working on machine translation. The company was founded by two NLP researchers at Google Translate who came to appreciate that an AI solution like Google Translate could not, on its own, be relied upon to deliver automated language translation with the robustness demanded by enterprise and government organizations.Thus, Lilt offers a hybrid model that combines cutting-edge AI with humans in the loop to translate written content for global organizations, from marketing to mobile apps to technical documentation. This partially automated approach enables Lilt to provide translation that is cheaper than using human translators and at the same time more accurate than using AI alone.The interesting questionfor Lilt and for the entire industryis whether and how quickly the humans in the loop can be phased out in the years ahead.One last startup worth mentioning in this category is NeuralSpace. NeuralSpace was founded on a simple but powerful insight: the vast majority of cutting-edge research in NLP is conducted in English, yet 95% of the world does not speak English. NeuralSpace provides a no-code NLP platform that enables users around the world to build NLP models in low-resource languages, from Armenian to Punjabi to Zulu.Our vision at NeuralSpace is to break down the language barrier in AI for millions of low-resource language speakers, said NeuralSpace cofounder/CEO Felix Laumann. We give software developers the ability to train and deploy state-of-the-art large transformer-based language models and easily integrate them into their products, no matter where in the world they are or what language their audience speaks.Sales IntelligenceSales is more of an art than a science. Yet certain repeatable principles and tactics do exist that, if systematized, can meaningfully improve a sales teams performance.Is a rep spending the right amount of time on the right topics in sales calls, from product to pricing to small talk? Is she letting the customer ask enough questions? Has she engaged the right senior stakeholders at the customer organization at the right times over the course of the sales process? Is she following up with prospects on the right cadence?By ingesting vast troves of unstructured data from video calls, phone calls, email exchanges, CRMs and other communication channels, todays language AI can extract actionable insights about how salespeople are performing and what they can do to improve.There are few applications of language AI that can more directly affect a companys top line. Not surprisingly, therefore, the market for sales intelligence AI is booming.The runaway leader in this category is Gong, which has raised close to $600 million in venture funding. According to the company, its technology boosts average revenue per sales rep by 27%, translating into massive ROI for its customers.Gongs closest competitor Chorus.ai exited to ZoomInfo last year in a $575 million sale, further solidifying Gongs status as the category leader.Gong is an impressive business, with incredible revenue growth and a long list of blue-chip customers. The company seems destined to debut on public markets before long. 
Yet by most accounts, the core NLP in Gongs product offering is not particularly advanced.This raises an interesting question: might an opportunity exist for an upstart to build a more cutting-edge version of Gong, powered by the latest transformer-based advances in language AI, and take market share from the category leader by offering a more intelligent product?A handful of young startups have popped up that are nipping at Gongs heels, though none have yet broken out.Aircover, which raised a seed round last year, and Wingman, which came out of Y Combinator in 2019, are two examples. Unlike Gong, which provides analytics only after sales calls are finished, both of these startups provide real-time in-call coaching for sales reps. And while Gong has had major success selling to large enterprises, Wingman instead targets small- and medium-sized businesses.Chatbot Tools and InfrastructureWe all experience it in our daily lives: when we communicate digitally with companies and brands these daysvia text message, web chat, social media, and so forththese interactions are increasingly fielded by automated agents rather than humans.These AI-powered conversational interfaces are commonly known as chatbotsthough some startups today prefer to avoid that terminology and its mixed connotations, given a premature hype cycle for chatbot technology about five years ago.Notwithstanding earlier false starts, chatbots today have begun to gain real market adoption, thanks to improvements in the underlying NLP as well as in companies understanding of how to best productize and deploy these bots.Companies are now using chatbots to engage with customers in real-time wherever those customer interactions occurfor instance, fielding questions on their websites, automating routine customer support requests, giving customers updates on their orders, or supporting sales efforts.Most organizations interested in using conversational AI interfaces to interact with their customerssay, a bank, a hotel chain, an airlinelack the requisite technical resources to navigate the latest NLP technologies and build their own chatbot platforms from scratch.And a lot goes into building an enterprise-grade conversational AI interface: handling data privacy and security requirements, integrating with third-party applications, building the infrastructure to support deployment at scale, providing a graceful fallback mechanism when the bot is stumped and human intervention is necessary.A promising group of startups has emerged to provide the technology and infrastructure for companies across industries to create and operationalize chatbots.The most well-funded of these competitors is Ada Support, a Toronto-based startup that has raised close to $200 million from blue-chip venture capitalists. Ada powers automated interactions for enterprises in customer support and sales across text-based channels including web chat, SMS, and social media, intelligently looping in a human agent when needed. The company claims its technology can reduce customer wait times by 98%. With a long list of marquee clients including Zoom, Shopify, Verizon and Facebook, Ada powers over one billion customer interactions annually.Another leading player in this category is Rasa. A close Ada competitor, Rasas product caters to more technically savvy users, with a greater focus on chatbot configurability. Rasas AI stack is open-sourced, with over 600 contributors and over 10 million downloads. 
This open-source strategy gives Rasas customers greater transparency and control over the conversational AI interfaces that they build and deploy.Other noteworthy startups in this space include Forethought, a well-capitalized competitor that boasts NLP luminary Chris Manning as an adviser; Clinc, a conversational AI platform built specifically for banks; and Thankful, which focuses on e-commerce.Internal Employee EngagementOne specific type of enterprise chatbot has proven to be a sufficiently large market opportunity that it gets its own section: chatbots to automate employee help desks.Every day, in every company around the world, employees have routine questions that they need help with: how to reset their email password, whether they can expense an enterprise software subscription, how to enroll in a health insurance plan, what the companys vacation policy is.Conversational AI platforms can automatically field and resolve many of these employee support requests, reducing the need for human intervention and saving organizations vast amounts of time and money in the aggregate.The leading player in this category is Moveworks, which raised a $200 million Series C from Tiger Global last year. Another well-funded competitor is Espressive. Espressive claims that its chatbot platform can resolve between 50% and 70% of all employee helpdesk tickets without human assistance, recouping over a week of productivity per employee per year.Given the size of the market, plenty of smaller startups have emerged with similar AI-driven product offerings. One worth noting is Bay Area-based Rezolve.ai.Conversational Voice AssistantsWhen Google debuted its new Duplex technology in mid-2018, it wowed the public (and generated its fair share of controversy).Duplex is an AI system that, in a remarkably human-sounding voice, can place phone calls on behalf of human users to complete routine tasks like booking a dinner reservation or a hair appointment.At the time, Googles Duplex was just a demo, still heavily reliant on human-in-the-loop support.Four years later, this technology is ready for primetime.Following in Duplexs footsteps, a handful of startups have developed voice AI technology that can engage in nuanced automated phone conversations. While Googles Duplex is a consumer-facing tool (it is widely available today through apps like Google Maps), these startups go-to-market efforts focus on the enterprise. And no enterprise opportunity looms larger for this technology than contact centers.Contact centers (also referred to as call centers) are an unglamorous back-office function that happen to also be a staggeringly massive marketan estimated $340 billion in 2020, on its way to $500 billion by 2027.Replicant is one promising startup applying voice AI to automate contact center agent activity, reducing wait times for customers and cutting costs for companies. Replicant spun out of Atomic, the high-profile startup studio that has produced companies like Hims and OpenStore.Like Duplex, Replicants voice AI is designed to sound as natural as a human (the companys name is a tribute to the bioengineered robots from Blade Runner that are indistinguishable from humans). Replicants technology is equipped to handle a wide range of call center use cases, from billing to customer surveys to subscription renewals. 
When its AI encounters a complex conversation topic that it cannot resolve on its own, it pulls in a human agent.A close Replicant competitor is AI Rudder, a Singapore-based company that just raised $50 million from Sequoia, Coatue and Tiger Global.AI Rudder sells to customers in financial services and e-commerce, two industries that make extensive use of call centers. The pandemic has driven rapid growth for AI Rudder, whose revenue quadrupled last year. The companys AI system can not only speak a wide range of different languages but can also adopt the appropriate regional accent depending on the caller.One last startup of note in this category is Resemble AI, which specializes in generating realistic human voices using generative adversarial networks (GANs). Resembles synthetic voices can speak with all the nuance and range of a humanfor instance, whispering or communicating with various emotionsand are finding use cases from video games to advertising. The company recently made headlines when its technology was used to reproduce Andy Warhols voice for an upcoming Netflix documentary.Contact CentersAs the previous section highlighted, contact centers are a massiveand massively underdigitizedmarket. There is tremendous opportunity to transform the world of contact centers with software and machine learning.While startups in the previous section like Replicant and Rudder AI provide voice AI technology to automate basic call center conversations, a different group of companies offers conversational coaching and analytics platforms for human call center agents. To oversimplify, these players can be thought of as Gong for call centers.In terms of venture capital funding, there is perhaps no hotter category in NLP today.Last month, contact center AI startup Uniphore raised a $400 million round from NEA that valued the company at $2.5 billion. A few weeks later, direct competitor Cresta announced an $80 million fundraise led by Tiger Global at a $1.6 billion valuation. These fundraises have made these two startups among todays first NLP unicorns. Expect VC dollars to continue pouring into this space given the outsize market opportunity in play.Co-founded by AI legend Sebastian Thrun (the creator of Google X and Googles self-driving car program) and two of his Stanford PhD students, Cresta is the most pedigreed competitor in this category. Cresta focuses on providing personalized coaching to contact center agents in real-time, as opposed to post-conversation, with an omnichannel platform that spans phone calls and text chats.Uniphore has been around for almost a decade longer than Cresta and is much further along from a revenue perspective: the company expects to reach $100 million in annual recurring revenue by the end of next month. Founded in India and based there for the first decade of its existence, Uniphore recently relocated its headquarters to the Bay Area.Other startup competitors in this category include Observe.ai, whose product is oriented around post-conversation analytics rather than real-time coaching, and Level AI, which focuses on automating call center quality assurance.All of these players offer contact center AI for use across industries. A less common approach is to develop contact center AI purpose-built for a particular vertical. This is the approach taken by BirchAI, a young startup that recently spun out of the Allen Institute for AI (AI2). BirchAI has built a cutting-edge NLP solution focused on contact centers in healthcare. 
The company's target customers include health insurers, pharmaceutical companies and medical device companies.
As Birch cofounder/CEO Kevin Terrell put it, "Transformer-based NLP can now automate complex dialogue and document workflows that used to require highly trained employees. Healthcare, with its lagging productivity and aging workforce, is one sector where the need for this technology is particularly pronounced."
Content Moderation
From misinformation to cyberbullying to hate speech to scams, harmful online content is a massive and growing problem in today's digital world.
The problem of toxic content has been a reputational nightmare and a technological quandary for social media platforms like Facebook in recent years. But the challenge goes beyond social media. Any platform that features user-generated content of any kind, from gaming companies to dating apps, is susceptible to the proliferation of toxic language. At scale, it becomes impossible for companies to rely on humans alone to monitor and moderate all this content.
The latest advances in language AI can be deployed as a new tool in this fight. To be clear, AI is far from a panacea. Language is a slippery, nuanced phenomenon; it is impossible to build an AI model today that can reliably detect every instance of fake news or sexual harassment. But an intriguing group of startups is applying NLP to help organizations make a dent in the problem.
And make no mistake: given the scale of the challenge, the market opportunity here is massive. Facebook alone reportedly spent $13 billion on content moderation between 2016 and 2021, including paying Accenture $500 million per year to work on the problem.
Spectrum Labs is one promising startup applying AI to combat online toxicity, with a focus on four industries: marketplaces, social platforms, gaming services and dating applications. According to the company, its technology allows online platforms to increase detection of toxic behaviors by 10x while reducing content moderation efforts by 50% on average. The companies, communities and | Content Creation/Content Synthesis/Digital Assistance | Business and Financial Operations/Arts, Design, Entertainment, Sports, and Media/Sales and Related/Office and Administrative Support | null | null | null | null | null | null
|
news | Kai Christensen | The Singularity Is Close | Or "Why we're all in denial about the robot apocalypse" | https://mkaic.substack.com/p/the-singularity-is-very-close | 2022-03-31T20:19:20Z | Then came the Butlerian Jihad two generations of chaos. The god of machine-logic was overthrown among the masses and a new concept was raised: Man may not be replaced. Frank Herbert, DuneWithin one century, biological intelligence will be a tiny minority of all sentient life. It will be very rare to be human. It will be very rare to have cells and blood and a heart. Human beings will be outnumbered a thousand to one by conscious machine intelligences.Artificial General Intelligence (AGI)1 is about to go from being science fiction to being part of everybodys day-to-day life. Its also going to happen in the blink of an eye because once it gets loose, there is no stopping it from scaling itself incredibly rapidly. Whether we want it to or not, it will impact every human beings life.Some people believe the singularity wont happen for a very long time, or at all. Id like to discuss why I am nearly certain it will happen in the next 20 years. My overall prediction is based on 3 hypotheses:Scale is not the solution.AI will design AGI.The ball is already rolling.Keep in mind that this is just speculation and opinions. These predictions depict the future I personally feel is most likely.Scale is not the solution.Recently, an architecture called the Transformer has been taking over machine learning. Its really good at sequence-to-sequence tasks like translation and text completion, and its also been successfully applied to other fields like computer vision.Transformers2 also demonstrate an intriguing ability to scale their performance with their size better than other architectures. They seem less prone to the performance ceilings found in their competition.This has lead to a new slogan popping up in the AGI-speculation community: scale is all you need. Some people believe that bigger networks, bigger compute clusters, and bigger datasets are all we need to get to AGI. I disagree.I believe we are more bottlenecked by the architecture designs than anything else. While modern, standard feedforward neural networks are getting very good at Doing Stuff, they arent AGI and I dont think theres a clear path forward for them to become AGI. I have no doubt OpenAIs next mega-model, GPT-4 (and beyond), will be excellent, but I also think it will have exploitable flaws that make it fail a thorough Turing test. In fact, I see the massive size of the present-days GPT-3 as a sign that scale isnt the answer. 175 billion parameters, but still obviously not sentient? For comparison, the human brain has between 20 and 100 billion neurons and up to 1 quadrillion synapses. You could argue that until our neural networks have hundreds of trillions of parameters, its not fair to compare them to the brain, but I think this argument relies too much on the assumption that a biological synapse and a weight in a network are equivalent in computational ability. This has not be proven. The intricacies of how the brain moves and processes signals are still not entirely understood3, but we know it seems to operate very differently from current neural networks.4Looking at most of the most revolutionary papers in the history of AI, they are dominated not by we made it bigger but by we made it smarter at the same size. I see no reason not to expect that this pattern will continue.If scale isnt the answer, what is? 
I believe that the pièce de résistance is adaptability. Presently, the way you make an ML model is fairly rigid: you decide on a fancy new way to differentiably mix matrix multiplications together, you feed it a ton of data, and you use some simple calculus-based optimizer to train the weights in your network5. The way that the weights in your network are arranged doesnt change after training.I dont believe this is adaptible enough, even at scale. In order for true intelligence to emerge, models must be able to reorganize their own inner workings. I dont think you can have the level of flexibility required for sentience with a frozen architecture.6I think sentient AI will be created by working smarter, not harder, with a focus on better architectural design and intelligent optimizers. This leads nicely into my next hypothesis:AI will design AGI.Human-designed networks have achieved great results, but they still suffer from the flaws of their creators. We are attracted to neatly organized network architectures which we can investigate and explain and attempt to understand.But our brains, the gold standard of intelligence, are famously difficult to investigate, explain, or understand! I think this is because our brains werent designed by anyone they evolved. They are the product of the universes greatest optimizer, natural selection.7I think its reasonable to assume that the architecture that brings about AGI will not be hand-designed by humans, or even selected via some brute-force hyperparameter search it will be designed by another AI. I predict there will be several recursive layers of AI design perhaps a dumb network which constructs a decent network which constructs a smart network which constructs AGI.I am bullish on the prospect of what I call constructor networks models that construct other models (also known as hypernetworks). I think the moment we crack hyperlearning will be the moment progress will start moving faster than we can keep up, precisely because we will no longer be the ones making the progress the algorithmsthemselves will.In order to work smarter, not harder, we need to let go of our human biases and focus on making unconstrained architectures that can aggressively optimize every aspect of themselves. I fully expect these architectures will be frustratingly difficult to explain when they arrive like huge mounds of digital neural spaghetti but they will also outperform all competition. Every additional stable layer of AI abstraction we add between ourselves and the final model will make the final model harder to understand and better at its task.The ideal model will be able to not only be constantly online-learning, but also constantly adding and removing its own parameters, allowing evolution and adaptation to new tasks.You cannot have artificial general intelligence if your model cannot adapt in real time to an arbitrary task.The ball is already rolling.I believe that there is too much momentum to stop AGI now. With this much distributed attention fixed on the problem, AGI will be solved. Additionally, once it is solved it will be released to the public whether its ethical to do so or not. I imagine that the first people to solve it will probably keep it behind closed doors, but it wont stay secret forever. Someone on the team will leak everything, or someone else will independently make the same discoveries and release them. 
Eventually it will get out.Consider the invention of the nuclear bomb once we learned of the power hidden in radioactive materials, it was only a matter of time before someone pushed the research to its moral limits. AGI is like that, except its even more terrifying because uranium, plutonium, and the bombs made out of them can be strictly controlled, but people with powerful computers and an internet connection cannot, nor can the AGIs they create.I recognize how cliché and alarmist this all sounds. Really, youre genuinely worried about a robot apocalypse? You know Age of Ultron is just a stupid Marvel movie, right? Yeah, I know. But Ive grown to believe that the concerns that fiction writers have been bringing up for decades are actually quite reasonable because AGI cannot be stopped.Once an intelligence is loose on the internet, it will be able to learn from all of humanitys data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure. Obviously, its impossible to say for sure that this is what the first free AGI will do, but its inevitable that some malevolent AGI will exist and will do these things. We can only hope that well have sufficiently powerful benevolent AGI to fight back.Final ThoughtsI subtitled this post Why we're all in denial about the robot apocalypse. I say that because I believe that society at large is completely, utterly, and woefully unprepared for the advent of sentient, living artificial general intelligence. I think the singularity is coming much sooner than most people expect, and I think its going to cause a great deal of upset when it arrives for better and for worse.Take for instance the common religious belief that people possess some unmeasurable, undefinable soul, and that this soul is what separates us from inanimate objects and non-sentient animals. Furthermore, some people believe that these souls come from deity. I have spoken with friends who believe that AGI is impossible because robots cant have souls, humans arent God. For these people, like Caleb says in Ex Machina (paraphrasing), removing the line between man and machine also removes the line between god and man.Now, this isnt to say that AGI will destroy religion or anything it may even be used to strengthen some sects (as taken to the extreme in HBOs Raised By Wolves). No, religion has been around for millennia and Im sure it will continue to be around for many more millennia. Im simply predicting that a subset of religious people are going to experience lots of cognitive dissonance when the first AGI arrives. More generally, arguments about AGI sentience and ethical issues will go from being topics only geeks talk about to topics that Facebook moms make political grandstands over. Finally, I want to address those who may feel this post is pessimistic: I assure you, I am hopeful about AGI. I work in the field of ML because I am hopeful. I hope to personally contribute to the development of AGI in my lifetime. I think AGI has the capacity to make the world an infinitely better place. We are not prepared for AGI, but that doesnt mean AGI has to be the end of humanity. I dont know what life will look like in the age of living machines, but I am confident that, as Jeff Goldblum puts it: Life, uh, finds a way.Ian Malcolm, Jurassic ParkThanks for reading,KaiPS Im making a series of short films about AGI right now! You should totally go watch the first episode, which is out now on my YouTube channel and my TikTok account. 
Also, while you're at it, why not follow me on Twitter? | Unknown | Others | null | null | null | null | null | null
|
news | Yashar Behzadi | The Application of Synthetic Data Is Inevitable | Synthetic data is at an inflection point of utilization. The emerging technology is just beginning its adoption cycle and value to the enterprise, but change is on the horizon. According to my company’s recent survey, industry leaders believe that, on average, 59% of their industry will utilize synthetic data in five years, either independently or in combination with […]The post The Application of Synthetic Data Is Inevitable appeared first on DATAVERSITY. | https://www.dataversity.net/the-application-of-synthetic-data-is-inevitable/ | 2022-03-28T07:20:00Z | Synthetic data is at an inflection point of utilization. The emerging technology is just beginning its adoption cycle and value to the enterprise, but change is on the horizon. According to my company's recent survey, industry leaders believe that, on average, 59% of their industry will utilize synthetic data in five years, either independently or in combination with real-world data. Many industries and companies are experimenting with the technology and recognizing the use cases and relevant applications. While 2020 and 2021 saw the adoption of synthetic data by those in the machine learning and computer vision fields, 2022 will be the year synthetic data appeals to the majority, who will come to view its integration as critical to staying ahead.
Synthetic Data and Its Disruptive Capabilities
Synthetic data is computer-generated data that serves as an alternative to real-world data. It is created in simulated digital worlds rather than collected from or measured in the real world. Combining tools from the world of visual effects and CGI with generative AI models, synthetic data enables companies to create vast amounts of photorealistic, diverse data on demand to train computer vision models.
Synthetic data is a disruptive approach to training AI models through the use of computer-generated images and simulations. A single dataset may contain tens of millions of elements. Manually collecting and labeling data of this magnitude is time-consuming and costly for organizations, not to mention prone to human error. Synthetic data aims to simulate real-world scenarios to train AI systems virtually. This approach reduces the time and resources needed to build these models by delivering vast amounts of perfectly labeled data to organizations in a matter of hours. Also, since synthetic data isn't generated from real-world sources, data privacy and bias issues are reduced. According to the aforementioned survey, 89% of technology executives agree that it is a new and innovative technology that will transform their industry.
Barriers to Further Adoption
While synthetic data can transform industries and how organizations use data, there is still work to be done to address barriers to adoption. Two-thirds (67%) of technology executives agree that their organization lacks the knowledge and understanding needed to implement it. Additionally, almost half have concerns that models built with synthetic data are not as good as those built with real-world data. A key to further implementation is educating colleagues throughout the entire organization, not just the C-suite, as there is confusion and a lack of understanding among many groups. Organizations already using vision data are positioned to lead this charge, as they understand the value of vision data and how it can benefit their industry.
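For readers who have not worked with synthetic vision data, the following toy sketch shows the core idea of labels that are exact by construction. It is my own minimal illustration, not code from any synthetic-data platform: real systems render photorealistic 3D scenes with rich annotations such as depth and segmentation, whereas this only draws a square into an array, but the payoff is the same in kind: the annotation falls out of the generation process for free.

    import numpy as np

    rng = np.random.default_rng(0)

    def synth_example(size=64):
        """Render one grayscale image containing a single square at a random
        position, and return the image plus its exact bounding-box label."""
        img = np.zeros((size, size), dtype=np.float32)
        side = int(rng.integers(8, 20))          # square side length
        x0 = int(rng.integers(0, size - side))   # top-left corner
        y0 = int(rng.integers(0, size - side))
        img[y0:y0 + side, x0:x0 + side] = 1.0    # draw the object
        bbox = (x0, y0, x0 + side, y0 + side)    # label is exact by construction
        return img, bbox

    # "Data on demand": generate as many perfectly labeled examples as needed.
    images, boxes = zip(*(synth_example() for _ in range(1000)))
    images = np.stack(images)
    print(images.shape, boxes[0])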
Among those working with vision data that don't use or have only started using synthetic data, only three in 10 (30%) respondents cite a lack of tools to create and manage synthetic data as a barrier to broader utilization.
Synthetic Data Enabling Emerging Industries
AI is driven by the speed, diversity, and quality of data. Today's systems leverage supervised learning approaches in which humans label attributes in image data to then train AI models. This approach is fundamentally limited, as humans do not scale and, more importantly, cannot label key attributes (e.g., 3D position, interactions, etc.) necessary to enable emerging industries such as AR/VR, autonomous vehicles, robotics, and more. Synthetic data is predicted to be a key solution to address these shortcomings and bring these emerging industries to the mainstream.
The metaverse, for example, cannot be built quickly or efficiently without the use of synthetic data. To recreate reality as a digital twin and build out enough 3D-rendered objects in a way that is time- and cost-efficient, it's necessary to deeply understand humans, objects, 3D environments, and their interactions with one another. Creating these AI capabilities requires tremendous amounts of high-quality labeled 3D data: data that is impossible for humans to label.
Synthetic Data's Inevitability
According to Synthetic Data for Deep Learning, new research is starting to provide proof points around the utility of synthetic data across use cases, including robotics, autonomous vehicles, smart homes, consumer products, manufacturing, logistics, health care, and more. These use cases and the growing buzz around other emerging industries will be central to the growth of synthetic data because, simply put, they won't be possible without it. Synthetic data will inevitably come to define a new paradigm in AI and enable the next generation of more capable models and products in 2022 and beyond. | Content Creation/Discovery/Prediction | Business and Financial Operations/Management | null | null | null | null | null | null
|
news | Xiaoyong Zhu | Feathr: LinkedIn’s feature store is now available on Azure | With the advance of AI and machine learning, companies start to use complex machine learning pipelines in various applications, such as recommendation systems, fraud detection, and more. These complex systems usually require hundreds to thousands of features to support time-sensitive business applications, and the feature pipelines are maintained by different team members across various business groups. | https://azure.microsoft.com/en-us/blog/feathr-linkedin-s-feature-store-is-now-available-on-azure/ | 2022-04-12T09:00:10Z | This blog post is co-authored by David Stein, Senior Staff Software Engineer, Jinghui Mo, Staff Software Engineer, and Hangfei Lin, Staff Software Engineer, all from the Feathr team.
Feature store motivation
With the advance of AI and machine learning, companies are starting to use complex machine learning pipelines in various applications, such as recommendation systems, fraud detection, and more. These complex systems usually require hundreds to thousands of features to support time-sensitive business applications, and the feature pipelines are maintained by different team members across various business groups.
In these machine learning systems, we see many problems that consume much of the energy of machine learning engineers and data scientists, in particular duplicated feature engineering, online-offline skew, and feature serving with low latency.
Figure 1: Illustration of the problems that a feature store solves.
Duplicated feature engineering
- In an organization, thousands of features are buried in different scripts and in different formats; they are not captured, organized, or preserved, and thus cannot be reused and leveraged by teams other than those who generated them.
- Because feature engineering is so important for machine learning models and features cannot be shared, data scientists must duplicate their feature engineering efforts across teams.
Online-offline skew
- Offline training and online inference usually require different data serving pipelines; ensuring consistent features across different environments is expensive.
- Teams are deterred from using real-time data for inference due to the difficulty of serving the right data.
- Providing a convenient way to ensure data point-in-time correctness is key to avoiding label leakage.
Serving features with low latency
- For real-time applications, getting feature lookups from a database for real-time inference without compromising response latency and with high throughput can be challenging.
- Easily accessing features with very low latency is key in many machine learning scenarios, and optimizations need to be done to combine different REST API calls to features.
To solve those problems, the concept of a feature store was developed, so that:
- Features are centralized in an organization and can be reused
- Features can be served in a synchronous way between offline and online environments
- Features can be served in real time with low latency
Introducing Feathr, a battle-tested feature store
Developing a feature store from scratch takes time, and it takes much more time to make it stable, scalable, and user-friendly.
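One reason building this from scratch is harder than it looks is the point-in-time correctness requirement called out above. The sketch below is a generic pandas illustration of the idea, with made-up column names and toy data rather than Feathr's actual API: each training label may only see the latest feature value observed at or before the label's own timestamp, otherwise information from the future leaks into the model.

    import pandas as pd

    # Feature values as they were observed over time, per user (toy data).
    features = pd.DataFrame({
        "user_id": ["u1", "u1", "u2"],
        "event_time": pd.to_datetime(["2022-01-01", "2022-01-10", "2022-01-05"]),
        "purchases_7d": [2, 9, 1],
    }).sort_values("event_time")

    # Training labels, each stamped with the time the label was generated.
    labels = pd.DataFrame({
        "user_id": ["u1", "u2"],
        "label_time": pd.to_datetime(["2022-01-08", "2022-01-20"]),
        "clicked": [1, 0],
    }).sort_values("label_time")

    # Point-in-time-correct join: for every label, take the most recent feature
    # value whose event_time is <= label_time. A naive join on user_id alone
    # could attach the 2022-01-10 value to u1's 2022-01-08 label (label leakage).
    training_set = pd.merge_asof(
        labels, features,
        left_on="label_time", right_on="event_time",
        by="user_id", direction="backward",
    )
    print(training_set[["user_id", "label_time", "purchases_7d", "clicked"]])

Feathr's built-in point-in-time join and sliding-window aggregation operators handle exactly this kind of bookkeeping at Spark scale, which is what the highlights below refer to.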
Feathr is the feature store that has been used in production and battle-tested at LinkedIn for over 6 years, serving LinkedIn's machine learning feature platform with thousands of features in production.
At Microsoft, the LinkedIn team and the Azure team have worked very closely to open source Feathr, make it extensible, and build native integration with Azure. It’s available in this GitHub repository and you can read more about Feathr on the LinkedIn Engineering Blog.
Some of the highlights for Feathr include:
- Scalable with built-in optimizations. For example, based on some internal use cases, Feathr can process billions of rows and PB-scale data with built-in optimizations such as bloom filters and salted joins.
- Rich support for point-in-time joins and aggregations: Feathr has high-performance built-in operators designed for the feature store, including time-based aggregation, sliding window joins, and look-up features, all with point-in-time correctness.
- Highly customizable user-defined functions (UDFs) with native PySpark and Spark SQL support to lower the learning curve for data scientists.
- Pythonic APIs to access everything with a low learning curve; integrated with model building so data scientists can be productive from day one.
- Rich type system including support for embeddings for advanced machine learning/deep learning scenarios. One of the common use cases is to build embeddings for customer profiles, and those embeddings can be reused across an organization in all of its machine learning applications.
- Native cloud integration with a simplified and scalable architecture, which is illustrated in the next section.
- Feature sharing and reuse made easy: Feathr has a built-in feature registry so that features can be easily shared across different teams, boosting team productivity.
Feathr on Azure architecture
The high-level architecture diagram below articulates how a user interacts with Feathr on Azure:
Figure 2: Feathr on Azure architecture.
1. A data or machine learning engineer creates features using their preferred tools (like pandas, Azure Machine Learning, Azure Databricks, and more). These features are ingested into offline stores, which can be either:
   - Azure SQL Database (including serverless), Azure Synapse Dedicated SQL Pool (formerly SQL DW), or
   - Object storage, such as Azure Blob Storage, Azure Data Lake Store, and more.
     The format can be Parquet, Avro, or Delta Lake.
2. The data or machine learning engineer can persist the feature definitions into a central registry, which is built with Azure Purview.
3. The data or machine learning engineer can join all the feature datasets in a point-in-time correct way, with the Feathr Python SDK and with Spark engines such as Azure Synapse or Databricks.
4. The data or machine learning engineer can materialize features into an online store such as Azure Cache for Redis with Active-Active, enabling a multi-primary, multi-write architecture that ensures eventual consistency between clusters.
5. Data scientists or machine learning engineers consume offline features with their favorite machine learning libraries, for example scikit-learn, PyTorch, or TensorFlow, to train a model in their favorite machine learning platform such as Azure Machine Learning, then deploy the models in their favorite environment with services such as an Azure Machine Learning endpoint.
6. The backend system makes a request to the deployed model, which makes a request to Azure Cache for Redis to get the online features with the Feathr Python SDK.
A sample notebook containing the full flow above is located in the Feathr repository for more reference.
Feathr has native integration with Azure and other cloud services. The table below shows these integrations (Feathr component: cloud integrations):
- Offline store – Object Store: Azure Blob Storage, Azure ADLS Gen2, AWS S3
- Offline store – SQL: Azure SQL DB, Azure Synapse Dedicated SQL Pools (formerly SQL DW), Azure SQL in VM, Snowflake
- Online store: Azure Cache for Redis
- Feature Registry: Azure Purview
- Compute Engine: Azure Synapse Spark Pools, Databricks
- Machine Learning Platform: Azure Machine Learning, Jupyter Notebook
- File Format: Parquet, ORC, Avro, Delta Lake
Table 1: Feathr on Azure integration with Azure services.
Installation and getting started
Feathr has a pythonic interface to access all Feathr components, including feature definition and cloud interactions, and is open sourced here. The Feathr Python client can be easily installed with pip:
pip install -U feathr
For more details on getting started, please refer to the Feathr Quickstart Guide. The Feathr team can also be reached in the Feathr community.
Going forward
In this blog, we’ve introduced a battle-tested feature store, called Feathr, which is scalable and enterprise ready, with native Azure integrations. We are dedicated to bringing more functionality into Feathr and the Feathr on Azure integrations, and feel free to give any feedback by raising issues in the Feathr GitHub repository. | Process Automation/Content Synthesis/Recommendation | Computer and Mathematical | null | null | null | null | null | null
|
news | PR Newswire | Bloom Raises $1.1M in Seed Round Funding to Bring Generative AI to eCommerce | Product photo performance engine Bloom has announced that it has raised $1.1 million at the close of its seed funding round. The AI-driven platform has... | https://finance.yahoo.com/news/bloom-raises-1-1m-seed-100000964.html | https://s.yimg.com/uu/api/res/1.2/xmqXF1xMnS0myPgeru4wkA--~B/aD0xNjt3PTE2O2FwcGlkPXl0YWNoeW9u/https://media.zenfs.com/en/prnewswire.com/cfe8be588024e475170780bfaf1b2004 | 2022-03-14T10:00:00Z | Bloom boosts merchants' conversion rates with technology that tracks consumers' on-site interactions and delivers the most desirable product photos, causing shoppers to stay on the page longer and buyNEW YORK, March 14, 2022 /PRNewswire/ - Product photo performance engine Bloom has announced that it has raised $1.1 million at the close of its seed funding round. The AI-driven platform has garnered backing from various high-powered venture firms and angel investors, bringing Bloom one step closer to bringing generative AI to eCommerce.Founded by Aarlo Stone Fish, a Yale alumnus with 20 years of experience as a software engineer and an AI expert, and Sam Dundas, a lifelong entrepreneur and product builder, Bloom uses generative AI to improve customers' online shopping experiences and drive more sales. Bloom's technology tracks a consumer's on-site behaviors such as clicks, zooms, swipes, and bounces and then compiles that behavioral data to deliver pixel-by-pixel the most compelling and personalized images possible. The result is that consumers spend more time on the page and eventually purchase. eCommerce sites integrating Bloom have seen a boost in their conversion rates ranging between 5-14% without the merchant lifting a finger."We're building the engine to power billions of shopper sessions with 100% personalized content," said Dundas. "Bloom is positioned to become an essential layer to the e-commerce tech stack."Bloom's seed funding round was backed by prominent investors who have supported a wide range of startups, including those in the AI sector. Investors in the seed round include Inovia Capital; AIX Ventures, featuring Investing Partners Richard Socher (founder of MetaMind), Pieter Abbeel (co-founder of Covariant), Chris Manning (director of Stanford AI Lab), and Anthony Goldbloom (founder of Kaggle); Forum Ventures, which also invested in Bloom during its pre-seed accelerators; OneValley Investments; and The Y Startup Index, co-founded by Sean Glass, a prolific angel investor and serial entrepreneur.With eCommerce accounting for $4.93 trillion worldwide and expected to grow to $7.39 trillion by 2025 and mobile commerce purchases accounting for 72.9% of all online purchases, Bloom helps its partners get more of these sales with its mobile-optimized product photo engine. By integrating Bloom into their sites, merchants, especially those with $1+ million in sales, can enhance the online shopping experience with a done-for-you AI-driven solution that closes more sales without investing additional precious time or human resources into the process."Showing the same product photos to every shopper is medieval," said Shaun Johnson, founding partner at AIX Ventures and seed round investor. "A shopper should see a photo that is generated to meet their specific needs, including the model type and environment. 
Bloom is this dynamic photo solution that is taking e-commerce photos into the future."Bloom initially came to market with a generative AI product capable of creating photo-realistic fashion models, indistinguishable from real-life people. After launching Bloom's beta version and collecting feedback from 200 eCommerce merchants, Stone Fish and Dundas discovered that brands don't understand how their existing assets perform. Wanting to find a solution to this problem that delivered a clear ROI, Bloom was born. The seed funding will enable the Bloom team to bring its solutions to more merchants and continue to expand its capabilities, eventually achieving its ultimate vision of linking its performance engine with generative AI in real-time.Bloom is currently available on Shopify, accessible to the over two million eCommerce merchants on the platform.View original content:https://www.prnewswire.com/news-releases/bloom-raises-1-1m-in-seed-round-funding-to-bring-generative-ai-to-ecommerce-301499268.htmlSOURCE Bloom AI | Personalization/Content Creation | Management/Business and Financial Operations | null | null | null | null | null | null |