Tasks: Text Classification
Modalities: Text
Sub-tasks: multi-class-classification
Languages: English
Size: 10K - 100K
Adding GPAI Initiatives task
- data/AIInitiatives/test.csv +0 -0
- data/AIInitiatives/train.csv +66 -0
- raft.py +68 -29
data/AIInitiatives/test.csv
ADDED
The diff for this file is too large to render.
See raw diff
data/AIInitiatives/train.csv
ADDED
@@ -0,0 +1,66 @@
Name,Organization / Author,Brief Description,Sector,Geographical scope,Target Audience,Stage of Development,Date started,Country/region of origin,Notes (including specific SDG(s) and OECD AI Principles addressed),AI AND ETHICS (Ethical frameworks and guidelines promoting and fostering responsible AI),"AI AND GOVERNANCE (Governance mechanisms operationalizing responsible AI, including auditing mechanisms, risk assessments, standards, certifications, corporate governance frameworks, etc.)",AI AND SOCIAL GOOD (Applied projects advancing SDGs responsibly)
Limits on autonomy in weapon systems. Identifying practical elements of human control,"Stockholm International Peace Research Institute, International Committee of the Red Cross.","There is wide recognition that the need to preserve human control over weapon systems and the use of force in armed conflict will require limits on autonomous weapon systems (AWS). This report from the Stockholm International Peace Research Institute and the International Committee of the Red Cross offers in-depth analysis of the type and degree of human control that is required to mitigate the risks posed by AWS. It proposes three types of control measures to reduce or compensate for the unpredictability introduced by AWS and associated risks for civilians: controls on the weapon’s parameters such as types of targets, controls on the environment of use and controls in the form of human supervision. Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control is a comprehensive examination of the specific controls on AWS needed to ensure human control over the use force, and to address legal, ethical and operational concerns. It provides policymakers with practical guidance on how these control measures should form the basis of internationally agreed limits on AWS—whether rules, standards or best practices.",Academia and International Organisation,Global,"Policy makers, diplomats, civil society, international organizations",Published report,2020,Sweden,"SDG 16 Peace, justice, strong institutions",Yes,No,No
”Ethics of Artificial Intelligence and Robotics” (Stanford Encyclopedia of Philosophy),Vincent C. Müller,"Encyclopedic entry in Stanford Encyclopedia of Philosophy. It covers ethical issues that arise with AI systems, i.e. privacy, manipulation, opacity, bias, human-robot interactions, employment, autonomy, machine ethics, artificial moral agency, and the problem of a
possible future AI superintelligence. - The initiative's mission is: 'None'",Academia,Global,Ethics researchers,Published draft,April 2020,United States,Final version in Winter 2020 edition,Yes,No,No
10 Principles of responsible AI,Women Leading in AI,"A set of practical guidelines defined by an inclusive community in a format that resonates with our government, making policy recommendations to ensure AI is fair, free from bias and promotes equality. - The initiative's mission is: 'To grow a diverse network of women and supportive men in AI, to promote fairness, equality and remove bias from AI, share research, and influence politicians on these topics.'",Civil society,Global,"public sector (na- tional and inter- national policy
makers)",Published,2019,International,,Yes,No,No
2019 Report and AI Utilization Principles, https://www.soumu.go.jp/main_content/000637844.pdf, https://www.soumu.go.jp/main_content/000658284.pdf, https://www.soumu.go.jp/main_content/000658286.pdf,"Ministry of Internal Affairs and Communications (MIC), the Government of Japan. 2018.","Overview of recent trends in AI governance/networking, Principles for the utilization of AI, and future challenges - The initiative's mission is: 'Promoting benefits of, mitigating risks of, and fostering trust in AI systems'",Public,Japan,Policymakers,Published,No,Yes,No
"2020 Report, The Conference toward AI Network Society","Ministry of Internal Affairs and Communications (MIC), the Government of Japan","Outline of a report on safe, secure, and trustworthy social implementation of AI, focusing on interactions between different actors. - The initiative's mission is: 'To create an environment for the safe, secure, and trustworthy social implementation of AI",Public,Japan,Policymakers,Published,,Japan,,No,Yes,No
3A Institute,"Genevieve Bell & the Australian National University, College of Engineering and Computer Science, CSIRO:Data61","Who is building, managing and decommissioning our Ai-enabled future?
This question is at the heart of our mission. Located within the College of Engineering and Computer Science at the Australian National University we are guiding and accelerating into existence a new branch of engineering centred on cyber-physical systems and artificial intelligence. Our mission is to build the skills and knowledge we need to help shape the future safely, sustainably, responsibly.",Academia,Australia,"Industry, government, community organisations, startups and education",Running,September 2017,Australia,,No,No,Yes
"AI
Based Referral System
for Patients With
Diabetic Retinopathy","Government of the State of Jalisco, Universidad Autónoma de Guadalajara, Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Centro Médico de Occidente","A diabetic retinopathy screening program for early detection and treament through convolutional neural network, based on Mexican clinical guidelines, that will be implemented in three hospitals in Mexico - for early detection and treatment of diabetic retinopathy. To facilitate the early detection and treatment of diabetic retinopathy, which may lead to loss of vision and blindness. Diabetes mellitus is one of the major health challenges in Mexico and Latin America.",Public and Academia,Local(Mexico),Medical community with a focus on those working on diabetic patients,Test phase,2019,Mexico,ODS 3 Good health and well being,No,No,Yes
AI - Our approach / Responsible AI, https://www.microsoft.com/en-us/ai/our-approach,Microsoft,"Outline of principles for the development of AI that ""puts people first"", and a method for operationalizing these principles - The initiative's mission is: 'To innovate responsibly, empower others and foster positive impact'",Private,Global,Employees,Published,November 2018,United States,No,Yes,No
AI 4 Development Agency,AI 4 Development Agency,"A tech non-profit that develops solutions and promotes civic education & application of AI globally. - The initiative's mission is: 'Create A Better Tomorrow in which trust is re-established, opportunities are
equally distributed, and societies are empowered for the Future of Work'",Civil society,Global,Citizens,Publically launched,May 2019,Austria,,No,No,Yes
AI Civic Forum,"Algora Lab, University of Montreal, TFS, Mila","The AI Civic Forum (AICF) is a multi-stakeholder platform to proactively engage people around the world in discussions on AI ethics and governance. Anchored in a robust collective intelligence process, the objectives are delivered through four key deliverables: face-to-face deliberations; an online platform; an AI Literacy Toolkit and a Trainer-the-trainer Playbook. The AI Civic Forum is co-led by The Future Society, AlgoraLab and Mila. - The initiative's mission is: 'Bring together diverse communities of citizens, policymakers, academics, experts, private sector representatives and other key stakeholders to deliberate on the ethical design, deployment, and governance of AI.'",Mixed academia/non-profit,Global,"Civil Society, Policymakers, Citizens",Set-up in progress,June 2019,United States,,No,Yes,No
AI Commons,AI Commons,"Initiative bringing stakeholders together to address the world’s greatest challenges using AI. - The initiative's mission is: 'To allow anyone, anywhere, to benefit from the possibilities that AI can provide.'",Mixed,Global,All,Publically launched,2017,United States & France,Website in maintenance mode (checked on Oct 8 2020),No,No,Yes
AI Ethics Guidelines Global Inventory,AlgorithmWatch,Catalogue/inventory of ethical AI frameworks - The initiative's mission is: 'To map frameworks that seek to set out principles of how systems for automated decision-making (ADM) can be developed and implemented ethically',Non-profit,Europe/US,All,Publically launched,2020,Germany,,No,Yes,No
AI Ethics Guidelines: European and Global Perspectives,Ad Hoc Committee on Artificial Intelligence,"Maps the relevant corpus of soft law documents and other ethical-legal frameworks developed by governmental and non-governmental organisations globally - The initiative's mission is: 'To monitor the ever-evolving spectrum of non-mandatory governance instruments; to prospectively assess the impact of AI on ethical principles, human rights, the rule of law and democracy'",International organisation,Europe,"Developers, funding agencies, governmental and inter-governmental organisations and other relevant stakeholders involved in the advancement of ethically responsible innovation in AI",Published,2020,International,,Yes,Yes,No
AI Ethics Principles,"Department of Industry Innovation and Science, Australian Government","Eight voluntary AI ethics principles for designing, developing, integrating or using AI systems - The initiative's mission is: 'To achieve better outcomes, reduce the risk of negative impact and practice the highest standards of ethical business and good governance'",Public,Australia,Businesses,Published,Nov 2019,Australia,,Yes,No,No
AI explainability 360,IBM,Extensible open source toolkit - The initiative's mission is: 'To comprehend how machine learning models predict labels by various means throughout the AI application lifecycle',Private,Global,Developers,Publically launched,2019,United States,,No,Yes,No
AI Explained - Non-technical Guide for Policymakers,AI for Peace,"A manual to explain AI basics to policymakers and interested individuals without technical background. - The initiative's mission is: 'Demystify what AI is, and demonstrate how it is already altering our lives and societies we live in.'",Civil society,Global,"Policymakers, Citizens",Published,February 2020,International,,No,Yes,Yes
AI factsheets 360,IBM,Template for capturing relevant information about the creation and deployment of an AI model or service - The initiative's mission is: 'To foster trust in AI by increasing transparency and enabling governance.',Private,Global,Developers,Published,2019,United States,,No,Yes,No
AI fairness 360,IBM,"Extensible open source toolkit - The initiative's mission is: 'To examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle'",Private,Global,Developers,Publically launched,2019,United States,,No,Yes,No
AI for Good Global Summit,International Telecommunications Union,Yearly summit in partnership with 35 UN agencies to foster discussion and coordination on fulfilling the Sustainable Development Goals using AI. - The initiative's mission is: 'Identify practical applications of AI and scale those solutions for global impact',Mixed,Global,All,Publically launched,2017,Switzerland,,No,No,Yes
AI for Humanity,Government of France,"France's implementation programme for the national strategy on AI, following up on the original report (Villani Mission). It includes a yearly conference bringing stakeholders together to discuss and coordinate progress. - The initiative's mission is: 'Fully seize the opportunities offered by AI now, while designing the framework to regulate it.'",Public,France,All,Publically launched,March 2018,France,,No,Yes,Yes
AI for SDG Center,The Future Society,"Designed as a public-private-people partnership factory, the Center will work with international organizations, businesses, academia and civil society organizations to engineer new business, ethics, and value-sharing models to deploy the best AI solutions and platforms globally towards helping solve humanity’s greatest challenges, as outlined in the UN Sustainable Development Goals (SDG). - The initiative's mission is: 'Deploy the best AI solutions and platforms globally to help solve humanity's greatest challenges, as outlined in the UN Sustainable Development Goals'",Civil society,Global,All,Set-up in progress,February 2018,United Arab Emirates,,No,No,Yes
AI for SDGs Think Tank,Research Center for AI Ethics and Sustainable Development at the Beijing Academy of Artificial Intelligence.,"Public online service compiling and analysing AI projects and proposals that impacts the UN SDGs, both positively and negatively. - The initiative's mission is: 'Promote the positive use of AI for Sustainable Development and investigate negative impact of AI on sustainable development. '",Non-profit,Global,All,Publically launched,June 2020,China,,No,No,Yes
AI Governance White Paper (人工智能治理白皮书),Chinese Academy of ICT (CAICT) & Artificial Intelligence Industry Alliance,"Held yearly in Dubai on the occasion of the World Government Summit (WGS) under the aegis of the UAE State Minister for AI, the Global Governance of AI Roundtable (GGAR) is a revolving international multi-stakeholder governance process that brings together a diverse community of 250 global experts and practitioners from government, business, academia, international organizations, and civil society.
- The initiative's mission is: 'The Global Governance of AI Roundtable (GGAR) has been envisioned and designed as a unique collective intelligence exercise to help shape and deploy global, but culturally adaptable, norms for the governance of artificial intelligence. Building upon the first edition of held in February 2018, the 2019 edition began in August with an intensive six-months preparation and curation period. The community of participants was brought together through regular video conferences with the objective of shaping the 2019 agenda and driving the research effort managed in parallel by our team of AI Policy Researchers. This led to the publication of 14 background research papers on different topics ranging from agile governance, cybersecurity, geopolitics, explainability, international development, sustainability, and more. These research papers were informed by the organization of almost 90 expert calls, which also helped architect the agenda for a full-day Round-table workshop with 4 working sessions and 47 subcommittees. This combined community building, research, and agenda-setting effort were done in partnership with a host of prestigious international organizations including the OECD, UNESCO, IEEE, the Council on Extended Intelligence, and the Global Data Commons Task Force. After providing each partner-organization with a platform to meet and advance its own goals and initiatives on AI policy during two days ahead of the World Government Summit (WGS), the Global Governance of AI Forum culminated into a one-day big Roundtable Collective intelligence Workshop held on the first day of Summit. The Roundtable was designed as a prolongation of the collective intelligence effort initiated during preparation. It had no panels, no keynotes; only curated breakout sessions to maximize productivity and outcome. The insights and recommendations have been captured into a comprehensive report, which includes an action-oriented summary for policymakers – see The Report of the 2018 edition.'",Private Sector Alliance & Academia,"Global, China","Policymakers, Regulators",Published,9/29/2020,China,,No,Yes,No
AI Guidelines,Deutsche Telekom,Nine self-binding guidelines for how they should use AI and develop AI-based products and services in the future - The initiative's mission is: 'To guide the use of AI in positive ways',Private,Germany,Employees,Published,2018,Germany,,Yes,Yes,No
"AI in the UK: ready, willing and able?","UK House of Lords, Select Committee on Artificial Intelligence","Five principles that could become the basis for a shared ethical AI framework. While AI-specific regulation is not appropriate at this stage, such
a framework provides clarity in the short term, and could underpin regulation, should it prove to be necessary, in the future. - The initiative's mission is: 'To recommend a cross-sector ethical code of conduct across the UK.'",Public,UK,public sector (UK government),Published,April 2018,United Kingdom,,Yes,Yes,No
AI Now 2017 Report,AI Now Institute,"Report which identifies new developments, emerging challenges and makes recommendations in four areas: labor and automation, bias and inclusion, rights and liberties, and ethics and governance - The initiative's mission is: 'To ensure that the benefits of AI will be shared broadly, and that risks can be identified and mitigated.'",Academia,Global,"multiple (core public agencies, companies, industry, universities, conferences, other stakeholders)",Published,Oct 2017,United States,,Yes,No,No
AI Now 2018 Report (inc. Algorithmic Impact Assessment Framework),AI Now Institute,"Report on social implications of AI in 2018 - The initiative's mission is: 'To understand the social implications of AI technologies, with a focus on questions of accountability'",Academia,Global,"multiple (core public agencies, companies, industry, universities, conferences, other stakeholders)",Published,Dec 2018,United States,,No,Yes,No
AI Principles & Ethics,Smart Dubai,"Four key AI principles for development and use. 1) AI systems should be fair, transparent, accountable and understandable ; 2) AI systems should be safe and secure, and should serve and protect humanity ; 3) AI should be beneficial to humans and aligned with human values, in both the long and short term ; 4) AI should benefit all people in society, be governed globally, and respect dignity and people rights - The initiative's mission is: 'To allow Dubai to excel in the development and use of AI in ways that boost innovation and deliver human benefit and happiness.'",Public,UAE,"multiple (citizens, developers, public sector)",Published,2018,United Arab Emirates,,Yes,Yes,No
AI Principles of Telefónica,Telefonica,"Corporate AI principles, including fair AI, human-centred AI, transparent and explainable AI, privacy and security by design, and working with partners. - The initiative's mission is: 'To set the principlesTelefonica abides when designing, developing or using AI. To provide commitment to implementing them in their products and services through training, governance, and by-design.'",Private,Global,Employees,Published,October 2018,Spain,,Yes,Yes,No
AI R&D Principles,"Ministry of Internal Affairs and Communications (MIC), the Government of Japan. 2017.","Proposal of guidelines that will be internationally shared as non-regulatory and non-binding soft law. - The initiative's mission is: '1) Accelerate the participation of multistakeholders involved in R&D and utilization of AI (such as developers, service providers, users including civil society, governments, and international organizations) at both national and international levels, in the discussions towards establishing ""AI R&D Guidelines"" and “AI Utilization Guidelines”
2) Promote the international sharing of best practices in the R&D and utilization of AI, which will help gain the trust of users and the society in AI and facilitate the R&D and utilization of AI.'",Public,Japan,Policymakers,Published,2017,Japan,,Yes,Yes,No
AI Repository,International Telecommunication Union (ITU),"Catalogue/inventory of AI initiatives which accelerate progress towards the “17 UN Sustainable Development Goals (SDGs)” - The initiative's mission is: 'To identify AI related projects, research initiatives, think-tanks and organizations that can accelerate progress towards the “17 UN Sustainable Development Goals (SDGs)”'",International organisation,Global,All,Publically launched,2020,Europe,,No,No,Yes
"AI Utilization Guidelines, Practical Reference for AI Utilization","The Conference toward AI Network Society, MIC","The guidelines consist of the AI Utilization Principles and the commentary on them. The AI Utilization Principles have been arranged based on a draft of what is expected to be taken into consideration for the promotion of the benefits of AI with risk mitigation. This Guidelines attempt to give specific descriptions for measures to be taken to realize each principle. Since the Guidelines are formulated with the participation of multiple stakeholders, it can be used as a common reference by stakeholders at all levels for AI utilization. The Guidelines are intended to encourage AI users to recognize the proper consideration needed in relation to AI utilization and to take action voluntarily. This can be done by referring to the Guidelines when they establish their own AI development and utilization principles based on the ""Social Principles of Human-Centric AI"". Furthermore, it may be possible for AI service providers and business users to add value to their AI services and business utilizing AI by undertaking such voluntary efforts.",Public,Japan,Private Sector,Published,"9 August, 2019",Japan,,No,Yes,No
AI Utilization Strategy for an AI-Ready Society,Keidanren,Slideshow which evaluates strategies set up by others countries and address how industry can best utilize AI - The initiative's mission is: 'To inform Japan's vision on AI.',Private sector alliance,Japan,Policymakers & Private Sector,Published,February 2019,Japan,,No,Yes,No
"AI, society and social good",The Royal Society,"Policy project by The Royal Society composed of reports, events and publications around AI's impact on society and AI stewardship. - The initiative's mission is: 'Careful stewardship of AI, where the benefits of these technologies are shared across society.'",Academia,Global,"Policymakers, Academia",Publically launched,2017,United Kingdom,,No,Yes,Yes
"AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations",AI4People,"AI4People is an an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. The article reports its findings. It includes core opportunities and risks of AI for society; presents a synthesis of five ethical principles that should undergird its development and adoption; and offers 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. - The initiative's mission is: 'To move the dialogue around AI ethics forward, constructively, from principles to proposed policies, best practices, and concrete recommendations for new strategies'",Academia,Global,Policymakers,Published,November 2018,Europe,,Yes,No,Yes
AIgroKB: A neural semantic inference data base for sustainable agriculture research,"Universidad Tecnológica Mixteca, México; Universidad del Estado de Mato Grosso, Brasil Contact: [email protected]","A prototype of a neural semantic inference algorithm is developed to learn, without human supervision, from a database of sustainable agriculture literature (e.g. AGRIS). The algorithm takes semantic triplets of the form {subject, verb, object} as training data. These a are extracted from the literature using an Open Information Extraction system, where each item is a phrase (not a sigle word or term). The aim is to learn a map among the phrases of the semantic triplets, such that the algorithm is able to infer a rank of suggestions for the missing phrase of any unseen triplet, given the other two phrases. The application was on scientific literature taken from journals related with sustainable agriculture, which has the special motivation of contributing with AI systems to emergent needs for ecological intensification, agroecological transition, sustainable food production, food security and related fields. To facilitate access and interpretation of knowledge in databases related to sustainable food production, food security and related fields.",Academia,Local (Mexico),"Researchers (in public and private institutions) in sustainable food production, food security and related fields.",Development,2019,Mexico,"SDG Zero Hunger, Responsible consumption and production; Ignacio Arroyo Fernández <[email protected]>",No,No,Yes
Algorithm Charter for Aotearoa New Zealand,"New Zealand Government, Stats NZ","Provides a risk assessment framework and a list of commitments to sign for government agencies using algorithms. Commitments include Transparency; Partnership; People; Data; Privacy, Ethics and Human Rights; Human Oversight - The initiative's mission is: 'Improving government transparency and accountability without stifling innovation or causing undue compliance burden'",Public,New Zealand,Civil servants,"Published, to be reviewed after 12 months",July 2020,New Zealand,,Yes,Yes,No
AlgorithmWatch,AlgorithmWatch,"AlgorithmWatch is a non-profit research and advocacy organization that is committed to watch, unpack and analyze algorithmic / automated decision-making (ADM) systems and their impact on society. While the prudent use of ADM systems can benefit individuals and communities, they come with great risks. In order to protect human autonomy and fundamental rights and maximize the public good, we consider it crucial to hold ADM systems accountable to democratic control. Use of ADM systems that significantly affect individuals' and collective rights must not only be made public in clear and accessible ways, individuals must also be able to understand how decisions are reached and to contest them if necessary. Therefore, we enable citizens to better understand ADM systems and develop ways to achieve democratic governance of these processes – with a mix of technologies, regulation, and suitable oversight institutions. With this, we strive to contribute to a fair and inclusive society and to maximize the benefit of ADM systems for society at large.",Civil Society,Europe,"general audience, policy makers, civil society, private sector, academia",Full-fledged watch-dog organisation with 16 staff,May 2016,Germany,,Yes,Yes,No
ALLAI,"Catelijne Muller, Virginia Dignum, Aimee van Wynsberghe","ALLAI’s vision is a world where AI is developed, deployed and used responsibly, i.e. in a safe and sustainable manner and in line with our ethical principles, our societal values, existing and new laws and regulations, human rights, democracy and the rule of law. We call this Responsible AI.
ALLAI’s mission is to take into account a wide variety of AI impact domains such as safety, autonomy, lawfulness, inclusiveness and transparency. These impact domains spread across society, from the public to the private sector, from labor relations to education and from the individual to the collective.
ALLAI fosters multi-disciplinarity and involves a variety of experts in its activities(e.g. AI and data-scientists, legal scholars, ethicists and behavioral scientists). ALLAI’s work is aimed at various stakeholders such as policy-makers, social partners, consumers, private and public sector and society at large.",Non-profit,Europe / international,"companies, government organisations, general public",running,2018,Netherlands,aligned the European Trustworthy AI requirements and guidelines,Yes,Yes,No
An Ethical Framework for Artificial Intelligence,Tencent Institute,"Four principles: make the future development of AI needs available, reliable, comprehensible, and controllable - The initiative's mission is: 'Not specified'",Private,China,Not specified,Published,2017,China,,Yes,Yes,No
Artificial Intelligence (AI) in Health,Royal College of Physicians,"The RCP's position statement on artificial intelligence (AI) in health. - The initiative's mission is: 'To urge industry to address real-world challenges, doctors to appraise the technology and regulators to develop guidance and evaluation methods.'",Civil society,UK,"multiple (industry, doctors, regulators)",Published,September 2018,United Kingdom,,Yes,No,No
Artificial Intelligence Against Modern Slavery (AIMS),"Walk Free, The Future Society, Business Human Rights Resource Centre, WikiRate",Project to produce a tool to evaluate compliance with anti slavery regulations by analysing business statement required under the UK Modern Slavery Act. - The initiative's mission is: 'To help eradicate modern slavery.',Civil society,"United Kingdom, Australia","Industry, Regulators",Pilot,June 2020,Australia,,No,No,Yes
Artificial Intelligence and Machine Learning: Policy Paper,Internet Society,"The paper explains the basics of the technology behind AI, identifies the key considerations and challenges surrounding the technology, and provides several high-level principles and recommendations to follow when dealing with the technology. - The initiative's mission is: 'To provide an introduction to AI to policymakers and other stakeholders in the wider Internet ecosystem.'",Non-profit,Global,"multiple (policy- makers, other stakeholders in
the wider Inter- net ecosystem)",Published,Apr 2017,International,,Yes,Yes,No
Artificial intelligence and privacy,The Norwegian Data Protection Authority,"Recommendations for privacy friendly development and use of AI. Report aims to provide greater technical detail in describing artificial intelligence (AI), while addressing relevant AI challenges associated with the data protection principles embodied in the GDPR. - The initiative's mission is: 'Further stakeholder knowledge about the privacy implications of artificial intelligence and discuss them, not only in order to safeguard the right to privacy of the individual, but also to meet the requirements of society at large. '",Public,Norway,"multiple (developers, system suppliers, organisations, end users, authorities)",Published,January 2018,Norway,,Yes,Yes,No
"Artificial Intelligence for Europe: Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee, and the Committee of the Regions",European Commission,"Describes an approach to AI which highlights the need to join forces at European level, to ensure that all Europeans are part of the digital transformation, that adequate resources are devoted to AI and that the Union’s values and fundamental rights are at the forefront of the AI landscape. - The initiative's mission is: 'To place the power of AI at the service of human progress'",International organisation,Europe,Policymakers,Published,2018,Belgium,,Yes,Yes,No
Artificial Intelligence Industry Code of Conduct (Consultation Version),Artificial Intelligence Industry Alliance,"Code of conduct for AI developers - The initiative's mission is: 'To promote the ethical self-discipline of China's artificial intelligence industry, build consensus, and promote the healthy development of artificial intelligence'",Professional association,China,All,Consultation published,2019,China,,Yes,No,No
Artificial Intelligence Standardization White Paper,Standards Administration of China,Describes China’s approach to standards-setting for artificial intelligence - The initiative's mission is: 'The joint promotion of AI and its industrial development.',Public,China,Developers,"Published v1 (""will be revised constantly in the future based on the developing requirements of technologies, industries, and standardization"")",Jan 2018,China,,Yes,Yes,No
Artificial Intelligence Strategy,"German Federal Ministry of Education and Research, the Federal Ministry for Economic Affairs and Energy, and the Federal Ministry of Labour and Social Affairs","German national AI strategy, emphasizing individual rights of freedom, autonomy, personal rights, the freedom of decision of the individual.",Public,Germany,Policymakers,Published,2018,Germany,,Yes,Yes,No
"Artificial intelligence, values and alignment",DeepMind,"Research paper examining the philosophical questions that arise in the context of AI alignment - i.e. how to ensure that AI systems are properly aligned with human values.
",Private,Global,All,Published,Jan 2020,United Kingdom,,No,Yes,No
Artificial Intelligence: A European Perspective,European Commission,"Report that presents a European view of AI - The initiative's mission is: 'To provide a balanced assessment of opportunities and challenges for AI from a European perspective, and support the development of European action in the global context'",International organisation,Europe,Policymakers,Published,2018,Europe,,Yes,Yes,No
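The file uses quoted fields that can span several physical lines, so the 66 lines above collapse into fewer logical records of 13 columns each. A minimal sketch of how the new file could be inspected locally, reusing the reader settings from raft.py (the path relative to the repository root is an assumption):

import csv

# Inspect data/AIInitiatives/train.csv with the same reader settings raft.py uses,
# so quoted fields that span several physical lines stay inside one record.
with open("data/AIInitiatives/train.csv", encoding="utf-8") as f:
    reader = csv.reader(f, quotechar='"', delimiter=",",
                        quoting=csv.QUOTE_ALL, skipinitialspace=True)
    header = next(reader)  # 13 columns, ending with the three Yes/No label columns
    for row in reader:
        name, labels = row[0], row[-3:]  # Ethics, Governance, Social Good labels
        print(name, labels)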
raft.py
CHANGED
@@ -14,14 +14,12 @@
 # limitations under the License.
 """RAFT AI papers, test set."""
 
-
 import csv
 import json
 import os
 
 import datasets
 
-
 # TODO: Add BibTeX citation
 # Find for instance the citation on arxiv or on the dataset repo/website
 _CITATION = """\
@@ -50,13 +48,19 @@ _LICENSE = ""
 # The HuggingFace dataset library don't host the datasets but only point to the original files
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
 _URLs = {
-    '
-
-
+    'TAISafety': {
+        'train': "./data/TAISafety/train.csv",
+        'test': "./data/TAISafety/test.csv"
+    },
+    'AIInitiatives': {
+        'train': "./data/AIInitiatives/train.csv",
+        'test': "./data/AIInitiatives/test.csv"
+    }
+}  # TODO: Generate these automatically.
 
 
-class
-    """Dataset
+class Raft(datasets.GeneratorBasedBuilder):
+    """RAFT Dataset."""
 
     VERSION = datasets.Version("1.1.0")
 
@@ -72,18 +76,21 @@ class RaftAisafetyTest(datasets.GeneratorBasedBuilder):
     # data = datasets.load_dataset('my_dataset', 'first_domain')
     # data = datasets.load_dataset('my_dataset', 'second_domain')
     BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name="
+        datasets.BuilderConfig(name="TAISafety-binary", version=VERSION,
                                description="Decide whether the papers focus on AI safety methods."),
-        datasets.BuilderConfig(name="
+        datasets.BuilderConfig(name="TAISafety-multiclass", version=VERSION,
                                description="If a paper has AI safety methods, determine if it is meta"
                                            "safety or technical safety."),
+        datasets.BuilderConfig(name="AIInitiatives-multilabel", version=VERSION,
+                               description="For each initiative, decide which (if any) of Ethics, "
+                                           "Governance, and Social Good apply to the initiative's AI goals")
     ]
 
-    DEFAULT_CONFIG_NAME = "
+    DEFAULT_CONFIG_NAME = "TAISafety/binary"  # It's not mandatory to have a default configuration. Just use one if it make sense.
 
     def _info(self):
         # TODO: This method specifies the datasets.DatasetInfo object which contains informations and typings for the dataset
-        if self.config.name
+        if self.config.name.startswith("TAISafety"):
             features = datasets.Features(
                 {
                     "title": datasets.Value("string"),
@@ -92,13 +99,22 @@ class RaftAisafetyTest(datasets.GeneratorBasedBuilder):
                     "answer": datasets.Value("string"),
                 }
             )
-
+        elif self.config.name.startswith("AIInitiatives"):
             features = datasets.Features(
                 {
-                    "
-                    "
-                    "
-                    "
+                    "name": datasets.Value("string"),
+                    "organization": datasets.Value("string"),
+                    "description": datasets.Value("string"),
+                    "sector": datasets.Value("string"),
+                    "scope": datasets.Value("string"),
+                    "audience": datasets.Value("string"),
+                    "stage": datasets.Value("string"),
+                    "date": datasets.Value("string"),
+                    "country": datasets.Value("string"),
+                    "notes": datasets.Value("string"),
+                    "answer_ethics": datasets.Value("string"),
+                    "answer_governance": datasets.Value("string"),
+                    "answer_socialgood": datasets.Value("string"),
                 }
             )
         return datasets.DatasetInfo(
@@ -127,22 +143,25 @@ class RaftAisafetyTest(datasets.GeneratorBasedBuilder):
         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
         data_dir = dl_manager.download_and_extract(_URLs)
+        dataset = self.config.name.split("-")[0]
        return [
             datasets.SplitGenerator(name=datasets.Split.TRAIN,
-                                    gen_kwargs={"filepath": data_dir['train'],
+                                    gen_kwargs={"filepath": data_dir[dataset]['train'],
                                                 "split": "train"}),
             datasets.SplitGenerator(name=datasets.Split.TEST,
-                                    gen_kwargs={"filepath": data_dir['test'],
+                                    gen_kwargs={"filepath": data_dir[dataset]['test'],
                                                 "split": "test"})
         ]
 
     def _generate_examples(
-
+        self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
     ):
         """ Yields examples as (key, example) tuples. """
         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
         # The `key` is here for legacy reason (tfds) and is not important in itself.
 
+        dataset, config = self.config.name.split("-")
+
         with open(filepath, encoding="utf-8") as f:
             csv_reader = csv.reader(
                 f, quotechar='"', delimiter=",", quoting=csv.QUOTE_ALL, skipinitialspace=True
@@ -150,13 +169,33 @@ class RaftAisafetyTest(datasets.GeneratorBasedBuilder):
             for id_, row in enumerate(csv_reader):
                 if id_ == 0:  # First row is column names
                     continue
-                if
-
-
-
-
-
-
-
-
-
+                if dataset == "TAISafety":
+                    if split == "train":
+                        title, publication, abstract, category, binary = row
+                        answer = category if config == "multiclass" else binary
+                    elif split == "test":
+                        title, publication, abstract = row
+                        answer = ""
+                    yield id_, {"title": title,
+                                "publication": publication,
+                                "abstract": abstract,
+                                "answer": answer}
+                if dataset == "AIInitiatives":
+                    name, organization, description, sector, scope, audience, \
+                        stage, date, country, notes, answer_ethics, \
+                        answer_governance, answer_socialgood = row
+                    if split == "test":
+                        answer = ""
+                    yield id_, {"name": name,
+                                "organization": organization,
+                                "description": description,
+                                "sector": sector,
+                                "scope": scope,
+                                "audience": audience,
+                                "stage": stage,
+                                "date": date,
+                                "country": country,
+                                "notes": notes,
+                                "answer_ethics": answer_ethics,
+                                "answer_governance": answer_governance,
+                                "answer_socialgood": answer_socialgood}
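With the new config registered in raft.py, the task could presumably be loaded through the local loading script; a minimal sketch, assuming raft.py and the data/ directory sit in the current working directory:

import datasets

# Load the new multilabel config from the local loading script.
ai_initiatives = datasets.load_dataset("./raft.py", "AIInitiatives-multilabel")

example = ai_initiatives["train"][0]
print(example["name"])
print(example["answer_ethics"], example["answer_governance"], example["answer_socialgood"])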