AI 360: Expert evaluation of future implications of AI and how to steer for societal benefit
In March 2019, recognised experts in rights and ethics, law, social science, culture, politics and economy met in Copenhagen for a multi-dimensional and thorough treatment of AI and its implications for our future societies. The AI 360 methodology is a multidisciplinary approach to identifying the most important societal implications of AI, and for producing concrete action-oriented solutions.
They identified a number of uncertainties and challenges for the responsible development of AI. They also produced a number of recommendations for how to increase the likelihood of steering AI towards societal benefits. The recommendations were:
On the political implications of AI:
- Open science, open innovation basic principles.
- Responsible Research and Innovation (RRI) and fairness: AI should support a better, more open and fair political culture.
- The EU's use of algorithmic governance should improve.
- Skills for AI should be part of fundamental human rights.
- A quality mark for companies to show that they operate in compliance with principles of openness and trust.
- Trust and trustworthiness should be built through the judiciary; an AI ombudsperson could be introduced into the system.
- Take inspiration from the recommendations of the High-Level Expert Group on Artificial Intelligence (AI HLEG).
- Labelling is needed, together with an agency that checks the labels of both government and companies.
On the Legal framework, Rights and Ethics:
- Ensure digital and online anonymity by default.
- Establish a national system for handling consent related to data, so that algorithms only have access to data for which consent has been given, and so that citizens can give and revoke consent online and control the use of their data.
- Implement an IT-architecture in front of the databases, which allows algorithms to utilize anonymised data, but without data leaving the database.
- Implement certification or approval of algorithms on a case-by-case basis. Inspiration can be found in legislation on chemicals and gene technology.
- Establish rules and institutions that, in special instances, can allow direct access to data when necessary, and that can approve de-anonymisation when it is in the interest of citizens.
- Implement required routine tests for bias in algorithms, along with mandatory review of the tests and reporting in annual reports.
On the economic dimension:
- Political prioritization: To realise the full potential of AI and the related economic potential, we need to shift some public investments from physical infrastructure to AI-enabling infrastructure. This can only be done through political dialogue and reprioritization of funds.
- Dynamic consent: Dynamic consent is a more advanced form of consent, which empowers and protects individual data contributors better than the baseline (i.e. informed consent).
- AI-framework: A framework for developing responsible AI with parameters for: Transparent AI, Reversible AI, Coachable AI, Explainable AI and Interpretable AI.
- Proof of sustainability: A roadmap for sustainable AI technology must include ethical and responsible considerations. Example: proof of sustainability. To gain access to a market, a company must provide a sustainable product under an agile regulation regime: codes of conduct, standards, etc.
- Education: Retraining of workers/citizens, e.g. a scheme where every citizen receives a number of tokens that can be traded for reskilling and education.
- New tax regulation: Company tax should be levied on turnover within the EU, to ensure that tax is paid.
- Solving a democratic problem: Method 1: stimulate broad public debate. Method 2: a cultivation project on public awareness of the fair distribution of costs and benefits. (This should not be a political left-vs-right discussion.)
On the societal implications:
- Education in IT, coding, and the knowledge of IT and social conditions should be treated as an investment, because it leads to better and more responsible IT and AI.
- We need to (collectively, politically) compile a list of nice-to-haves and need-to-haves for health and education applications, and develop legal guidelines that help achieve the desired applications.
However, even if the experts' recommendations were followed, red flags would remain. The experts saw no good solutions for dealing with inequality in the distribution of the benefits and risks that would follow from increased implementation of AI in our societies, in either the short or the long term. In the short term (2025), social cohesion and inclusion also remain major red flags for which no adequate solutions currently exist.
In addition to the red flags, concerns remained on the potential of AI for abusive applications affecting fundamental rights and freedoms and the functioning of democratic societies.
The full overview of the results, as well as explanations of the expert evaluations, can be found in the report AI 360 | COPENHAGEN: Report from the workshop.
Next steps for working with responsible development of AI in the Human Brain Project:
The report from the AI 360 | COPENHAGEN workshop has been delivered to the European Commission and the Human Brain Project, and it forms the basis for a Europe-wide citizen engagement process on AI in the autumn of 2019.