Should citizens be algorithm regulators?

A new report by OpenAI suggests we should create external citizen regulatory bodies to evaluate the societal impact of algorithm-based decisions.

We don’t know how to regulate algorithms because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules to optimize for a given outcome. Public policy, by contrast, is a matter of trade-offs: optimizing for some groups in society necessarily makes others worse off.

Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is, in fact, the original lesson of democracy: citizens should have a say. We don’t know how to regulate algorithms because we have become shockingly bad at citizen governance.

For AI developers to earn trust from users, civil society, governments, and other stakeholders, they need to move beyond principles to a focus on mechanisms for demonstrating responsible behaviour. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction.

About this report

This report suggests various steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The authors believe the implementation of such mechanisms can help make progress on one component of the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion.

Some authors of this report held a workshop in April 2019, aimed at expanding the toolbox of mechanisms for making and assessing verifiable claims. This report lays out and builds on the ideas proposed at that workshop.

http://www.towardtrustworthyai.com

Justin Flitter

Founder of NewZealand.AI.

http://unrivaled.co.nz