Keeping ethics at the centre of the national AI Strategy

The ethical complexity of new technologies can seem overwhelming to the general public and to policy makers. The Lord Bishop of Oxford, speaking at an All-Party Parliamentary Group on Artificial Intelligence, challenged three myths about AI and offered five key questions for keeping ethics at the centre of the government’s AI strategy, announced earlier this year.

It has been a privilege and a steep learning curve to explore public policy on AI and data over the last four years, through this APPG, the House of Lords Select Committee on AI, as a Board Member of the Centre for Data Ethics and Innovation (CDEI) and through the Rethinking Data Project sponsored by the Ada Lovelace Institute.

The place of ethics in the responsible development of AI has been a consistent theme. Three years ago, the debate centred on the development of high-level ethical principles. These remain important.

The extensive work of the CDEI in public policy has revealed the parallel importance of asking the right questions in a wide range of different contexts. I offer here what seem to me the five central ethical questions to ask of the new national AI strategy and of AI, data and new technologies at every level.

These are the five key questions I would encourage everyone to ask of AI policy: Parliamentarians, Board members, local councillors, sixth formers, and those in every sector, public and private, affected by these powerful new technologies.

What is the place of ethics in the development and implementation of the strategy?

How is this evidenced in the rhetoric, in the ethical formation and qualifications of those who will lead, in the openness to public scrutiny, self-criticism and independent judgement, in the time taken to learn and improve, in the arrangements for ongoing governance, and in the balancing of benefits and harms?

Numerous studies have emphasised the key relationship between ethics and public confidence in deriving the full benefits from AI and data-driven technologies. In particular we should be alert to the presence of three dangerous myths:

MYTH 1: that ethics and innovation are somehow alternatives
MYTH 2: that ethics is done when the rhetoric is right
MYTH 3: that ethics can ever be adequate without external and independent scrutiny

There are promising signs in the announcement of the National Strategy and the AI Roadmap. The initial rhetoric is good. This commitment needs to be carried through in the detail, governance and budgets.

Is there an adequate vision of the potential of AI for the good of all?

AI and data-driven technologies hold immense potential for good as they reach maturity, as we have seen through the pandemic, in recent breakthroughs in medicine and science, and in applications for productivity and the environment.

This huge power and potential for the common good must continue to be the lead driver. There is, again, much to welcome in the announcement of the AI strategy. There is however a key tension between AI as a driver of economic growth (only) and AI technologies for the common good. How far will the AI strategy be shaped by this wider vision of a good, sustainable society for all?

The ethics of potential and power raises questions of how the benefits of AI are distributed fairly; of checks, balances and scrutiny of Big Tech; and of the human-AI interface and the importance of preserving human and humane engagement.

Just because we can do something does not mean that we should do something. This critical engagement with potential recurs in every field of data use. How will the AI strategy equip society to develop responsible answers to the questions of potential?

Are the ethical answers we give in deploying new technologies consistent with our wider ethical understanding as a society?

The deployment of ethics in AI is largely not the generation of new principles but the application of present ethical standards to new technologies and applications.

Every policy maker needs to be able to apply questions of consistency of values. A key test of the AI strategy is whether it will better enable the preservation of our common values for the future. Two key and current debates illustrate this well:

  • Debates around fairness and bias in algorithmic decision making centre on the consistency of standards across automated and non-automated decision making (for example in the deployment of facial recognition technologies, fintech or predictive policing).
  • Debates around online safety for children and the vulnerable centre on the consistency of the rights of children in the online world as well as in the offline world.

How will society continue to debate the ethical consequences of new technologies – both direct and indirect?

AI is recognised as a disruptive technology. This is assumed, but not acknowledged, in the announcement of the AI strategy and the Ten Tech Priorities.

The AI Strategy will need to recognise explicitly the impact of the deployment of AI on wider society. This is particularly the case in the area of work and jobs: both the likely impact over the next decade in (for example) logistics, call centres and transport (which will play into the government’s levelling-up agenda) and the impact of the rise of the gig economy. These consequences should be regarded as within the scope of the AI Strategy and of efforts to hold the government and others to account for its delivery.

What are the consequences of artificial intelligence for our humanity?

Finally, the AI strategy will need to recognise to some degree the existential questions which are raised by the deployment of artificial intelligence. These will need careful study and reflection through the arts and humanities as well as through the sciences and social sciences. The risks as well as the benefits will need to be weighed carefully through public debate.

Questions here include, for example, the potential increase in loneliness and isolation, and the erosion of social cohesion, from the shift towards online and automated retail; the effects on the motivation and ambition of the young of shifting prospects for work; and the potential for alienation and reaction against algorithmic decision making.

The Rt Revd Dr Steven Croft
Bishop of Oxford
10 May 2021