Data Ethics and Artificial Intelligence: an update

It’s been a while since I posted to this blog on my ongoing work in data ethics and AI. This blog post is a general update on developments in the field and the work I am currently contributing to.

The Centre for Data Ethics and Innovation

Back in November, I was appointed to the Board of the UK Government’s new Centre for Data Ethics and Innovation (CDEI). The CDEI is one of the first government agencies in the world to explore the impact and regulation of new technology across every aspect of the economy and civic life.

The Board will be publishing a series of snapshots over the coming weeks: short summaries of different areas of technology. The first four reports cover deepfakes, personal insurance, smart speakers and facial recognition technology.

We are also working on two major reports. The first focusses on online targeting: the ways in which our data is gathered and used to advertise products or shape our behaviour. The second focusses on bias in algorithmic decision making. We’ve also published landscape summaries by academics on the current state of ethical debate in both fields.

These two areas are vital for many reasons. If the public is to trust new technologies, we need to know and trust how our data is being used. We also need to know how decisions are being made in such key areas as financial services, human resources, predictive policing and the care of the most vulnerable.

You can access the landscape summaries and our interim reports online.

From Principles to Practical Guidance

One of the key early questions in the Centre’s work has been how to translate high-level ethical principles into practical guidance for government and industry. Over the last couple of years, several sets of high-level principles have been proposed.

The CDEI welcomes the recent commitment by 42 countries, including the UK, to adopt the OECD’s human-centred Principles on Artificial Intelligence. They are well worth reading and bear repeating here:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
  4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
  5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

But high-level principles only take you so far when dealing with new technologies. We are also working on principles for good governance and regulation that encourage innovation.

Dr. Simon Cross

I was pleased to receive a grant from the Templeton World Charity Foundation in January to fully fund a senior researcher to support my work on Artificial Intelligence, Climate Change, and the House of Lords. Dr. Simon Cross joined my team in May. He has had a career as a commercial airline pilot and recently completed a DPhil in science and religion here in Oxford. Simon now works half the week in Church House Oxford and half the week in Church House Westminster as part of the Church of England’s Parliamentary Unit, supporting all the Lords Spiritual in the area of faith and science.

Simon’s early work has focussed on preparing responses on behalf of the Bishops to key pieces of draft regulation and legislation for the digital world. In the autumn he will be supporting me and others in debates on the Environment Bill and the Online Harms Bill.

Reading and Research

I continue to read in and around the field. The first of two highlights from the first half of 2019 has been Shoshana Zuboff’s deep and detailed book, The Age of Surveillance Capitalism. Zuboff coins a new vocabulary to help us think about how Facebook, Google and other companies harvest our data, invade our personal space and use that data to generate products which predict and influence our actions, to their enormous profit.

If you don’t have time to read the whole book, there is a shorter article published in the Guardian a few weeks ago.

My current read is equally disturbing. China’s investment in AI is set to overtake that of the United States within the next few years. Kai Strittmatter chronicles how data and AI are being used by the state in China to influence and shape the views of the entire population. The book is called We Have Been Harmonised: Life in China’s Surveillance State.

What does it mean to be fully human?

The ethical questions surrounding the use of AI and data are manifold and profound. Sooner or later they all lead back to the question: “what does it mean to be a fully human person in a flourishing society in the 21st century, when technology brings such power and such potential for harm?”

What is a bishop doing in the room when these matters are being discussed? I think I’m there partly as a representative of wider civil society and the ethical tradition in the faith communities. But I am also there because I represent a faith which dares to believe that God became a human person in Jesus Christ. We have been pondering this question for over two thousand years.

I will keep you posted as the journey continues…

+Steven
25 July 2019