The one on the right is Artie.

Artie is a RoboThespian.  We met last week at Oxford Brookes University, where Artie showed me some of his moves.  He plays out scenes from Star Wars and Jaws with a range of voices, movements, gestures and special effects (including shark fins swimming across the screens which form his eyes).

Artie can’t yet hold an intelligent conversation but it won’t be long before his cousins and descendants can.  Artificial Intelligence (AI) is now beginning to affect all of our lives.

Every time you search the internet, interact with your mobile phone or shop at a large online store, you are bumping into artificial intelligence.  AI answers our questions through Siri (on the iPhone) or Alexa (on Amazon devices).  AI matters in all kinds of ways.

I’ve been exploring Artificial Intelligence for some time now.  In June I was appointed to a new Select Committee on AI as part of my work in the House of Lords.  The Committee has a broad remit and is currently seeking evidence from a wide range of people and organisations.  You can read about our brief here.

Here are just some of the reasons why all of this matters.

Robot vacuum cleaners and personal privacy

A story in the Times caught my eye in July.  It’s now possible to buy a robot vacuum cleaner to take the strain out of household chores.  Perhaps you have one.  The robot will use AI to navigate the best route round your living room.  To do this it will make a map of your room using its onboard cameras.  The cameras will then transmit that data back to the company which makes the robot, which can sell the data on to well-known online retailers, who can then email you with specific suggestions of cushion covers or lamps to match your furniture.  All of this will be done with no human input whatsoever.

Personal boundaries and personal privacy matter.  They are an essential part of our human identity and of knowing who we are – and we are far more than consumers.  This matters for all of us, but especially for the young and the vulnerable.  New technology means that regulation on data protection needs to keep pace.  In August the government announced its plans to strengthen UK data protection law.

We need greater education, at every level of society including schools, about what AI can do and is already doing.  The technology can bring significant benefits, but it can also disrupt our lives.

Self-driving lorries and the future of work

AI will change the future of work.  Yesterday the government announced the first trials of automated lorry convoys on Britain’s roads.

Within a decade, the transport industry may have changed completely.  There are great potential benefits.  As a society we need to face the reality that work is changing and evolving.

AI is already beginning to change the medical profession, accountancy, law and banking.  There is now an app, DoNotPay, which helps motorists challenge parking fines without the help of a lawyer.  It has been used successfully by 160,000 people and was developed by Joshua Browder, a 20-year-old whose mission in life is to put lawyers out of business through simple technology.  The chatbot-based app has already been extended to help the homeless and refugees access good legal advice for free.

Every development in Artificial Intelligence raises new questions about what it means to be human.  According to Kevin Kelly, “We’ll spend the next three decades – indeed, perhaps the next century – in a permanent identity crisis, continually asking what humans are good for”[1].

As a Christian, I want to be part of that conversation.  At the heart of our faith is the good news that God created the universe, that God loves the world and that God became human to restore us and show us what it means to live well and reach our full potential.

Direct messaging and political influence

The outcomes of the last two US presidential elections were shaped and influenced by AI: the side with the better social media campaign won.  Pedro Domingos, Professor of Machine Learning, describes the impact that algorithm-driven social media had on the 2012 Obama-Romney campaign[2].  In his excellent documentary “Secrets of Silicon Valley”, Jamie Bartlett explores the use of the same technology by the Trump presidential campaign in 2016, which again led to victory in an otherwise close race.

There are signs that a similar use of social media, with very detailed AI-driven targeting of voters, was also deployed to good effect by Labour in the 2017 general election.

In July, six members of the House of Lords led by Lord Puttnam wrote to the Observer raising questions about the proposed takeover of Sky by Rupert Murdoch.  In an open letter they argue, persuasively in my view, that the takeover would give a single company access to the personal data of over 13 million households: data which could then be used for micro-targeted advertising and political campaigning.

The tools offered by AI are immensely powerful for shaping ideas and debate in our society.  Christians need to be part of that dialogue, aware of what is happening and making a contribution for the sake of the common good.

Swarms and drones and the weaponisation of AI

Killer robots already exist in the form of autonomous sentry guns in South Korea.  Many more are in development.  On Monday, 116 founders and leaders of robotics companies, led by Elon Musk, called on the United Nations to prevent a new arms race.

Technology itself is neutral, but it carries great power to affect lives for good or for ill.  If there is to be a new arms race then we need a new public debate.  The UK Government will need to take a view on the proliferation and use of weaponry powered by AI.  The 2015 film Eye in the Sky, starring Helen Mirren and directed by Gavin Hood, is a powerful introduction to the ethical issues raised by remote weapons.  Autonomous weapons raise a new and very present set of questions.  How will the UK Government respond?  Christians need a voice in that debate.

The Superintelligence: creating a new species

It’s a long way from robot vacuum cleaners to a superintelligence.  At the moment, much artificial intelligence is “narrow”: we can create machines which are very good at particular tasks (such as beating a human at “Go”) but not machines which have broad general intelligence and consciousness.  We have not yet created intelligent life.

But some scientists think that day is not far away.  Some are hopeful about the benefits of non-human superintelligence.  Others, including Stephen Hawking, are extremely cautious.  Either way, serious thinking is already under way.  Professor Nick Bostrom is the Director of the Future of Humanity Institute at the University of Oxford.  In his book Superintelligence, he analyses the steps needed to develop superintelligence, the ways in which humanity may or may not be able to control what emerges, and the kind of ethical thinking which is needed.  “Human civilisation is at stake”, according to Clive Cookson, who reviewed the book for the Financial Times[3].

The resources of our faith have much to say in all of this debate around AI: about fair access, privacy and personal identity, about persuasion in the political process, about what it means to be human, about the ethics of weaponisation and about the limits of human endeavour.

In the 19th Century and for much of the 20th Century, science asked hard questions of faith.  Christians did not always respond well to those questions and to the evidence of reason.  But in the 21st Century, faith needs to ask hard questions once again of science.

As Christians we need to think seriously about these questions and engage in the debate.  I’ll write more in the coming months as the work of the Select Committee moves forward.

[1] Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, Penguin, 2016, p. 49.

[2] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Penguin, 2015, pp. 16-19.

[3] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.
