I’ve spent most Tuesday afternoons this term in the House of Lords Select Committee on Artificial Intelligence.  We’ve been hearing evidence on every aspect of Artificial Intelligence as it affects business, consumers, warfare, health, education and research.

In the meantime, public interest and debate in Artificial Intelligence (AI) continues to grow.  In the last week or so, there have been more news stories about self-driving cars; about Uber’s data breach; dire warnings from Elon Musk and Hillary Clinton; announcements in the Budget about investment in technology; and much stealthy marketing of AI in the guise of digital assistants for the home.

The Committee is due to report in April.  We are just beginning the process of distilling down all we have heard into the key issues for public policy.

As we begin this process of reflection, these are my top eight issues in AI and the deep theological questions they raise.

  1. We need a better public debate and education

AI and machine learning technology is making a big difference to our lives and is set to make a bigger difference in the future.  There is consensus that major disruptive change is on its way.  People differ about how quickly it will arrive.  The rule of thumb, I’ve learned, is that we underestimate the impact of change through technology but overestimate the speed.

Public debate and scrutiny are vital.  It’s important to understand the technology so that we can live well with it, protect our data and identity – and that of our children and grandchildren – and ensure it serves us well.  Scrutiny is also vital to build public trust and confidence.  A few years ago, the development of GM foods was halted because public trust and confidence did not keep pace with the technology.

  2. AI and social media are shaping political debate

There is very good evidence that AI and social media used together are shaping the democratic process and changing the nature of public debate.  Technology is partly responsible for the unexpected outcomes of elections and referenda in recent years.

AI and social media make it possible for tailored messages to be delivered directly to individual voters in a personalised way.  The nature of public truth and political debate is therefore changing.  We are less likely to trust single authoritative sources of news.  We listen and debate in silos.  There is a wider spectrum of ideas.  For the first time in history, those who offer social media platforms are not held responsible for the content published there.  There is good evidence that this is leading to sharper, more antagonistic and polarised debate.

  3. AI will massively transform the world of work

A range of serious studies suggest that between 20% and 40% of jobs in the economy are at high risk of automation by the early 2030s.  The economic effects will fall unevenly across the United Kingdom.  The greatest impact will be felt in the poorest communities, still adjusting to the loss of jobs in mining and manufacturing.  There is a risk of growing inequality.  Traditional white-collar jobs in accounting and law will be similarly affected.

The disruption will probably be enough to break the traditional life script of 20 years of education followed by 40 years of work and retirement.  We need to prepare for a world in which this is no longer normal.  We will need radical new ways of structuring support across the whole of society.  Universal Basic Income or Universal Basic Services need to be actively explored.  This will be the major economic challenge for government over the next decade.

  4. Education is key to the future

STEM subjects and computer science are vital for everyone, but not to the exclusion of the humanities.  We need to educate for the whole of life, not simply train economic units of productivity.  In a world uncertain about what it means to be human, we need a fresh emphasis on ethics and values.

  5. Better data is key

There are two key ingredients in the development of machine learning: computing power and good data.  Government needs to support small and medium enterprises and start-up businesses by making both more available; otherwise the major companies who are already ahead are likely to grow their advantage.

There are significant issues surrounding the security and quality of data, particularly in health care, but also huge advantages in making that data available.  Some of the major benefits of AI to humanity are likely to come in better diagnosis of disease and in enhancement (not replacement) of treatments offered by practitioners.  But the data needs to be of the highest quality to prevent bias creeping into the outcomes.

  6. Ethics needs to run through everything

AI brings immense potential for good but also significant potential for harm if used solely for profit and without thought for the consequences.  There are very obvious areas where AI can do immense damage: weaponisation, the sexualisation of machines and the acceleration of inequality.

The very best companies are highly ethical, publish codes of practice and are making a major contribution in this area.  But statements of ethical intent, education for ethics and codes of good practice need to be universal.

  7. We need to grow the AI economy

New jobs and roles will be created in this fourth industrial revolution.  The economic prosperity of the country will depend on how seriously we take investment in this area over the next five years.  Other economies are making massive investment.  The United Kingdom has some of the best research in the world but without continued investment and better education at all levels we will fall further behind the global leaders.

We have some of the best universities and researchers in the world.  But many businesses, branches of local and national government, services and charities have yet to make the transition to a digital economy, which is a necessary first step to being AI-ready.

  8. We need great leadership to shape the future

Leadership of developments in AI is currently dispersed and unclear.  Developments in AI demand a sustained, coordinated response across government and wider society and clear, ethical leadership alert to both the dangers and the possibilities of AI.

* * * * * * * *

There are some key theological issues here.  My list is growing but five stand out:

  1. What does it mean to be human?

Every advance in AI leads to deeper questions of humanity.  As a Christian, I believe God became a human person in Jesus Christ.  Our faith has profound things to say about human identity.

  2. What does it mean to be created and a creator?

A key part of being human from a Christian perspective is understanding that we are part of creation but with the power to create.  We need to understand both our limits and our potential.  AI encourages humanity to dream dreams but not always to set boundaries.

  3. Ethics needs to run through everything: truth

We need continually to emphasise the importance of truth, faithfulness, equality, respect for individuals, deep wisdom and the insights which come from human discourse and the whole ethical tradition, deeply rooted in Christianity and in other faiths.

  4. We need to be alert to increasing inequality and poverty of opportunity.

The indications are already clear: without intervention, AI is more likely to increase inequality very significantly rather than decrease it.  AI needs to be held within a vision for global economics and politics which is deeper and better than free market capitalism.

  5. There is immense potential for good in AI but also immense potential for harm.

Serious damage can result from the wrong use of data and lives can be distorted.  Machines can and will be sexualised which will shape the humanity of those who use them.  Weaponisation of AI requires very careful international debate and global restraint.

Artie is a Robothespian.  We met last week at Oxford Brookes University.  Artie showed me some of his moves.  He plays out scenes from Star Wars and Jaws with a range of voices, movements, gestures and special effects (including shark fins swimming across the screens which form his eyes).

Artie can’t yet hold an intelligent conversation but it won’t be long before his cousins and descendants can.  Artificial Intelligence (AI) is now beginning to affect all of our lives.

Every time you search the internet, interact with your mobile phone or shop at a big online store, you are bumping into artificial intelligence.  AI answers our questions through Siri (on the iPhone) or Alexa (on Amazon devices).  AI matters in all kinds of ways.

I’ve been exploring Artificial Intelligence for some time now.  In June I was appointed to sit on a new House of Lords Select Committee on AI as part of my work in the House of Lords.  The Committee has a broad focus and is currently seeking evidence from a wide group of people and organisations.  You can read about our brief here.

Here are just some of the reasons why all of this matters.

Robot vacuum cleaners and personal privacy

A story in the Times caught my eye in July.  It’s now possible to buy a robot vacuum cleaner to take the strain out of household chores.  Perhaps you have one.  The robot will use AI to navigate the best route round your living room.  To do this it will make a map of your room using its onboard cameras.  The cameras will then transmit the data back to the company who make the robot.  They can sell the data on to well-known online retailers, who can then email you with specific suggestions of cushion covers or lamps to match your furniture.  All of this will be done with no human input whatsoever.

Personal boundaries and personal privacy matter.  They are an essential part of our human identity, of knowing who we are – and we are far more than consumers.  This matters for all of us – but especially the young and the vulnerable.  New technology means regulation on data protection needs to keep pace.  The government announced its plans in August for a strengthening of UK data protection law.

We need a greater level of education about AI and what it can do and is doing at every level in society – including schools. The technology can bring significant benefits but it can also disrupt our lives.

Self driving lorries and the future of work

AI will change the future of work.  Yesterday the government announced the first trials of automated lorry convoys on Britain’s roads.

Within a decade, the transport industry may have changed completely.  There are great potential benefits.  As a society we need to face the reality that work is changing and evolving.

AI is already beginning to change the medical profession, accountancy, law and banking.  There is now an app which helps motorists challenge parking fines without the help of a lawyer (DoNotPay).  It has been successfully used by 160,000 people and was developed by Joshua Browder, a 20-year-old whose mission in life is to put lawyers out of business through simple technology.  The chatbot-based app has already been extended to help the homeless and refugees access good legal advice for free.

Every development in Artificial Intelligence raises new questions about what it means to be human.  According to Kevin Kelly, “We’ll spend the next three decades – indeed, perhaps the next century – in a permanent identity crisis, continually asking what humans are good for”[1].

As a Christian, I want to be part of that conversation.  At the heart of our faith is the good news that God created the universe, that God loves the world and that God became human to restore us and show us what it means to live well and reach our full potential.

Direct messaging and political influence

The outcomes of the last two US Presidential Elections have been shaped and influenced by AI: the side with the better social media campaign won.  Professor of Machine Learning Pedro Domingos describes the impact algorithm-driven social media had on the Obama–Romney campaign[2].  In his excellent documentary “Secrets of Silicon Valley”, Jamie Bartlett explores the use of the same technology by the Trump presidential campaign in 2016, which again led to victory in an otherwise close race.

There are signs that a similar use of social media with very detailed targeting of voters using AI was also used to good effect by Labour in the 2017 election.

In July six members of the House of Lords led by Lord Puttnam wrote to the Observer raising questions about the proposed takeover of Sky by Rupert Murdoch.  In an open letter they argue, persuasively in my view, that this takeover gives a single company access to the personal data of over 13 million households: data which can then be used for micro ads and political campaigning.

The tools offered by AI are immensely powerful for shaping ideas and debate in our society.  Christians need to be part of that dialogue, aware of what is happening and making a contribution for the sake of the common good.

Swarms and drones and the weaponisation of AI

Killer robots already exist in the form of autonomous sentry guns in South Korea.  Many more are in development.  On Monday, 116 founders and leaders of robotics companies, led by Elon Musk, called on the United Nations to prevent a new arms race.

Technology itself is a neutral thing but carries great power to affect lives for good or for ill.  If there is to be a new arms race then we need a new public debate.  The UK Government will need to take a view on the proliferation and use of weaponry powered by AI.  The 2015 film Eye in the Sky starring Helen Mirren and directed by Gavin Hood is a powerful introduction to the ethical issues involved in remote weapons.  Autonomous weapons raise a new and very present set of questions.  How will the UK Government respond?  Christians need a voice in that debate.

The Superintelligence: creating a new species

It’s a long way from robot vacuum cleaners to a superintelligence.  At the moment, much artificial intelligence is “narrow”: we can create machines which are very good at particular tasks (such as beating a human at “Go”) but not machines which have broad general intelligence and consciousness.  We have not yet created intelligent life.

But scientists think that day is not far away.  Some are hopeful of the benefits of non-human superintelligence.  Some, including Stephen Hawking, are extremely cautious.  But there is serious thinking happening already.  Professor Nick Bostrom is the Director of the Future of Humanity Institute at the University of Oxford.  In his book, Superintelligence, he analyses the steps needed to develop superintelligence, the ways in which humanity may or may not be able to control what emerges and the kind of ethical thinking which is needed.  “Human civilisation is at stake”, according to Clive Cookson, who reviewed the book for the Financial Times[3].

The resources of our faith have much to say in all of this debate around AI: about fair access, privacy and personal identity, about persuasion in the political process, about what it means to be human, about the ethics of weaponisation and about the limits of human endeavour.

In the 19th Century and for much of the 20th Century, science asked hard questions of faith.  Christians did not always respond well to those questions and to the evidence of reason.  But in the 21st Century, faith needs to ask hard questions once again of science.

As Christians we need to think seriously about these questions and engage in the debate.  I’ll write more in the coming months as the work of the Select Committee moves forward.

[1] Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces that Will Shape Our Future, Penguin, 2016, p. 49.

[2] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Penguin, 2015, pp. 16–19.

[3] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.