

Some of us might have been surprised to see Artificial Intelligence so high on the agenda for the Prime Minister’s meeting with President Biden this week. The President pledged to support Britain’s convening of a major global conference on AI regulation later this year.

The convening of the conference is part of the government’s response to a series of concerns voiced in recent months by leading figures in the tech industry, warning of the need to regulate both the research and the deployment of AI. Many of you will know that I have been working in this area for a number of years in the House of Lords, including three years as part of the government’s Centre for Data Ethics and Innovation. This seems a good moment to bring the Synod and the Diocese up to date on the potential of and concerns around AI, and on developments in the Online Safety Bill.

Artificial Intelligence is developing apace and is affecting every part of our lives. Global investment is increasing. New products are rolled out with bewildering speed. OpenAI launched ChatGPT on 30th November last year. By January it had become the fastest-growing consumer software application in history, gaining over 100 million users worldwide. ChatGPT is currently leading the field among new AIs available to the public based on Large Language Models: the manipulation not just of data but of language in a way which seems human and intelligent. ChatGPT is already transforming search, the way children do their homework and possibly the way clergy prepare sermons. Version 4 was launched in March; an app came out in May. Microsoft will incorporate a version into Office later this year.

The software has the potential to reshape the legal profession, call centres and knowledge-based enterprises. Other developments in AI are transforming medicine, particularly in the rapid diagnosis of cancers, more accurate scanning and the development of remote medicine.

There is huge potential here but also significant jeopardy. Two of the three godfathers of AI, Geoffrey Hinton and Yoshua Bengio, have sounded warnings about research and deployment running much faster than regulation and public debate. In May a coalition of industry experts, including the head of the company which developed ChatGPT and the head of Google DeepMind, issued a serious warning that Artificial Intelligence could lead to the extinction of humanity. They argue that:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war”.

What are the risks? They include the weaponisation of AI by bad actors; the generation of misinformation to destabilise society, including in elections; the concentration of power in fewer and fewer hands, enabling regimes to enforce narrow values through pervasive surveillance and oppressive censorship; and enfeeblement, where humans become dependent on AI.

These warnings are not uncontested, and we are currently seeing a pushback against some of these dire predictions. We are probably decades away from an autonomous general artificial intelligence. These Terminator-like scenarios can be used to distract attention from more immediate but real dangers – such as the rapid deployment of facial recognition technology in security and policing without proper governance. But more, not less, public debate is needed, mindful both of the immense good this technology can enable and of the severe harm it can cause.

What then has this to do with the Church and with Christians? We clearly need to engage in an informed way as this technology develops, for the sake of present and future generations. As Christians we have a distinctive understanding of human dignity and personhood and what it means to be human. Our identity is rooted in the faith that humankind is made in the image of God, to quote Genesis 1. We place our faith and trust in our Father in heaven who made us and who loves us. We are able to work in partnership with technology and machines of all kinds. But not uncritically.

If technology undermines personal safety or dignity, through stripping away capacity for creativity and meaningful work, then we should be concerned. If technology undermines the democratic process or public truth, we should sound a warning. If the development of autonomous weapons gives life-and-death decisions to a machine, we should raise our voices in every way possible.

Second, our understanding of what it means to be human is rooted in the incarnation. We believe that Almighty God, maker of heaven and earth, became a human person in a particular time and place to redeem all of humanity in every time and place. There is no higher statement of value and worth for humankind than the truth that God became a person in Jesus, and a person who embodies the distinctive Christian character of the beatitudes: contemplation in a relationship with God, compassion in love for the world and courage in a desire for justice and for peace. We are called to embody those values in the life of the Body of Christ, the Church.

This means again that the Church will need to be both critical and cautious in response to new technologies. Our humanity is not negotiable. We need to say clearly that the future of humankind is not unlimited enhancement and mechanisation and automation and delegation. We will want to see robust public debate and good governance which is alert to dangers. We will want the commonly owned values of our society, based on our Christian inheritance, to be lived out online as well as offline. We will want to ensure a strong role for government in regulation. If this is left in the hands of major global tech companies, then power and wealth and influence will be concentrated in an ever smaller group of unaccountable technocrats. We will want to see strong human-AI partnerships as a foundational principle in medicine, in law enforcement, in automation of work, in education.

And third our understanding of our humanity is formed by our faith and trust in the Holy Spirit, who gives life to the people of God. The Spirit of God comes to dwell within the heart and life of the believer, to give life in all its fulness, to form us into the likeness of Christ and to empower us to change God’s world for the better.

The Spirit leads us into all truth, we believe. One of the concerns to be alert to in this present phase of AI development is truth and authenticity. The new tools make the creation and dissemination of convincing deepfakes much easier. How do we know, on the night before an election, whether the picture of a politician saying or doing something terrible is genuine? If ChatGPT or Google tells us that something is true, how do we test that in the real world if the internet is our only source of information? The preservation of truth has to be one of the highest priorities in a democracy and for the Church.

One of the other marks of the Spirit’s life is creativity. Remember in Exodus how the Spirit is given to skilled workers in fabrics and metals and wood in the building of the tabernacle; remember how the Spirit inspires architects and builders and musicians and the arts.

The new generation of AI has a massive capacity for creativity. For the very first time we can all access a tool which will write a greetings card in the style of a Shakespeare sonnet or produce a new play or opera. So far the quality is not high – but it will get better.

My colleague Simon Cross, who is funded by the Templeton Foundation and works with me on these issues, has recently summed up the shift in the new generation of AI tools in this way:

The first iteration of digitalisation extracted data about us. In the first digital world, facts like our age, ethnicity, location and viewing habits could be extracted – or inferred with ever increasing granularity – and then used to tailor our attention: surveillance to sell. But the onus was on our information and opinions, not our ideas. There have been a host of downstream harms and unintended consequences that we are still discovering. But now, even before that first clean up is complete, Generative AI is coming for our creativity. Everything, but everything we write, or say, or sing, or paint, or draw, or sculpt, or… everything: all of it, is – or soon might be – hoovered up inside a ‘foundation model’, because our creativity is the coal that powers this new generative AI furnace.

What will the consequence be for our humanity and identity if AI takes the major share of human creativity, the arts as well as the sciences? The answer is that we become less than human, less than we can be. The spark of the divine image begins to be extinguished. We need to be alert; we need our prophets; we need to preserve truth and creativity and dignity for future generations.

Finally, as Simon argues there, the first clean-up is not yet complete. Indeed it has hardly started. The Online Safety Bill, currently in Committee Stage in the House of Lords, is a key piece of legislation. It is not yet strong enough, and over the last three months I’ve been working with a cross-party group of peers, charities and agencies, and connecting with MPs, to seek to strengthen the Bill, with Simon’s support and that of other Lords Spiritual.

I am increasingly convinced that the world has created a deeply toxic environment for the mental health of children and adults through social media. In future years we will look back with disbelief on the last two decades and their lack of regulation. The range of harms affects every section of society, but children and the vulnerable most of all.

The Letter of James is absolutely clear about the power of the tongue and of words to do harm.

“How great a forest is set ablaze by a small fire. And a tongue is a fire….a restless evil…. full of deadly poison.”

This fire, this evil, this deadly poison is magnified a hundredfold by social media and online engagement and has a massive effect on people’s real lives in a range of ways. The multiplication happens through 24-hour access even in our most private spaces; through the clever fostering of addiction; through algorithms which drive the most controversial content to our feeds; and now increasingly through AI-generated material.

I have been corresponding in recent weeks with Amanda and Stuart Stephens, well known to some members of this Synod, whose 13-year-old son Olly was tragically murdered in Reading in 2021 by other children of a similar age. Social media played a massive part in his murder, especially through incitement to knife crime. Amanda and Stuart have joined other bereaved parents in campaigning for a stronger bill.

The harms caused to children by pornography have been the focus of several amendments, especially those calling for strong age assurance and verification protections.

Adults too are not immune to harm from social media, as many here will know. The Bill needs to be further strengthened in its attempt to limit that harm: we need to learn from the damage caused by the last 20 years of social media in order to regulate better for the next generation. The government has not yet agreed to the major changes which are still needed, though there is still time to do this.

There may yet come a moment when it will be helpful for members of this Synod to write to their MPs on this matter.

There is much that can be done in local churches and schools to help and support parents and children in responsible approaches to the internet. We will be giving consideration later in this Synod to the magnificent work of our Board of Education and our engagement with children and young people now and into the future. I hope this address sets a context: outlining some of the challenges the next generations will face, the need to monitor and limit access to social media, and the resources of Christian faith to establish and build a vital core of Christian identity rooted in God the Trinity, Father, Son and Holy Spirit.

 

+Steven
10 June 2023

Photo: Spirit of God awakens a new life, both dead and alive, detail of stained glass window by Sieger Koder in church of Saint John in Piflas, Germany (c) Shutterstock


Bishop Steven shares an overview of the key thread of Science and Faith at the Lambeth Conference held in Canterbury from 26th July to 7th August.

The Bishop of Oxford spoke in the debate on the Scrutiny Committee Report in the House of Lords on 25 May 2022.

Fifteen years ago, Facebook, Twitter and YouTube didn’t exist. Today, 67% of people in the UK are active users of at least one of them, and we now spend almost two hours each day on social media. Yet society is increasingly fearful of the risks of fake news and harmful content and distrustful of the very platforms that consume so much of our time.

Our lives are irreversibly online, lived with ever decreasing levels of privacy and hyperstimulated to a relentless pace. Few of us have stopped to properly consider what it means to live well in this age, but as Christians, we have an essential part to play in the shape of online society.

This week the national Church launched a Digital Charter, which includes guidelines and a pledge that anyone can add their name to as part of a personal commitment to making social media a more positive place. I’ve signed up to the Charter, and I hope you will too.

As a Diocese, we’ve been spending time exploring what it means to be a more Christ-like Church for the sake of God’s world. It’s a journey that started three years ago as we studied the Beatitudes together. Recently I’ve begun to ponder what those eight beautiful qualities might mean for social media and our online lives.

Blessed are the poor in spirit, for theirs is the kingdom of heaven.
I will remember that my identity comes from being made and loved by God, not from my online profile.

Blessed are those who mourn, for they will be comforted.
This world is full of grief and suffering.
I will tread softly and post with gentleness and compassion.

Blessed are the meek, for they will inherit the earth.
I will not boast or brag online, nor will I pull others down.

Blessed are those who hunger and thirst for righteousness, for they will be filled.
There are many wrongs to be righted. I will not be afraid to name them and look for justice in the world.

Blessed are the merciful, for they will receive mercy.
I will not judge others but be generous online. I will be conscious of my own failings.

Blessed are the pure in heart, for they will see God.
I will be truthful and honest, and I will not pretend to be what I am not.

Blessed are the peacemakers, for they will be called children of God.
I will seek to reconcile those of different views with imagination and good humour.

Blessed are those who are persecuted for righteousness’ sake, for theirs is the kingdom of heaven.
I will not add to the store of hate in the world, but I will try to be courageous in standing up for what is right and true.

You can download a card (colour version | black and white version) to keep near your phone and tablet and share this social media graphic online.

Advances in technology have brought sharp ethical dilemmas and deeper questions of human identity. There are important debates to be had about the exploitation of our personal data, along with the threats (and benefits) of AI. These will take time and will require legislation, but we can also do something right now: let us each play our part in making social media kinder.

 

+Steven
June 2019

Further reading:

#CofECharter

Developing Artificial Intelligence in the UK

 

For the past year, I’ve been a member of the House of Lords Select Committee on Artificial Intelligence. The Committee of 13 members received 223 pieces of written evidence and heard oral evidence from 57 witnesses over 22 sessions between October and December. It has been a fascinating process.

The Committee’s report is published today. It’s called AI in the UK: ready, willing and able? You can find it on the Committee website.

When I first started to engage with questions of Artificial Intelligence, I thought the real dangers to humankind were a generation away and the stuff of science fiction. The books and talks that kept me awake at night were about general AI: conscious machines (probably a generation away, if not more).

The more I heard, the more I realised that the evidence that kept me awake at night was in the present, not the future. Artificial Intelligence is a present reality, not a future possibility. AI is used, and will be used, in all kinds of everyday ways. Consider this vignette from the opening pages of the report…

You wake up, refreshed, as your phone alarm goes off at 7:06am, having analysed your previous night’s sleep to work out the best point to interrupt your sleep cycle. You ask your voice assistant for an overview of the news, and it reads out a curated selection based on your interests. Your local MP is defending herself—a video has emerged which seems to show her privately attacking her party leader. The MP claims her face has been copied into the footage, and experts argue over the authenticity of the footage. As you leave, your daughter is practising for an upcoming exam with the help of an AI education app on her smartphone, which provides her with personalised content based on her strengths and weaknesses in previous lessons…

There is immense potential for good in AI: routine jobs can be delegated, saving labour; we can be better connected; there is a remedy for stagnant productivity in the economy, which will be a real benefit; there will be significant advances in medicine, especially in diagnosis and detection. In time, the roads may be safer and transport more efficient.

There are also significant risks. Our data in the wrong hands mean that political debate and opinion can be manipulated in very subtle ways. Important decisions about our lives might be made with little human involvement. Inequality may widen further. Our mental health might be eroded because of the big questions raised about AI.

This is a critical moment. Humankind has the power now to shape Artificial Intelligence as it develops. To do that we need a strong ethical base: a sense of what is right and what is harmful in AI.

I’m delighted that the Prime Minister has committed the United Kingdom to give an ethical lead in this area. Theresa May said in a recent speech in Davos in January:

“We want our new world leading centre for Data Ethics and Innovation to work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative development of artificial intelligence”

That new ethical framework will not come from the Big Tech companies and Silicon Valley which seek the minimum regulation and maximum freedom. Nor will it come from China, the other major global investor in AI, which takes a very different view of how personal data should be handled. It is most likely to come from Europe, with its strong foundation in Christian values and the rights of the individual and most of all, at present, from the United Kingdom, which is also a global player in the development of technology.

The underlying theme of the Select Committee’s recommendations is that ethics must be put at the centre of the development and use of AI. We believe that Britain has a vital role in leading the international community in shaping Artificial Intelligence for the common good rather than passively accepting its consequences.

The Government has already announced the creation of a new Centre for Data Ethics and Innovation to lead in this area. The Select Committee’s proposals will support the Centre’s work.

Towards the end of our enquiry, the Committee shaped five principles which we offer as a starting point for the Centre’s work. They emerged from very careful listening to those who came to meet us from industry and universities and regulators. Almost everyone we met was concerned about ethics and the need for an ethical vision to guide the development of these very powerful tools which will shape society in the next generation.

These are our five core principles (or AI Code) with a short commentary on each:

Artificial intelligence should be developed for the common good and benefit of humanity

Why is this important? AI is about more than making tasks easier, or commercial advantage, or one group exploiting another. AI is a powerful technology which can shape our understanding of work and income and our health. It’s too important to be left to multinational companies operating on behalf of their shareholders or to a tiny group of innovators. We need a big, wide public debate. It’s also vital that as a society we encourage the best minds towards using AI to solve the most critical problems facing the planet. It would be a tragedy if the main fruits of AI were simply better computer-generated graphics or quicker ways to order takeaway pizza.

Artificial Intelligence should operate on principles of intelligibility and fairness

This is absolutely vital. There is a striking tendency in AI at the moment to anthropomorphise: to make machines seem human. This looks harmless at first, until you begin to consider the consequences. Suppose in a few years’ time you are unable to tell whether that call from the bank is from an AI or a person? Suppose you apply for a job and the decisions about your application are all taken by a computer?

Suppose that computer is using a faulty data set, biased against you, but you never get to know that? There are already a number of chatbots available offering cognitive behavioural therapy. Some of them charge money. Suppose they get better and better at imitating humans. What is to prevent vulnerable people being exploited? Regulation and monitoring are needed not for the first generation of developers (who are mainly very ethical) but for the generation after that.

Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

The Cambridge Analytica and Facebook scandals erupted the week after the Select Committee agreed its final report. They underline the need for this principle. Data is the oil of the AI revolution. It is vital to fuel machine learning and wide application of AI. But data also contains the essence of identity and personality. It is fundamental that our data is safeguarded and not exploited.

All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

AI is a disruptive technology. Some jobs will diminish or disappear. New jobs will emerge—but they will be different and probably not there in the same numbers as the jobs we lose. Inequality will increase unless we take positive steps to counter this. The economic predictions are uncertain. It is however absolutely clear that the only way to counter this disruption is education and lifelong learning. That education is not only about reskilling the workforce. There is a universal need for everyone to learn how to flourish in a new digital world. Providing that education is the responsibility of government.

The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Autonomous weapons are a present reality and a future prospect. This will change warfare for ever. The UK’s position on them is, at best, ambiguous: we use definitions which are out of step with the rest of the world. The Select Committee calls on the government for much greater clarity here and again, for a wider public debate. Deception is already a feature of AI in cyberwarfare and covert attempts to change perceptions of truth and public opinion. Unless we guard values of public truth and courtesy and freedom then our society is vulnerable.

Artificial Intelligence is here to stay. It has the capacity to shape our lives in many different ways. This is the moment to ensure that humankind shapes AI to serve the common good and all humanity rather than allowing AI driven by commercial or other interests to shape our future and our national life.

 

Bishop Steven gave his Thought for the Day on Saturday 30th December 2017 during a programme edited by Artificial Intelligence (AI).


“How are you feeling?”
“What’s your energy like today?”

Imagine being asked the same questions every day not by a person but by a machine.

My eye was drawn earlier this year to the launch of the Woebot—a charming robot friend, able to listen 24-7 through your phone or computer.

The Woebot (that’s WOE) is a Fully Automated Conversational Agent, a chatbot therapist powered by artificial intelligence and the principles of cognitive behavioural therapy. It aims to help young adults cope better with life.

That has to be a good thing, although it says as much about our culture as it does about AI.
As Crocodile Dundee might have said, “Haven’t you got any mates?” The truth is, we don’t, or not enough.

AI is beginning to be everywhere. It helps us do things we couldn’t do before. As we’ve been hearing this morning, AI raises many deep questions about the future of work, proper boundaries, weaponisation, the right use of data, and teaching children and adults to look after themselves in a digital world. Most lead back to the same core issue. What does it mean to be human? This is a question that has never been more important.

For a Christian, the foundation of being human is that we are part of God’s creation but with this wonderful power to create.

Psalm 139 evokes wonder and mystery:

“For it was you, O God, who formed my inward parts; you knit me together in my mother’s womb. I praise you, for I am fearfully and wonderfully made”.

Every advance in AI shows me what a profound and wonderful thing it is to be alive—to be human.

AI can do really interesting things. But, as yet, Artificial Intelligence isn’t a patch on the real thing: human intelligence and human learning and human identity. We have a mind and memories, conscience and consciousness, the capacity to reason, to love and to weep, to hold a child or the hand of an old person, to breathe deep in the early morning, or to talk with God in the cool of the evening.

In this Christmas season especially, I remember that being human is God’s special subject. Humanity is the pinnacle of creation, flawed and imperfect though we are. Christians believe that God’s reason and ingenuity and love took flesh and God was born a child and came to bring hope and purpose and healing to the earth.

Artificial Intelligence is amazing, though we need to use it well and be alert to its dangers. Human consciousness is even more remarkable, for me: a God-given mystery.

We are more than the sum of our parts. The moment we begin to lose sight of the fact that humankind is truly unique is the moment we fail to recognise the amazing gift of life in all its glory.

With that in mind…

How are you feeling today?

I’ve spent most Tuesday afternoons this term in the House of Lords Select Committee on Artificial Intelligence.  We’ve been hearing evidence on every aspect of Artificial Intelligence as it affects business, consumers, warfare, health, education and research.

In the meantime, public interest and debate in Artificial Intelligence (AI) continues to grow.  In the last week or so, there have been more news stories about self-driving cars; about Uber’s data breach; dire warnings from Elon Musk and Hillary Clinton; announcements in the budget about investment in technology; and much stealthy marketing of AI in the guise of digital assistants for the home.

The Committee is due to report in April.  We are just beginning the process of distilling down all we have heard into the key issues for public policy.

As we begin this process of reflection, these are my top eight issues in AI and the deep theological questions they raise.

  1. We need a better public debate and education

AI and machine learning technology is making a big difference to our lives and is set to make a bigger difference in the future.  There is consensus that major disruptive change is on its way.  People differ about how quickly it will arrive.  The rule of thumb, I’ve learned, is that we underestimate the impact of change through technology but overestimate the speed.

Public debate and scrutiny are vital.  It’s important to understand the technology so that we can live well with it, protect our data and identity and that of our children and grandchildren, and ensure technology serves us well.  It is also vital to build public trust and confidence.  A few years ago, the development of GM foods was halted because public trust and confidence did not keep pace with the technology.

  2. AI and social media are shaping political debate

There is very good evidence that AI and social media used together are shaping the democratic process and changing the nature of public debate.  Technology is partly responsible for the unexpected outcomes of elections and referenda in recent years.

AI and social media make it possible for tailored messages to be delivered directly to voters in a personalised way.  The nature of public truth and political debate is therefore changing.  We are less likely to trust single authoritative sources of news.  We listen and debate in silos.  There is a wider spectrum of ideas.  Those who offer social media platforms are not responsible for the content published there (for the first time in history).  There is good evidence that this is leading to sharper, more antagonistic and polarised debate.

  3. AI will massively transform the world of work

There have been a range of serious studies.  Between 20% and 40% of jobs in the economy are at high risk of automation by the early 2030s.  The economic effects will fall unevenly across the United Kingdom.  The greatest impact will be felt in the poorest communities, still adjusting to the loss of jobs in mining and manufacturing.  There is a risk of growing inequality.  Traditional white-collar jobs in accounting and law will be similarly affected.

The disruption will probably be enough to break the traditional life script of 20 years of education followed by 40 years of work and retirement.  We need to prepare for a world in which this is no longer normal.  We will need radical new ways of structuring support across the whole of society.  Universal Basic Income or Universal Basic Services need to be actively explored.  This will be the major economic challenge for government over the next decade.


  4. Education is key to the future

STEM subjects and computer sciences are vital for everyone, but not to the exclusion of the humanities.  We need to educate for the whole of life, not simply train economic units of productivity.  In a world which is uncertain about what it means to be human, we need a fresh emphasis on ethics and values.

  5. Better data is key

There are two ingredients in the development of machine learning: computing power and good data.  Government needs to support small and medium enterprises and start-up businesses by making both more available: otherwise the major companies who are already ahead are likely to grow their advantage.

There are significant issues surrounding the security and quality of data, particularly in health care, but also huge advantages in making that data available.  Some of the major benefits of AI to humanity are likely to come in better diagnosis of disease and in enhancement (not replacement) of treatments offered by practitioners.  But the data needs to be of the highest quality to prevent bias creeping into the outcomes.
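The point about data quality can be illustrated in a deliberately toy way. The sketch below (the groups, outcomes and numbers are entirely made up for illustration) shows a trivial "model" that simply predicts the most common outcome it saw in training: when one group is badly under-represented in the training data, the model's conclusions about that group rest on almost no evidence.

```python
# Toy illustration: a trivial "model" that predicts the most common
# outcome seen in training for each group. If the training data
# under-represents one group, its prediction rests on a handful of cases.
from collections import Counter

def train(records):
    """Learn one majority-vote prediction per group from (group, outcome) pairs."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, []).append(outcome)
    return {g: Counter(v).most_common(1)[0][0] for g, v in by_group.items()}

# Skewed training data: 100 examples for group A, only 5 for group B.
training = [("A", "healthy")] * 60 + [("A", "ill")] * 40 \
         + [("B", "ill")] * 3 + [("B", "healthy")] * 2

model = train(training)
print(model)  # group B's "learned" answer rests on just five examples
```

Real machine-learning systems are vastly more sophisticated, but the underlying hazard is the same: conclusions are only as good as the data they are drawn from.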

  3. Ethics needs to run through everything

AI brings immense potential for good but also significant potential for harm if used solely for profit and without thought for the consequences.  There are very obvious areas where AI can do immense damage: weaponisation; the sexualisation of machines; and the acceleration of inequality.

The very best companies are highly ethical, publish codes of practice and are making a major contribution in this area.  But statements of ethical intent, education for ethics and codes of good practice need to be universal.

  4. We need to grow the AI economy


We have some of the best Universities and researchers in the world.  But many businesses, branches of local and national government, services and charities have yet to make the transition to a digital economy which is a necessary first step to being AI ready.

  5. We need great leadership to shape the future

Leadership of developments in AI is currently dispersed and unclear.  Developments in AI demand a sustained, coordinated response across government and wider society and clear, ethical leadership alert to both the dangers and the possibilities of AI.

* * * * * * * *

There are some key theological issues here.  My list is growing but five stand out:

  1. What does it mean to be human?

Every advance in AI leads to deeper questions of humanity.  As a Christian, I believe God became a human person in Jesus Christ.  Our faith has profound things to say about human identity.

  2. What does it mean to be created and a creator?

A key part of being human from a Christian perspective is understanding that we are part of creation but with the power to create.  We need to understand both our limits and our potential.  AI encourages humanity to dream dreams but not always to set boundaries.

  3. Ethics needs to run through everything: truth

We need continually to emphasise the importance of truth, faithfulness, equality, respect for individuals, deep wisdom and the insights which come from human discourse and the whole ethical tradition, deeply rooted in Christianity and in other faiths.

  4. We need to be alert to increasing inequality and poverty of opportunity.

The indications are already clear: without intervention, AI is more likely to increase inequality very significantly rather than decrease it.  AI needs to be held within a vision for global economics and politics which is deeper and better than free market capitalism.

  5. There is immense potential for good in AI but also immense potential for harm.

Serious damage can result from the wrong use of data and lives can be distorted.  Machines can and will be sexualised which will shape the humanity of those who use them.  Weaponisation of AI requires very careful international debate and global restraint.


Artie is a Robothespian.  We met last week at Oxford Brookes University.  Artie showed me some of his moves.  He plays out scenes from Star Wars and Jaws with a range of voices, movements, gestures and special effects (including shark fins swimming across the screens which form his eyes).

Artie can’t yet hold an intelligent conversation but it won’t be long before his cousins and descendants can.  Artificial Intelligence (AI) is now beginning to affect all of our lives.

Every time you search the internet, interact with your mobile phone or shop at a big online store, you are bumping into artificial intelligence.  AI answers our questions through Siri (on the iPhone) or Alexa (on Amazon).  AI matters in all kinds of ways.

I’ve been exploring Artificial Intelligence for some time now.  In June I was appointed to sit on a new House of Lords Select Committee on AI as part of my work in the House of Lords.  The Committee has a broad focus and is currently seeking evidence from a wide group of people and organisations.  You can read about our brief here.

Here are just some of the reasons why all of this matters.

Robot vacuum cleaners and personal privacy

A story in the Times caught my eye in July.  It’s now possible to buy a robot vacuum cleaner to take the strain out of household chores.  Perhaps you have one.  The robot will use AI to navigate the best route round your living room.  To do this it will make a map of your room using its onboard cameras.  The cameras will then transmit the data back to the company that makes the robot.  They can sell the data on to well-known online retailers, who can then email you with specific suggestions of cushion covers or lamps to match your furniture.  All of this will be done with no human input whatsoever.

Personal boundaries and personal privacy matter.  They are an essential part of our human identity and of knowing who we are – and we are far more than consumers.  This matters for all of us – but especially the young and the vulnerable.  New technology means regulation on data protection needs to keep pace.  The government announced its plans in August for a strengthening of UK data protection law.

We need a greater level of education about AI and what it can do and is doing at every level in society – including schools. The technology can bring significant benefits but it can also disrupt our lives.

Self driving lorries and the future of work

AI will change the future of work.  Yesterday the government announced the first trials of automatic lorry convoys on Britain’s roads.

Within a decade, the transport industry may have changed completely.  There are great potential benefits.  As a society we need to face the reality that work is changing and evolving.

AI is already beginning to change the medical profession, accountancy, law and banking.  There is now an app which helps motorists challenge parking fines without the help of a lawyer (DoNotPay).  It has been successfully used by 160,000 people and was developed by Joshua Browder, a 20-year-old whose mission in life is to put lawyers out of business through simple technology.  The chatbot-based app has already been extended to help the homeless and refugees access good legal advice for free.

Every development in Artificial Intelligence raises new questions about what it means to be human.  According to Kevin Kelly, “We’ll spend the next three decades – indeed, perhaps the next century – in a permanent identity crisis, continually asking what humans are good for”[1].

As a Christian, I want to be part of that conversation.  At the heart of our faith is the good news that God created the universe, that God loves the world and that God became human to restore us and show us what it means to live well and reach our full potential.

Direct messaging and political influence

The outcome of the last two US Presidential Elections has been shaped and influenced by AI: the side with the best social media campaigns won.  Professor of Machine Learning Pedro Domingos describes the impact algorithm-driven social media had on the Obama–Romney campaign[2].  In his excellent documentary “Secrets of Silicon Valley”, Jamie Bartlett explores the use of the same technology by the Trump Presidential campaign in 2016, which again led to victory in an otherwise close campaign.

There are signs that a similar use of social media with very detailed targeting of voters using AI was also used to good effect by Labour in the 2017 election.

In July six members of the House of Lords, led by Lord Puttnam, wrote to the Observer raising questions about the proposed takeover of Sky by Rupert Murdoch.  In an open letter they argue, persuasively in my view, that this takeover gives a single company access to the personal data of over 13 million households: data which can then be used for micro-targeted advertising and political campaigning.

The tools offered by AI are immensely powerful for shaping ideas and debate in our society.  Christians need to be part of that dialogue, aware of what is happening and making a contribution for the sake of the common good.

Swarms and drones and the weaponisation of AI

Killer robots already exist in the form of autonomous sentry guns in South Korea.  Many more are in development.  On Monday 116 founders and leaders of robotics companies, led by Elon Musk, called on the United Nations to prevent a new arms race.

Technology itself is a neutral thing but carries great power to affect lives for good or for ill.  If there is to be a new arms race then we need a new public debate.  The UK Government will need to take a view on the proliferation and use of weaponry powered by AI.  The 2015 film Eye in the Sky starring Helen Mirren and directed by Gavin Hood is a powerful introduction to the ethical issues involved in remote weapons.  Autonomous weapons raise a new and very present set of questions.  How will the UK Government respond?  Christians need a voice in that debate.

The Superintelligence: creating a new species

It’s a long way from robot vacuum cleaners to a superintelligence.  At the moment, much artificial intelligence is “narrow”: we can create machines which are very good at particular tasks (such as beating a human at “Go”) but not machines which have broad general intelligence and consciousness.  We have not yet created intelligent life.

But scientists think that day is not far away.  Some are hopeful of the benefits of non-human superintelligence.  Some, including Stephen Hawking, are extremely cautious.  But there is serious thinking happening already.  Professor Nick Bostrom is the Director of the Future of Humanity Institute in the University of Oxford.  In his book, Superintelligence, he analyses the steps needed to develop superintelligence, the ways in which humanity may or may not be able to control what emerges and the kind of ethical thinking which is needed.  “Human civilisation is at stake”, according to Clive Cookson, who reviewed the book for the Financial Times[3].

The resources of our faith have much to say in all of this debate around AI: about fair access, privacy and personal identity, about persuasion in the political process, about what it means to be human, about the ethics of weaponisation and about the limits of human endeavour.

In the 19th Century and for much of the 20th Century, science asked hard questions of faith.  Christians did not always respond well to those questions and to the evidence of reason.  But in the 21st Century, faith needs to ask hard questions once again of science.

As Christians we need to think seriously about these questions and engage in the debate.  I’ll write more in the coming months as the work of the Select Committee moves forward.

[1] Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, Penguin, 2016, p. 49.

[2] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Penguin, 2015, pp. 16–19.

[3] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.