On 1st November, a BBC Horizon documentary presented by Hannah Fry explored the impact of artificial intelligence on healthcare and GP services. The programme focussed on Babylon, whose stated mission is to put an accessible and affordable health service in the hands of every person on earth. This is not a future technology; it is available now. If you live in central London, you can sign up to Babylon as your GP.
The potential benefits of AI in healthcare are very significant: more rapid and accurate diagnosis of cancer; diagnostic and triage software available even in remote areas; swift and simple videophone appointments with a doctor. But there are also significant countervailing risks. Horizon explores whether Babylon’s AI really is as reliable in diagnosis as a General Practitioner. Hannah Fry asks whether the way in which Babylon is operating really is privatisation of the NHS by the back door. Healthy and tech-savvy patients are signing up in large numbers for the online service. One of the consequences may be less viable GP surgeries, particularly in poorer inner-city communities.
In The Times, the Duke of Cambridge has reflected on a different area of the impact of technology and artificial intelligence on our lives. In 2016, Prince William set up a cyberbullying task force to look at the effects of advances in technology and digital media on the mental health of young people. Two years on, the Prince expresses his disappointment at the rate of progress. He writes: “I am worried that our technology companies still have a great deal to learn about the responsibilities which come with significant power”.
AI is affecting our lives in the present in both positive and negative ways as these two stories and many others illustrate. The potential benefits are hugely significant: for our national economy, for building healthy communities, for improving levels of health, for improving access to knowledge and education, for increasing productivity and eliminating dull and unsatisfying jobs, for our social and cultural life.
But the risks and dangers are also very great. AI is affecting our political campaigns and our perception of public truth. AI and automation will radically change the future of work in manual labour and the professions, and the kind of initial and lifelong education our society needs. There are deep questions around the harvesting and use of our data. Everything you do online is being gathered up and sold on to a data broker, who will in turn sell your data back to advertisers and campaigners. There are questions of transparency and bias as major decisions involving AI begin to be made around personal finance, human resources and criminal justice. There are questions around the development of autonomous weapon systems and the sexualisation of machines, both of which are a present reality.
In early 2017, I was invited to join the House of Lords Select Committee on Artificial Intelligence, a year-long inquiry by a group of thirteen peers to consider “the economic, ethical and social implications of artificial intelligence”. The Committee had three full-time staff and two seconded professional advisors. We received 223 pieces of written evidence, took oral evidence from 57 witnesses, and visited several different institutions developing AI. It was a fascinating piece of work.
Our report, AI in the UK: ready, willing and able, was published in April with 74 recommendations. The Government published its response to the Select Committee’s report on 28th June. The report will be debated in Parliament on 19th November.
The conclusions of our report are very simple and very important. AI is at a critical stage of its development now. The way we use AI and machine learning really is affecting our everyday lives in all kinds of ways, and those effects will increase in the coming years. We heard the same concerns from every single sector of industry and society. The development of artificial intelligence needs a much stronger and shared ethical code. Government and civil society must be involved in developing that shared ethical understanding.
As a Committee, we have put forward five principles to form the basis of an AI Code as a contribution to the debate.
- Artificial intelligence should be developed for the common good and benefit of humanity
- Artificial intelligence should operate on principles of intelligibility and fairness
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities
- All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Those principles are the centre and conclusion of our report. They have been widely welcomed and commented on. Dr Stephen Cave is the Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. He said in response to the Select Committee’s report:
“The tech entrepreneur mantra of the last decade was ‘move fast and break things’. But some things are too important to be broken: like democracy, or equality, or social cohesion. The only kind of innovation that will bring us closer to the society we want, and the only kind a government should support, is therefore responsible innovation. I’m delighted to see the report endorse this so strongly.”
The UK Government has made a commitment to the UK becoming a global leader in the ethical and responsible use of AI. The Government has established a new Centre for Data Ethics and Innovation, a world-first advisory body, to provide the ethical leadership which will shape how Artificial Intelligence is used.
The Prime Minister said in her speech in Davos in January 2018:
“We want our new world-leading Centre for Data Ethics and Innovation to work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative deployment of Artificial Intelligence.”
At present, globally, the ethics of AI are driven on the one hand by the big tech companies, shaped by profit and the extreme libertarianism of Silicon Valley, and on the other by massive investment by the Chinese government, which has a very different view of the relationship between the state and the individual. There is space for the UK to articulate a different vision, deeply shaped by our national values and shared story.
The ways in which we deploy, develop and respond to these new technologies are hugely significant for our future life on earth. How we harness the power of AI is one of the two most critical questions facing the human race in this generation. The second is the threat to the natural environment posed by climate change.
The ethical questions are new, serious and challenging. But there is an even deeper question being asked by the new technologies which is at the centre of most of the books and articles about artificial intelligence.
The fundamental question of our age is this: what does it mean to be human? How do we live well in the age of the machine? What does it mean to live well as a society with digital technology and the changes it will bring? Where do we find our core identity? What does it mean to be a person?
The whole of our Christian faith offers the most life-giving and affirming response to those questions. We are representatives of a faith which dares to believe that Almighty God, maker of the universe, became a human person in Jesus Christ. The whole of our long story is a two-thousand-year reflection on what it means to be human and to live well. Human identity rests in understanding that we are made and loved by God, that we are persons called into relationship with our creator and called into relationship with one another. We are fallible, finite and flawed. But God in love remakes and renews us through the gift of his Son Jesus Christ, through Christ’s suffering and death, and in love remakes and renews the heavens and the earth.
In Jesus Christ, Christians find a vision of what it means to be human and to live a flourishing human life. That vision is unfolded and explored in the Scriptures and especially in the gospels. To live fully and well is to live in a way which is contemplative, compassionate and courageous. Our lives are lived best in faith and trust with our creator; in love for our neighbours and in hope for the future.
Advances in technology bring sharp ethical dilemmas and deeper questions of human identity. As a community we have a part to play in these great debates and, above all, a story to tell of faith, hope and love and of the almighty, creator God who takes flesh and becomes a human person.
A Presidential Address to the Oxford Diocesan Synod
16 November 2018
Centre for Data Ethics & Innovation
On Tuesday 18 November, the Minister for Digital, Culture, Media and Sport announced the appointment of Bishop Steven to the Board of the new government Centre for Data Ethics and Innovation. Full details are available here: https://www.gov.uk/government/news/stellar-new-board-appointed-to-lead-world-first-centre-for-data-ethics-and-innovation
2018 Winder Lecture
Readers wishing to explore the issues and opportunities presented by artificial intelligence and machine learning might like to listen to the 2018 Winder Lecture, given by Bishop Steven earlier this year and available on SoundCloud. You can also view the presentation images +Steven used during the one-hour talk.