Developing Artificial Intelligence in the UK

 

For the past year, I’ve been a member of the House of Lords’ Select Committee on Artificial Intelligence. The Committee of 13 members received 223 pieces of written evidence and heard oral evidence from 57 witnesses over 22 sessions between October and December. It has been a fascinating process.

The Committee’s report is published today. It’s called AI in the UK: ready, willing and able? You can find it on the Committee website.

When I first started to engage with questions of Artificial Intelligence, I thought the real dangers to humankind were a generation away and the stuff of science fiction. The books and talks that kept me awake at night were about general AI: conscious machines (probably a generation away, if not more).

The more I heard, the more the evidence that kept me awake at night concerned the present, not the future. Artificial Intelligence is a present reality, not a future possibility. AI is used, and will be used, in all kinds of everyday ways. Consider this vignette from the opening pages of the report…

You wake up, refreshed, as your phone alarm goes off at 7:06am, having analysed your previous night’s sleep to work out the best point to interrupt your sleep cycle. You ask your voice assistant for an overview of the news, and it reads out a curated selection based on your interests. Your local MP is defending herself—a video has emerged which seems to show her privately attacking her party leader. The MP claims her face has been copied into the footage, and experts argue over the authenticity of the footage. As you leave, your daughter is practising for an upcoming exam with the help of an AI education app on her smartphone, which provides her with personalised content based on her strengths and weaknesses in previous lessons…

There is immense potential for good in AI: labour saving routine jobs can be delegated; we can be better connected; there is a remedy for stagnant productivity in the economy which will be a real benefit; there will be significant advances in medicine, especially in diagnosis and detection. In time, the roads may be safer and transport more efficient.

There are also significant risks. Our data in the wrong hands means that political debate and opinion can be manipulated in very subtle ways. Important decisions about our lives might be made with little human involvement. Inequality may widen further. Our mental health might be eroded by the big questions AI raises.

This is a critical moment. Humankind has the power now to shape Artificial Intelligence as it develops. To do that we need a strong ethical base: a sense of what is right and what is harmful in AI.

I’m delighted that the Prime Minister has committed the United Kingdom to give an ethical lead in this area. Theresa May said in her speech at Davos in January:

“We want our new world leading centre for Data Ethics and Innovation to work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative development of artificial intelligence.”

That new ethical framework will not come from the Big Tech companies and Silicon Valley which seek the minimum regulation and maximum freedom. Nor will it come from China, the other major global investor in AI, which takes a very different view of how personal data should be handled. It is most likely to come from Europe, with its strong foundation in Christian values and the rights of the individual and most of all, at present, from the United Kingdom, which is also a global player in the development of technology.

The underlying theme of the Select Committee’s recommendations is that ethics must be put at the centre of the development and use of AI. We believe that Britain has a vital role in leading the international community in shaping Artificial Intelligence for the common good rather than passively accepting its consequences.

The Government has already announced the creation of a new Centre for Data Ethics and Innovation to lead in this area. The Select Committee’s proposals will support the Centre’s work.

Towards the end of our enquiry, the Committee shaped five principles which we offer as a starting point for the Centre’s work. They emerged from very careful listening to those who came to meet us from industry, universities and regulators. Almost everyone we met was concerned about ethics and the need for an ethical vision to guide the development of these very powerful tools, which will shape society in the next generation.

These are our five core principles (or AI Code) with a short commentary on each:

Artificial intelligence should be developed for the common good and benefit of humanity

Why is this important? AI is about more than making tasks easier, gaining commercial advantage or one group exploiting another. AI is a powerful technology which can shape our understanding of work, income and health. It’s too important to be left to multinational companies operating on behalf of their shareholders or to a tiny group of innovators. We need a big, wide public debate. It’s also vital that as a society we encourage the best minds towards using AI to solve the most critical problems facing the planet. It would be a tragedy if the main fruits of AI were simply better computer-generated graphics or quicker ways to order takeaway pizza.

Artificial Intelligence should operate on principles of intelligibility and fairness

This is absolutely vital. There is a striking tendency in AI at the moment to anthropomorphise: to make machines seem human. This looks harmless at first, until you begin to consider the consequences. Suppose in a few years’ time you are unable to tell whether that call from the bank is from an AI or a person? Suppose you apply for a job and the decisions about your application are all taken by a computer?

Suppose that computer is using a faulty data set, biased against you, but you never get to know that? There are already a number of chatbots available offering cognitive behavioural therapy. Some of them charge money. Suppose they get better and better at imitating humans. What is to prevent vulnerable people being exploited? Regulation and monitoring are needed not for the first generation of developers (who are mainly very ethical) but for the generation after that.

Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

The Cambridge Analytica and Facebook scandals erupted the week after the Select Committee agreed its final report. They underline the need for this principle. Data is the oil of the AI revolution. It is vital to fuel machine learning and wide application of AI. But data also contains the essence of identity and personality. It is fundamental that our data is safeguarded and not exploited.

All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

AI is a disruptive technology. Some jobs will diminish or disappear. New jobs will emerge—but they will be different, and probably not in the same numbers as the jobs we lose. Inequality will increase unless we take positive steps to counter this. The economic predictions are uncertain. It is, however, absolutely clear that the only way to counter this disruption is education and lifelong learning. That education is not only about reskilling the workforce. There is a universal need for everyone to learn how to flourish in a new digital world. Providing that education is the responsibility of government.

The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Autonomous weapons are a present reality and a future prospect. They will change warfare for ever. The UK’s position on them is, at best, ambiguous: we use definitions which are out of step with the rest of the world. The Select Committee calls on the government for much greater clarity here and, again, for a wider public debate. Deception is already a feature of AI in cyberwarfare and in covert attempts to change perceptions of truth and public opinion. Unless we guard the values of public truth, courtesy and freedom, our society is vulnerable.

Artificial Intelligence is here to stay. It has the capacity to shape our lives in many different ways. This is the moment to ensure that humankind shapes AI to serve the common good and all humanity rather than allowing AI driven by commercial or other interests to shape our future and our national life.
