by John Lennox
In April 2018, at the TED conference in Vancouver, physicist and cosmologist Max Tegmark, president of the Future of Life Institute at MIT, made this rather grandiose statement: “In creating AI [artificial intelligence], we’re birthing a new form of life with unlimited potential for good or ill.”
A book by Sir Nigel Shadbolt and Roger Hampson entitled The Digital Ape carries the subtitle How to Live (in Peace) with Smart Machines. They are optimistic that humans will still be in charge, provided we approach the process sensibly. But is this optimism justified? The director of Cambridge University’s Centre for the Study of Existential Risk said: “We live in a world that could become fraught with . . . hazards from the misuse of AI and we need to take ownership of the problem, because the risks are real.”
The ethical questions are urgent, since AI is regarded by experts as a transformative technology in the same league as electricity. It would, however, make more sense to compare AI with nuclear energy than with electricity. Research into nuclear energy led to nuclear power stations, but it also fuelled a nuclear arms race that brought the world to the brink of extinction.
AI creates problems of similar, or of even greater, magnitude.
The brilliant play Copenhagen by Michael Frayn explores the question of whether scientists should simply follow the mathematics and physics without regard to the consequences of what they are developing, or whether they should have moral qualms about it. The context of the play is the research that led to nuclear fission. Exactly the same issues are raised by AI, except that AI is accessible to many more people than atomic physics and does not require very sophisticated and expensive facilities.
You cannot build a nuclear bomb in your bedroom, but you can hack your way around the world and cause substantial damage. We need to stop and ask: What is the truth behind claims like Tegmark’s? Are they perhaps exaggerated speculation that goes far beyond what scientific research has actually shown? There may well be some validity in the observation that the amount of unjustified speculation a person offers about AI is in inverse proportion to the amount of actual hands-on work in AI that person has done. For it would seem that those scientists who actually build AI systems tend to be more cautious in their predictions about its potential than those who do not.
There is also the question of what worldview is driving all of this. What are the assumptions that are being made? Are they in the interests of all of us or simply of an elite few who wish to dominate for their own purposes?
John Lennox is Professor of Mathematics (emeritus) at the University of Oxford and an adjunct lecturer at the Oxford Centre for Christian Apologetics. He is the author of numerous books including Where is God in a Coronavirus World?, Can Science Explain Everything?, and Joseph: A Story of Love, Hate, Slavery, Power & Forgiveness. The above article was taken from his most recent book, 2084: Artificial Intelligence & The Future of Humanity. Copyright © 2020. Used by permission of Zondervan. www.zondervan.com.