Panacea for the world’s problems. Or weapons of mass destruction?
Roomba-like docile pets. Or masters in the making?
Dazzling tech-topia. Or dystopian meltdown?
In the last decade, AI acceleration has known no bounds. Computer vision. Face recognition. Language processing. Content generation. Decision-making. Leaps in deep learning, in speed, precision, accuracy, control. Breakthroughs in reasoning. Case in point: the latest version of ChatGPT reportedly has an IQ of 155 – just five points short of Einstein’s estimated 160.
It’s hard not to peer into the far future and imagine an age of abundance. Limitless energy. Interstellar travel. Immortality. Or a human-engineered AI apocalypse. Terminator. I, Robot. That Love, Death + Robots episode that shows the rise of a fast-moving miniature human civilisation as it goes from hunter-gatherers to industrial age to technological singularity – and then falls. A microcosm of what may happen to us as we reach technological maturity?
Debating AI existential risk – a curiosity that began even before the birth of AI – has today become all the rage. An entire discipline. From open letters calling for a pause in development and deployment to specialised institutions dedicated to telescoping the future.
Nick Bostrom – polymath, futurist, and Elon Musk’s favourite philosopher – stands squarely at its contemporary centre.
Bostrom calls AI the “single most important and daunting challenge that humanity has ever faced.” More bluntly still: akin to a ticking time bomb in the hands of children. In fact, more than the relentless dystopia peddled by Hollywood, it was his meditations on AI – collated in his 2014 bestseller Superintelligence – that mainstreamed the concern among the “broligarchy”.
A digital doomsayer, a new-age Nostradamus? Neither. Bostrom may hate science fiction, but he’s a transhumanist – an adherent of a movement that advocates using technology to alter the human condition, from the biological to the mental. Or as he puts it: “gung-ho techno-cheerleading.” He has signed up to be cryogenically frozen within hours of his death. He has courted controversy over remarks bordering on eugenics. He’s the mind behind the theory that humanity could be living in a simulation created by an advanced civilisation. And he’s argued for why he – and all of us – should want to achieve a “posthuman” state – whether in terms of healthspan, cognition, or emotion.
Yes, he foresees digital uploading of our biological minds as a distinct possibility this century. Yes, he believes machine intelligence is the inevitable “portal” we have to pass through to realise humanity’s long-term future. But he’s not a tech accelerationist either.
Bostrom’s creed is to predict, unpack, parse. Probability theory, risk analysis, morality mining, and a touch (or more) of science fiction. To treat each technology as part of a larger philosophical and existential toolkit for understanding humanity’s potential – not as a guaranteed or imminent reality.
His simulation argument? A thought experiment to challenge assumptions. His work on mind enhancement, brain uploading, AI? To chart what could go right, what could go wrong. Map ethical minefields. Possibilities and pathways.
For nearly two decades he led the Future of Humanity Institute at Oxford, which pondered everything from space colonisation to nanotechnology – and how each might imperil the human species. (The research centre was shut down in 2024. Whether the cause was controversy or bureaucracy is not clear.)
Bostrom wants to convince us to take responsibility for safeguarding humanity’s long-term potential – he’s shared his omens everywhere from Google to Washington – and to shepherd us into our next phase of existence.
An existence that currently stands at the edge of a cliff rather than on a springboard. Why? Because, Bostrom believes, “superintelligence” is coming, the moment when biological intelligence is outsmarted by the silicon kind. A jet outflying a falcon.
At which point, such an invention may well be the last one we humans ever make – because this ultraintelligent machine will be able to birth a cavalcade of creations better than we ever could.
Will this become our Manhattan Project moment? Superintelligence that pays as much heed to us as we do to ants. Machines that pursue goal optimisation at the expense of ethical codes, social cues, human welfare. That outsmart any effort to contain them. AI godfather and 2024 Nobel laureate Geoffrey Hinton warns of a 10–20% chance that AI leads to our extinction within the next three decades.
A cartoonish yet morbid example Bostrom advances: suppose we give an AI the goal of making humans smile. While the AI is weak, it tells jokes and performs funny actions. Once it becomes superintelligent, it realises it can simply take control of the world, stick electrodes into everyone’s facial muscles, and voilà – constant grins.
Or will an “intelligence explosion” lead to a solved world? Material abundance. Land of Plenty. No conflict, no disease. A sustainable, peaceful, exponential expansion. What will a world of tech excess mean for human (perhaps posthuman by then) purpose?
Even as Bostrom flips the script from “what if AI goes wrong?” to “what if AI goes right?” in his latest book Deep Utopia, we’re in danger of falling into the abyss either way.
Unless we avoid self-annihilation by course-correcting today. Now.
20th-century sci-fi author Ray Bradbury once wrote, “I don’t try to describe the future. I try to prevent it.” At SYNAPSE 2025, Nick Bostrom will take you from human intelligence to artificial intelligence to superintelligence. Whether we’re close to inventing our very last machine. Why longer-term risks are worth our time today. How humans can retain control – through ethics? design? governance? technology? – and prevent our own extermination. And just what it will mean to be human if we succeed.
We invite you to read Anders Sandberg’s profile from SYNAPSE 2024 to understand the basics of transhumanism.