WHO HE IS
-
A godfather of AI – one of the founding architects of modern artificial intelligence.
-
A member of the OG trio that pioneered deep learning – the neural network technology behind modern AI. It’s the backbone tech that enables computer vision and speech recognition – phones recognising faces, self-driving cars detecting pedestrians, facial tagging on social media, image search, satellite mapping, surveillance. And, of course, LLMs. In 2018, the three won the Turing Award – the “Nobel Prize of Computing” – for their decades-long effort, which turned a fringe idea into an industry staple. And into today’s AI frenzy.
-
Quick tidbit: He stuck with AI during the decades when interest was low – he struggled to find a PhD advisor in the 1980s because no one else was studying machine learning. The perseverance clearly paid off.
-
And, intriguingly, skeptic-in-chief on modern AI. He’s a foremost contrarian voice among his peers – who include AI celebrities like Sam Altman, Dario Amodei, and Elon Musk. And even Geoffrey Hinton and Yoshua Bengio, the two computer scientists who co-won the Turing Award with him. While accelerationists promise AI-led abundance and doomers warn of AI-led apocalypse, LeCun… rolls his eyes. And he recently quit his position as Chief AI Scientist at Meta to prove them all wrong.
THE BIG BEEF
What accelerationists are saying:
-
Sam Altman: “Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week.”
-
Elon Musk: “xAI will have more AI compute than everyone else combined in <5 years.” He’s now even racing to get data centres up in space, like Google and Jeff Bezos.
-
Peter Diamandis, founder of XPrize and high priest of exponential technologies: “We’re approaching the moment when exponential technologies solve humanity’s greatest challenges faster than new problems emerge. By 2035, food, energy, and education will be democratized at scale through autonomous systems and AI.”
What doomers are saying:
-
Geoffrey Hinton: “They may well develop the goal of taking control — and if they do, we’re in trouble.” He’s worried about AI surpassing human intelligence. He’s alarmed by AI’s emergent behaviour – systems that lie, deceive, fake alignment. He’s anxious about uncontrollable AI. And he pegs the chance of an AI-led existential catastrophe at 10-20%. For context, the odds of a commercial airplane crash are about 1 in millions.
-
Yoshua Bengio: “We don’t have methods to make sure these systems will not harm people or turn against people… We don’t know how to do that.”
-
Eliezer Yudkowsky: “If any company or group, anywhere on the planet, builds an artificial superintelligence … then everyone, everywhere on Earth, will die.”
What LeCun’s arguing:
-
Current AI systems are “very stupid”. Machine learning “sucks”. And the next generation of AI – i.e., AI that achieves and surpasses human intelligence – isn’t going to arrive thanks to more infrastructure, more data, more investment – pegged by one estimate at a colossal $8 trillion by 2030. Scale isn’t the answer.
-
LLMs are less intelligent than a 4-year-old child: they can’t understand the physical world, don’t have persistent memory, and cannot reason or plan.
IN HIS OWN WORDS
-
“Before we get to human-level intelligence in machines, we first have to reach cat-level and dog-level AI.”
-
“Oh my god, they’re going to take over the world. No. Intelligence has nothing to do with the desire to dominate.” He’s literally called fears of an AI existential threat – pardon his French – “BS”.
-
“The idea that intelligence is a linear scale… is complete nonsense.” – Getting to AGI will take multiple leaps, not one single ‘aha’ moment.
-
But make no mistake – LeCun is a believer in and proponent of AI, just not the limited machine-learning kind: “AI systems will absolutely mediate all of our interactions with the digital world – and to some extent, with each other.”
-
In fact, he was among those who did not sign the letter calling for a pause in AI development back in March 2023 – he believes slowing down will kill open source, and even cede ground to those who build in the shadows: “The main risk of AI in the future is if AI is controlled by a handful of companies.”
WHAT’S HIS BIG IDEA
-
World model AI: a system that doesn’t just predict, but perceives, plans, and acts autonomously. A machine that won’t need to be pre-programmed with core knowledge, but could learn the rules of the world the way a child does – by, say, watching videos. In short, this is his answer to reaching human-level artificial intelligence.
-
LeCun recently quit Meta, where he was Chief AI Scientist, to start his own AI outfit and build “world model AI” – what he considers will be the next generation of AI.
AT SYNAPSE
Yann LeCun will share why he’s challenging the current hype – and doom – around LLMs. What he thinks about anxieties around emergent behaviour and AI displacement. What he thinks will jumpstart the “real AI revolution”. Why he’s an advocate of open source. What he considers to be “intelligence”. And his timelines for the actual, artificial kind.