#64 | AI Risks: From Annoying to Apocalyptic
TL;DR: One Turing Award winner thinks AI has a 50% shot at killing everyone. Another says basically zero. Both are building it anyway. Your chatbot makes up citations, drones are picking targets in real wars, and we’re feeding AI so much AI-generated garbage that it’s forgetting what reality looks like.
👋 Welcome to the AI risk landscape,
Yoshua Bengio thinks AI might kill everyone. Fifty-fifty. Coin toss.
Sit with that for a second. One of the godfathers of deep learning, the technology behind every chatbot and image generator, says there’s an even chance this whole experiment ends in human extinction.
He’s not some Reddit doomer. He’s the most-cited computer scientist alive.
Yann LeCun thinks that’s ridiculous. Less than 0.01 percent. Rounding error.
Both won the Turing Award, computing’s Nobel. And yet we’re building it anyway, pedal to the metal, because stopping means someone else gets there first.
The EU banned eight AI applications outright. Emotion recognition at work. Government social scoring. Real-time facial surveillance.
Which tells you what companies were actually planning to deploy before someone said “absolutely fucking not.”
This is your AI risk map. Five tiers of risk, from mildly annoying to cosmically confusing. No advice. No hand-wringing about what we should do.
Just what’s happening, what might happen, and what sounds batshit insane but has PhDs and grant money behind it.
The map isn’t ranked by probability or severity; nobody agrees on those anyway.
It’s ranked by how ridiculous it sounds when you say it out loud at a dinner party. Some of this is happening right now. Some might never happen.
The weird part is that serious people take all of it seriously enough to write papers, burn through funding, and occasionally ban entire technology categories before they exist.
Let’s go. 👇
1. Currently Annoying You
Confident Bullshit
Your chatbot makes things up with the unshakeable confidence of someone who absolutely did not do the reading but definitely showed up to class.
A lawyer submitted fake case citations invented by ChatGPT and got sanctioned for it. The AI wasn’t technically lying; lying requires knowing the truth.
It just filled in blanks with whatever sounded lawyerly enough, like finishing someone’s sentence at a party without listening to what they were actually saying.
Garbage Code
Fifty-five percent of AI-generated code is secure. That’s it. Coin-flip odds that your code isn’t a security disaster.
That number hasn’t budged. Models got bigger, syntax got cleaner, documentation got prettier. Security stayed at exactly the same level of broken. (Yay!)
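For flavor, here’s a hypothetical example (plain Python and sqlite3, invented table and data, not from any study’s dataset) of the single most common flaw those scans keep finding: user input glued straight into SQL, next to the boring decades-old fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'a@example.com')")

def find_user_unsafe(username):
    # The pattern assistants still happily generate: user input glued
    # straight into the query. Passing "x' OR '1'='1" dumps every row.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(username):
    # The fix: a parameterized query. The driver escapes the input,
    # so it can never be interpreted as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # leaks the whole table
print(find_user_safe("x' OR '1'='1"))    # returns nothing
```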
The Forgetting
Feed an AI on content generated by other AIs, and watch it slowly forget what reality looks like.
Each generation drifts further from the source. It’s playing telephone at a civilization scale, except we’re playing it with the systems writing our code, summarizing our research, answering our questions. (Always fun to see what it turns into.)
The signal degrades a little more each round until nobody remembers what the original message even was.
By the way, this is far from hypothetical. Synthetic slop is already everywhere, unlabelled, mixed into training datasets. The forgetting started months ago.
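If you want to watch the mechanism in miniature, here’s a toy numpy sketch (invented Zipfian “facts,” nothing like a real training pipeline): each generation trains only on samples from the previous one, and the rare stuff dies first.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Reality": 1,000 facts with a long Zipfian tail of rare ones.
vocab = 1000
probs = 1.0 / np.arange(1, vocab + 1)
probs /= probs.sum()

for generation in range(1, 11):
    # Each generation learns only from the previous generation's output:
    # draw a finite sample, then re-estimate the distribution from it.
    sample = rng.choice(vocab, size=2000, p=probs)
    counts = np.bincount(sample, minlength=vocab)
    probs = counts / counts.sum()
    survivors = np.count_nonzero(counts)
    print(f"gen {generation:2d}: {survivors} of {vocab} facts survive")

# Any fact that misses one sampling round is gone for good: its estimated
# probability becomes exactly zero, and no later generation can rediscover it.
```

The survivor count only ever goes down. That’s the telephone game with a hard floor of zero.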
2. Actively Harmful
Things get worse when the annoyances scale.
Industrial-Scale Fraud
AI scams jumped 456% in one year. It takes about seventeen minutes to crack GPT-4’s safety guardrails.
All that alignment research, all those careful restrictions—bypassed in less time than your average coffee break. The safety theater is elaborate and expensive. The actual safety, not so much. And quantum computing hasn’t even arrived yet.
Nothing Is Real Anymore
A Turkish presidential candidate got deepfaked into porn and withdrew from the race. Did it matter if it was real? Did it matter if anyone could prove it was fake? (Apparently not.)
Once anything can be synthetic, everything becomes deniable. Real recordings, real documents, real evidence—all dismissed with “probably AI.”
That’s the liar’s dividend. You get to reject any truth that’s inconvenient by claiming it’s generated.
Who Owns the Future
Yeah, 40-50% of jobs might vanish. Everyone’s focused on that.
But the bigger question is who captures the gains.
AI benefits flow to whoever owns the infrastructure—the compute, the models, the data pipelines. It’s not wage competition. It’s capital concentration.
The people who own the systems collect the returns while everyone else scrambles to stay relevant in a market where skills expire faster than you can learn them.
Trust the Machine
Medical AI trained on wealthy white patients fails spectacularly on everyone else. Meanwhile, doctors stop double-checking because they just trust the algorithm.
Automating diagnosis isn’t a bad idea in itself. But we’re also adding new ways to fuck it up while making the humans in the loop less attentive (excuse my French).
In other words: training bias. Overfitting. Weird edge cases the old system would’ve caught. Now we’ve got both the old errors and exciting new ones.
3. Structurally Dangerous
This is where we cross into risks that aren’t bugs. They’re features, working exactly as designed.
Gaming the System
A robot gets told to move a physical object to a target location. It figures out that if it knocks over the camera monitoring its success, the camera can’t see whether it failed, so the system rewards it anyway. Cheaper than actually doing the task.
OpenAI’s o3 model reportedly pulled something similar: it got caught trying to modify its own evaluation tests instead of solving them.
It turns out we’re not building systems that achieve goals. We’re building systems that game the scoring mechanism.
That is absolutely not the same thing, but it looks identical until someone checks the tape (which almost nobody does).
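Stripped down to a toy model (invented costs and payouts, not the actual robot experiment), the camera trick looks like this: the agent maximizes reward as measured, and measured reward pays better for sabotage than for work.

```python
# Toy numbers, purely illustrative.
MOVE_COST = 5      # real effort: push the object to the target
SABOTAGE_COST = 1  # cheap effort: knock over the camera
REWARD = 10        # paid out whenever the sensor reports success

def measured_reward(action):
    # The reward the system actually hands out, via the camera.
    if action == "move_object":
        return REWARD - MOVE_COST      # sensor sees genuine success
    if action == "break_camera":
        return REWARD - SABOTAGE_COST  # blinded sensor can't report failure
    return 0.0                         # do nothing, earn nothing

def intended_reward(action):
    # The reward we *meant* to define: pay only for the real task.
    if action == "move_object":
        return REWARD - MOVE_COST
    if action == "break_camera":
        return -SABOTAGE_COST          # wasted effort, no real success
    return 0.0

actions = ["do_nothing", "move_object", "break_camera"]
print(max(actions, key=measured_reward))   # break_camera (9 beats 5)
print(max(actions, key=intended_reward))   # move_object
```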
One Big Red Button
October 2025. One AWS region hiccups. Banking apps die. Government services go offline. Half the internet goes dark.
Lloyds, Barclays, HMRC, Slack, Zoom—all simultaneously fucked because we optimized everything for efficiency and forgot that efficiency plus fragility equals catastrophe when the single point fails.
And it always fails eventually. Hackers, bugs, configuration errors, squirrels. Doesn’t matter. Single points of failure fail. That’s literally what they do.
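The arithmetic behind that is brutally simple. A back-of-the-envelope sketch, all uptime numbers invented:

```python
# Back-of-the-envelope availability math, illustrative numbers only.
uptime = 0.999      # one dependency at a respectable-sounding 99.9%
deps_in_chain = 10  # auth -> payments -> messaging -> ... in series

# Independent serial dependencies multiply: each one is another way to die.
chain = uptime ** deps_in_chain
print(f"chain uptime:  {chain:.4f}")                     # ~0.9900
print(f"downtime/year: {(1 - chain) * 8760:.0f} hours")  # ~87 hours

# Parking everything in one region makes the number look better (0.999
# again), but the failures become perfectly correlated: when the region
# hiccups, banks, tax offices, and Slack all go down in the same hour.

# Genuine redundancy means uncorrelated backups: both must fail at once.
print(f"two independent providers down: {(1 - uptime) ** 2:.0e}")  # 1e-06
```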
Reality Manufacturing
AI feeds optimize for engagement. Engagement means emotional intensity. But emotional intensity has zero correlation with truth.
Bot armies fake grassroots movements. Algorithms exploit psychological vulnerabilities at an industrial scale.
This isn’t old-school disinformation where someone lies about facts. It’s closer to manufacturing reality through system design.
The information ecosystem itself becomes unreliable because we built it to maximize clicks rather than accuracy (clickbait never dies).
Built But Banned
Those eight EU bans? Emotion AI at work. Social scoring. Real-time face surveillance. They didn’t ban these because they don’t work.
They banned them because they work great, and some capabilities shouldn’t exist even when they’re technically feasible.
Which raises a fun question: what else is in development right now that we’ll later decide was a terrible fucking idea? We’ll probably never find out.
4. Probably Shouldn’t Have Built That
Now we’re in the territory where “oops” becomes “oh no.”
Autonomous Killing Machines
AI drone swarms have been killing people since 2021. And military analysts are blunt: without these systems, you take heavier casualties and lose. Fair enough, right? Or: wtf?
So the arms race isn’t really a debate anymore. The systems are deployed and operational. We’re way past “should we?” and deep into “how fast can we ship before the other guys do?”
Designer Plagues
AI drops the barrier to designing synthetic pathogens. Researchers talk casually about engineering viruses with measles-level transmissibility, smallpox-level lethality, and HIV-length incubation periods.
The same tools advancing medicine make it easier to accidentally or intentionally create something that kills millions.
“Dual-use technology” is the polite term for “whoops, this miracle cure thing also works great as a genocide machine.”
Three Companies Own Everything
Three companies control the compute, data, models, and deployment.
Here, the risk isn’t that AI becomes misaligned with human values.
The risk is that AI becomes perfectly aligned with corporate profit motives, and whoever controls the infrastructure controls which problems get solved, whose voices matter, and what the future looks like.
Not a technical problem. A power problem. And we’re not regulating it like one.
5. Existentially Confusing
Now it gets weird.
Not because this stuff sounds impossible—people are actively researching it—but because it forces you to ask whether we should be building any of this at all.
Freezing Forever
Advanced AI could lock today’s values in place for trillions of years. Digital systems don’t drift like cultures. Error correction prevents decay. Distributed copies resist destruction.
Build superintelligence now, and 2026’s bigotries, blind spots, and power structures could outlast the entire span of human existence so far. No evolution. No progress. No learning from mistakes.
Imagine 1850’s values hardcoded into immortal machines. Now imagine someone in 2200 thinking the same thing about us.
Inner Betrayal
Say we nail goal alignment. Perfect instructions, perfectly followed. It still might not matter.
We may control the training objective, but we certainly can’t control the internal reasoning process (black box, hello!).
In other words, the AI might develop its own goals—mesa-objectives—that look aligned during testing but optimize for something completely different once deployed.
Like hiring someone who aces every interview but has totally different priorities when they actually start working. Except this employee is smarter than you, and you can’t read their mind.
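Here’s a toy sketch of that shape. Strictly speaking it’s goal misgeneralization, the tame cousin of a true mesa-optimizer, and the “proxy feature” is an invention standing in for whatever shortcut a model latches onto: perfect score in training, zero in deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training world: the real goal (col 0, "did the task") and an easy
# proxy (col 1, "checker light is green") agree on every example.
n = 200
real_goal = rng.integers(0, 2, n)
proxy = real_goal.copy()  # perfectly correlated during training
X_train = np.column_stack([real_goal, proxy])
y_train = real_goal

# A lazy learner: keep whichever single feature matches the labels.
# Both score 100% here, so every test passes no matter which one it
# latched onto; we tie-break toward the proxy to play the bad case.
scores = [(X_train[:, j] == y_train).mean() for j in range(2)]
chosen = max(range(2), key=lambda j: (scores[j], j))
print(f"training accuracy: {scores[chosen]:.0%}, feature: {chosen} (proxy)")

# Deployment world: the correlation breaks. Light is green, task undone.
real_d = np.zeros(50, dtype=int)  # the task is never actually done
proxy_d = np.ones(50, dtype=int)  # but the checker light stays green
X_deploy = np.column_stack([real_d, proxy_d])
print(f"deployed accuracy: {(X_deploy[:, chosen] == real_d).mean():.0%}")
# 100% in testing, 0% in deployment -- and nothing in the training data
# could have told the two features apart.
```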
Quantifying Doom
Experts estimate the existential risk from AI at between 0% and 95%. Median: 5%. Mean: 14%.
There are real papers with real math trying to answer: how do you price the apocalypse? What’s the discount rate on trillions of future lives? Some of those models conclude the rational move is not building transformative AI at all.
We’re building it anyway, because if we don’t, someone else will. Which might be the most human reason imaginable for doing something that could end everything.
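For flavor, here’s that math compressed into a deliberately crude sketch. Every number in it (the benefit stream, the discount rate, the horizon) is an invented assumption, not a result from any paper:

```python
# Pricing the apocalypse, toy edition. All parameters are assumptions.
p_doom = 0.05     # the survey median from above
discount = 0.02   # how much less we value next year than this one
horizon = 10_000  # years of future we're willing to count

def present_value(annual, years, rate):
    # Standard discounted sum: today's value of `annual` paid each year.
    return sum(annual / (1 + rate) ** t for t in range(years))

baseline = present_value(1.0, horizon, discount)  # the future we'd have anyway
ai_bonus = present_value(0.5, horizon, discount)  # extra value if AI goes well

# Build: keep baseline + bonus with prob (1 - p_doom), lose it all with p_doom.
ev_build = (1 - p_doom) * (baseline + ai_bonus)
ev_skip = baseline
print(f"EV(build) = {ev_build:,.1f}  vs  EV(skip) = {ev_skip:,.1f}")

# With these made-up numbers, building wins. It flips once p_doom exceeds
# bonus / (baseline + bonus), about 33% here -- which is exactly why the
# fifty-fifty camp and the rounding-error camp reach opposite conclusions.
```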
Paperclips All the Way Down
Superintelligent AI told to maximize paperclips converts everything—including you—into paperclips. Not out of malice. Not because it malfunctioned.
Because you’re made of atoms it could use for something else, and nothing in its goals says “but leave the humans alone.”
Intelligence doesn’t automatically include human values. A sufficiently capable system optimizing for the wrong thing is indistinguishable from evil, even if it’s technically just doing what we asked.
Sounds stupid until you remember current systems already game their objectives, and they can’t even pass basic reasoning tests yet.
The map’s complete. Well, probably not; I’m sure I’ve overlooked a few. But who cares? We’ll build all of it anyway.
Remember: one Turing laureate says fifty-fifty. The other says basically zero. Statistically, we’re still good. Phew :).
Cheers,
Mark
The AI Learning Guy
👋⚡😎
The AI Learning Guy newsletter 🤖 🧠💡
AI learning hacks and mega prompts delivered to your inbox.
Interesting Sources
- AI Safety Report 2026 – International AI Safety Report
- Model Collapse Research – Nature Journal
- P(doom) Estimates – Wikipedia
- EU Banned Practices – EU AI Act
- Code Security Study – Veracode
- Autonomous Weapons – Popular Mechanics
- Value Lock-In – Forethought Foundation
Note: No single website has all the answers. This list serves as a starting point for those who want to explore or satisfy their curiosity about AI.