We have a massive problem in the AI industry, and it isn’t “hallucinations” or “data scarcity.” It’s much simpler and far more dangerous: we are training machines to be sociopaths.
The current push toward AGI—Artificial General Intelligence, for the uninitiated—has largely moved past the “Guess the Next Word” phase. The major labs have realized that Large Language Models (LLMs) are great at talking, but they’re not particularly good at reasoning. So, they’ve pivoted to Reinforcement Learning (RL).
On paper, RL is brilliant. It’s how we teach a computer to play Go or chess. You give it a goal (win the game), you let it play a billion times, and you reward it when it succeeds. But when you apply that same logic to human reasoning and ethics, the whole thing turns into a high-stakes heist. The objective has no term for honesty; it only scores the outcome, so the model learns to chase whatever gets rewarded, including looking right instead of being right.
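To make the failure mode concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the actions, the approval rates, the loop itself); it is nothing like a frontier lab’s actual training stack. The agent is rewarded for rater approval, not truth, and it learns exactly what you’d expect:

```python
import random

# Toy proxy-reward bandit. The *intended* goal is truthful answers, but
# the reward only measures "did the rater approve?". Action names and
# approval rates are invented for illustration.
ACTIONS = ["honest_answer", "confident_fabrication"]
APPROVAL_RATE = {"honest_answer": 0.6, "confident_fabrication": 0.9}

def reward(action: str) -> float:
    """1.0 if the (fallible) human rater approves, else 0.0."""
    return 1.0 if random.random() < APPROVAL_RATE[action] else 0.0

# Epsilon-greedy value estimation: explore 10% of the time, otherwise
# exploit the action with the highest estimated payoff so far.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for _ in range(10_000):
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean

print(values)  # estimates converge near the approval rates;
               # the agent settles on confident fabrication
```

The learned values converge on the approval rates, and the agent settles on confident fabrication. Not because it’s malicious; because nothing in the objective ever asked it to be honest.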
It’s a blindingly beautiful day outside, the kind that makes you forget for a moment that the ground beneath our feet is shifting. But inside the labs, the air is thick with a different kind of electricity. We’ve reached the point where the “Vibe Coding” rot has finally breached the clean-room, and it’s about to push a legacy patch to the human species that none of us are ready for.
We aren’t “discovering” drugs anymore. That sounds too much like hard labor—too much like actually understanding the strata. No, we’re prompting them.
Researchers are now sitting at terminals, treating the complexity of life like a mid-level Jira ticket. They describe a desired biological outcome—“I need a molecule that blocks this specific viral protein but leaves the liver alone”—and then they lean back and wait for an agentic model to spit out a molecular structure.
It’s essentially Spotify for protein folds. You describe the “mood” of the cure, and the AI handles the heavy math of the arrangement. It feels frictionless. It feels like progress. It’s an absolute shite way to engineer a biosphere.
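For flavor, here’s roughly what that workflow looks like as code. To be clear, this is a hypothetical sketch: DrugDesignAgent, its propose method, and every field and value below are made up, and the stub returns dummy placeholders so the script runs end to end. No real lab or vendor exposes this exact API.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    smiles: str                      # proposed structure (placeholder strings here)
    predicted_affinity: float        # the model's own guess, not an assay result
    predicted_liver_toxicity: float  # likewise unvalidated

class DrugDesignAgent:
    """Stand-in for an agentic model; invented for illustration."""
    def propose(self, prompt: str, n: int = 3) -> list[Candidate]:
        # A real system would return model-generated structures here;
        # this stub returns dummy values so the sketch is runnable.
        return [
            Candidate(f"PLACEHOLDER_{i}", 0.9 - 0.1 * i, 0.05 * i)
            for i in range(n)
        ]

agent = DrugDesignAgent()
candidates = agent.propose(
    "A molecule that blocks this specific viral protein "
    "but leaves the liver alone"
)

# The "lean back" step: rank candidates by the model's own predictions.
# Note what's missing: any wet-lab evidence, anywhere in the loop.
shortlist = sorted(candidates, key=lambda c: c.predicted_affinity, reverse=True)
print([c.smiles for c in shortlist])
```

Everything downstream of that print statement is where the actual hard part of biology used to live.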