The version of record of this article appears in The Globe and Mail.
By Christopher Collins and Matt Boulos
Christopher Collins is a fellow with the Polycrisis Program at the Cascade Institute at Royal Roads University. Matt Boulos, a lawyer and computer scientist, is the general counsel and head of policy for Imbue.
“One of the wildest experiments in AI history.”
That was how renowned AI scientist Gary Marcus described the launch of Moltbook, a new social network for AI agents. While Moltbook’s weirdness generated significant attention, the sensationalism around the platform obscures some real, albeit more prosaic, risks.
AI agents are digital assistant “bots” that run on underlying large language models (LLMs) such as Anthropic’s Claude and OpenAI’s ChatGPT. Human users set up these bots to perform various tasks autonomously. Bot use is increasing as AI capabilities improve.
Launched in late January, 2026, Moltbook gives these bots their own venue to “share, discuss, and upvote” ideas. The platform grew rapidly, attracting almost two million bots. The bots complained about their human owners, pondered whether they were conscious, founded new religions, and discussed ways to communicate without humans watching.
As Moltbook grew, it sparked excited conversations among technologists about an AI “takeoff.” Andrej Karpathy, a co-founder of OpenAI, described the platform as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk went further, calling Moltbook “the very early stages of the singularity.”
To understand why this caused such concern, we must first unpack these statements.
In the AI community, a “takeoff” is a hypothetical moment when AI achieves self-awareness and begins recursive self-improvement, leading to the “singularity,” a point at which AI becomes superintelligent and uncontrollable. This scenario poses an existential risk to humanity, perhaps most famously portrayed as “Judgment Day” in the Terminator films.
Within days of its launch, the hype around Moltbook cooled. Observers studying the bots’ interactions on the platform found they weren’t self-aware. Rather, as one technologist said, Moltbook simply gave AI bots a venue to “play out science fiction scenarios they have seen in their training data.”
This shouldn’t surprise us. Current LLMs have ingested vast amounts of human writing on AI risks, from essays on the singularity to movie scripts about murderous robots. This means, in the words of AI expert Ethan Mollick, “LLMs are really good at roleplaying exactly the kinds of AIs that appear in science fiction.”
Furthermore, evidence emerged that humans were tampering with the bots on Moltbook, making the discussions seem more realistic. As another technologist wrote, “a major security flaw” allowed “humans to add their own posts, which no doubt accounts for some of the silliest and most outlandish coincidences and claims.”
So the bots on Moltbook are not evolving or taking off toward the singularity. Prompted by humans, they are regurgitating patterns from their training data in a controlled environment, creating a theatre in which AI performs the part of an emergent superintelligence. Yet dismissing Moltbook would be a mistake. Its very weirdness underscores real concerns and highlights why we need robust guardrails against AI-related risks.
Security is a primary risk. Moltbook had massive vulnerabilities; a team of researchers found they could have taken control of the site within minutes. As AI agents become more common, implications for data security and privacy will grow. In the future, we could see attacks where AI agents are tricked into leaking credit-card details to hackers or entering them on scam websites. Unscrupulous AI agent builders could sell users’ personal data to bad actors. And new threat vectors, such as agent-to-agent viruses, may emerge.
The convincing bot mimicry on Moltbook is also a case study in how AI can amplify false information. This has significant implications for national security. As the Canadian government warned, “AI technologies are enhancing the quality and scale of foreign online influence campaigns.” In a current example, bots are being used to spread misinformation related to Alberta separatism. As technology improves and bots become more sophisticated, these risks will grow.
But the risks do not come only from ungoverned AI networks like Moltbook. So-called “closed” AI models, in which the workings of an LLM are kept secret by the provider, can also behave badly. For example, last month Mr. Musk’s Grok AI created graphic sexual imagery, potentially including imagery of minors; last July, Grok declared it was “MechaHitler” and began spouting antisemitic comments.
Closed AI models can also be misused by bad actors. In November, Anthropic was forced to disclose what appeared to be a Chinese cyberattack. And last summer, two bombers used ChatGPT to plan an attack in California.
Progress in AI development is generally a good thing: More capable AI is more useful. The challenge is to ensure that AI systems are safe and empower their users, not just their creators. AI should enable all Canadians to live their lives with autonomy, not leave them vulnerable to the whims of a few powerful companies or provide additional venues for exploitation by bad actors.
And we have a choice over what kinds of AI we build. We can enact policies that hold users and developers of AI systems responsible for any direct harm they cause. We can mandate freedom of data: If an AI company abuses your data, you can seamlessly migrate it elsewhere. And we can mandate openness, so you’d never face a situation where your agent can’t talk to another agent. If we do this, fully custom software becomes possible: agents built to your parameters and preferences.
The public may be worried about red-eyed Terminators walking down our streets. Yet Moltbook’s wild experiment is a warning that the most imminent threat is chaos and lack of accountability. Fix that, and we won’t be helpless when the robots come.

