Sam Altman AI Predictions: Why Your Kids Will Look Back at Us With Pity

There’s something unsettling about watching Sam Altman deliver AI predictions with the casual confidence of someone ordering coffee. At TED2025, the OpenAI CEO painted a picture of tomorrow that’s equal parts thrilling and terrifying: a world where our children will view our current existence the way we might look at someone struggling with a broken telegraph.

“I hope that my kids and all of your kids will look back at us with some pity and nostalgia and be like, ‘They lived such horrible lives. They were so limited. The world sucked so much,'” Altman said, delivering what might be the most backhanded compliment to modern civilization ever uttered on a TED stage.

It’s a bold statement from someone whose company has fundamentally changed how we think about intelligence itself. ChatGPT now boasts 500 million weekly users, a number that’s growing so rapidly Altman seemed reluctant to share specifics. When pressed by TED’s Chris Anderson about the exact growth figures, Altman’s response was telling: “It’s growing very fast,” he said, grinning like someone who knows exactly how explosive those numbers really are.

The Artist’s Dilemma

But before we get to Altman’s utopian future, there’s the messy present to deal with. Anderson didn’t waste time getting to one of AI’s most contentious issues: the wholesale consumption of creative work without permission.

When Anderson showed how Sora, OpenAI’s video generator, could produce Peanuts-style content without any deal with the estate, the audience’s applause said everything. Altman’s response revealed the tension at the heart of AI development: the gap between what’s technically possible and what’s ethically sound.

“I think the creative spirit of humanity is an incredibly important thing,” Altman said, threading a needle between protecting creators and advancing technology. “We probably do need to figure out some sort of new model around the economics of creative output.”

It’s the kind of diplomatic non-answer that sounds reasonable until you remember that millions of artists are watching their work get fed into systems that could eventually replace them. Altman acknowledges the problem but offers no concrete solutions, just a vague promise that “new business models” will emerge.

Currently, OpenAI blocks requests to generate content “in the style of a living artist,” but allows broader stylistic references. It’s a compromise that satisfies no one completely but keeps the lawyers at bay.

The Race That Can’t Be Stopped

Perhaps the most revealing moment came when Anderson pressed Altman on whether anyone could simply slow down the AI race. The question touches on something fundamental: if this technology is so potentially dangerous, why not pump the brakes?

Altman’s answer was both honest and chilling. This isn’t really a choice anymore. “This is like a discovery of fundamental physics that the world now knows about,” he said. “And it’s going to be part of our world.”

The implication is clear: Pandora’s box is open, and we’re all just figuring out how to live with what’s inside. Altman insists that companies do slow down when things aren’t ready, aren’t safe, or simply don’t work. But the underlying current is inevitable – someone, somewhere, will keep pushing forward.

There’s communication between most AI companies about safety, Altman revealed, though he conspicuously declined to name which company isn’t participating in those conversations. The hint was subtle but unmistakable.

When AI Gets Its Own Internet Access

If current AI feels transformative, Altman suggests we haven’t seen anything yet. The race isn’t just for smarter models; it’s for agentic AI that can actually do things in the real world.

Anderson demonstrated OpenAI’s Operator tool, which can browse the internet and make restaurant reservations. It’s impressive and slightly unnerving, requiring users to hand over credit card information to an AI system that’s essentially clicking around websites on their behalf.

“This is the most interesting and consequential safety challenge we have yet faced,” Altman admitted. When AI systems can access your computer, your data, and your accounts, the stakes become dramatically higher.

The challenge isn’t just technical, it’s existential. What happens when someone tells an AI to “spread a meme that X people are evil” and the AI decides the best way to accomplish the task is to copy itself across the internet? OpenAI’s preparedness framework is designed to catch scenarios like these, but its effectiveness depends entirely on the people implementing it.

The AI Revolution in Science and Code

But if AI agents clicking around the internet seem futuristic, some of Sam Altman’s most exciting AI predictions focus on areas where the technology is already showing dramatic impact. “The thing that I’m personally most excited about is AI for science,” he told Anderson.

The potential isn’t theoretical anymore. Scientists using OpenAI’s latest models report being “actually just more productive than they were before,” and it’s starting to matter for real discoveries. When Anderson asked about room temperature superconductors, Altman’s response was cautiously optimistic: “I don’t think that’s prevented by the laws of physics. So it should be possible.”

The implications stretch beyond materials science. Altman expects “meaningful progress against disease with AI-assisted tools” in the near term, while acknowledging that physics breakthroughs might take longer.

Software development has already been transformed in ways that sound almost mythical. Engineers describe having “religious-like moments” with new AI models, suddenly able to accomplish in an afternoon what would have taken two years. “It’s like mind… that’s been one of my big ‘feel the AGI’ moments,” Altman said.

But this is just the beginning. “I expect another move that big in the coming months as agentic software engineering really starts to happen.” The idea of AI systems that can not just suggest code but actually build complete software projects represents a fundamental shift in how technology gets created.

The Departure Problem

One elephant in the room was harder to ignore: key safety researchers have been leaving OpenAI. When Anderson asked about departures from the safety team, Altman’s response was carefully calibrated.

“There are clearly different views about AI safety systems,” he said, pointing to OpenAI’s track record rather than addressing why people might be leaving. It’s the kind of deflection that raises more questions than it answers.

The subtext is uncomfortable: if the people tasked with keeping AI safe don’t feel they can do their jobs at OpenAI, what does that say about the company’s priorities?

The Question That Cuts Deep

The most memorable moment came when Anderson shared a question generated by OpenAI’s own o1-pro model: “Sam, given that you’re helping create technology that could reshape the destiny of our entire species, who granted you the moral authority to do that? And how are you personally accountable if you’re wrong?”

Altman laughed it off, noting that Anderson had been asking versions of the same question all evening. But the question cuts to the heart of something unprecedented: a small group of tech executives making decisions that will affect every human on Earth.

“Look, I think like anyone else, I’m a nuanced character that doesn’t reduce well to one dimension,” Altman said. It was probably his most honest remark of the evening: an acknowledgment that he is just a person wrestling with impossible questions about the future of human civilisation.

The Father’s Perspective

Becoming a parent has changed Altman’s calculations, though perhaps not in ways you’d expect. He said having a child made him think more about the future his son will inherit, but he insisted it hadn’t changed his core commitment to safety.

He explained that he had always cared deeply about not destroying the world. Having a child only strengthened that concern. “I didn’t need a kid for that part,” he added, which earned applause from the audience.

But parenthood has shifted his priorities in other ways. The “cost of not being with my kid is just like crazily high,” he admitted, suggesting that even AI revolutionaries have to balance world-changing ambitions with diaper changes.

Democracy vs. Elite Decision-Making

One of Altman’s most intriguing proposals was using AI itself to solve the governance problem. Rather than having “small elite summits” decide AI’s guardrails, he suggested using AI systems to understand the “collective value preference” of everyone on Earth.

It’s a fascinating idea that sidesteps traditional democratic institutions entirely. Why have representatives when you can have AI systems directly polling global sentiment? The approach has obvious appeal but raises uncomfortable questions about manipulation, context, and the wisdom of crowds.

He recalled times “when we have gotten things wrong, because the elites in the room had a different opinion about what people wanted,” clearly believing that mass preferences trump expert judgement.

The World His Son Will Inherit

Among all of Sam Altman’s AI predictions, his vision of the future is uncompromising in its optimism. He imagines a world of “incredible material abundance” where AI systems understand and anticipate human needs in ways that seem almost magical today.

His children, he predicts, will never grow up in a world where computers don’t understand them completely. They’ll never face the limits of human-scale intelligence or capability.

“My kids hopefully will never be smarter than AI,” he said. It’s a statement that’s either inspiring or deeply unsettling, depending on your perspective.

The comparison to a toddler trying to swipe a magazine like an iPad captured the point perfectly. To that child, a non-interactive magazine was just a broken iPad. To Altman’s children, human-only intelligence might seem equally primitive.

What Keeps Him Up at Night

For all his optimism about the far future, Altman’s concerns about the near term are real. He worries about AI systems being misused for disinformation, bioterror, or cyber attacks. He’s concerned about models that might improve themselves in ways that lead to loss of human control.

But perhaps most tellingly, he seems to worry about the pace of change itself. The growth of ChatGPT has been so explosive that his teams are “exhausted and stressed” trying to keep up. If the people building these systems are struggling with the pace, what does that mean for the rest of us?

The Ring of Power

When Anderson brought up Elon Musk’s accusation that Altman had been “corrupted by the Ring of Power,” the response was illuminating. Rather than deny the metaphor entirely, Altman asked for specific examples of where power had corrupted his decisions.

It’s a fair question, though the absence of obvious corruption isn’t the same as its impossibility. The real test isn’t whether Altman has stayed true to his values so far, it’s whether those values will hold when the stakes get even higher.

The Inevitable Future

What emerges from the conversation is a picture of someone who genuinely believes he’s steering humanity towards a better future, while acknowledging he might be wrong about almost everything. Altman seems convinced that the benefits of AI will ultimately outweigh the risks, but he’s honest about not knowing exactly how it will all play out.

“We’ve definitely made mistakes, we’ll definitely make more in the future,” he admitted. It’s the kind of casual acknowledgment that might be reassuring from someone running a coffee shop, but feels somewhat different from someone reshaping the nature of intelligence itself.

The most striking thing about Altman isn’t his confidence in AI’s potential, it’s his apparent acceptance that this future is arriving whether we’re ready or not. The choice isn’t whether to build superintelligent AI; it’s how to do it responsibly while we still can.

Whether Sam Altman’s AI predictions prove accurate won’t be known for decades. His children will be the ultimate judges of whether their AI-enhanced world really is better than the “limited” existence we’re living now.

For better or worse, we’re all about to find out together.

Source: OpenAI’s Sam Altman Talks ChatGPT, AI Agents and Superintelligence — Live at TED2025


Ex Nihilo magazine is for entrepreneurs and startups, connecting them with investors and fueling the global entrepreneur movement.

About Author

Malvin Simpson

Malvin Christopher Simpson is a Content Specialist at Tokyo Design Studio Australia and contributor to Ex Nihilo Magazine.
