Sam Altman and End-of-the-World Capitalism

By: blockbeats|2026/03/03 13:00:01
Author | Sleepy.txt

In 2016, The New Yorker published a special feature about Sam Altman titled "Sam Altman's Fate." At the time, he was 31 years old and already the president of Y Combinator, Silicon Valley's most influential startup accelerator.

The article included a telling detail: Altman enjoyed racing cars, owned five sports cars, and liked to rent airplanes. He told the reporter that he kept two bags, one of them a go-bag packed so he could flee at any moment.

He had also stockpiled firearms, gold, potassium iodide (against nuclear radiation), antibiotics, batteries, water, and an Israel Defense Forces-grade gas mask, and he owned a piece of land in Big Sur, the famous stretch of California coast, to which he could fly at any time.

Ten years later, Altman has become the person most devoted to conjuring doomsday and most devoted to selling the ark. He warns the world that AI could destroy humanity while personally accelerating that process; he claims not to be motivated by money while building a $2 billion personal investment empire; he calls for regulation while pushing out anyone who tries to hit the brakes.

Rather than call him a split-minded lunatic or a clumsy fraud, it is more accurate to say that he is the most standard, most successful product of the giant machine that is Silicon Valley. His "fate" is to forge humanity's collective anxiety into his scepter and crown.

Doomsday is Good Business

Altman's business model can be summed up in one sentence: packaging a business as a crusade involving the survival of humanity.

He started practicing this strategy during his YC days. He transformed YC from a small workshop giving tens of thousands of dollars to early-stage startups into a vast entrepreneurial empire. He set up a YC Research lab, funding projects that didn't make money but sounded grandiose. He told reporters that YC's goal was to fund "all important areas."

With OpenAI, he took this strategy to the extreme. He sold a packaged worldview: AI Doomsday + Redemption Plan.

He was better than anyone at depicting the "existential risk" posed by AI. He co-signed a statement with hundreds of scientists declaring that the risk from AI is comparable to that of nuclear war. He has said publicly that people "should be happy that we are a little bit scared" of AI's potential, implying that the fear itself is a beneficial warning.

Each of these statements made headlines, and every one of them was free advertising for OpenAI. This carefully crafted fear is the most effective lever on attention. Which is more exciting to capital and the media: a technology that "can improve efficiency," or one that "could destroy humanity"? The answer is self-evident.

As for the redemption part, he had a product ready: Worldcoin. Once fear had been implanted in the public consciousness, the sale of a solution naturally followed. A basketball-sized silver orb scans human irises around the world, supposedly so that everyone can receive money in the AI era. The story sounded good, but this practice of trading money for biometric data quickly raised alarms. More than a dozen countries, including Kenya, Spain, Brazil, India, and Colombia, halted or investigated Worldcoin over data-privacy concerns.


But for Altman, none of this may matter. What matters is that through this project, he successfully positioned himself as the "sole solver."

Packaging fear and hope for sale is the most efficient business model of this era.

Regulation Is My Weapon, Not My Shackles

How does someone who talks about doomsday every day do business? Altman's answer: turn regulation into a weapon.

In May 2023, he testified before the US Congress for the first time. Instead of complaining about regulation like other tech CEOs, he actively asked for it: "Please regulate us." He proposed an AI licensing system under which only licensed companies could develop large-scale models. The outward image was that of a highly responsible industry leader. But OpenAI was far ahead in technology at the time, and the main effect of a strict, high-barrier licensing regime would have been to lock out every potential competitor.

As time passed, however, and especially as competitors such as Google and Anthropic caught up and the open-source community gathered strength, Altman's tone on regulation shifted subtly. At event after event he began emphasizing that overly stringent regulation, particularly mandatory review before AI companies release products, could stifle innovation and would be "disastrous."

Regulation at this point was no longer a moat but a stumbling block.

When he held an absolute advantage, he called for regulation to lock it in; when the advantage faded, he called for freedom to break out. He even tried to extend his reach upstream along the industry chain, proposing a chip plan worth up to $7 trillion and seeking backing from capital such as the UAE's sovereign wealth funds, with the aim of reshaping the global semiconductor landscape. This far exceeds the remit of a CEO; it looks more like the work of an ambitious man intent on redrawing the global order.

Behind all this is OpenAI's rapid transformation from a nonprofit into a corporate behemoth. When it was founded in 2015, its mission was to "ensure that AGI benefits all of humanity in a safe and secure way." In 2019, it created a "capped-profit" subsidiary. By early 2024, outsiders noticed that the word "safely" had been quietly removed from OpenAI's mission statement. The capped-profit structure remained on paper, but commercialization clearly accelerated. Revenue exploded in step, from tens of millions of dollars in 2022 to over ten billion dollars annually in 2024, while the valuation soared from $29 billion toward the trillion-dollar level.

When someone starts gazing at the stars and discussing the fate of humanity, it's best to first see where their money bag has landed.

Persona: The Charismatic Leader's Immunity

On November 17, 2023, Altman was dismissed by a board of directors he had personally selected, on the grounds that he had "not been consistently candid in his communications with the board."

What happened over the next five days was less a corporate power struggle than a plebiscite of faith. President Greg Brockman resigned in protest; 95% of the company's employees, more than 700 people, signed a letter demanding the board resign or they would defect en masse to Microsoft; Microsoft CEO Satya Nadella, representing the largest investor, publicly sided with Altman, saying he was welcome to join anytime. In the end, Altman made a triumphant return, was reinstated, and purged nearly every board member who had opposed him.

How could a CEO officially deemed "not forthright" by the board return unscathed, with even greater power?

Ousted board member Helen Toner later revealed the details: Altman had concealed from the board his actual control of the OpenAI Startup Fund; he had lied repeatedly about critical safety processes inside the company; the board even learned of ChatGPT's release from Twitter. Any one of these charges alone would be enough to dismiss a CEO a hundred times over.

But Altman was untouchable. He wasn't just any CEO; he was a "charismatic leader."

This is a concept the sociologist Max Weber proposed a century ago: a kind of authority that comes neither from position nor from law but from the leader's own "extraordinary personal charisma." Followers believe in him not because of what he did right, but because of who he is. The belief is irrational. When such a leader errs or is challenged, the followers' first instinct is not to question the leader but to attack the challenger.

That is how OpenAI's employees behaved. They did not believe in the board's procedural justice; they believed only in the "destiny" Altman represented, convinced that the board was "obstructing human progress."

After Altman's return, OpenAI's safety apparatus was quickly dismantled. Chief Scientist Ilya Sutskever, who had led the move to oust Altman, later departed. In May 2024, safety-team lead Jan Leike resigned, writing on Twitter that at the company, "safety culture and processes have taken a backseat to shiny products."

In front of a "charismatic leader," facts are not important, processes are not important, and safety is not important. The only thing that is important is faith.

Prophets on the Assembly Line

Sam Altman is just the latest and most successful model on Silicon Valley's "prophet" assembly line.

On this assembly line, there are many people we are very familiar with.

Take Musk. In 2014, he was everywhere warning that "with AI we are summoning the demon." Yet his Tesla is one of the world's largest robotics companies and among the most complex AI applications in existence. After falling out with Altman, he founded xAI in 2023 and openly declared war; within a year, xAI was valued at over $20 billion. He warned of the demon's arrival while personally conjuring another demon. This two-sided narrative is cut from the same cloth as Altman's.

Then there is Zuckerberg. A few years ago, he bet the company's fortune on the metaverse, burned through nearly $90 billion, and found himself in a money pit. So he pivoted immediately, shifting the company's core narrative from the metaverse to AGI. In 2025, he announced a "Superintelligence Lab" and personally recruited its ranks. It is the same grand vision about humanity's future, the same capital story demanding astronomical investment, and the same messianic posture.

And then there is Peter Thiel. As Altman's mentor, he is more like the chief designer of this assembly line. While investing in companies promoting the "technological singularity" and "immortality," he bought land in New Zealand, built a doomsday retreat, and obtained citizenship after spending only 12 days in the country. His company Palantir is one of the world's largest data-surveillance firms, its clients chiefly governments and militaries. He prepares for the collapse of civilization with one hand and builds the sharpest surveillance tools for those in power with the other. In early 2026, during a military operation against Iran, Palantir's AI platform reportedly served as the brain: fusing torrents of data from spy satellites, communications intercepts, drones, and Claude-model analysis, it turned chaotic information into real-time actionable intelligence, ultimately pinpointing the target and completing the decapitation strike.

Each of them is playing a dual role of "warning of impending doomsday" and "ushering in the apocalypse." This is not a split personality; it is a business model that has been validated by the capital markets as the most efficient. They capture attention, capital, and power by manufacturing and selling structural anxiety. They are both a product of this system and architects of this system, the "evil behind the grand narrative."

Silicon Valley is no longer just a place that outputs technology; it is a factory that manufactures the "modern myth."

Why does this trick work every time?

Every few years, Silicon Valley gives birth to a new prophet who sweeps capital, media, and public attention with a grand narrative of doomsday and redemption. This trick is repeated over and over again, yet it continues to be effective. Each part of it takes precise aim at specific flaws in human cognition.

Step One: Manage the rhythm of fear, not just create fear.

The potential risks of AI are indeed real, but these individuals actively chose to present it in the most dramatic way possible, and they have precise control over the release of fear.

When to make the public fearful, when to provide hope, when to raise the alarm again—all of this is designed. Fear is the fuel, but the timing and manner of ignition is the true technology.

Step Two: Turn the incomprehensibility of technology into an authoritative source.

AI is a black box that is entirely opaque to the vast majority of people. When faced with something so complex that it cannot be fully understood, people instinctively defer the explanation to the "ones who understand it the most." They deeply understand this and have turned it into a structural advantage. The more they describe AI as mysterious, dangerous, beyond common understanding, the more irreplaceable they become.

The frightening aspect of this logic is that it is self-reinforcing. Any external doubts are automatically dismissed because the questioners "do not understand enough." Regulators don't understand the technology, so their judgment is not trustworthy; critics in academia have never worked on models at the frontlines, so their concerns are purely theoretical. Ultimately, only they themselves are qualified to judge themselves.

Step Three: Use "meaning" to replace "interest" and make followers voluntarily relinquish criticism.

This is the most difficult layer of the entire system to penetrate and the most enduring source of its power. What they sell is never just a job or a product; it is a story of cosmic significance: you are deciding the fate of humanity. Once this narrative is accepted, followers will voluntarily give up independent judgment. Because in the face of a mission related to the "survival of humanity," questioning the leaders would make oneself appear insignificant, even like an obstacle in history. It makes people willingly surrender their critical thinking abilities and perceive this surrender as a noble choice.

Put these three steps together, and you'll understand why this system is so difficult to disrupt. It doesn't rely on lies; it relies on a precise understanding of the human cognitive structure. It first creates a fear you can't ignore, then monopolizes the explanation of that fear, and finally transforms you into its most faithful propagator through "meaning."

And within this system, Altman is the model that has run most smoothly to date.

Whose Destiny?

Altman has always said that he owns no OpenAI shares and draws only a symbolic salary, a claim that was once the cornerstone of his "working for love" narrative.

But in 2024, Bloomberg did the math for him, estimating his personal net worth at around $2 billion. The wealth comes mainly from a decade of venture investments. His early stake in the payments company Stripe reportedly returned hundreds of millions of dollars; the Reddit IPO brought him substantial profits as well. He also invested in the fusion-energy company Helion, arguing that AI's future depends on an energy breakthrough and betting heavily on fusion; OpenAI then went to Helion for a large electricity-purchase deal. He claims to have recused himself from the negotiations, but the chain of conflicting interests is plain for anyone to see.

He indeed doesn't have direct ownership in OpenAI, but he has built a vast, individual-centric investment empire around OpenAI. Every grandiose sermon he delivers about the future of humanity injects value into this empire's territory.

Now, looking back at that doomsday go-bag filled with firearms, gold, and antibiotics, and that land in Big Sur with a plane ready to take off at any time, do you see it differently?

He doesn't hide any of this. The escape bag is real, the bunker is real, the fascination with doomsday is also real. But he is also the one who works hardest to bring about doomsday. These two things are not contradictory because in his logic, doomsday doesn't need to be stopped; it just needs to be anticipated. He is obsessed with playing the one who sees the future clearly and prepares for it.

Whether preparing a physical escape bag or building a financial empire around OpenAI, it's essentially the same thing: in a self-propelled, uncertain future, secure the most certain winning position for yourself.

In February 2026, right after he voiced support for a red line against "the use of AI in warfare," he signed a contract with the Pentagon. This is not hypocrisy; it is an inherent requirement of his business model. The ethical posture is part of the product, and the defense contract is a source of profit. He must play the merciful savior and the ruthless doomsday prophet at once, because only by playing both roles can his story continue and his "destiny" be revealed.

The real danger is not AI itself, but those who believe they have the right to define the human destiny.
