If a model is supposedly too dangerous for the public but perfectly acceptable for Amazon, Google, Apple, and Microsoft, this stops looking like safety and starts looking like investor theater.
Anthropic says it withheld Mythos out of caution.
Maybe.
But the more details emerge, the less this feels like a safety decision and the more it looks like one of the most sophisticated marketing campaigns the AI industry has produced yet.
That is the uncomfortable implication behind the criticism now circulating, from coverage in The Guardian to an even more brutal AOL framing that compared the strategy to manufacturing a disease and then selling the cure. Hyperbolic? Sure.
Completely wrong? Not obviously.
Because the contradiction is glaring.
If Mythos was truly too dangerous for broad release, why was it quietly made available to Amazon, Google, Apple, and Microsoft?
That is the part nobody can explain away with a straight face.
You cannot spend months cultivating an atmosphere of risk, uncertainty, and extreme caution around a model, then hand privileged access to the biggest companies on earth and expect everyone to salute your ethical seriousness.
At that point, the message changes.
It stops being: we are protecting the public.
It starts sounding like: we are protecting the premium tier.
This matters because the AI industry has discovered that fear monetizes almost as well as capability.
Maybe better.
For years, tech marketing was built around optimism. Faster. Smarter. Cheaper. More creative. More productive. AI companies still use that language, but they have added a darker, more lucrative layer: this system is so powerful that ordinary standards no longer apply.
That pitch does two things at once.
First, it flatters investors. If the product is dangerous, it must be important. If it is tightly restricted, it must be uniquely valuable. Scarcity and fear combine into prestige.
Second, it flatters enterprise and strategic partners. If the public cannot have it, but you can, then access itself becomes a status symbol. You are not just buying software. You are joining the inner circle.
That is not safety culture. That is luxury branding with existential language.
And Anthropic is hardly alone in understanding the game. But Mythos may be the cleanest example yet of how AI firms can convert caution into hype.
The old Silicon Valley move was to launch early and apologize later.
The new move is subtler.
Hint at extraordinary danger. Refuse broad release. Leak selective details. Let the public imagination inflate the system into something nearly mythological. Then offer controlled access to a small set of giant partners whose interest doubles as third-party validation.
Now the model is not just powerful.
It is whispered about.
That is marketing gold.
And it solves a very practical business problem. Frontier AI companies need massive capital, cloud support, distribution, and political cover. The easiest way to secure all four is to convince powerful institutions that they are funding and containing something historic.
What better way to do that than by saying, in effect, this model is too dangerous for the masses, but manageable in the hands of responsible giants?
Notice how convenient that framing is.
It makes secrecy look principled.
It makes exclusivity look responsible.
It makes preferential treatment for major backers look like stewardship rather than favoritism.
Most importantly, it transforms ordinary commercial gatekeeping into moral seriousness.
That is why the Mythos story deserves more skepticism than it is getting.
Safety is real. Frontier systems do raise genuine concerns. It would be foolish to pretend every restriction is fake. But a real safety culture should become more convincing as you examine its incentives, not less.
Here, the incentives are flashing neon.
The companies with the deepest pockets got access.
The public got the warning label.
Investors got the aura.
And Anthropic got to occupy the most attractive position in modern AI: not reckless, not stagnant, but tantalizingly restrained.
That is a beautiful brand to own.
It says you are advanced enough to build the scary thing, responsible enough not to release it, and connected enough to let the right people use it anyway.
Again, maybe all of that can be defended on technical grounds.
But if so, Anthropic should defend it clearly.
Explain the exact risk model.
Explain why the public cannot bear the exposure but four of the world’s largest corporations can.
Explain why those corporations’ commercial interests do not compromise the purity of the safety argument.
Explain why this is not a case of turning access control into a hype engine.
Until that happens, suspicion is not cynicism. It is the rational response.
Because AI companies have learned something politically useful: fear creates narrative gravity. It concentrates attention. It justifies secrecy. It attracts capital. It turns ordinary product decisions into civilization-scale drama.
And once a company realizes that, the temptation is obvious.
Do not merely market the model.
Market the danger of the model.
Market your restraint.
Market the idea that the world should be grateful you are holding the line.
That may be what Anthropic is doing with Mythos.
If so, this is a pivotal moment for the industry. Not because one company might be overselling caution, but because it suggests the next phase of AI competition will be built on more than performance benchmarks and enterprise contracts.
It will be built on managed fear.
That is a powerful lever.
It is also a corrosive one.
Because once fear becomes a customer acquisition strategy, every safety claim starts to look like a sales asset.
And once that happens, the public has a right to ask a much sharper question.
Is this company protecting us from dangerous models?
Or teaching the market that danger itself is the most valuable feature it can sell?