I guess it was too dangerous to even read the article
That they don’t suggests it really is only incrementally better than Opus 4.7, and that the market won’t bear a price increase that would make it economical to serve, let alone profitable.
So the cynical side of me imagines execs sitting around the table worrying that releasing it at anywhere near break-even would risk actually hurting the brand instead of positioning them as a premium company, and this just before an IPO, when they can ill afford that rumour.
So they wonder what to do, and playing the national security card looks like the obvious way out. It’s incrementally better, enough to find bugs that the previous SOTA missed; it doesn’t get used widely, so it’s cheap to serve; and they get the good publicity without the economic scrutiny?
Making a loss selling to a small number of users using it in a limited way is entirely affordable. Making a loss selling it at scale is correspondingly unaffordable?
Whether it’s actually scarcity or hype-building, or a bit of column A and a bit of column B, is TBD. Then again, the new models seem more expensive, they slashed the tokens thrown around in thinking, and put up rate-limit speedbumps, so it’s probably not all gaslighting about compute bottlenecks.
Mythos is dangerous but it's not going to Skynet us.
Just the same as the military drone using some sort of OpenCV library and target prioritisation loop isn't going to turn evil on us.
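To make the analogy concrete: a "target prioritisation loop" of the kind described is just a fixed sort over detector output. Here is a toy sketch in Python. The detection tuples are made up for illustration; in a real pipeline they would come from something like OpenCV template matching or a DNN detector, not be hard-coded.

```python
# Toy target-prioritisation loop. In a real system the candidate list
# would come from a detector (e.g. OpenCV); here the detections are
# hard-coded (label, confidence, distance_m) tuples for illustration.

def prioritise(detections):
    """Rank candidates by confidence, breaking ties by proximity.

    There is no judgement here, just a fixed sort key: the loop will
    happily rank whatever the detector hands it.
    """
    return sorted(detections, key=lambda d: (-d[1], d[2]))

detections = [
    ("vehicle", 0.72, 340.0),
    ("vehicle", 0.91, 820.0),
    ("structure", 0.91, 150.0),
]

ranked = prioritise(detections)
```

The point of the sketch is that nothing in it can "turn evil": the danger or safety lives entirely in what the surrounding system does with the ranking.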
It was never about intelligence, but about willingness to destroy (willingness to defend is not enough). Babylon, Egypt, Persia, Greece, Rome, China, ... I won't mention current examples ...
The real reason is that the hype around Mythos has already gone quiet because it does not find more than other models do. That is, nothing at all in most open source projects. If you hide the model, embarrassing statistics cannot be posted.
I wish the article had been a lot tighter and shorter. This is not earth-shattering information that requires a New Yorker-length piece of investigative journalism.
Do you still care about the work?
There are also some caching plugins for wordpress, but most of them still hit the database on every request.
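The distinction being made can be sketched in a few lines. This is a generic full-page cache in Python (illustration only, not actual WordPress/PHP plugin code): a cache hit returns immediately and never reaches the expensive database path, which is what most WordPress caching plugins fail to do.

```python
import time

page_cache = {}  # url -> (rendered_html, expiry_timestamp)
TTL = 300        # seconds a cached page stays fresh

def render_from_database(url):
    # Stand-in for the expensive path: DB queries, template rendering, etc.
    return f"<html>page for {url}</html>"

def serve(url, now=None):
    """Serve a page, touching the 'database' only on a cache miss.

    Plugins that merely cache query fragments still boot the whole app
    (and open a DB connection) on every request; a full-page cache like
    this one short-circuits before any of that happens.
    """
    now = time.time() if now is None else now
    cached = page_cache.get(url)
    if cached and cached[1] > now:
        return cached[0]                      # cache hit: no DB work
    html = render_from_database(url)          # cache miss: expensive path
    page_cache[url] = (html, now + TTL)
    return html
```

In WordPress terms, the short-circuit would live in an `advanced-cache.php` drop-in that runs before the rest of the stack loads.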
Based on this, I doubt that Mythos Pro is too dangerous to release or that it provides significantly more value.
It has considerably more parameters than most frontier models of today. Which gives it a lot more oomph per token.
Is it a "breakthrough" as in "something novel and unexpected"? No. Is it a "breakthrough" as in "something we know works, but made to work on a greater scale"? Very much so.
It feels like an AI tool that needs professionals to interface with it. Get some of those professionals and have them work with clients in a targeted way. That limits the tool's exposure to bad actors and reduces the resource usage it incurs, because it's being used only by trained individuals.
Use what you learn from the experience to further refine its operation and make it less expensive to operate.
High end AI is at its most useful when you use it to replace high end human labor. You can't buy 9000 cybersec specialists on demand, but you can buy more Mythos tokens.
Then we get into all the scaling curves. Such as: LLMs getting more capable per FLOP, per byte of weights, per byte of VRAM, etc. And: inference compute getting cheaper over time.
I see a lot of "should make the industry nervous", but when you try to dig into it? It's wishful thinking, every fucking time.
[1] https://www.anthropic.com/glasswing#:~:text=deploy%20Mythos%...
Anthropic tries to create marketing hype around Mythos using two psychological tricks.
1. Put large numbers in the headlines.
"Mythos discovered 271 vulnerabilities in Firefox" makes the model seem extremely capable to the uninitiated.
But it's actually meaningless as a measure of capability _improvement_.
Anthropic gave away $100mil specifically as Mythos credits to these projects and companies (that's $2.5mil per project). Spending the same exorbitant amount of compute analyzing the same codebases with an older model like ChatGPT Plus would have turned up 260 of these vulnerabilities, or could even have turned up more than 271.
No need to speculate, since this is exactly what we saw in the few code bases where we have such comparisons (like in the curl codebase). Supposedly weaker models, working with a much lower budget, turned up dozens of vulnerabilities. Mythos turned up only one, which ended up as a low severity CVE.
2. Do the whole "too dangerous to release" shtick. This is one of Dario Amodei's favorite moves. When he was vice president of research at OpenAI, he declared GPT-3 (which wasn't able to produce coherent text beyond 3-4 sentences at the time) too dangerous [1] as well.
Long story short, it's the ChatGPT 4.5 situation again: a company trained a model that's too slow and expensive, but not much more capable than what came before. It therefore requires these marketing stunts.
[1] https://www.itpro.com/technology/artificial-intelligence-ai/...
It claims to be an evidence-based investigation, but it basically invents the contents of the documents it supposedly investigated, such as the Anthropic Frontier Red Team writeup, from whole cloth.
I don't think deeper engagement with it would promote good discussion.
By delaying others' ability to train off Mythos, they hold onto their SWE-Bench Pro head start longer, so that, among other things, the USG can't help but notice Anthropic's lead when deliberating on whether to further substantiate the "supply chain risk".
paol_taja•1h ago
OpenAI already used the same playbook with GPT-2 in 2019, and some of the same people involved back then are now doing it again at Anthropic with Mythos.
Same safety-branding DNA, different company, and people are falling for it again.
Forgeties79•55m ago
It's bad enough that it's a marketing stunt (I totally agree with you). But in the face of what we have seen, and the way they act like it's no big deal, it's just gross.