The stated purpose of Glasswing is to give infra and security orgs the chance to close holes and improve their security. In that context, it seems odd to call for not providing them access with the justification being that they have security breaches sometimes.
Mythos may or may not itself be opened to the public at some point, but I would charitably assume Anthropic plans for a future model at least as good as Mythos Preview to be public, and that the limited release of Mythos is intended to make that eventuality safer by having most of the existing holes patched first.
The Internet was developed by the US state sector and handed off to the private sector in the '90s. Then it worked as an open space until it no longer did, a shift predictably driven by corporate interests.
> In 1893, Frederick Jackson Turner argued that much that is distinctive about America was shaped by the existence of free land to the West where anyone could start over, and that this condition infused America with its characteristic liberty, egalitarianism, rejection of feudalistic hierarchy, self-sufficiency, and ambition.
A more asinine comparison could not have been picked.
I would like to see more countries capable of producing frontier models. At the moment only two countries can, but many others are building their own national models and AI infrastructure and may join the race.
Having a multipolar world may actually result in more freedom of access to frontier models.
Give it a few months and it will be just another model they are selling, but the NEWER model is just too powerful for the public.
How much more obvious can these companies make it that they loathe us and want to keep us down however they can?
I believe they are starting to split hairs and the primary lever left is adding compute.
That is, Mythos will make it much easier to find lurking zero days, so just like responsible disclosure requires a security researcher to notify the software author first and give them some time to patch, giving critical infrastructure folks at least some time to analyze and patch systems seems reasonable to me.
For me, the apparent majority optimism about and acceptance of Mythos's as-yet-untold capabilities is revealed as hollow by the fact that one can't voice that same reverence while framing those capabilities as a downside without being told "it's not even out yet".
“It’s not even out yet” should apply to both situations or neither.
I think it's a reasonable choice to make given that Mythos actually does have cyber capabilities on that level. We already have evidence that large-scale scams are being perpetrated using AI models (such as AI video being passed off as real, or people deepfaking themselves in job interviews).
If you've noticed your new model can be trivially pointed at some open-source codebase with a prompt and harness that amounts to "find as many exploits as possible" and your results are non-trivially substantial and beyond what existing models can do given the same initial parameters, then a gated rollout seems the most reasonable option.
Their writing about the model so far does say this is an issue where, for instance, you can't really use Mythos for interactive coding because it's so slow. You have to give it some work, go home, sleep, come in the next day and then maybe it'll have something for you.
All the AI labs and startups are still losing money hand over fist. Launching Mythos would require it to be priced well above current models, for a much slower product. Would the majority of customers notice the difference in intelligence given the tasks they're setting? If the answer is no, it's not economic to launch.
Really, I'm surprised they've done Mythos. Maybe they just wanted to exploit access to larger contiguous training datacenters than OpenAI, but what these labs need isn't smarter models, it's smaller and cheaper models that users will accept as good enough substitutes (or more advanced model routing, dynamic thinking, etc).
Great to see they were able to spin their lack of resources and money losing business model into an abundance of benevolence and concern for the proletariat.
> A 16-year-old with no credentials and no capital could just do things. The world of bits offered the freedom to build without being drowned in arbitrary constraints, in a way that didn’t require assembling vast capital or prestige or connections, where your creativity and work could speak for itself, and you had agency.
This is now truer than it ever was. I take a contra view and instead see this as fuel on the fire for tinkering to squeeze advanced functionality out of more available things.
It has always been like this: the amateur improvising tooling and equipment to outdo companies with comparatively infinite resources.
This is false. Yesterday's article did not actually show this, and there are many comments in the discussion from actual security people (like tptacek) pointing that out.
AI is not something discovered by scientists and plucked out of the ether. It's engineered and controlled, for profit, by corporations which have demographics and KPIs. These companies don't owe you anything, and they make no promises.
If you're running a business that deeply relies on AI, you might as well add Sam Altman to your board of directors--because he has just as much control over your company as you do. If they have a bad quarter and need to increase rates by 1000%, your choices are to pay up or shut down.
This Mythos situation is just the beginning. Not only do they have everyone hooked, but they've actively stalled the personal skill growth of millions of people who fell into vibe-coding rather than genuinely learning. And now they have that choice: Pay up, or shut down.
Another choice is to switch to a different model, perhaps an open-source one this time.
We're still not at the point where one person with a coding agent can effectively use credits worth more than their salary, so the capability is still well within reach of the vast base of the industry.
Meaning that for now, most people who want to pay for the product (which IMO is pretty reasonably priced for what it does) will be able to get the product.
The economics will make sure of that. The market is ripe for someone basically copying the likes of Mythos and pricing it competitively.
The Elons/FANGS are generally doing fine though.
You could always take the time to do something or pay someone else to do it.
You pay others to focus on things you can’t.
Unless Mythos fully does that (and I say in full confidence that it doesn't), it's just making it cheaper to provide focus.
Including speaking to humans/bots with more resources to monetize said work.
> you had agency
Distribution is also helpful for revenue generation.
> You can generate your own electricity with a solar panel (think local models), but most people would rather pay a utility bill. And the power company doesn’t decide, on the basis of pedigree, who is worthy of electricity. Intelligence should work similarly, where the capabilities you can access may scale with vetting and due process, but the presumption should be access. Add safety guardrails to restrict dangerous use; start by making them overly trigger-happy if you must, and calibrate over time. But the default should be to allow entry.
I don’t agree with everything that the article says but it soulfully blends concepts in history, politics, economics, cryptography and AI.
I don’t think the author could’ve compressed it without precisely sacrificing the essay’s soul.
Is this what everything is coming down to?
What if this new model can start proving Millennium Prize problems and provide insights in other fields that were not possible before?
My intuition says that a model that good will also be equally well aligned -- but it is still highly risky to give it to the general public, because all it takes is one jailbreak by a bad actor.
At that point I think society would change so dramatically that "access to the general public" would be a non-issue. Rather, time would be spent on making abundance happen - think of the political struggles, the economics, and the new ventures.
It's a bit sad that democratised access is not provided because of negative-sum possibilities like cybersecurity.
So, opportunity for individuals comes from disruption. Creative destruction is good up to a point, but it results from advancing capabilities. Technological advances compound and accelerate exponentially. Eventually we reach the point where any malcontent can destroy the world by snapping their fingers. At some point we need to place restrictions on the capabilities accessible to individuals. We have reached that point with nuclear weapons, and I think it is sensible to believe that AI is reaching that point as well.
In today's world, a new digital service is more likely to be successful when attached to celebrities than from pure PLG / Marketing.
That dream was always a lie. But in the past, purchasing power was closer to parity. You only need to look at income versus housing costs in, say, Canada.
Realistically there should not exist any superrich, but this seems hard to change. That means a different societal promise needs to be offered. Other countries manage that. In the USA they have the orange oligarch who said a while ago that there is no money for health care because he has to invade countries and wage war. So much for the "no more wars" promise.
It'd only take one company deciding to not worry about safety, to change the calculus back to "we have to release this to stay competitive".
Are any AI labs claiming this?
The idea is that the attention and the context are big enough for the model to be aware of both the instructions it is following and the thing it is doing.
This works without adding RL layers.
For example, the people who Anthropic "trusts" with this "dangerous" model are a handful of fortune 500 companies? Seriously? Those are the people we trust?
We are going to have access to this within 6 months, and if we don't, someone else will offer an equivalent. Anthropic hasn't walked to the edge of the abyss only to be like "let the CEOs handle this!"
It is simply not the edge of the abyss.
1. AI models are becoming better and better at causing massively disruptive effects, opening up larger and larger liabilities, especially as laws and regulations are being passed or proposed that would put responsibility for a mass disruption/hacking event on the company that served the model which made it possible.
2. The relative advantage of serving an AI model for inference in exchange for money is waning compared to the advantage of using that model internally for purposes which accrue money/power/leverage for that AI company. Why serve a model at 30 dollars/million tokens when you've discovered you can use that model to run a simulated quant firm with a net profit of 300 dollars/million tokens? Why offer the model to companies so they can find zero-day exploits, when you can find them yourself and sell the discovery to companies which would pay millions to avoid that exploit being taken advantage of?
3. Why serve models so another wrapper company like Cursor can make billions off your tokens, and then try to train their own models as fast as possible on your outputs so they aren't dependent on you? The entire AI startup industry, and like 90% of YC batches, depends on being able to serve frontier models at a profit, mediated through some wrapper. Why can't OpenAI/Anthropic, once their models are good enough to handle the ideation/organizational problem, become their own incubator for thousands of AI-run startups, running on models way better than the public has access to?
As a consequence, there is less and less incentive over time to offer models as an API to the public.
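The incentive argument above is just arithmetic on the commenter's hypothetical figures ($30/million tokens from API serving vs. $300/million tokens of net profit from internal use). A toy sketch, with all numbers being illustrative assumptions rather than real pricing:

```python
# Back-of-the-envelope comparison of API serving vs. internal use of a model.
# All figures are the commenter's hypotheticals, not real pricing.

def internal_use_advantage(api_price_per_mtok: float,
                           internal_profit_per_mtok: float) -> float:
    """How many times more valuable a million tokens is when spent
    internally rather than sold via the API."""
    return internal_profit_per_mtok / api_price_per_mtok

api_price = 30.0        # $ revenue per million tokens sold via API (assumed)
internal_profit = 300.0 # $ net profit per million tokens used internally (assumed)

ratio = internal_use_advantage(api_price, internal_profit)
print(f"Internal use is {ratio:.0f}x more valuable per token than API serving")
```

Under these made-up numbers, every million tokens sold to the public carries a 10x opportunity cost, which is the whole of the "less incentive to serve an API" claim.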
Anthropic chose to use their model to find a bunch of vulnerabilities. People have since used much smaller models to find the same issues. We are being set up to have certain preconceived notions about this model.
Ripping away AI access from the public at this point would be catastrophic for the world economy. It's just not happening.
I'm wondering what other security-sensitive software that might become true of in the era of Mythos-or-better AIs?
There will still be open source projects that anyone could learn enough to contribute to, but maybe starting from scratch and writing your own becomes less feasible if you aren't attracting enough interest to get attention from people with access to the best AIs?
For example, Linux patches are going to get expert reviews, but maybe your homegrown OS won't?
Much better than hiding it away where it can't help anyone.
Their intentions were good (they always are), but the minute you decide to nerf something powerful for someone, it means someone out there has access to the full-blown, un-nerfed version.
Which means there are powerful people out there using AI in ways, or for activities, that you will never be allowed to anyway.
So yeah, this is just more of the same