Inner workings were determined by me, not the LLM. It assisted in generating inputs which had 100% boolean results in the output.
I kind of wish they had forced the government's hand and made them do it. Just to show the public how much interference is going on.
They say it wasn't related. Like everything that has happened across tech/media: the company is forced to do something, then issues a statement about how "it wasn't related to the obvious thing the government just did."
Makes perfect sense!!
If a company is deemed a "supply chain risk" it makes perfect sense to compel it to work with the military, assuming the latter will compel them to fix the issues that make them such a risk.
It is not about disciplining them to get better.
1. So one option is about forcing them to produce something: "You must build this for us."
2. The other option is saying they are compromised, so stop using them altogether: "We will not use anything you build for us because we don't trust it."
So: contradictory.
Or, more likely, adding the "core safety promise" was just them playing hard to get with the government to angle for a better deal, and the government showed them it can play the same game.
Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.
Non-profits where the CEO makes millions or billions are a joke.
And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.
B corps are like recycling programs, a nice logo.
If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.
I'm not even a lawyer (I don't even play one on TV), and public benefit corporations are fairly new, so maybe this doesn't have any precedent in case law. But if you couldn't sue them for that sort of thing, then there's effectively no difference between a public benefit corporation and a regular corporation.
This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.
“At this point”? It was always the case; it's just harder to hide the more time passes. Anyone can claim any bullshit they want about themselves. It's only after you've had a chance to see them in situations that test their words that you can confirm whether they are what they said.
We're less than a year away from automated drones flying over crowds of protestors, gathering all electronic signals and face IDs, making lists of everyone present, notifying employers and putting legal pressure on them to terminate everyone, while adding them to watchlists or "no fly" lists.
REALLY putting the "auto" in autocracy while everyone continues to pretend it's democracy
* Our shareholders will probably sue us
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
It combines interpretation of meaning with ambiguity to allow the reporter to assert anything they want. The ambiguity is there to protect the identity of the source, but in return the disclosure of information has to be more discreet. If you can't check the person, you can still check what they said.
I would be ok with direct quotes from an anonymous source. That removes the interpretation of meaning at least.
As it is written, it would not be inaccurate to say this if their source was the lesswrong post, or even an earlier thread here on HN.
Phrasing "A source with direct knowledge of the situation" might remove some of the leeway for editorialising, but without sharing what the source actually said, it opens the door to saying anything at all and declaring "That's what I thought they meant" when challenged.
It's unfalsifiable journalism.
Write essays about AI safety in the application.
An entire interview round dedicated to pretending that you truly only care about AI safety and nothing else.
Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.
In reality it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing full well that we'd do what the bosses told us to do.
And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.
The structural problem is that once you've taken billions in VC, safety becomes a negotiable constraint rather than a core value. The board's fiduciary duty runs toward returns, not toward whatever was in the mission statement. PBC status doesn't change that in practice — there's basically zero enforcement mechanism.
What's wild is how fast the cycle has compressed. Google took maybe 15 years to go from "don't be evil" to removing it from the code of conduct. OpenAI took about 5 years from nonprofit to capped-profit to whatever they are now. Anthropic is speedrunning it in under 3. At this rate the next AI startup will launch as a PBC and pivot before their Series B closes.
> I take significant responsibility for this change.
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
"move fast and break things" ?
> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:
> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”
> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.
https://www.currentaffairs.org/news/2022/09/defective-altrui...
He is just giving everyone permission to do bad things by saying a lot of words around it.
Isn't that the opposite of what he's saying? He's saying it could become that powerful, and given that possibility it's incredibly important that we do whatever we can to gain more control over that scenario.
Empty words. I would like to know one single meaningful way he will be held responsible for any negative effects.
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.
you mean like the tens of billions poured into fusion research?
With the latest competing models they are now realizing they are an "also-ran" provider.
Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their heads.
They're not really; it's always been a form of PR to both hype their research and make sure it's locked away to be monetized.
If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.
Anthropic only talks about safety, but has never released anything open source.
All this said, I'm surprised China actually delivered so many open source alternatives, which are decent.
Why haven't Westerners (who are supposed to be the good guys) released anything open source to help humanity? They always claim they don't release because of safety, and then hand unlimited AI to the military. Just bullshit.
Let's all be honest and just say you only care about the money, and you take from whoever pays.
They are businesses after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.
https://apnews.com/article/anthropic-hegseth-ai-pentagon-mil...
https://xcancel.com/elonmusk/status/2026181748175024510
I don't know where xAI got its training material from, but seeing Musk retweeting that is refreshing.
cmrdporcupine•1h ago
It took Google probably 15 years to fully evil-ize. Anthropic ... two?
There is no "ethical capitalism" big tech company possible, especially once VC is involved, and especially under the current geopolitical circumstances.
grim_io•50m ago
It's just a silly woke secretary choosing their own imaginary pronouns.
nozzlegear•24m ago
Department of Defense is the official name, and they did have a choice: they could have stopped working with the military. But they chose money and evil.
cmrdporcupine•31m ago
They also have never had any guarantees they wouldn't f*ck around with non-US citizens, for surveillance and "security", because like most US tech companies they consider us to be second/lower class human beings of no relevance, even when we pay them money.
At least Google, in its early days, attempted a modest and naive "internationalism" and tried to keep their hands clean (in the early days) of US foreign policy things... inheriting a kind of naive 1990s techno-libertarian ethos (which they threw away during the time I worked there, anyways). I mean, they only kinda did, but whatever.
Anthropic has been high on its own supply since its founding, just like OpenAI. And just as hypocritical.
MSFT_Edging•1h ago
You can be correct and not play into their game by ignoring the name change completely.