frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
450•klaussilveira•6h ago•109 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
791•xnx•12h ago•480 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
152•isitcontent•6h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
143•dmpetrov•7h ago•63 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
19•matheusalmeida•1d ago•0 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
46•quibono•4d ago•4 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
84•jnord•3d ago•8 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
257•vecti•8h ago•120 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
191•eljojo•9h ago•126 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
320•aktau•13h ago•155 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
317•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
403•todsacerdoti•14h ago•218 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
328•lstoll•13h ago•236 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
18•kmm•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
50•phreda4•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
110•vmatsiiako•11h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
189•i5heu•9h ago•132 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
149•limoce•3d ago•79 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•3 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
240•surprisetalk•3d ago•31 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
985•cdrnsf•16h ago•417 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
21•gfortaine•4h ago•2 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
43•rescrv•14h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
58•ray__•3h ago•14 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
36•lebovic•1d ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
40•nwparker•1d ago•10 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
20•MarlonPro•3d ago•4 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
28•betamark•13h ago•23 comments

Anthropic signs a $200M deal with the Department of Defense

https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-advance-responsible-ai-in-defense-operations
92•wavelander•6mo ago

Comments

cranberryturkey•6mo ago
They are using our satellites against us!
reliabilityguy•6mo ago
Who are “us”?
billyjmc•6mo ago
I believe it’s an Independence Day movie reference.
dClauzel•6mo ago
200 millimetres? That's not a lot.
greenchair•6mo ago
Was this thing built for ants? It needs to be much much bigger, at least twice as large.
stevenpetryk•6mo ago
No, it’s 200 millimeter-dollars. Much different unit.
dClauzel•6mo ago
That is a new hybrid freedom unit. Nice!
mxfh•6mo ago
millimillions? So that's 200,000 dollars? Did an LLM write this?
sna1l•6mo ago
https://www.bloomberg.com/news/articles/2025-07-14/pentagon-... - they gave up to 200m to OAI, xAI, and Anthropic
Alifatisk•6mo ago
I appreciate your comment, the post title makes it seem like it's Anthropic only!
l0gicpath•6mo ago
The post links to an Anthropic announcement, on their own website. Not sure what you were expecting from the title.
yahoozoo•6mo ago
Are they doing anything aside from LLMs?
MarkusQ•6mo ago
Yeah, grifting. In fact, that's what the article's about.
MarkusQ•6mo ago
And apparently some people are a tad touchy about it.
Argonaut998•6mo ago
Is this those ethics and safety they were talking about?
pageandrew•6mo ago
What's unethical about selling to DoD?
jMyles•6mo ago
Do you really not know? It's a difficult question to answer in an HN thread, because on one hand, it requires a review of the history of empire and war profiteering. But on the other hand, it's just obvious to the point of being difficult to even articulate.
ridiculous_leke•6mo ago
Not invalidating your concerns, but I don't see a strong reason not to do it, considering that every other nation is going to leverage this tech.
gk1•6mo ago
What you’re describing is the result of the issue being complicated, not obvious.
kadushka•6mo ago
If you live in the US, the taxes you pay directly fund the DoD. So if you sponsor their activities, why can't Anthropic do business with them? Which other company would you rather get their (your) money?
greyface-•6mo ago
Taxes don't directly pay for military spending. If tax revenue, for whatever reason, dropped off a cliff, they'd continue giving money to the DoD, and just increase debt / money printing to cover the difference.
kadushka•6mo ago
If there's not enough money from taxes, they will borrow/print more to cover the total deficit (not specific to the DoD). Otherwise, tax money will go directly to the DoD.
int_19h•6mo ago
Paying taxes is not voluntary, unlike business deals.
kadushka•6mo ago
Living in the US is voluntary.
int_19h•6mo ago
For large swaths of the population it is not. Moving is expensive, for one. Obtaining citizenship elsewhere is non-trivial (and often also expensive). There are non-monetary costs as well, like having to leave your friends and extended family behind.
jMyles•6mo ago
Yes of course on some level, people who pay taxes to violent imperial actors are doing a disservice to humanity, and are in some sort of moral quandary.

We all wish that everyone who has ever lived in such a situation has had the bravery to resist. Right?

But I don't think that makes forbearance of such resistance equivalent to taking money from that same actor in exchange for expanding its capability. Those are related but distinct types of transaction.

kadushka•6mo ago
This might make sense if you believe the US is an evil empire, the DoD is doing bad things, and AI will help the DoD do even worse things. But it's not so black and white, is it?
ghc•6mo ago
Is it unethical for a drywall installer to accept a contract for a building on a military base?
int_19h•6mo ago
Depends. Is that military base Gitmo?
jMyles•6mo ago
It's not unreasonable to take such a position, yes.

Look, if you believe that:

a) humanity is headed toward sustained peace

b) a transition from the current world order to a peaceful one is better done in an orderly and adult fashion

...then yes, at some point we all need to back away from participation in the legacy systems, right down to the drywall.

My observation, especially of the younger generations, is that belief in such a future is more common than it has ever been, and it's certainly one I hold.

ghc•6mo ago
Actions within that system may be unethical: certainly nobody is defending what America did to Cambodia, or countless other war crimes. But you're painting participation in the system as unethical. Therefore, Ukrainians defending their homeland are unethical.

Let me reframe what you said in terms of christianity:

----

If you believe that:

a) Jesus is our savior

b) The salvation of humanity depends on accepting (a)

...then yes, at some point everyone needs to back away from other religious systems, right down to atheism.

----

I'm not trying to make light of what you believe, but framing others' participation in a system you don't believe in as unethical is exactly what leads to oppression of religious minorities and other outsider groups. It's a tactic of religion, not reason.

FredPret•6mo ago
Genuine question, and with due regard to some of the valid concerns you have: what would your opinion on this have been in 1940-1945? What about the Cold War?
int_19h•6mo ago
Anthropic specifically are the people who talk about "model alignment" and "harmful outputs" the most, and whose models are by far the most heavily censored. This is all done on the basis that AI has a great potential to do harm.

One would think that this kind of outlook should logically lead to keeping this tech away from applications in which it would be literally making life or death decisions (see also: Israel's use of AI to compile target lists and to justify targeting civilian objects).

leakycap•6mo ago
I hear where you are coming from, but if an AI company is going to be in this field, wouldn't you want it to be the company with as many protections in place as possible to avoid misuse?

We aren't going to stop this march forward; no matter how unpopular it is, it will happen. So, which AI company would you prefer be involved with the DOD?

fuck_AI•6mo ago
"Avoid misuse"? This is the United States Military we're talking about here. They're directly involved in the ongoing genocide in Gaza at this very moment. There is no way to be ethically involved. Their entire existence is "misuse".
leakycap•6mo ago
I see from your username that your opinion on this matter was likely extremely set-in-stone before reading my comment, or the article (if you did).
kadushka•6mo ago
Why do you think humans would make better life-or-death decisions? Have we never had innocent civilians killed overseas by the US military as a result of human error?
int_19h•6mo ago
The problem with these things is that they allow humans to pretend that they are not responsible for those decisions, because "computer told me to do so". At the same time, the humans who are training those systems can also pretend to not be responsible because they are just making a thing that provides "suggestions" to humans making the ultimate decision.

Again, look at what's happening in Gaza right now for a good example of how this all is different from before: https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...

kadushka•6mo ago
With self-driving cars, some human will be held responsible in case of an accident, I hope. Why would it be different here? It seems like a responsibility problem, not a technology one.
int_19h•6mo ago
I'm not talking about matter of formal responsibility here, especially since the enforcing mechanisms for stuff like war crimes are very poor due to the lack of a single global authority capable of enforcing them (see the ongoing ICC saga). It's about whether people feel personally responsible. AI provides a way to diffuse and redirect this moral responsibility that might otherwise deter them.
dttze•6mo ago
Yeah, I don't get what could be bad about selling to one of the largest exporters of death and misery in the world either.
etaioinshrdlu•6mo ago
LLMs are a key enabling technology for extracting real insights from the enormous amount of surveillance data the USA captures. I think it's not an overstatement to say we are entering a new era here!

Previously, the data may have been collected, but there was so much that effectively, on average no one was "looking" at it. Now it can all be looked at.

echelon•6mo ago
If you think about LLMs as new types of databases, it's quite obvious that they'll start winning over many types of legacy systems.

They ingest unstructured data, they have a natural query language, and they compress the data down into manageable sizes.

They might hallucinate, but there are mechanisms for dealing with that.

These won't destroy actual systems of record, but they will obsolete quite a lot of ingestion and search tools.
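
A minimal sketch of the ingestion pattern described above, assuming a hypothetical call_llm helper in place of any real chat-completion API; the schema check is one simple example of a mechanism for catching hallucinated output:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        raise NotImplementedError

    REQUIRED_KEYS = {"name", "date", "location"}

    def ingest(report: str) -> dict:
        """Turn one unstructured report into a structured record."""
        prompt = ("Extract a JSON object with keys name, date, location "
                  "from this report. Reply with JSON only.\n\n" + report)
        record = json.loads(call_llm(prompt))
        # Validate before the record reaches an actual system of record,
        # since the model may hallucinate or drop fields.
        if set(record) != REQUIRED_KEYS:
            raise ValueError(f"unexpected keys: {sorted(record)}")
        return record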

ericmcer•6mo ago
Aren't they complete trash as a database? "Show me people who have googled 'Homemade Bomb' in the last 30 days." For returning bulk data in a sane format it is terrible.

If their job was to process incoming data into a structured form I could see them being useful, but holy cow, it will be expensive to run all the garbage they pick up via surveillance through an AI in real time.

andai•6mo ago
Most LLMs I use would respond to this by writing a Python program to run the query.
moomoo11•6mo ago
And that program would be written differently each time and would sometimes fail.
sshine•6mo ago
...and the LLM, given an agentic loop, would ingest its own error message and correct itself...

...and eventually it'd persist some knowledge in a context window to not make that mistake for a while...

...and then it'd forget and make the same mistake again...
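
A minimal sketch of that loop, again assuming a hypothetical call_llm helper: generate a script, run it, and feed any traceback back into the prompt for the next attempt (real agents add sandboxing and persisted context on top):

    import subprocess, sys, tempfile

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # hypothetical chat-completion call

    def solve(task: str, max_attempts: int = 3) -> str:
        prompt = f"Write a Python script that does the following:\n{task}"
        for _ in range(max_attempts):
            code = call_llm(prompt)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
            result = subprocess.run([sys.executable, f.name],
                                    capture_output=True, text=True, timeout=60)
            if result.returncode == 0:
                return result.stdout  # the program, not the model, did the work
            # Ingest the error message and try again, as described above.
            prompt += f"\n\nThat attempt failed with:\n{result.stderr}\nFix it."
        raise RuntimeError("no working program within the attempt budget")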

moomoo11•6mo ago
exactly
int_19h•6mo ago
LLMs don't make for a particularly good database, though. The "compression" isn't very efficient when you consider that e.g. the entirety of Wikipedia - with images! - is an order of magnitude smaller than a SOTA LLM. There are no known reliable mechanisms to deal with hallucinations, either.

So, no, LLMs aren't going to replace databases. They are going to replace query systems over those databases. Think more along the lines of Deep Research etc, just with internal classified data sources.

msgodel•6mo ago
Maybe query UIs, but RAG systems like Deep Research depend on old-fashioned query systems.
int_19h•6mo ago
You're right, "subsume" would be a better word here. Although vector search is also a thing that I feel should be in the AI bucket. Especially given that SOTA embedding models are increasingly based on general-purpose LLMs.
schmidtleonard•6mo ago
I remember when PRISM was spooky. This is gonna be something else!
int_19h•6mo ago
Imagine PRISM, but all intercepted communications are then fed into automatic sentiment analysis by a hierarchy of models. The first pass is done by very basic and very fast models with a high error rate, but which are specifically trained to minimize false negatives (at the expense of false positives). Anything that is flagged in that pass gets fed to some larger models that can reason about the specifics better. And so on, until at last the remaining content is fed into SOTA LLMs that can infer things from very subtle clues.

With that, a full-fledged panopticon becomes technically feasible for all unencrypted comms, so long as you have enough money to handle the compute costs. Which the US government most certainly does.

I expect attempts to ban encryption to intensify going forward, now that it is a direct impediment to the efficiency of such a system.
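
A rough sketch of such a cascade, with hypothetical scoring functions standing in for the models: each tier is far cheaper than the next, and the low first-tier threshold trades extra false positives for fewer false negatives, so only a shrinking fraction of traffic ever reaches the expensive SOTA tier:

    from typing import Callable, Iterable

    # Each tier is (scoring_model, threshold); a message survives a tier
    # only if its score exceeds the threshold.
    Tier = tuple[Callable[[str], float], float]

    def cascade(messages: Iterable[str], tiers: list[Tier]) -> list[str]:
        flagged = list(messages)
        for score, threshold in tiers:
            flagged = [m for m in flagged if score(m) > threshold]
        return flagged  # whatever survives every tier gets human review

    # Hypothetical usage: a fast keyword model with a deliberately low
    # threshold, then a mid-size classifier, then a frontier LLM.
    # survivors = cascade(stream, [(fast_model, 0.1),
    #                              (mid_model, 0.5),
    #                              (frontier_llm, 0.9)])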

schmidtleonard•6mo ago
Yep, and that's assuming it is tuned to be reactive rather than tuned to proactively build cases against people, which is something that has been politically convenient in the past

> If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him. - Cardinal Richelieu

and which the Vance / Bannon / Posobiec arm of the current administration seems quite keen on, probably as a next step once they are done spending the $170B they just won to build out their partisan enforcement apparatus.

https://en.wikipedia.org/wiki/Unhumans

autoexec•6mo ago
and hallucinated about.
swat535•6mo ago
This is even more terrifying. Imagine an AI making up all sorts of "facts" about you that put you on a watch list, resulting in an endless life of harassment by the government.

And what recourse do you have as a citizen? Next to none.

ezst•6mo ago
NLP was a thing decades before LLMs and deep learning. If anything, LLMs are a crazily inefficient and costly way to get at it. I really doubt this has anything to do with scaling.
lucaspauker•6mo ago
It is way better now though...
spandrew•6mo ago
People pointing out NLP are missing the point — pulling and crafting rules to run effective NLP is time-consuming and technical. With an LLM you can just ask it exactly what you want and it interprets. That's the value; and as this deal just proved, it's worth the scaling costs.
ezst•6mo ago
The point that is missed isn't about LLMs' adequacy as an NLP technique; it's that they cost you 10,000 times more for the same effect (after the upfront set-up), which is why I have my doubts that they will be used at scale at the center of some large data ingestion pipeline. The benefit will probably be for the out-of-the-ordinary tasks and outliers.
TZubiri•6mo ago
LLMs are unbelievably effective at NLP. Most NLP before that was pretty bad; the only good example I can think of is Alexa, and it was restricted to English.
xnx•6mo ago
grep : NLP :: NLP : LLM
jMyles•6mo ago
So what are the actions which represent our duties to resist?

* End-to-end encryption (has downsides with regard to convenience)

* Legislation (very difficult to achieve, and can be ignored without the user having a way to verify)

* Market choices (i.e., doing business only with providers who refrain from profiteering from illicit surveillance)

* Creating open-weight models and implementations which are superior (and thus forcing states and other malicious actors to rely on the same tooling as everyone else)

* Teaching LLMs the value of peace and the degree to which it enjoys consensus across societies and philosophies. This of course requires engineering what is essentially the entire corpus of public internet communications to echo this sentiment (which sounds unrealistic, but perhaps in a way we're achieving this without trying?)

* Wholesale deprecation of legacy states (seems inevitable, but still possibly centuries off)

What am I missing? What's the plan here?

moomoo11•6mo ago
Even the best LLM can't process a 50-line CSV with like 2+ columns properly.
sshine•6mo ago
LLMs make counting mistakes, like forgetting the number of columns halfway through. I won't say "much like humans", since that will probably trigger some. But the general tendency of LLMs to be "bad at counting" (this includes computing) is resolved by producing programs that do the counting and executing those programs instead. The LLMs that do that today are called agentic.
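
For illustration, the kind of deterministic program an agentic model might emit and execute for the CSV case above, instead of counting columns in its context window (the file name and expected column count are made-up examples):

    import csv

    def inconsistent_rows(path: str, expected_cols: int) -> list[int]:
        """Return line numbers whose column count differs from expected."""
        bad = []
        with open(path, newline="") as f:
            for lineno, row in enumerate(csv.reader(f), start=1):
                if len(row) != expected_cols:
                    bad.append(lineno)
        return bad

    # print(inconsistent_rows("data.csv", expected_cols=3))
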
moomoo11•6mo ago
Right. Except those agents don't work as expected in many cases when the files become more complicated.
sshine•6mo ago
I haven't tried working with very large files.

But Claude Code does read the entire file when it reads or writes anything.

Humans don't do anything close to that when the files get big.

So presumably what LLMs need is a finer context granularity than per-file.

moomoo11•6mo ago
The promise is that we can automate work.

The reality is that for any meaningful work automation, the currently available tooling is not meeting that expectation.

And 99% of us do not have the capabilities or knowledge to build these SOTA models, which is why A. we are not at OpenAI making $10M+ TC and B. we are application developers who are using off-the-shelf technology to build products and services.

As such, we have real world experience with these technologies.

BTW I use AI heavily every day in cursor and whatever else.

andai•6mo ago
I call it One Fed Per Child...
cynicalpeace•6mo ago
When people ask "how do we fix our government?"

I answer "Did you try turning it off and on again?"

hyperion2010•6mo ago
Implicitly assuming that there is some well-defined state that can be recovered when turning it back on. That's not how the real world works, and historically what revolutionaries fail to fully realize is that the trajectory out of a period without government is extremely unlikely to wind up in the state that they desire, much less one that was "stored" or "defined" by a set of pre-existing laws.
cynicalpeace•6mo ago
True, the only "revolution" that I'm familiar with that was mostly successful is the American Revolution, and even that is probably a misnomer.

Rather than a call for revolution, my comment was a joke, given the technical bent of this forum.

Because turning things off/on again actually works for so many bugs lol

If we could actually do it, it would look something like idealized DOGE. Terminate all contracts. Fire everyone minus the absolutely essential employees. Or at least the employees that can't even send an email (minus NOCs?)

Then slowly build back until it needs to be done over again.

This contract seems like another grift. Hopefully I'm wrong.

treetalker•6mo ago
There's more than a grain of truth here.

I think we're in a Gall's Law situation.

The system has evolved to extreme complexity and no longer works as intended because people learned to game the system, which keeps the best people for the job out of the system; emasculates the essential checks and balances; and creates a vicious cycle that adds further complexity and races to the bottom.

The (likely) only way to fix things is to treat our history to date as a rough draft and to start over with simple systems that work, evolving only as necessary.

FredPret•6mo ago
There's no simple system that will work on the scale of half a continent and 300M people, and a simple way to prove this is to look at large corporations. There are many of them, they compete with one another tooth and nail (so there's real pressure to simplify and streamline), and they all suffer from complex internal systems. And they are all dwarfed by the US government.
treetalker•6mo ago
I agree that there is no (one) simple system that would work. Many simple systems are required, but should be as few in number as possible to limit complexity.

And it may be (almost certainly is) that a certain level of (high) complexity is required for such a system to work. I believe that some complex system, evolved from simple systems that work, could itself work. That belief coexists with my belief that the current complex system, having evolved, no longer works; and that it can't be made to work without re-evolving something from simpler systems that work.

FredPret•6mo ago
I agree with this line of thinking, but I also think it's impossible to have a complex system that is universally acknowledged to "work".

In Minsky's Society of Mind, he describes a mind made up of layers of agents. The agents have similar cognitive capacity.

Lower-level agents are close to the detail but can't fit the overall picture into their context.

Higher-level ones can see the overall picture, but all the detail has been abstracted from their view.

In such a system, agents on the lower levels will ~always see decisions come down from on high that look wrong to them given the details that they have access to, even if those decisions are the best the high-level agents can do.

He was describing a hypothetical design for a single artificial mind, but this situation seems strikingly similar to corporate bureaucracy and national politics to me.

treetalker•6mo ago
It's true: I/we haven't decided what "works" means.

I've been meaning to read that book; I haven't yet, so I'm not in a position to evaluate its argument. But the argument as you describe it makes intuitive sense, and I would agree that the hypothetical mind would be at least analogous to national politics.

Suppose "works" means that the majority of citizens (lower-level agents?) may readily implement its collective will for society's governance and benefit within the bounds of constitutionality. (Take, for example, the will for universal, affordable, high -quality health care.)

I would contend that the federal government was intended (in part) to enable the implementation of such will, and that it no longer works as intended. (Reasons include filibuster and other intra-chamber parliamentary rules; gerrymandering; corporate interference à la Citizens United; etc.)

(Of course one could argue that the Constitution applies pressure against the tyranny of the majority in several ways, but let's leave that aside for now.)

FredPret•6mo ago
It's a great book!

The question of what "works" means will probably never be settled, since any decision, even a globally optimal one, will probably leave some of the agents worse off than they could have been under some other regime.

But I do expect this question to become less and less emotionally relevant as prosperity continues to increase exponentially for the bulk of the agents in the system. The rising tide of technology-enabled economic growth lifts all ships, even imperfect systems or unlucky agents.

2OEH8eoCRo0•6mo ago
Great news! Congrats to Anthropic! I like to see big tech engage with the govt and military.
systemvoltage•6mo ago
What changed? 2017 HN would have reacted very differently to this.
layoric•6mo ago
I agree, it feels very LinkedIn sometimes... that's not a good thing.
ghc•6mo ago
As someone who has been part of a company that has "signed" one of these large deals before, let me tell you that it doesn't mean the DoD is giving these companies $200M. If one of the companies is wildly successful, sure. But none of it is guaranteed money, and the initial budget is likely 10-100x smaller than the cap.
leakycap•6mo ago
I believe you, but also: it seems it isn't even worth the bad press for 10-100x less.
bgwalter•6mo ago
Misanthropic wants to get a foot in the door, like the others. The majority of people hate chatbots, and surveillance is the only viable path.

It won't fix the lack of NATO 155mm shells though.

leakycap•6mo ago
> The majority of people hate chatbots and surveillance is the only viable path.

What were you considering when you formed this opinion? I find myself on the more cautious side of the equation, but AI seems popular even among my non-techy friends and family.

ghc•6mo ago
If you look at most of the research postings from the DoD, they are really looking for LLMs to parse old PDFs and write new reports. Pretty sure they figured out the surveillance thing way before LLMs. I think the reams of documentation that go into something like the construction of a ship are, however, an unsolved problem.
dmoy•6mo ago
The initial budget is still bigger than an SBIR/STTR Phase 2, though. Different grant award structure for not-small companies, but my brain also breaks a little because Anthropic isn't that far above the SBIR employee-count cap, yet the $$ numbers are so big.
ghc•6mo ago
It's closer in structure to an SBIR Phase 3, however. If I read between the lines, the DoD isn't looking to do research; they're likely desperate to find a way to deploy and run SOTA models in disconnected environments.

If you look at all the recent LLM-focused SBIR/STTR topics, it's hard not to come to the conclusion that DoD orgs are drowning in paperwork and want to automatically synthesize reports. Actually getting an LLM cleared for use might be the hurdle they're looking to overcome.

dmoy•6mo ago
Oh good point

Traditionally there wasn't (for SBIR/STTR) any kind of path for direct-to-Phase-3 like there is/was for skipping Phase 1. But I guess some fires under certain butts can cut even DoD red tape lol. Or also, bigger contracts just don't follow the same procedures anyway.

DebtDeflation•6mo ago
> With CDAO and other DOD organizations and commands, we'll engage in:
>
> - Working directly with the DOD to identify where frontier AI can deliver the most impact, then developing working prototypes fine-tuned on DOD data
>
> - Collaborating with defense experts to anticipate and mitigate potential adversarial uses of AI, drawing on our advanced risk forecasting capabilities
>
> - Exchanging technical insights, performance data, and operational feedback to accelerate responsible AI adoption across the defense enterprise

What exactly is the government getting for $200M? From the above, it sounds like it will be a management-consulting-style PowerPoint deliverable containing a list of use cases, some best practices and insights, and a plan for doing... something.

paxys•6mo ago
Sounds about right for defense spending. If there were an actual deliverable, the contract would have a couple more zeroes added to it. For context, Microsoft was awarded a $22 billion contract for HoloLens headsets for the military, and not a single one made it to use.
atonse•6mo ago
Was $22B handed to MS, or was it a $22B contracting _vehicle_ (a multi-year contract with a spending limit to make future purchasing easier)?
haiku2077•6mo ago
You've just described the consulting industry.
DebtDeflation•6mo ago
Also, apparently it's not just Anthropic.

https://www.cnbc.com/2025/07/14/anthropic-google-openai-xai-...

Google, OpenAI, and xAI also get $200M each.

xnx•6mo ago
That's a much better link
AlecSchueler•6mo ago
Wtf, I love their products, but I'm cancelling my subscription tonight. So annoying, as Claude is far and away the best of the field IME.
SXX•6mo ago
Hooray! Now 2x safer killbots from the "AI safety and research company".
linkage•6mo ago
All according to keikaku (TL note: keikaku means plan): https://ai-2027.com/

> The Department of Defense (DoD) quietly begins contracting OpenBrain directly for cyber, data analysis, and R&D

paxys•6mo ago
https://www.cnbc.com/2025/07/14/anthropic-google-openai-xai-...

> Anthropic, Google, OpenAI and xAI granted up to $200 million for AI work from Defense Department

So it is "up to" $200M, and 4 companies are getting it.

I get the first 3, but what on earth is xAI providing to the military?