We all wish that everyone who has ever lived in such a situation had had the bravery to resist. Right?
But I don't think that makes forbearance of such resistance equivalent to taking money from that same actor in exchange for expanding its capability. Those are related but distinct types of transaction.
Look, if you believe that:
a) humanity is headed toward sustained peace
b) a transition from the current world order to a peaceful one is better done in an orderly and adult fashion
...then yes, at some point we all need to back away from participation in the legacy systems, right down to the drywall.
My observation, especially of the younger generations, is that belief in such a future is more common than it has ever been, and it's certainly one I hold.
One would think that this kind of outlook should logically lead to keeping this tech away from applications in which it would literally be making life-or-death decisions (see also: Israel's use of AI to compile target lists and to justify targeting civilian objects).
We aren't going to stop this march forward; no matter how unpopular it is, it will happen. So, which AI company would you prefer be involved with the DOD?
Again, look at what's happening in Gaza right now for a good example of how this all is different from before: https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
Previously, the data may have been collected, but there was so much of it that, on average, effectively no one was "looking" at it. Now it can all be looked at.
They ingest unstructured data, they have a natural query language, and they compress the data down into manageable sizes.
They might hallucinate, but there are mechanisms for dealing with that.
These won't destroy actual systems of record, but they will obsolete quite a lot of ingestion and search tools.
If their job were to process incoming data into a structured form I could see them being useful, but holy cow, it will be expensive to run all the garbage they pick up via surveillance through an AI in real time.
So, no, LLMs aren't going to replace databases. They are going to replace query systems over those databases. Think more along the lines of Deep Research etc., just with internal classified data sources.
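To make that concrete, here's a minimal sketch of such a query layer in Python (the llm_complete helper and the comms schema are hypothetical stand-ins, not any real system):

    import sqlite3

    SCHEMA = "CREATE TABLE comms (sender TEXT, recipient TEXT, sent_at TEXT, body TEXT)"

    def llm_complete(prompt: str) -> str:
        """Stand-in for a call to whatever LLM backend is in use."""
        raise NotImplementedError

    def nl_query(question: str, db: sqlite3.Connection) -> list:
        # The model translates the analyst's question into SQL over a
        # known schema; the database remains the system of record.
        sql = llm_complete(
            f"Schema: {SCHEMA}\n"
            f"Write one SQLite SELECT statement answering: {question}\n"
            "Return only the SQL."
        )
        # Guard against hallucinated or destructive statements.
        if not sql.lstrip().lower().startswith("select"):
            raise ValueError(f"refusing non-SELECT statement: {sql!r}")
        return db.execute(sql).fetchall()

The guard at the end is one of those "mechanisms for dealing with" hallucination mentioned upthread: the model only proposes queries; it never becomes the source of truth.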
With that, a full-fledged panopticon becomes technically feasible for all unencrypted comms, so long as you have enough money to handle compute costs. Which the US government most certainly does.
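Some rough arithmetic on those compute costs (every number below is an assumption picked for round figures, not a sourced estimate):

    # Back-of-envelope: running every intercepted message through an LLM.
    messages_per_day = 10_000_000_000   # assumed volume of unencrypted comms
    tokens_per_message = 200            # assumed average message length
    usd_per_million_tokens = 0.10       # assumed bulk inference price

    daily_tokens = messages_per_day * tokens_per_message
    daily_cost = daily_tokens / 1_000_000 * usd_per_million_tokens
    print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 365 / 1e6:,.0f}M/year")
    # ~$200,000/day, ~$73M/year under these assumptions: real money,
    # but pocket change against a national-security budget.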
I expect attempts to ban encryption to intensify going forward now that it is a direct impediment to the efficiency of such a system.
> If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him. - Cardinal Richelieu
It's something the Vance / Bannon / Posobiec arm of the current administration seems quite keen on, probably as a next step once they are done spending the $170B they just won to build out their partisan enforcement apparatus.
* End-to-end encryption (has downsides with regard to convenience; see the sketch after this list)
* Legislation (very difficult to achieve, and can be ignored without the user having a way to verify)
* Market choices (i.e., doing business only with providers who refrain from profiteering from illicit surveillance)
* Creating open-weight models and implementations which are superior (and thus forcing states and other malicious actors to rely on the same tooling as everyone else)
* Teaching LLMs the value of peace and the degree to which it enjoys consensus across societies and philosophies. This of course requires engineering what is essentially the entire corpus of public internet communications to echo this sentiment (which sounds unrealistic, but perhaps in a way we're achieving this without trying?)
* Wholesale deprecation of legacy states (seems inevitable, but still possibly centuries off)
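On the first item, a minimal sketch of what end-to-end encryption buys, using PyNaCl (the convenience cost is key management, which this toy skips entirely):

    # E2E sketch with PyNaCl (libsodium bindings): keys live only on the
    # endpoints, so the carrier relays ciphertext it cannot read.
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # Bob decrypts with his private key and Alice's public key.
    assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"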
What am I missing? What's the plan here?
I answer "Did you try turning it off and on again?"
Rather than a call for revolution, my comment was a joke, given the technical bent of this forum.
Because turning things off/on again actually works for so many bugs lol
If we could actually do it, it would look something like an idealized DOGE. Terminate all contracts. Fire everyone except the absolutely essential employees. Or at least the employees who can't even send an email (minus NOCs?)
Then slowly build back until it needs to be done over again.
This contract seems like another grift. Hopefully I'm wrong.
I think we're in a Gall's Law situation.
The system has evolved to extreme complexity and no longer works as intended: people learned to game the system, which keeps the best people for the job out of it, guts the essential checks and balances, and creates a vicious cycle that adds further complexity and races to the bottom.
The (likely) only way to fix things is to treat our history to date as a rough draft and to start over with simple systems that work, evolving only as necessary.
And it may be (almost certainly is) that a certain level of (high) complexity is required for such a system to work. I believe that some complex system, evolved from simple systems that work, could itself work. That belief coexists with my belief that the current complex system, having evolved, no longer works; and that it can't be made to work without re-evolving something from simpler systems that work.
In Minsky's Society of Mind, he describes a mind made up of layers of agents. The agents have similar cognitive capacity.
Lower-level agents are close to the detail but can't fit the overall picture into their context.
Higher-level ones can see the overall picture, but all the detail has been abstracted from their view.
In such a system, agents on the lower levels will ~always see decisions come down from on high that look wrong to them given the details they have access to, even if those decisions are the best the high-level agents can do.
He was describing a hypothetical design for a single artificial mind, but this situation seems strikingly similar to corporate bureaucracy and national politics to me.
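A toy rendering of that layering (my own illustration, not Minsky's formalism):

    # Each level sees only a bounded summary of the level below, so
    # detail is lost on the way up and top-down decisions can look
    # wrong to the agents who still hold the details.
    CONTEXT_LIMIT = 3  # max items any one agent can attend to

    def summarize(details: list) -> str:
        kept, dropped = details[:CONTEXT_LIMIT], details[CONTEXT_LIMIT:]
        return f"{', '.join(kept)} (+{len(dropped)} details dropped)"

    ground_truth = ["supply late", "morale low", "road flooded",
                    "radio broken", "fuel leak"]

    print(summarize(ground_truth))
    # -> "supply late, morale low, road flooded (+2 details dropped)"
    # The top level decides on 3 of 5 facts; whoever can see "radio
    # broken" and "fuel leak" will find the decision baffling even if
    # it was the best call available up top.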
I've been meaning to read that book; I haven't yet, so I'm not in a position to evaluate its argument. But the argument as you describe it makes intuitive sense, and I would agree that the hypothetical mind would be at least analogous to national politics.
Suppose "works" means that the majority of citizens (lower-level agents?) may readily implement its collective will for society's governance and benefit within the bounds of constitutionality. (Take, for example, the will for universal, affordable, high -quality health care.)
I would contend that the federal government was intended (in part) to enable the implementation of such will, and that it no longer works as intended. (Reasons include filibuster and other intra-chamber parliamentary rules; gerrymandering; corporate interference à la Citizens United; etc.)
(Of course one could argue that the Constitution applies pressure against the tyranny of the majority in several ways, but let's leave that aside for now.)
It won't fix the lack of NATO 155mm shells though.
What were you considering when you formed this opinion? I find myself on the more cautious side of the equation, but AI seems popular even among my non-techy friends and family.
- Working directly with the DOD to identify where frontier AI can deliver the most impact, then developing working prototypes fine-tuned on DOD data
- Collaborating with defense experts to anticipate and mitigate potential adversarial uses of AI, drawing on our advanced risk forecasting capabilities
- Exchanging technical insights, performance data, and operational feedback to accelerate responsible AI adoption across the defense enterprise
What exactly is the government getting for $200M? From the above, it sounds like it will be a management-consulting-style PowerPoint deliverable containing a list of use cases, some best practices and insights, and a plan for doing...something.
https://www.cnbc.com/2025/07/14/anthropic-google-openai-xai-...
Google, OpenAI, and xAI also get $200M each.
> The Department of Defense (DoD) quietly begins contracting OpenBrain directly for cyber, data analysis, and R&D
> Anthropic, Google, OpenAI and xAI granted up to $200 million for AI work from Defense Department
So it is "up to" $200M, and 4 companies are getting it.
I get the first 3, but what on earth is xAI providing to the military?