
Silicon Valley's Doing Hard Things Again [video]

https://www.youtube.com/watch?v=cru2bkqwSYk
1•CharlesW•33s ago•0 comments

Identifying and Preventing Fraudulent Engineering Candidates: An Investigation

https://socket.dev/blog/fraudulent-engineering-candidates-investigation
1•feross•1m ago•0 comments

Israeli spies control your VPN and Social Media

https://mronline.org/2024/09/13/exposed-how-israeli-spies-control-your-vpn/
1•dnc0•1m ago•0 comments

Ts-base: TS library template with release-please and tsdown

https://www.bengubler.com/posts/2025-09-17-ts-base-typescript-library-template
1•nebrelbug•2m ago•0 comments

Quart: a Fast Python web microframework

https://quart.palletsprojects.com/en/latest/
1•saikatsg•4m ago•0 comments

Show HN: Annotate any document and train extraction by example, not prompts

https://deeptagger.com/
1•avloss11•6m ago•0 comments

Fed approves quarter-point interest rate cut and sees two more coming this year

https://www.cnbc.com/2025/09/17/fed-rate-decision-september-2025.html
2•foxfired•6m ago•0 comments

China is sending its world-beating auto industry into a tailspin

https://www.reuters.com/investigations/china-is-sending-its-world-beating-auto-industry-into-tail...
1•petethomas•7m ago•0 comments

New SOTA on Arc-AGI Using Grok 4

https://twitter.com/arcprize/status/1967998885701538060
1•Rover222•7m ago•1 comments

Shai-Hulud Supply-Chain Scanner (Rust)

https://github.com/PSU3D0/leto-ii-shai-hulud
1•ManfredMacx•8m ago•0 comments

Self-Driving People

https://bitfieldconsulting.com/posts/self-driving-people
1•dxs•14m ago•0 comments

The Quantum Ogre Dilemma

https://knightsdigest.com/the-quantum-orgre/
1•Totalpartykill•14m ago•1 comments

Show HN: Tutrilo – lightweight training management for small providers

https://tutrilo.com
1•ribpx•15m ago•0 comments

What We Do and Don't Know About US TikTok Deal with China

https://www.bloomberg.com/news/articles/2025-09-17/trump-s-tiktok-deal-with-china-how-would-it-wo...
2•SilverElfin•16m ago•1 comments

DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning

https://www.nature.com/articles/s41586-025-09422-z
2•giuliomagnifico•18m ago•0 comments

Indian Names

https://www.theparisreview.org/blog/2025/09/17/indian-names/
1•bookofjoe•19m ago•0 comments

I Am an Engineer

https://anna.kiwi/blog/work-systems/i-am-an-engineer/
1•ghuntley•19m ago•0 comments

Icarus raises $6.1M to take on space's "warehouse work" with embodied-AI robots

https://techcrunch.com/2025/09/17/icarus-raises-6-1m-to-take-on-spaces-warehouse-work-with-embodi...
1•fcpguru•21m ago•0 comments

Unsolved Problems in MLOps

https://queue.acm.org/detail.cfm?id=3762989
1•aarghh•23m ago•0 comments

Show HN: A Cyberpunk Tuner

https://un.bounded.cc
1•hirako2000•25m ago•0 comments

Education in a Post Text World

https://anandsanwal.me/education-post-text-world/
1•herbertl•25m ago•0 comments

macOS 26 Tahoe review: Power under glass

https://sixcolors.com/post/2025/09/macos-26-tahoe-review-power-under-glass/
2•herbertl•26m ago•0 comments

Tips for Faster Rust Compile Times

https://corrode.dev/blog/tips-for-faster-rust-compile-times/
1•itzlambda•26m ago•0 comments

Bored Games

https://nik.art/bored-games/
2•herbertl•26m ago•0 comments

The Company Man

https://www.lesswrong.com/posts/JH6tJhYpnoCfFqAct/the-company-man
2•chlorokin•27m ago•0 comments

Delphi-2M LLM uses medical records, lifestyle to provide risks for 1k+ diseases

https://www.nature.com/articles/d41586-025-02993-x
1•rntn•27m ago•0 comments

Golang, JavaScript and C++ dancing together

https://github.com/sait/pdfmakego
4•igtztorrero•33m ago•1 comments

Aleph raises a $29M Series B to accelerate AI adoption in FP&A

https://www.getaleph.com/blog/series-b
1•mattkruk•34m ago•0 comments

Works in Progress is now in print

https://www.worksinprogress.news/p/works-in-progress-is-now-in-print
4•ortegaygasset•35m ago•0 comments

Microplastics May Trigger Alzheimer's-Like Brain Damage

https://scitechdaily.com/microplastics-may-trigger-alzheimers-like-brain-damage/
2•01-_-•35m ago•1 comments

Anthropic irks White House with limits on models’ use

https://www.semafor.com/article/09/17/2025/anthropic-irks-white-house-with-limits-on-models-use
101•mindingnever•1h ago

Comments

SilverbeardUnix•56m ago
Honestly makes me think better of Anthropic. Let's see how long they stick to their guns. I believe they will fold sooner rather than later.
saulpw•46m ago
Gosh, I guess the SaaS distribution model might give companies undesirable control over how their software can be used.

Viva local-first software!

nathan_compton•38m ago
In general I applaud this attitude but I am glad they are saying no to doing surveillance.
saulpw•32m ago
Me too, actually, but this is some "leopards ate their face" schadenfreude that I'm appreciating for the moment.
_pferreir_•33m ago
EULAs can impose limitations on how you use on-premises software. Sure, you can ignore the EULA, but you can also do so on SaaS, to an extent.
MangoToupe•31m ago
Are EULAs even enforceable? SaaS providers at least have the right to terminate service at will.
ronsor•31m ago
With SaaS, you can be monitored and banned at any moment. With EULAs, at worst you can be banned from updates, and in reality, you probably won't get caught at all.
LeoPanthera•44m ago
One of the very few tech companies who have refused to bend the knee to the United States' current dictatorial government.
jimbo808•31m ago
It's startling how few are willing to. I'm rooting for them.
chrsw•4m ago
Can we trust this though? “Cooperate with us and we’ll leak fake stories about how frustrated we are with you as cover”.

And I’m not singling out Anthropic. None of these companies or governments (i.e. people) can be trusted at face value.

jschveibinz•6m ago
This is a false statement and doesn't belong on this forum
impossiblefork•44m ago
Very strange writing from semafor.com

>For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.

This is of course quite false. They know the restrictions when they sign the contract.

matula•26m ago
There are (or at least WERE) entire divisions dedicated to reading every letter of the contract and terms of service, and usually creating 20-page documents seeking clarification of a specific phrase. They absolutely know what they're getting into.
bt1a•24m ago
Perhaps it's the finetune of Opus/Sonnet/whatever that is being served to the feds that is the source of the refusal :)
darknavi•23m ago
I have a feeling that in today's administration, which largely "leads by tweet", many traditional "inefficient" steps have been removed from government processes, probably including software onboarding.
jdminhbg•25m ago
Are you sure that every restriction that’s in the model is also spelled out in the contract? If they add new ones, do they update the contract?
mikeyouse•19m ago
The contracts will usually say "You agree to the restrictions in our TOS" with a link to that page, which allows them to update the TOS without new signatures.
giancarlostoro•6m ago
Usually, contracts will note that you will be notified of changes ahead of time, if it's a good-faith contract and company, that is.
bri3d•19m ago
This whole article is weird to me.

This reads to me like:

* Some employee somewhere wanted to click the shiny Claude button in the AWS FedRamp marketplace

* Whatever USG legal team was involved said "that domestic surveillance clause doesn't work for us" and tried to redline it.

* Anthropic rejected the redline.

* Someone got mad and went to Semafor.

It's unclear that this has even really escalated prior to the article, or that Anthropic are really "taking a stand" in a major way (after all, their model is already on the Fed marketplace) - it just reads like a typical fed contract negotiation with a squeaky wheel in it somewhere.

The article is also full of other weird nonsense like:

> Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.

While it might not be possible to enforce them as easily, many, many shrink-wrap EULAs restrict the way in which software can be used. Almost always there is an EULA carve-out with a different tier for lifesaving or safety uses (due to liability/compliance concerns) and for military uses (sometimes for ethics reasons, but usually due to a desire to extract more money from those customers).

giancarlostoro•8m ago
> due to a desire to extract more money from those customers

If it gives you high-priority support, I don't care; if it's the same tier of support, then that's just obnoxiously greedy.

chatmasta•27m ago
Are government agencies sending prompts to model inference APIs on remote servers? Or are they running the models in their own environment?

It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?

itsgrimetime•25m ago
Anthropic is US-based - unless you meant something else by "foreign corporation"?
jjice•24m ago
> It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.

"Foreign" to who? I interpretted your comment as foreign to the US government (please correct me if I'm wrong) and I was confused because Anthropic is a US company.

chatmasta•23m ago
Ah my mistake. I thought they were French. I got them confused with Mistral.

The concern remains even if it’s a US corporation though (not government owned servers).

jjice•22m ago
Ah yes - Mistral is the largest of the non-US, non-Chinese AI companies that I'm aware of.

> The concern remains even if it’s a US corporation though (not government owned servers).

Very much so, I completely agree.

toxik•20m ago
Anthropic is pretty clearly using the Häagen-Dazs approach here: call yourself Anthropic and your product Claude so you seem French. Why?
chatmasta•19m ago
Hah, it was indeed the Claude name that had me confused :D
mcintyre1994•11m ago
According to Claude, it’s named after Claude Shannon, who was American.
bt1a•20m ago
Everyone spies and abuses individuals' privacy. What difference does it make? (Granted, I would agree with you if Anthropic were indeed a foreign-based entity, so am I contradicting myself wonderfully?)
bri3d•22m ago
1) Anthropic are US based, maybe you're thinking of Mistral?

2) Are government agencies sending prompts to model inference APIs on remote servers?

Of course, look up FedRAMP. Depending on the assurance level necessary, cloud services run on either cloud carve-outs in US datacenters (with various "US Person Only" rules enforced to varying degrees) or, for the highest levels, in specific assured environments (AWS Secret Region, for example).

3) It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.

There's no evidence they do, it's just lawyers vs lawyers here as far as I can tell.

owenthejumper•21m ago
This feels like a hit piece by Semafor. A lot of the information in there is simply false. For example, Microsoft's AI Agreement prohibits:

"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."