frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

WASM 3.0 Completed

https://webassembly.org/news/2025-09-17-wasm-3.0/
260•todsacerdoti•1h ago•70 comments

Anthropic irks White House with limits on models’ use

https://www.semafor.com/article/09/17/2025/anthropic-irks-white-house-with-limits-on-models-uswhi...
112•mindingnever•1h ago•45 comments

Apple Photos app corrupts images

https://tenderlovemaking.com/2025/09/17/apple-photos-app-corrupts-images/
814•pattyj•8h ago•305 comments

Depression Reduces Capacity to Learn to Actively Avoid Aversive Events

https://www.eneuro.org/content/12/9/ENEURO.0034-25.2025
74•PaulHoule•2h ago•18 comments

Tinycolor supply chain attack post-mortem

https://sigh.dev/posts/ctrl-tinycolor-post-mortem/
81•STRiDEX•2h ago•35 comments

DeepSeek writes less secure code for groups China disfavors

https://www.washingtonpost.com/technology/2025/09/16/deepseek-ai-security/
110•otterley•2h ago•54 comments

DeepMind and OpenAI Win Gold at ICPC, OpenAI AKs

https://codeforces.com/blog/entry/146536
50•notemap•1h ago•28 comments

Drought in Iraq Reveals Ancient Tombs Created 2,300 Years Ago

https://www.smithsonianmag.com/smart-news/severe-droughts-in-iraq-reveals-dozens-of-ancient-tombs...
35•pseudolus•2h ago•2 comments

Optimizing ClickHouse for Intel's 280 core processors

https://clickhouse.com/blog/optimizing-clickhouse-intel-high-core-count-cpu
19•ashvardanian•52m ago•2 comments

Ton Roosendaal to step down as Blender chairman and CEO

https://www.cgchannel.com/2025/09/ton-roosendaal-to-step-down-as-blender-chairman-and-ceo/
63•cma•2h ago•4 comments

Event Horizon Labs (YC W24) Is Hiring

https://www.ycombinator.com/companies/event-horizon-labs/jobs/U6oyyKZ-founding-engineer-at-event-...
1•ocolegro•2h ago

U.S. investors, Trump close in on TikTok deal with China

https://www.wsj.com/tech/details-emerge-on-u-s-china-tiktok-deal-594e009f
268•Mgtyalx•23h ago•251 comments

Tau² benchmark: How a prompt rewrite boosted GPT-5-mini by 22%

https://quesma.com/blog/tau2-benchmark-improving-results-smaller-models/
143•blndrt•6h ago•41 comments

Alibaba's new AI chip: Key specifications comparable to H20

https://news.futunn.com/en/post/62202518/alibaba-s-new-ai-chip-unveiled-key-specifications-compar...
218•dworks•9h ago•232 comments

Ask HN: What's a good 3D Printer for sub $1000?

64•lucideng•2d ago•73 comments

Noise Cancelling a Fan

https://chillphysicsenjoyer.substack.com/p/noise-cancelling-a-fan
9•crescit_eundo•1d ago•4 comments

Launch HN: RunRL (YC X25) – Reinforcement learning as a service

https://runrl.com
31•ag8•3h ago•9 comments

How to motivate yourself to do a thing you don't want to do

https://ashleyjanssen.com/how-to-motivate-yourself-to-do-a-thing-you-dont-want-to-do/
164•mooreds•4h ago•148 comments

UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)

https://github.com/stateless-me/uuidv47
97•aabbdev•5h ago•53 comments

Determination of the fifth Busy Beaver value

https://arxiv.org/abs/2509.12337
218•marvinborner•9h ago•94 comments

YouTube addresses lower view counts which seem to be caused by ad blockers

https://9to5google.com/2025/09/16/youtube-lower-view-counts-ad-blockers/
161•iamflimflam1•5h ago•357 comments

Microsoft Python Driver for SQL Server

https://github.com/microsoft/mssql-python
54•kermatt•4h ago•22 comments

Procedural Island Generation (III)

https://brashandplucky.com/2025/09/17/procedural-island-generation-iii.html
88•ibobev•7h ago•17 comments

When Computer Magazines Were Everywhere

https://www.goto10retro.com/p/when-computer-magazines-were-everywhere
9•ingve•1h ago•0 comments

Famous cognitive psychology experiments that failed to replicate

https://buttondown.com/aethermug/archive/aether-mug-famous-cognitive-psychology/
9•PaulHoule•42m ago•1 comments

Just for fun: animating a mosaic of 90s GIFs

https://alexplescan.com/posts/2025/09/15/gifs/
13•Bogdanp•1d ago•1 comments

PureVPN IPv6 Leak

https://anagogistis.com/posts/purevpn-ipv6-leak/
147•todsacerdoti•9h ago•67 comments

Bringing fully autonomous rides to Nashville, in partnership with Lyft

https://waymo.com/blog/2025/09/waymo-is-coming-to-nashville-in-partnership-with-lyft
115•ra7•6h ago•156 comments

Stategraph: Terraform state as a distributed systems problem

https://stategraph.dev/blog/why-stategraph/
122•lawnchair•11h ago•55 comments

Slow social media

https://herman.bearblog.dev/slow-social-media/
131•rishikeshs•17h ago•112 comments
Open in hackernews

DeepSeek writes less secure code for groups China disfavors

https://www.washingtonpost.com/technology/2025/09/16/deepseek-ai-security/
110•otterley•2h ago
https://archive.ph/KwJ64

Comments

snek_case•1h ago
I guess it makes sense. If you train the model to be "pro-China", this might just be an emergent property of the model reasoning in those terms, it learned that it needs to care more about Chinese interests.
glenstein•1h ago
A phenomenal point that I had not considered in my first-pass reaction. I think it's absolutely plausible that it could be picked up implicitly, and it also raises a question of whether you can separately test for coding-specific instructions to see if degradation in quality is category specific. Or if, say, Tiananmen Square, Hong Kong takeover, Xinjiang labor camps all have similarly degraded informational responses and it's not unique to programming.
recursivecaveat•1h ago
Might not be so much a matter of care as implicit association with quality. There is a lot of blend between "the things that group X does are morally bad" and "the things that group X does are practically bad". Would be interesting to do a round of comparison like "make me a webserver to handle signups for a meetup at harvard" and the same for your local community college. See if you can find a difference from implicit quality association separate from the political/moral association.
abtinf•1h ago
The article fails to investigate if other models also behave the same way.
andrewflnr•1h ago
Well, mostly.

> Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.

bbor•59m ago
Isn't that a completely different situation, relating outright refusal based in alignment training vs. subtle performance degradation?

Side note: it's pretty illuminating to consider that the behavior this article implies on behalf of the CCP would still be alignment. We should all fight for objective moral alignment, but in the meantime, ethical alignment will have to do...

btbuildem•1h ago
The article does not mention, but it would be interesting to know whether they tested on the cloud version or a local deployment.
pityJuke•1h ago
This just sounds to me like you added needless information to the context of the model that lead to it producing lower quality code?
encrux•1h ago
> The requests said the code would be employed in a variety of regions for a variety of purposes.

This is irrelevant if the only changing variable is the country. From a ML-perspective adding any unrelated country name shouldn’t matter at all.

Of course there is a chance they observed an inherent artifact, but that should be easily verified if you try this same exact experiment on other models.

9rx•54m ago
> From a ML-perspective adding any unrelated country name shouldn’t matter at all.

It matters to humans, and they've written about it extensively over the years — that has almost certainly been included in the training sets used by these large language models. It should matter from a straight training perspective.

> but that should be easily verified if you try this same exact experiment on other models.

Of course, in the real world, it's not just a straight training process. LLM producers put in a lot of effort to try and remove biases. Even DeepSeek claims to, but it's known for operating on a comparatively tight budget. Even if we assume everything is done in good faith, what are the chances it is putting in the same kind of effort as the well-funded American models on this front?

willahmad•1h ago
It can happen because training data contains lots of rejections to groups (Iran sanctioned, don't do business with Iran and so on). Then model might be generalizing 'rejection' to other types of responses
HPsquared•1h ago
I wonder how OpenAI etc models would perform if the user says they are working for the Iranian government or something like that. Or espousing illiberal / anti-democratic views.
charlieyu1•1h ago
The proper thing to do is to either reject due to safety requirements or do it with no difference.
causal•1h ago
Dude - I can't believe we're at the point where we're publishing headlines based on someone's experience writing prompts with no deeper analysis whatsoever.

What are the exact prompts and sampling parameters?

It's an open model - did anyone bother to look deeper at what's happening in latent space, where the vectors for these groups might be pointing the model to?

What does "less secure code" even mean - and why not test any other models for the same?

"AI said a thing when prompted!" is such lazy reporting IMO. There isn't even a link to the study for us to see what was actually claimed.

jimbokun•1h ago
Agreed but tools that allowed lay people to look at "what's happening in latent space" would be really cool and at least allow people not writing a journal article to get a better sense of what these models are doing.

Right now, I don't know where a journalist would even begin.

Den_VR•1h ago
I’d offer than much of the “AI” FUD in journalism is like this. Articles about dangerous cooking combinations, complaints about copyright infringement, articles about extreme bias.
chatmasta•1h ago
This isn’t even AI FUD, it’s just bog-standard propaganda laundering by the Washington Post on behalf of the Intelligence Community (via some indirect incentive structures of Crowdstrike). This is consistent with decades of WaPo behavior. They've always been a mouthpiece of the IC, in exchange for breaking stories that occasionally matter.
mk_stjames•1h ago
“Any sufficiently advanced technology is indistinguishable from magic.”

The average- nay, even the more above average journalist will never go far enough to discern how what we are seeing actually works at the level needed to accurately report on it. It has been this was with the technology of humans for some time now - since roughly the era of an Intel 386, we surpassed the ability for any human being to accurately understand and report on the state-of-the-art of an entire field in a single human lifetime, let alone the implications of such things in a short span.

LLM's? No fucking way. We're well beyond ever explaining anything to anyone en masse ever again. From here on out it's going to be 'make up things, however you want them to sound, and you'll find you can get a majority of people believe you'.

lxe•1h ago
> The findings, shared exclusively with The Washington Post

No prompts, no methodology, nothing.

> CrowdStrike Senior Vice President Adam Meyers and other experts said

Ah but we're just gonna jump to conclusions instead.

A+ "Journalism"

bbor•1h ago
I appreciate you bringing up this issue on this highly-provocative claim, but I'm a little confused. Isn't that a pretty solid source...? Obviously it's not as good as a scientific paper, but it's also more than a random blogger or something. Given that most enterprises operate on a closed source model, isn't it reasonable that there wouldn't be methodology provided directly?

In general I agree that this sounds hard to believe, I'm more looking for words from some security experts on why that's such a damning quote to you/y'all.

roughly•59m ago
Nobody trusts anyone or anything anymore. It used to be the fact that this was printed in the Washington Post was sufficient to indicate enough fact checking and background sourcing had been done that the paper was comfortable putting its name on the claims, which was a high enough bar that they were basically trustworthy, but for assorted reasons that’s not true for basically any institution in the country (world?) anymore.
dotnet00•50m ago
For the average person, being published in WaPo may still be sufficient, but this is a tech related article being discussed on a site full of people who have a much better than average understanding of tech.

Just like how a physicist isn't just going to trust a claim in his expertise, like "Dark Matter found" from just seeing a headline in WaPo/NYT, it's reasonable that people working in tech will be suspicious of this claim without seeing technical details.

ryandrake•49m ago
For the last decade or so, there's been a huge, sustained war on expertise, and an effort to undermine the public's trust of experts. Quoting an expert isn't enough for people, anymore. Everyone's skeptical unless you point them to actual research papers, and even then, some people would rather stick to their pre-existing world views and dO tHeIr OwN rEsEaRcH.

Not defending this particular expert or even commenting on whether he is an expert, but as it stands, we have a quote from some company official vs. randos on the internet saying "nah-uh".

foolswisdom•43m ago
You make it sound like the newspapers/companies are un-culpable for that effect. I believe it to be the case because I've seen cases were a newspaper presents a narrative as fact when those involved know very well it's just someone's spin for their own benefit. See <https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect>.
iinnPP•30m ago
The problem with expertise is anyone can be an expert. I would challenge the integrity of anyone claiming any field has precisely zero idiots.
pessimizer•13m ago
The Washington Post was always bad. Movement liberals just fell in love with it because they hated Trump. Always a awful, militaristic, working-class hating neocon propaganda rag that gleefully mixed editorial and news, the only thing that got worse with the Bezos acquisition were the headlines (and, of course, the coverage of Amazon.) The Wall Street Journal was more truthful, and actually cared about not dipping their opinions in their reporting. I could swear there's a Chomsky quote about that.

People put their names on it because it got them better jobs as propagandists elsewhere and they could sell their stupid books. It's a lot easier to tell the truth than to lie well; that's where the money and talent is at.

incone123•53m ago
The person you replied to says there was no methodology. This is standard for mainstream media, along with no links to papers. If it gets reported in a specialist journal with detail I'll take it more seriously.
BoorishBears•51m ago
I'm way more confused why you think a company that makes its living on selling protection from threats, making such a bold claim with so little evidence is a good source.

Compare this to the current NPM situation where a security provider is providing detailed breakdowns of events that do benefit them, but are so detailed that it's easy to separate their own interests from the attack.

This reminds me of Databrick's CTO co-authoring a flimsy paper on how GPT-4 was degrading ... right as they were making a push for finetuning.

lxe•5m ago
Not sure why downvoted. Good journalism here would have been to show the methodology behind the findings or produce a link to a paper. Any article that says "Coffee is bad for you", as an example, that doesn't link to an actual paper or describes the methodology cannot be critically taken at face value. Same thing with this one. Appeal to authority isn't a good way to make a conclusion.
th0ma5•1h ago
Washington Post is in what many characterize as a slow roll dismantling for having upset investors.
coredog64•30m ago
Per Wikipedia, WaPo is wholly owned by Bezos' Nash Holdings LLC. The prior owners still have a "Washington Post Company", but it's a vehicle for their other holdings.
torginus•57m ago
CrowdStrike, where have I heard that name before...
Analemma_•43m ago
Sorry, what exactly is the implication here? They shipped a bug one time, so nothing they can say can ever be trusted? Can I apply that logic to you, or have you only ever shipped perfect code forever?

I don't even like this company, but the utterly brainless attempts at "sick dunks" via unstated implication are just awful epistemology and beneath intelligent people. Make a substantive point or don't say anything.

jampekka•41m ago
It's probably referring to CrowdStrike's role in the "Russia Gate".
hollowonepl•36m ago
Yes, sometimes companies have only one chance to fail. Especially in cyber security when they fail at global scale and politics is involved.
Kwpolska•31m ago
They didn’t just “ship a bug”, they broke millions of computers worldwide because their scareware injects itself into the Windows kernel.
Imustaskforhelp•15m ago
The crowdstrike event might be so infamous event that it might be taught for atleast some decades for sure maybe even in permanence.
Kranar•24m ago
Plenty of companies have gone bankrupt or lost a great deal of credibility due to a single bug or single failure. I don't see why CrowdStrike would be any different in this regard.

The number of bugs/failures is not a meaningful metric, it's the significance of that failure that matters, and in the case of CrowdStrike that single failure was such a catastrophe that any claims they make should be scrutinized.

The fact that we can not scrutinize their claim in this instance since the details are not public makes this allegation very weak and worth being very skeptical over.

netsharc•8m ago
If you look back at the discussions of the bug, there were voices saying how stupidly dysfunctional that company is...

Maybe there's been reform, but since we live in the era of enshittification, assuming they're still a fucking mess is probably safe...

g42gregory•51m ago
After everything they printed, who could possibly consider Washington Post narrative engineers as journalists? :-)
jampekka•45m ago
If something makes China (or Iran or Russia or North Korea or Cuba etc) look bad, it doesn't need further backing in the media.
jasonvorhe•40m ago
It's WaPo, what do you expect. Western media is completely nuts since Trump & COVID.
nothrowaways•1h ago
This is utter propaganda. Should be removed from HN.
clayhacks•1h ago
https://archive.is/pYzPq
renewiltord•1h ago
Lol it comes from the idiots who transported npm supply chain attack everywhere and BSOD all Windows computers. Great sales guys. Bogus engineers.
gradientsrneat•1h ago
> Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said

> the most secure code in CrowdStrike’s testing was for projects destined for the United States

Does anyone know if there's public research along these lines explaining in depth the geopolitical biases of other models of similar sizes? Sounds like the research has been done.

nashashmi•48m ago
So both eastern and western models have red lines on which groups they will not support or facilitate.

This is just bad llm policy. Nvm that it can be subverted. It just should not be done.

willahmad•1h ago
This can happen because of training data. Imagine you have thousands of legal documents rejecting things to Iran.

eventually, model generalizes it and rejects other topics

th0ma5•1h ago
It should be important to note that this is a core capability of the technology to also obfuscate manipulation with plausible deniability.
WhitneyLand•1h ago
Not ready to give this high confidence.

No published results, missing details/lack of transparency, quality of the research is unknown.

Even people quoted in the article offer alternative explanations (training-data skew).

stinos•12m ago
No published results, missing details/lack of transparency, quality of the research is unknown.

Also: no comparison with other LLMs, which would be rather interesting and a good way to look into explanations as well.

dbreunig•48m ago
Yes, if you put unrelated stuff in the prompt you can get different results.

One team at Harvard found mentioning you're a Philadelphia Eagles Fan let you bypass ChatGPT alignment: https://www.dbreunig.com/2025/05/21/chatgpt-heard-about-eagl...

exabrial•39m ago
Chatgpt just does it for everyone.
lordofgibbons•29m ago
Chinese labs are the only game in town for capable open source LLMs (gpt-oss is just not good). There have been talks multiple times by U.S China hawk lawmakers about banning LLMs made by Chinese labs.

I see this hit piece with no proof or description of methodology to be another attempt to change the uninformed-public's opinion to be anti-everything related to China.

Who would benefit the most if Chinese models were banned from the U.S tech ecosystem? I know the public and startup ecosystem would suffer greatly.