
Proposal: AI Content Disclosure Header

https://www.ietf.org/archive/id/draft-abaris-aicdh-00.html
74•exprez135•5mo ago

Comments

rossant•5mo ago
Interesting initiative but I wonder if the mode provides sufficient granularity. For example, what about an original human-generated text that is entirely translated by an AI?
dijksterhuis•5mo ago
> what about an original human-generated text that is entirely translated by an AI?

probably ai-modified -- the core content was first created by humans, then modified (translated into another language). translating back would hopefully return you the original human generated content (or at least something as close as possible to the original).

    | class             | author | modifier/reviewer | 
    | ----------------- | ------ | ----------------- | 
    | none              | human  | human/none        | 
    | ai-modified       | human  | ai                | <--*
    | ai-originated     | ai     | human             |
    | machine-generated | ai     | ai/none           |
kelseyfrog•5mo ago
It certainly doesn't cover the case of mixed-origin content. Say for example, a dialog between a human and AI or even mixed-model content.

For those, my instinct is to fall back to markup, which would seem to work quite well. There is the pesky issue of AI content in non-markup formats (think JSON) that don't have the same orthogonal flexibility for annotating metadata.
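One workaround for formats without native metadata is to wrap the payload in an envelope that carries the provenance. The `_provenance` key and its fields below are invented for illustration; nothing here is standardized:

```python
import json

# Hypothetical envelope for JSON payloads that lack a metadata channel.
# The "_provenance" key and its field names are made up for this sketch.
def wrap_with_provenance(payload, mode, model=None):
    envelope = {
        "_provenance": {"ai-disclosure": mode, "model": model},
        "data": payload,
    }
    return json.dumps(envelope)

doc = wrap_with_provenance({"text": "hello"}, mode="ai-originated",
                           model="example-model")
```

The obvious cost is that consumers must know to unwrap the envelope, which is exactly the orthogonality that markup formats give you for free.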

shortrounddev2•5mo ago
Years ago people were arguing that fashion magazines should have to disclose if they photoshopped pictures of women to make them look skinnier. France implemented this law, and I believe other countries have as well. I believe that we should have similar laws for AI generated content.
xhkkffbf•5mo ago
I'm all for some kind of disclosure, but where do we draw the line? I use a pretty smart grammar and spell checker, one that's got more "AI" in it to analyze the sentence structure. Is that AI content?
stillpointlab•5mo ago
According to the spec, yes a grammar checker would be subject to disclosure:

> ai-modified Indicates AI was used to assist with or modify content primarily created by humans. The source material was not AI-generated. Examples include AI-based grammar checking, style suggestions, or generating highlights or summaries of human-written text.

AKSF_Ackermann•5mo ago
It feels like a header is the wrong tool for this. Even if you hypothetically would want to disclose that, would you expect a blog CMS to offer the feature? Or a web browser to surface it?
throwaway13337•5mo ago
Can we have a disclosure for sponsored content header instead?

I'd love to browse without that.

It does not bother me that someone used a tool to help them write if the content is not meant to manipulate me.

Let's solve the actual problem.

handfuloflight•5mo ago
We already have those legally mandated disclosures per the FTC.
ugh123•5mo ago
Hoping I don't need to click on something, or have something obstructing my view.
odie5533•5mo ago
The cookie banner just got 200px taller.
grumbel•5mo ago
Completely the wrong way around. We are heading into a future where everything will be touched by AI in some way, be it things like Photoshop Generative Fill, spell check, subtitles, face filters, upscaling, translation or just good old algorithmic recommendations. Even many smartphones already run AI over every photo they make.

Doing it in an HTTP header is furthermore extremely lossy: files get copied around, and that header ain't coming with them. It's not a practical place to put that info, especially when we have Exif inside the images themselves.

The proper way to handle this is to mark authentic content and keep a trail of how it was edited, since that's the rare thing you might want to highlight in a sea of slop. https://contentauthenticity.org/ is trying to do that.
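The "trail of edits" idea can be sketched as a hash chain, where each step commits to everything before it. This is a toy illustration of the concept only, not the actual C2PA/Content Authenticity manifest format:

```python
import hashlib

# Toy edit-trail sketch (NOT the real C2PA format): each step records an
# action plus a hash chained to the previous step's hash, so rewriting
# history invalidates every later entry.
def add_step(trail, action, content):
    prev = trail[-1]["hash"] if trail else ""
    digest = hashlib.sha256(prev.encode() + content).hexdigest()
    trail.append({"action": action, "hash": digest})
    return trail

trail = []
add_step(trail, "captured", b"raw photo bytes")
add_step(trail, "ai-upscaled", b"upscaled photo bytes")
```

A verifier can replay the chain from the original content and flag any trail whose hashes don't reproduce.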

politelemon•5mo ago
The authors do seem to be conflating AI as a marketing term with ChatGPT-style tools. AI encompasses a broad suite of technologies, including the spell check you've mentioned, and given the number of tools used today that would technically constitute AI, this header makes no sense.
TYPE_FASTER•5mo ago
Yup, this is the way. Assume everything is AI unless proven otherwise.
GuinansEyebrows•5mo ago
Maybe an ignorant question but at the dictionary level, how would one indicate that multiple providers/models went into the resulting work (based on the example given)? Is there a standard for nested lists?
vntok•5mo ago
This feels like the Security Flag proposal (https://www.ietf.org/rfc/rfc3514.txt)
gruez•5mo ago
or end up like california prop 65 warnings: https://en.wikipedia.org/wiki/1986_California_Proposition_65
layer8•5mo ago
Why only for HTTP? This would be appropriate for MIME multipart/mixed part headers as well. ;)

Maybe better define an RDF vocabulary for that instead, so that individual DIVs and IMGs can be correctly annotated in HTML. ;)

xgulfie•5mo ago
This is like asking the fox to announce itself before entering the henhouse
patrickhogan1•5mo ago
The bigger challenge here is that we already struggle with basic metadata integrity. Sites routinely manipulate creation dates for SEO - I regularly see 5-year-old content timestamped as "published yesterday" to game Google's freshness signals.

While this doesn't invalidate the proposal, it does suggest we'd see similar abuse patterns emerge once this header becomes a ranking factor.

paulddraper•5mo ago
Does that work? There’s no way…

Most web servers use mtime for Last-Modified header.

It would be crazy for Google to treat that as authorship date, and I cannot believe that they do.
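For what it's worth, the mtime point is easy to demonstrate: static servers typically derive `Last-Modified` straight from the filesystem timestamp, which says nothing about when the content was authored. A minimal sketch:

```python
import os
import tempfile
from email.utils import formatdate

# Static file servers commonly build Last-Modified from the file's
# mtime; copying or re-deploying a file resets it, so it is useless
# as an authorship date.
def last_modified_header(path):
    return formatdate(os.path.getmtime(path), usegmt=True)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"page")
    tmp = f.name

header = last_modified_header(tmp)  # e.g. "Mon, 20 Nov 1995 19:12:08 GMT"
```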

dragonwriter•5mo ago
> It would be crazy for Google to treat that as authorship date, and I cannot believe that they do.

I'm not sure what Google uses for authorship date, but if you do date-range based web searches, the actual dates of the content no longer have any meaningful relationship to what was set in the search criteria (news seems mostly better but with some problems, but actual web search is hopeless). In both directions -- searching for recent stuff gets plenty of very old stuff mixed in, but searching for stuff from a period well in the past gets lots of stuff from yesterday, too.

patrickhogan1•5mo ago
On platforms like WordPress these headers are settable via SEO plugins. Many sites will roll these headers forward.
weddpros•5mo ago
Maybe we should avoid training AI with AI-generated content: that's a use case I would defend.

Still I believe MIME would be the right place to say something about the Media, rather than the Transport protocol.

On a lighter note: we should consider second order consequences. The EU commission will demand its own EU-AI-Disclosure header be sent to EU citizens, and will require consent from the user before showing him AI generated stuff. UK will require age validation before showing AI stuff to protect the children's brains. France will use the header to compute a new tax on AI generated content, due by all online platforms that want to show AI generated content to French citizens.

That's a Pandora's box I wouldn't even talk about, much less open...

blibble•5mo ago
> Maybe we should avoid training AI with AI-generated content: that's a use case I would defend.

if this takes off I'll:

   - tag my actual content (so they won't train on it)
   - not tag my infinite spider web of automatically generated slop output (so it'll poison the models)
win win!
ronsor•5mo ago
then they'll start ignoring the header and it'll be useless

(of course, it was never going to be useful)

ronsor•5mo ago
> The EU commission will demand its own EU-AI-Disclosure header be sent to EU citizens, and will require consent from the user before showing him AI generated stuff. UK will require age validation before showing AI stuff to protect the children's brains. France will use the header to compute a new tax on AI generated content, due by all online platforms that want to show AI generated content to French citizens.

I think the recent drama related to the UK's Online Safety Act has shown that people are getting sick of country-specific laws simply for serving content. The most likely outcome is sites either block those regions or ignore the laws, realizing there is no practical enforcement avenue.

giancarlostoro•5mo ago
It depends, but for example if I wanted to train a LoRA that outputs a certain art style from a specific model, I have no issue with this being done. It's not like you are making a model from scratch.
paulddraper•5mo ago
Content-Type/MIME type is for the format.

There are dedicated headers for other properties, e.g. language.

weddpros•5mo ago
Actually you're 100% correct.

Feels weird to me that encoding is part of MIME, but language isn't, although I understand why.

paulddraper•5mo ago
Yeah. The reason is that charset is specific to text types. Language can apply to many media.

Though FWIW, I think the Content-Encoding header is basically a mistake; it should have been Content-Transform.

woah•5mo ago
Seems like someone just trying to get their name on a published IETF standard for the bragging/resume rights
judge123•5mo ago
I'm genuinely torn. On one hand, transparency is good. But on the other, I can totally see this header becoming a lazy filter for platforms to just automatically demote or even block any AI-assisted content. What happens to artists using AI tools, or writers using it for brainstorming?
xgulfie•5mo ago
They can adapt or get left behind
paulddraper•5mo ago
Ha
ivape•5mo ago
This is a Gentlemen’s agreement humans will not keep. Not how our species works.
nrmitchi•5mo ago
This seems like a (potential) solution looking for a nail-shaped problem.

Yes, there is a huge problem with AI content flooding the field, and being able to identify/exclude it would be nice (for a variety of purposes)

However, the issue isn't that content was "AI generated"; as long as the content is correct, and is what the user was looking for, they don't really care.

The issue is content that was generated en masse, is largely not correct/trustworthy, and serves only to game SEO/clicks/screentime/etc.

A system where the content you are actually trying to avoid has to opt in is doomed to failure. Is the purpose/expectation here that search/CDN companies attempt to classify, and identify, "AI content"?

yahoozoo•5mo ago
It says in the first paragraph it’s for crawlers and bots. How many humans are inspecting the headers of every page they casually browse? An immediate problem that could potentially be addressed by this is the “AI training on AI content” loop.
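A crawler-side filter for that loop could look something like the sketch below. The header name and mode values are taken from the table earlier in the thread, but the value syntax and this function are assumptions, not the draft's normative behavior:

```python
# Hypothetical crawler-side filter: skip documents that disclose machine
# generation before ingesting them into a training corpus. Header name
# and mode strings follow the draft; the parsing here is an assumption.
SKIP_MODES = {"ai-originated", "machine-generated"}

def should_train_on(headers):
    """Return True if the response headers don't disclose AI-generated content."""
    mode = headers.get("AI-Disclosure", "none").split(";")[0].strip()
    return mode not in SKIP_MODES

keep = should_train_on({"AI-Disclosure": "machine-generated"})  # False
```

Of course, as others note below, this only helps against publishers who label honestly.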
nikolayasdf123•5mo ago
I believe this is why Google did SynthID https://deepmind.google/science/synthid/
TrueDuality•5mo ago
How many of the makers of these trash SEO sites are going to voluntarily identify their content as AI generated?
TheRoque•5mo ago
Moreover, I find it ironic that website owners will gracefully give AI companies the power to identify what is "good" data and what is not. I mean, why would I do the work for them and identify my data as AI, so that they would ignore it? "Yes please, take all my work, this is quality content, train on it, it's free!" That's what it sounds like.
nrmitchi•5mo ago
It would still be required for the content producer (ie, the content-spam-farm) to label their content as such.

The current approach is that the content served is the same for humans and agents (ie, a site serves consistent content regardless of the client), so who a specific header is "meant for" is a moot point here.

TylerE•5mo ago
It's the evil bit, but unironically.
edoceo•5mo ago
For today's lucky 10k:

https://www.ietf.org/rfc/rfc3514.txt

Note date published

0xDEAFBEAD•5mo ago
>Attack applications may use a suitable API to request that [the evil bit] be set. Systems that do not have other mechanisms MUST provide such an API; attack programs MUST use it.

Potential flaw: I'm concerned that attackers may be slow to update their malware to achieve compliance with this RFC. I suggest a transitional API: Intrusion detection systems respond to suspected-evil packets that have the evil bit set to 0 with a depreciation notice.

jrochkind1•5mo ago
deprecation notice
userbinator•5mo ago
Approximately as useless as "do not track".
webprofusion•5mo ago
Hack: only present this header to AI crawlers, so they don't index your content, lol.
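That hack amounts to conditional header injection keyed on the User-Agent. A sketch, where the bot tokens and the middleware shape are illustrative only:

```python
# Hypothetical server-side "hack": send the disclosure header only to
# clients that look like AI crawlers. The bot User-Agent markers below
# are illustrative examples, not an authoritative list.
AI_BOT_MARKERS = ("GPTBot", "CCBot", "ClaudeBot")

def response_headers(user_agent):
    headers = {"Content-Type": "text/html"}
    if any(marker in user_agent for marker in AI_BOT_MARKERS):
        headers["AI-Disclosure"] = "machine-generated"
    return headers
```

Like robots.txt, this relies on crawlers identifying themselves honestly, so it fails against exactly the crawlers you most want it to work on.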