frontpage.

Japanese rice is the most expensive in the world

https://www.cnn.com/2026/02/07/travel/this-is-the-worlds-most-expensive-rice-but-what-does-it-tas...
1•mooreds•34s ago•0 comments

White-Collar Workers Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•34s ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•57s ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•1m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•1m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•1m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•2m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•2m ago•0 comments

Claude Opus 4.6 extends LLM pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•3m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•6m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•6m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•7m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•7m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•8m ago•1 comments

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•8m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•10m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•10m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•11m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•12m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•13m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•17m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•17m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•18m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•22m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•22m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•23m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•26m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•26m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•26m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•26m ago•0 comments

SynthID – A tool to watermark and identify content generated through AI

https://deepmind.google/science/synthid/
110•jonbaer•5mo ago

Comments

egeozcan•5mo ago
I guess this is the start of a new arms race: making generated content pass these checks undetected, and detecting it anyway.
dragonwriter•5mo ago
It's not really an arms race; any gen AI system that doesn't explicitly incorporate a watermarking tool like this won't be detectable by tools that read the watermarks.

There is a kind of arms race that has existed for a while for non-watermarked content, except that the detection tools are pretty much at Magic 8-ball levels of reliability, so there's not a lot of effort on the counter-detection side.

peterkelly•5mo ago
Create the problem, sell the solution.
9dev•5mo ago
You can never be sure something has been generated by a model embedding one of these anyway, so it’s pretty moot.
montag•5mo ago
"The watermarks are embedded across Google’s generative AI consumer products, and are imperceptible to humans."

I'd love to see the data behind this claim, especially on the audio side.

donperignon•5mo ago
Nah, that's a solved problem if you work in the frequency domain. Same for images. Text is the hard part here.
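For readers who haven't seen the spread-spectrum idea the parent comment alludes to, here is a toy sketch in Python. It is only an illustration of the general technique, not SynthID's or any production scheme's actual algorithm; the embed/detect helpers, the key, and the strength value are invented for the example, and real systems shape the pattern in the frequency domain so it stays imperceptible and survives compression.

```python
# Toy spread-spectrum watermark: add a key-dependent pattern, detect it later by
# correlation. Production schemes shape the pattern in the frequency domain so it
# stays imperceptible and robust; this only shows the bare statistical idea.
import numpy as np

def embed(img: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)  # secret, key-derived pattern
    return img + strength * pattern                    # +/-4 on a 0-255 scale

def detect(img: np.ndarray, key: int) -> float:
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    # Normalized correlation: ~0 for unmarked images, clearly positive for marked ones.
    return float(np.corrcoef(img.ravel(), pattern.ravel())[0, 1])

img = np.random.rand(128, 128) * 255     # stand-in "photo"
print(detect(embed(img, key=7), key=7))  # well above the noise floor
print(detect(img, key=7))                # ~0
```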
donperignon•5mo ago
I am not sure that text watermarking will be accurate; I foresee plenty of false positives.
drdebug•5mo ago
In practice, very short texts don't carry very high value so watermarking is (usually) less important. For longer text false positives are not an issue at all since you have a large amount of data to extract your signal from.
pelasaco•5mo ago
Looks like the same playbook as anti-virus companies in the 80s: write the virus, write the anti-virus, and profit!
teiferer•5mo ago
Could anybody explain how this isn't easily circumvented by using a competitor's model?

Also, if everything in the future has some touch of AI inside, for example cameras using AI to slightly improve the perceived picture quality, then "made with AI" won't be a categorization that anybody lifts an eyebrow about.

dragonwriter•5mo ago
> Could anybody explain how this isn't easily circumvented by using a competitor's model?

Almost all the big hosted AI providers are publicly working on watermarking for at least media (text is more of a mixed bag). Ultimately, it's probably a regulatory play: the big providers expect that the combination of legitimate concerns, their own active fearmongering, and their demonstrations of watermarking will result in mandates for commercial AI generation services to include watermarking. This may even be part of the regulatory play to restrict availability and non-research use of open models.

mhl47•5mo ago
Yes, but isn't the cat out of the bag already? Don't we have sufficiently strong local models that can be fine-tuned in various ways to rewrite text or alter images and thus destroy possible watermarks?

Sure, in some cases a model might do some astounding things that always shine through, but I guess the jury is still out on these questions.

verisimi•5mo ago
If you see the mark, you'd know at least that you aren't dealing with a purely mechanical rendering of whatever-it-is.
progval•5mo ago
By lobbying regulators to force your competitors to add watermarks too.
michaelt•5mo ago
> Could anybody explain how this isn't easily circumvented by using a competitor's model?

If the problem is "kids are using AI to cheat on their schoolwork and it's bad PR / politicians want us to do something" then competitors' models aren't your problem.

On the other hand, if the problem is "social media is flooded with undetectable, super-realistic bots pushing zany, divisive political opinions, we need to save the free world from our own creation" then yes, your competitors' models very much are part of the problem too.

QuadmasterXLII•5mo ago
I wonder if this will survive distillation. I vaguely recall that most open models answer "I am ChatGPT" when asked who they are, as they're heavily trained on OpenAI outputs. If the version of ChatGPT used to generate the training data had a watermark, a sufficiently powerful function approximator would just learn the watermark.
xpe•5mo ago
Are you expecting a distilled model to be sufficiently powerful to capture the watermark? I wouldn’t.

Additionally, I don’t think the watermark has to be deterministic.

doawoo•5mo ago
the beginning of walled garden “AI” tools has been interesting to follow
chii•5mo ago
I find the premise invalid, personally: why must works from an AI model be identified/identifiable?
HighGoldstein•5mo ago
Video evidence of you committing a crime, for example, should be identifiable as AI-generated.
chii•5mo ago
How do we currently deal with tampered video evidence today, before the advent of AI-generated videos? Why can't the same methods be used for an AI-generated video?
drdebug•5mo ago
If you are interested, you can look into the work of Hany Farid on this topic as a good introduction to image forensics and related topics.
hiatus•5mo ago
People want to know when they are interacting with AI-generated content.
Oras•5mo ago
OpenAI has been doing something similar for generated images using C2PA [0]

It is easy to alter by just saving to a different format or basic cropping.

I would love to see how SynthID is fixing this issue.

https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...

NoahZuniga•5mo ago
No, this is very different. C2PA is just some extra metadata; it doesn't watermark the image.
JimDabell•5mo ago
> Large language models generate text one word (token) at a time. Each word is assigned a probability score, based on how likely it is to be generated next. So for a sentence like “My favourite tropical fruits are mango and…”, the word “bananas” would have a higher probability score than the word “airplanes”.

> SynthID adjusts these probability scores to generate a watermark. It's not noticeable to the human eye, and doesn’t affect the quality of the output.

I think they need to be clearer about the constraints involved here. If I ask "What is the capital of France? Just the answer, no extra information." then there's no room to vary the probability without harming the quality of the output. So clearly there is a lower bound beyond which this becomes ineffective. And presumably the longer the text, the more resilient it is to alterations. So what are the constraints?

I also think that this is self-interest dressed up as altruism. There’s always going to be generative AI that doesn’t include watermarks, so a watermarking scheme cannot tell you if something is genuine. It is, however, useful for determining that something came from a specific provider, which could be valuable to Google in all sorts of ways.
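To make the quoted mechanism concrete, here is a toy sketch of the general probability-biasing idea, in the spirit of a keyed "green-list" watermark rather than SynthID's actual algorithm. The vocabulary, the bias factor, and the helper names are all invented for the example.

```python
# Toy "green-list" text watermark: at each step, a keyed subset of the vocabulary
# gets a small probability boost; detection counts how often the text lands on
# that subset. Illustrative only; not SynthID's actual algorithm.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

def green_set(prev_token: str, key: str, fraction: float = 0.5) -> set:
    # The "green list" is derived from the previous token plus a secret key.
    seed = int(hashlib.sha256(f"{key}|{prev_token}".encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

def sample_watermarked(probs: dict, prev_token: str, key: str, bias: float = 2.0) -> str:
    # Nudge the model's next-token distribution toward the green list, then sample.
    greens = green_set(prev_token, key)
    weights = [p * (bias if tok in greens else 1.0) for tok, p in probs.items()]
    return random.choices(list(probs), weights=weights, k=1)[0]

def detect(tokens: list, key: str) -> float:
    # z-score of green-list hits; ~0 for unwatermarked text, grows with length.
    hits = sum(tok in green_set(prev, key) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

The detection z-score only becomes significant as the text gets longer, which is exactly the constraint raised above: a one-word answer like a capital city has essentially no room to carry a signal, and a low-entropy prompt leaves little room to bias without hurting quality.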

merelysounds•5mo ago
This might be enforced in some trivial way, e.g. by requiring AI models to answer with at least a sentence. The constraints may not be fully published, and the obscurity might make it more effective, if only temporarily.

Printer tracking dots[1] are one prior solution like this: annoying, largely unknown, workarounds exist, and still surprisingly effective.

[1]: https://en.m.wikipedia.org/wiki/Printer_tracking_dots

ChrisMarshallNY•5mo ago
I think they are what busted that ironically-named young lady that leaked NSA information.
merelysounds•5mo ago
Yes, the Wikipedia article mentions that and includes links to more sources:

> Both journalists and security experts have suggested that The Intercept's handling of the leaks by whistleblower Reality Winner, which included publishing secret NSA documents unredacted and including the printer tracking dots, was used to identify Winner as the leaker, leading to her arrest in 2017 and conviction.

DrewADesign•5mo ago
Security and surveillance products don’t have to be perfect to be useful enough to some.
postquantumfax•5mo ago
Choosing the slightly less probable output is changing the quality of the output. If it weren't, LLMs wouldn't work by processing a large amount of data to get these probabilities as accurate as possible.
trehans•5mo ago
For answers like that, it probably wouldn't matter whether it was AI-generated or not. It becomes more relevant with long-form generated content
Viliam1234•5mo ago
> It is, however, useful for determining that something came from a specific provider, which could be valuable to Google in all sorts of ways.

Oh crap, knowing Google it probably means they will put articles generated using their AI higher among the search results.

HighGoldstein•5mo ago
I wonder if, conversely, authentic media can be falsely watermarked as AI-generated.
notpushkin•5mo ago
For photos, I think the answer is yes. For texts, the wording will be changed when you watermark them, so I guess that’s a no.
NitpickLawyer•5mo ago
When chatgpt launched there was a rush of "solutions" to catch llm generated text. The problem was not their terrible accuracy, but their even more terrible false positive rates. The classic example was pasting the declaration of independence, and getting 100% AI generated. What's even more sad is that some of those solutions are still used today, and for a while they were used against students, with chilling repercussions for them.
R_Spaghetti•5mo ago
It only works across Google shit.
DrNosferatu•5mo ago
If I slightly edit plain text watermarked with it, will the watermark identification be robust?
DrNosferatu•5mo ago
The first good use of blockchain comes to mind.
wenbin•5mo ago
I really hope SynthID becomes a widely adopted standard - at the very least, Google should implement it across its own products like NotebookLM.

The problem is becoming urgent: more and more so-called “podcasts” are entirely fake, generated by NotebookLM and pushed to every major platform purely to farm backlinks and run blackhat SEO campaigns.

Beyond SynthID or similar watermarking standards, we also need models trained specifically [0] to detect AI-generated audio. Otherwise, the damage compounds - people might waste 30 minutes listening to a meaningless AI-generated podcast, or worse, absorb and believe misleading or outright harmful information.

[0] 15,000+ ai generated fake podcasts https://www.kaggle.com/datasets/listennotes/ai-generated-fak...

6LLvveMx2koXfwn•5mo ago
Given there is "misleading or outright harmful" information generated by humans, why is it more pressing that we track such content generated by AI?
anuramat•5mo ago
I suppose efficiency? It's easier to filter out petabytes of AI slop than to determine which human generated content is harmful
utilize1808•5mo ago
I feel this is not the scalable/right way to approach this. The right way would be for human creators to apply their own digital signatures to the original pieces they created (specialised chips on camera/in software to inject hidden pixel patterns that are verifiable). If a piece of work lacks such a signature, it should be considered AI-generated by default.
shkkmo•5mo ago
That seems like a horrible blow to anonymity and pseudonymity that would also empower identity thieves.
utilize1808•5mo ago
Not necessarily. It's basically document signing with key pairs, old tech that is known to work. Its purpose is not to identify the individual creators, but to verify that a piece of work was created by a process/device that is not touched by AI.
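The "document signing with key pairs" part really is old, boring tech; here is a minimal sketch using the cryptography package (assumed to be installed). Note that it only proves who holds the key that signed these exact bytes, which is why the replies below focus on the registry, provenance, and anonymity problems rather than the cryptography itself.

```python
# Minimal detached-signature sketch with Ed25519. It proves the key holder signed
# these exact bytes; it says nothing by itself about whether a human or an AI
# produced them.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

work = "An essay I claim to have written myself.".encode()
signature = private_key.sign(work)

public_key.verify(signature, work)  # passes silently: bytes are untouched
try:
    public_key.verify(signature, work + b" (edited)")
except InvalidSignature:
    print("content no longer matches the signature")
```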
BoiledCabbage•5mo ago
And what happens when someone uses their digital signature to sign an essay that was generated by AI?
utilize1808•5mo ago
You can't. It may be set up such that your advisor could sign it if they know for sure that you wrote it yourself without using AI.
akoboldfrying•5mo ago
> You can’t.

I like the digital signature approach in general, and have argued for it before, but this is the weak link. For photos and video, this might be OK if there's a way to reliably distinguish "photos of real things" from "photos of AI images"; for plain text, you basically need a keystroke-authenticating keyboard on a computer with both internet access and copy and paste functionality securely disabled -- and then you still need an authenticating camera on the user the whole time to make sure they aren't just asking Gemini on their phone and typing its answer in.

shkkmo•5mo ago
> for plain text, you basically need a keystroke-authenticating keyboard on a computer with both internet access and copy and paste functionality securely disabled -- and then you still need an authenticating camera on the user the whole time to make sure they aren't just asking Gemini on their phone and typing its answer in.

Which is why I say it would destroy privacy/pseudonymity.

> For photos and video, this might be OK if there's a way to reliably distinguish "photos of real things" from "photos of AI images";

I suspect if you think about it, many of the issues with text also apply to images and videos.

You'd need a secure enclave. You'd need a chain of signatures and images to allow human editing. You'd need a way of revoking the public keys of not just insecure software, but bad actors. You'd need verified devices to prevent the software in use from letting AI tooling edit the image, etc.

These are only the flaws I can think of in like 5 minutes. You've created a huge incentive to break an incredibly complex system. I have no problem comfortably saying that the end result is a complete lack of privacy for most people, while those with power/knowledge would still be able to circumvent it.

shkkmo•5mo ago
> It’s basically document signing with key pairs —- old tech that is known to work.

I understand the technical side of the suggestion. The social and practical side is inevitably flawed.

You need some sort of global registry of public keys. Not only does each registrar have to be trusted, but you also need to trust every single real person to protect and not misuse their keys.

Leaving aside the complete practical infeasibility of that: even if you accomplish it, you now have a unique identifier tied to every piece of text. There will inevitably be both legal processes to identify who produced a signed work and data analysis approaches to deanonymize the public keys.

The end result is pretty clearly that anyone wishing to present material that purports to be human made has to forgo anonymity/pseudonymity. Claiming otherwise is like claiming we can have a secure government backdoor for encryption.

taminka•5mo ago
there's very likely already some sort of fingerprinting in camera chips, à la printer yellow dot watermarks that uniquely identify a printer and a print job...
shkkmo•5mo ago
The way those work is primarily through a combination of obscurity (most people don't know they exist) and a lack of real financial incentive to break them at scale.

I would also argue that those techniques do greatly reduce privacy and anonymity.

xpe•5mo ago
Maybe as the direct effect, maybe not. Also think about second order effects: how would various interests respond? The desire for privacy is strong and people will search for ways to get it.

Have you looked into kinds of mitigations that cryptography offers? I’m not an expert, but I would expect there are ways to balance some degree of anonymity with some degree of human identity verification.

Perhaps there are some experts out there who can comment?

HPsquared•5mo ago
Then you just point the special camera at a screen showing the AI content.
utilize1808•5mo ago
Sure. But then it will receive more scrutiny because you are showing a "capture" rather than the raw content.
HPsquared•5mo ago
Actually come to think of it, I suppose a "special camera" could also record things like focusing distance, zoom, and accelerations/rotation rates. These could be correlated to the image seen to detect this kind of thing.
tough•5mo ago
ROC Camera does exactly this

> Creates a Zero Knowledge (ZK) Proof of the camera sensor data and other metadata

https://roc.camera/

jay-barronville•5mo ago
> If a piece of work lacks such signature, it should be considered AI-generated by default.

That sounds like a nightmare to me.

xpe•5mo ago
You aren’t specifying your point of comparison. A nightmare relative to what? You might be saying a nightmare relative to what we have now. Are you?

We once considered text to be generated exclusively by humans, but this assumption must be tossed out now.

I usually reject arguments based on an assumption of some status quo that somehow just continues.

Why? I’ll give two responses, which are similar but use different language.

1. There is a fallacy where people compare a future state to the present state, but this is incorrect. One has to compare two future states, because you don’t get to go back in time.

2. The “status quo” isn’t necessarily a stable equilibrium. The state of things now is not necessarily special nor guaranteed.

I’m now of the inclination to ask for a supporting model (not just one rationale) for any prediction, even ones that seem like common sense. Common sense can be a major blind spot.

jay-barronville•5mo ago
> You aren’t specifying your point of comparison. A nightmare relative to what? You might be saying a nightmare relative to what we have now. Are you?

Very fair point.

And no, it’s less about the status quo and more about AI being the default. There are just too many reasons why this proposal, on its face, seems problematic to me. The following are some questions to highlight just a few of them:

- How exactly would “human creators [applying] their own digital signatures to the original pieces they created” work for creators who have already passed away?

- How fair exactly would it be to impose such a requirement when large portions of the world’s creators (especially in underdeveloped areas) would likely not be able to access and use the necessary software?

- How exactly do anonymous and pseudonymous creators survive such a requirement?

kedv•5mo ago
Would be nice if you guys open-sourced the detection code, similar to the way C2PA is open
harshreality•5mo ago
That's like asking for Adobe to open source their C2PA signing keys.

AI watermarking is adversarial, and anyone who generates a watermarked output either doesn't care or wants the watermark removed.

C2PA is cooperative: publishers want the signatures intact, so that the audience has trust in the publisher.

By "adversarial" and "cooperative", I mean in relation to the primary content distributor. There's an adversarial aspect to C2PA, too: bad actors want leaked keys so they can produce fake video and images with metadata attesting that they're real.

A lot of people have a large incentive to disrupt the AI watermark. Leaked C2PA keys will be a problem, but probably a minor one. C2PA is merely an additional assurance, beyond the reputation and representation of the publishing entity, of the origin of a piece of media.

spidersouris•5mo ago
There is a repo: https://github.com/google-deepmind/synthid-text
mingtianzhang•5mo ago
1. One-sample detection is impossible. These detection methods work at the distributional level (more like a two-sample test in statistics), which means you need to collect a large amount of generated text from the same model to make the test significant. Detecting based on a short piece of generated text is theoretically impossible. For example, imagine two different Gaussian distributions: you can never be 100% certain whether a single sample comes from one Gaussian or the other, since both share the same support (see the numeric sketch after this comment).

2. Adding watermarks may reduce the ability of an LLM, which is why I don’t think they will be widely adopted.

3. Consider this simple task: ask an LLM to repeat exactly what you said. Is the resulting text authored by you, or by the AI?
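A small numeric sketch of point 1, with made-up numbers: two Gaussians that overlap heavily, and a posterior computed under equal priors. One sample is barely better than a coin flip; a few hundred samples from the same source make the call essentially certain.

```python
# Point 1 above, illustrated: a single sample from two overlapping Gaussians can
# only give a posterior probability, never certainty; many samples sharpen it.
import math
import random

def posterior_a(xs, mu_a=0.0, mu_b=0.5, sigma=1.0):
    """P(data came from Gaussian A | xs), assuming equal priors."""
    def loglik(mu):
        return sum(-((x - mu) ** 2) / (2 * sigma ** 2) for x in xs)
    return 1.0 / (1.0 + math.exp(loglik(mu_b) - loglik(mu_a)))

print(posterior_a([0.1]))                                         # ~0.52: a coin flip
print(posterior_a([random.gauss(0.0, 1.0) for _ in range(200)]))  # ~1.0
```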

mingtianzhang•5mo ago
For images/video/audio, removing such a watermark is very simple. By adding noise to the generated image and then using an open-source diffusion model to denoise it, the watermark can be broken. Or, in an autoregressive model, use an open-source model to do generation with "teacher forcing", lol.
drdebug•5mo ago
I wonder where you got that impression. Several professional watermarking systems for movie studio type content I have worked with (and on) are highly resistant to noise removal while remaining imperceptible.
mingtianzhang•5mo ago
Based on my research experience and judgment (I have published several top-conference papers in both the detection and diffusion domains, though I haven't explored the engineering/product side), I believe that if such a system hasn't been invented yet, it wouldn't be difficult to create one that removes the watermark using an open-source image/video model while maintaining high quality. Would you be interested in having a further discussion on this?
kimi•5mo ago
For text, have a big model generate the "intelligent" answer, and then ask a local LLM to rephrase.
mingtianzhang•5mo ago
Yeah exactly, you can always do that by using another model that doesn't have the watermark.
umbra07•5mo ago
UMG (music label) has been watermarking their music for many years now, and I'm unaware of any tool to remove their watermarks.
mingtianzhang•5mo ago
Do you think that if such a tool existed, it would benefit the community or not?
fertrevino•5mo ago
I wonder what exactly would prevent a developer from removing the signature from a generated file. One could remove arbitrary segments that signal that it is AI generated.
BoredPositron•5mo ago
For images it's not that easy; the watermark lives in Fourier space and is injected throughout the whole denoising process.
greatgib•5mo ago
Also, what will happen if you cut and paste some part or the whole image inside another bigger one, like traditional photo editing?

And what if I scan the image or take a picture of it on a display?

daft_pink•5mo ago
Would you really use Google products to write your email if you knew that they were watermarking it like this?

I think this technology is gonna quickly get eliminated from the marketplace, cause people aren’t willing to use AI for many common tasks that are watermarked this way. It’s ultimately gonna cause Google to lose share.

This technology has a basic usage dilemma: widely publicizing its ability and existence will cause your AI to stop being used in some applications.

A4ET8a8uTh0_v2•5mo ago
While I want to believe this is true, the experience of being human over the past decade or so suggests otherwise. I think, overall, most people would not even begin this line of inquiry; not to mention care once the thought is considered.
daft_pink•5mo ago
Maybe not initially but over time, I believe this will be the case.
DrewADesign•5mo ago
People use much more invasive tools than that. If it works, they don’t care.

I think it's weirder that they're clamoring to give people tools to detect AI while also clamoring to present AI-generated content as perfectly normal, no different than if the user had typed it in themselves.

xpe•5mo ago
Have you weighed factors that would push in the other direction? This requires a synthesis, and it requires breaking out of the tendency to only think about factors that support one narrative.

To the extent watermarking technology builds trust and confidence in a product, this is a factor that moves against your prediction.

Talk is cheap. People sometimes make predictions just as easily as they generate words.

djoldman•5mo ago
Here's the SynthID paper:

https://www.nature.com/articles/s41586-024-08025-4.pdf

https://www.nature.com/articles/s41586-024-08025-4

stillsut•5mo ago
Hey I made an open source version of this last week (albeit for different purposes). Check it out at: https://github.com/sutt/innocuous

There's a lot of room for contributions here, and I think the "fingerprinting layer" is an under-valued part of the LLM stack, not being explored by enough entrants.

tiku•5mo ago
The whole NFT thing could be used to mark content: pictures and hashes of texts, for example.
wiradikusuma•5mo ago
For image, what happen if I screenshot it? Will the watermark survive?