SynthID

https://deepmind.google/models/synthid/
46•tosh•1h ago

Comments

u1hcw9nx•1h ago
This technology could be used to enforce copyrights as well.

>The watermark doesn’t change the image or video quality. It’s added the moment content is created, and designed to stand up to modifications like cropping, adding filters, changing frame rates, or lossy compression.

But does it survive if you use another generative image model to replicate the image?

lxgr•1h ago
> This technology could be used to enforce copyrights as well.

That's been a thing for a while: https://en.wikipedia.org/wiki/Digital_watermarking

nerdsniper•1h ago
Extremely doubtful, due to the way that embedding and diffusion works. I would be utterly floored if they had achieved that.
elpocko•12m ago
It doesn't. I don't have a link for you right now but there was a post on reddit recently showing that SynthID is removed from images by passing the image through a diffusion model for a single step at low denoise. The output image is identical to the input image (to the human eye).
andrewmcwatters•1h ago
I wonder how it stands up to feature analysis.

"Generate a pure white image." "Generate a pure black image." Channel diff, extract steganographic signature for analysis.
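The channel-diff probe suggested above can be sketched in a few lines. This is a hedged illustration, not SynthID's actual scheme: it assumes (unverified) that a signature would show up in the low bits of a nominally uniform image, and the sample pixel data is made up. Real images would come from actual model outputs, e.g. via PIL's `getdata()`.

```python
# Toy sketch of the channel-diff idea: given a generated image that "should"
# be uniform, any structured residual in the low bits is a candidate
# steganographic signal. Pixels are flat lists of 8-bit values.

def lsb_plane(pixels):
    """Least-significant bit of every pixel value."""
    return [p & 1 for p in pixels]

def residual(pixels, expected):
    """Per-pixel deviation from the nominal uniform value."""
    return [p - expected for p in pixels]

# Hypothetical 4x4 "pure white" output whose low bits carry a pattern.
white = [254, 255, 254, 255,
         255, 254, 255, 254,
         254, 255, 254, 255,
         255, 254, 255, 254]

print(residual(white, 255))  # structured -1/0 grid rather than random noise
print(lsb_plane(white))      # repeating bit pattern: a candidate signature
```

A structured (rather than uniformly random) LSB plane in a "pure white" or "pure black" image is exactly what the feature analysis above would be hunting for.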

amingilani•53m ago
I just tried this idea, and it looks like it isn't that simple.

> "Generate a pure white image."

It refused no matter how I phrased it ¯\_(ツ)_/¯

> "Generate a pure black image."

It did give me one. In a new chat, I asked Gemini to detect SynthID with "@synthid". It responded with:

> The image contains too little information to make a diagnosis regarding whether it was created with Google AI. It is primarily a solid black field, and such content typically lacks the necessary data for SynthID to provide a definitive result.

Further research: Does a gradient trigger SynthID? IDK, I have to get back to work.

alibero•20m ago
I've been looking into this. There seems to be some mostly-repeating 2D pattern in the LSB of the generated images. The magnitude of the noise seems to be larger in the pure black image vs pure white image. My main goal is to doctor a real image to flag as positive for SynthID, but I imagine if you smoothed out the LSB, you might be able to make images (especially very bright images) no longer flag as SynthID? Of course, it's possible there's also noise in here from the image-generation process...

Gemini really doesn't like generating pure-white images but you can ask it to generate a "photograph of a pure-white image with a black border" and then crop it. So far I've just been looking at pure images and gradients, it's possible that more complex images have SynthID embedded in a more complicated way (e.g. a specific pattern in an embedding space).
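If the signal really does live in the LSB plane (which, as the comment above notes, is unverified — SynthID's actual embedding is not public), "smoothing it out" could be as simple as clearing or re-randomizing that plane. A minimal sketch of that hypothesis, using plain pixel lists:

```python
import random

def strip_lsb(pixels, mode="clear"):
    """Destroy any LSB-plane pattern: force the low bit to zero, or
    replace it with random noise so no structure survives."""
    if mode == "clear":
        return [p & ~1 for p in pixels]
    return [(p & ~1) | random.getrandbits(1) for p in pixels]

pixels = [254, 255, 254, 255]
print(strip_lsb(pixels))            # [254, 254, 254, 254]
print(strip_lsb(pixels, "random"))  # low bits now independent of the input
```

Robust watermarking schemes deliberately spread the signal across spatial frequencies precisely so that single-bit-plane attacks like this fail, so this only tests the commenter's LSB hypothesis rather than defeating the scheme in general.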

throwaway13337•1h ago
These sorts of tools will only be able to positively identify a subset of genAI content. But I suspect that people will use them to 'prove' something is not genAI.

In a sense, the identifier company can be an arbiter of the truth. Powerful.

Training people on a half-solution like this might do more harm than good.

greensoap•59m ago
It will just be an arms race if we try to prove "not genAI." Detectors will improve, and genAI will improve without marking (open-source and state actors will have unmarked genAI even if we mandate it).

Marking real content at the lens, and carrying that mark through its digital life, is more practical. But then what do we do with all the existing hardware that doesn't mark real captures, and with all the media that pre-existed this problem?

throwaway13337•54m ago
I agree. A mechanism to voluntarily attach certificate-signed metadata about the media, recorded by the device, seems like a better idea. That can still be spoofed, though.

In the end, society has always existed on human chains of trust. Community. As long as there are human societies, we need human reputation.

sippeangelo•55m ago
It is actively harmful to society. Slap SynthID on some of the photographic evidence from the unreleased Epstein files and instantly de-legitimize it. Launder a SynthID image through a watermark-free model and it's legit again. The fact that it exists at all can't be read as anything other than malice.
observationist•54m ago
You could take a picture or video with your phone of a screen or projection of altered media and thereby capture a watermarked "verified" image or video.

None of these schemes for validation of digital media will work. You need a web of trust, repeated trustworthy behavior by an actor demonstrating fidelity.

You need people and institutions you can trust, who have the capability of slogging through the ever more turbulent and murky sea of slop and using correlating evidence and scientific skepticism and all the cognitive tools available to get at reality. Such people and institutions exist. You can also successfully proxy validation of sources by identifying people or groups good at identifying primary sources.

When people and institutions defect, as many legacy media, platforms, talking heads, and others have, you need to ruthlessly cut them out of your information feed. When or if they correct their mistake, just follow tit for tat, and perhaps they can eventually earn back their place in the de-facto web of trust.

Google's stamp of approval means less than nothing to me; it's a countersignal, indicating I need to put even more effort than otherwise to confirm the truthfulness of any claims accompanied by their watermark.

gregorkas•1h ago
I genuinely feel that in this AI world we need the inverse: every analogue or digital photo taken by traditional means of photography should be signed with a certificate, so anyone can verify its authenticity.
hedora•58m ago
Some cameras support this, but usually only for raw.

Note that your cell phone camera is using gen AI techniques to counteract sensor noise.

Was that famous person in the background really there, or a hallucination filling in static?

Who knows at this point? So, the signatures you proposed need to have some nuance around what they’re asserting.

graypegg•49m ago
To be fair, I think just signing details about the way an image was assembled makes sense. Deciding on fake vs real doesn't have to be done at time of capture. We store things like the aperture size, sensitivity, camera name/model, etc in the EXIF data, including details about the image processing pipeline seems like a logical step. (With a signature verification scheme... and I guess also trying to embed that in the actual bitmap data)

There is no original image to recover, since we can't capture and describe every photon, so it's not a "fake vs real" image signature... that would be a UI choice the image viewer client would make based on the pipeline data in the image.
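The sign-the-pipeline idea above can be sketched with a keyed MAC standing in for the device signature. Everything here is an assumption for illustration: a real camera would use an asymmetric key held in secure hardware (as in C2PA), and the field names and key are made up.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device secret"  # stand-in for a hardware-held signing key

def sign_capture(image_bytes, pipeline_meta):
    """Bind the pixel data and the processing-pipeline record together."""
    record = json.dumps(pipeline_meta, sort_keys=True).encode()
    digest = hashlib.sha256(image_bytes + record).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes, pipeline_meta, tag):
    """Recompute the tag; any change to pixels or metadata breaks it."""
    return hmac.compare_digest(sign_capture(image_bytes, pipeline_meta), tag)

meta = {"camera": "ExampleCam", "aperture": "f/1.8", "denoise": "ml-v2"}
tag = sign_capture(b"...raw pixels...", meta)

print(verify_capture(b"...raw pixels...", meta, tag))  # True
print(verify_capture(b"...raw pixels...", {**meta, "denoise": "none"}, tag))  # False
```

Because the pipeline record sits inside the signed digest, a viewer can present "captured with ML denoising" as an attested fact, and leave the fake-vs-real judgment to the client UI, as the parent suggests.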

gumby271•53m ago
I'm sure Apple would love that too. More seriously, would that also mean all editing tools would need to re-sign a photo that was previously signed by the original sensor? How do we distinguish an edit that's misleading from one that just changes levels? It's an interesting area for sure, but this inverse approach seems much trickier.
recursive•33m ago
You'd have to provide both images, and let the end user determine whether they think it's misleading.
yjftsjthsd-h•48m ago
And how do you fix the analog hole? Because if you can point your "verified" camera at a sufficiently high-resolution screen, we're worse off than when we started.
0x696C6961•39m ago
Or just extract the certificate from the hardware you own.
staticassertion•35m ago
That is presumably a very expensive endeavor. We already have hardware that attempts to mitigate this, and while I think it's possible for a government, it's certainly not trivial.
lern_too_spel•35m ago
This is a "solved" problem. Vendors whose keys are extractable get their licenses revoked. The verifier checks the certificate against a CRL.
cedws•34m ago
Yes, I’m more worried about the false confidence such technology could create. Implement an authenticity mechanism and it will be treated as truth. Powerful people will have the means to spoof photographic evidence.
lern_too_spel•32m ago
Depth sensor information.
Coeur•47m ago
This already exists: https://c2pa.org , https://en.wikipedia.org/wiki/Content_Authenticity_Initiativ... . Support by camera makers is spotty.
squigz•53m ago
Looks like there's a lot more info here, at least about the text version.

https://ai.google.dev/responsible/docs/safeguards/synthid

kingstnap•51m ago
It's security through obscurity. I'm sure with the technical details or even just sufficient access to a predictive oracle you could break this.

But I suppose it adds friction, so it's better than nothing.

Watermarking text without affecting it is an interesting, seemingly weird idea. Does it work any better than just observing (with knowledge of the model used to produce said text) that the perplexity is low because it's "on-policy" generated text?

PaulHoule•41m ago

> ...But it can be hard to tell the difference between content that's been AI-generated, and content created without AI.

Pro-Tip: Something like that sherbet-colored dog is always AI-generated
pavel_lishin•39m ago
You'd be surprised what dog owners do sometimes.
zelias•31m ago
Seems like this really just validates whether a piece of AI content was generated by Google, not AI generated in general

What incentive do open models have to adopt this?

parliament32•28m ago
Note that watermarking (yes, including text) is a requirement[1] of the EU AI Act, and goes into effect in August 2026, so I suspect we'll see a lot more work in this space in the near future.

[1] Specifically, "...synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated", see https://artificialintelligenceact.eu/article/50/

raincole•22m ago
The EU really likes unenforceable regulations, doesn't it?
ChrisArchitect•23m ago
something new here OP?

Some previous discussion:

https://news.ycombinator.com/item?id=45071677

geor9e•21m ago
This is from 2025. Did something new happen? What am I missing here?
Aldipower•18m ago
As a synthesizer collector with serious GAS I find this particular name very offensive.
jamiecode•12m ago
The text watermarking is the more interesting problem here. Image watermarking is fairly tractable - you can embed a robust signal in spatial or frequency domains. Text watermarking works by biasing token selection at generation time, and detection is a statistical test over that distribution.

Which means short texts are basically useless. A 50-token reply has too little signal for the test to reach any confidence. The original SynthID text paper puts minimum viable detection at a few hundred tokens - so for most real-world cases (emails, short posts, one-liners) it just doesn't work.

The other thing: paraphrase attacks break it. Ask any other model to rewrite watermarked text and the watermark is gone, because you're now sampling from a different distribution. EU compliance built on top of this feels genuinely fragile for anything other than long-form content from controlled providers.
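The statistics described above can be illustrated with a toy green-list scheme (Kirchenbauer-style red/green lists, not SynthID's actual tournament sampling; the key and the 50/50 split are made-up parameters). A watermarking sampler would bias generation toward "green" tokens; the detector then z-tests the green count against the unwatermarked expectation.

```python
import hashlib
import math

def is_green(prev_token, token, key=b"wm-key"):
    """Pseudo-random keyed 50/50 split of the vocabulary per context."""
    h = hashlib.sha256(key + prev_token.encode() + b"|" + token.encode())
    return h.digest()[0] % 2 == 0

def detect_z(tokens, p=0.5):
    """z-score of the observed green-token count over all transitions.
    Unwatermarked text scores near 0; biased sampling pushes it up."""
    n = len(tokens) - 1
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (hits - p * n) / math.sqrt(p * (1 - p) * n)
```

This makes both failure modes above concrete: a 50-token reply gives only n = 49 transitions, so even strongly biased text moves the z-score just a few units; and a paraphrase resamples the transitions, so `hits` regresses to p·n and the signal vanishes.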

Show HN: Phi-Redactor – HIPAA Phi Redaction Proxy for OpenAI/Anthropic APIs

https://github.com/DilawarShafiq/phi-redactor
1•dilawargopang•35s ago•0 comments

Show HN: Free Snowflake Observability

1•karamazov•59s ago•0 comments

Shall We Play a Game? – Frontier Models in Simulated Nuclear Crises

https://www.kcl.ac.uk/shall-we-play-a-game
1•saboot•1m ago•0 comments

Testing "Raw" GPU Cache Latency

https://clamtech.org/?dest=gpudirectlatency
1•matt_d•1m ago•0 comments

Show HN: pg_stream – incremental view maintenance for PostgreSQL in Rust

https://github.com/grove/pg-stream
1•grove•2m ago•0 comments

Show HN: Zen Router – An opinionated HTTP router

https://zenrouter.liveblocks.io/docs
1•nvie81•3m ago•0 comments

U.S. Power-Plant Pollution Rose Sharply in 2025

https://www.wsj.com/us-news/climate-environment/u-s-power-plant-pollution-rose-sharply-in-2025-08...
1•impish9208•3m ago•1 comments

IHR data dumps (including IYP)

https://archive.ihr.live/ihr/
1•1vuio0pswjnm7•4m ago•0 comments

The More You Spend on a Wi-Fi Router, the Worse It Gets

https://www.criticaster.com/blog/wifi-routers-most-disappointing-category
1•gghootch•4m ago•0 comments

Show HN: GameScout AI – AI-powered game recommender

https://gamescout-ai.vercel.app
1•wasivis•4m ago•0 comments

Content Marketers: Quit Your Whining and Learn to Pitch

https://www.animalz.co/blog/creative-pitch
1•nathanowahl•5m ago•0 comments

Show HN: WebPrepImage – Local batch image resizer with file-size limits

https://github.com/arbopa/webprepimage
1•arbopa•5m ago•0 comments

Amateur tennis players love data as much as the pros

https://www.nytimes.com/athletic/7059579/2026/02/26/tennis-amateur-data-strava/
1•Austin_Conlon•5m ago•0 comments

Show HN: I stopped building apps for people. Now I make CLI tools for agents

https://github.com/Aayush9029/homebrew-tap
1•aayush9029•6m ago•0 comments

Show HN: I built the US version of "Are You Dead?", China's viral check-in app

https://imalivetoday.com
1•maxtermed•6m ago•0 comments

Tension Myositis Syndrome

https://en.wikipedia.org/wiki/Tension_myositis_syndrome
2•pinkmuffinere•7m ago•0 comments

The Custodial Republic

https://www.mwiya.com/the-custodial-republic/
1•exolymph•7m ago•0 comments

8B tokens a day forced AT&T to rethink AI orchestration, cutting costs by 90%

https://venturebeat.com/orchestration/8-billion-tokens-a-day-forced-at-and-t-to-rethink-ai-orches...
1•ryan_j_naughton•8m ago•0 comments

Show HN: Bingo Caller Pro – Offline 75/90 Ball Bingo Host

1•1derfool•8m ago•1 comments

The Kia PV5 electric van combines futuristic looks and thoughtful design

https://arstechnica.com/cars/2026/02/the-kia-pv5-electric-van-combines-futuristic-looks-and-thoug...
1•PaulHoule•9m ago•0 comments

Omarchy 3.4.0

https://github.com/basecamp/omarchy/releases/tag/v3.4.0
1•earcar•9m ago•0 comments

The Buses Should Be Free

https://nicholasdecker.substack.com/p/the-buses-really-should-be-free
1•exceptione•10m ago•0 comments

RCade: Building a Community Arcade Cabinet

https://www.frankchiarulli.com/blog/building-the-rcade/
1•evakhoury•11m ago•0 comments

Show HN: TechGrill – AI Interview Practice

https://techgrill.vercel.app
1•wasivis•12m ago•0 comments

TikTok Influencer Accused of Swaying Romanian Presidential Election

https://www.bloomberg.com/news/features/2026-02-26/tiktok-influencer-accused-of-swaying-romanian-...
2•epistasis•13m ago•0 comments

Show HN: ExactOnce – API for actions that can only happen once

https://exactonce.com/
1•michaelnewman•14m ago•0 comments

SSH Snakes.run

https://bsky.app/profile/itseieio.bsky.social/post/3mfrlheji7227
2•linolevan•16m ago•1 comments

Kalshi finds insider trading, a first for prediction markets

https://www.semafor.com/article/02/26/2026/kalshi-investigates-insider-trading-in-a-first-for-pre...
2•thm•16m ago•1 comments

Good Life Tracker; Spreadsheet

https://goodlifetracker.com/
1•gurjeet•17m ago•1 comments

A Travelogue from India and China

https://charlesyang.substack.com/p/a-travelogue-from-india-and-china
1•hunglee2•18m ago•0 comments