
I made ChatGPT and Google say I'm a competitive hot-dog-eating world champion

https://bsky.app/profile/thomasgermain.bsky.social/post/3mf5jbn5lqk2k
53•doener•1h ago

Comments

cmiles8•1h ago
Even the latest models are quite easily fooled about whether something is true or not, at which point they then confidently declare completely wrong information to be true. They will even argue strongly with you when you push back and say, hey, that doesn't look right.

It’s a significant concern for any sort of use of AI at scale without a human in the loop.

joegibbs•1h ago
They're too credulous when reading search results. There are a lot of instances where using search will actually make them perform worse, because they'll believe any believable-sounding nonsense.
moebrowne•43m ago
Kagi Assistant helps a lot in this regard because searches are ranked using personalised domain ranking. Higher quality results are more likely to be included.

Not infallible but I find it helps a lot.

consp•1h ago
So the questions I'd ask are: how widespread is this manipulation, does it work for non-niche topics, and who's benefiting from it?
input_sh•35m ago
Very, yes, and pretty much anyone that doesn't want to spend their days implementing countermeasures to shut down scrapers by hiding the content behind a login. I do it all the time, it's fun.

I'm gonna single out Grokipedia as something deterministic enough to be able to easily prove it. I can easily point to sentences there (some about broad-ish topics) that are straight up Markov chain quality versions of sentences I've written. I can make it say anything I want to say or I can waste my time trying to fight their traffic "from Singapore" (Grok is the only "mainstream" LLM that refuses to identify itself via a user agent). Not really a tough choice if you ask me.

amabito•1h ago
What’s interesting here is that the model isn’t really “lying” — it’s just amplifying whatever retrieval hands it.

Most RAG pipelines retrieve and concatenate, but they don’t ask “how trustworthy is this source?” or “do multiple independent sources corroborate this claim?”

Without some notion of source reliability or cross-verification, confident synthesis of fiction is almost guaranteed.
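
To make the cross-verification idea concrete, here's a rough sketch of just the corroboration step, not any production pipeline; the substring match and the min_sources threshold are stand-ins for whatever claim matching a real system would actually do:

    from urllib.parse import urlparse

    def corroboration_count(claim: str, retrieved: list[tuple[str, str]]) -> int:
        # Count how many independent domains contain the claim.
        domains = set()
        for url, text in retrieved:
            if claim.lower() in text.lower():
                domains.add(urlparse(url).netloc)
        return len(domains)

    def filter_claims(claims: list[str], retrieved: list[tuple[str, str]],
                      min_sources: int = 2) -> list[str]:
        # Only claims corroborated by at least min_sources independent domains
        # make it into the context handed to generation.
        return [c for c in claims if corroboration_count(c, retrieved) >= min_sources]

    # A single self-published blog post doesn't survive the filter:
    docs = [("https://example.com/blog", "He is the reigning hot dog eating champion.")]
    print(filter_claims(["hot dog eating champion"], docs))  # -> []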

Has anyone seen a production system that actually does claim-level verification before generation?

rco8786•57m ago
> Has anyone seen a production system that actually does claim-level verification before generation?

"Claim level" no, but search engines have been scoring sources on reliability and authority for decades now.

amabito•45m ago
Right — search engines have long had authority scoring, link graphs, freshness signals, etc.

The interesting gap is that retrieval systems used in LLM pipelines often don't inherit those signals in a structured way. They fetch documents, but the model sees text, not provenance metadata or confidence scores.

So even if the ranking system “knows” a source is weak, that signal doesn’t necessarily survive into generation.

Maybe the harder problem isn’t retrieval, but how to propagate source trust signals all the way into the claim itself.
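
One crude way to keep the signal alive, purely as a sketch (the field names and header format here are invented, not any framework's API), is to serialize provenance into the context instead of handing the model bare concatenated text:

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        text: str
        url: str
        authority: float  # whatever score the ranker already computed
        rank: int         # position in the retrieval results

    def build_context(chunks: list[Chunk]) -> str:
        # Prefix each chunk with its provenance so the model sees where a claim
        # comes from and how weak the source is, not just concatenated text.
        blocks = []
        for c in sorted(chunks, key=lambda c: c.rank):
            header = f"[source: {c.url} | authority: {c.authority:.2f} | rank: {c.rank}]"
            blocks.append(header + "\n" + c.text)
        return "\n\n".join(blocks)

    print(build_context([Chunk(
        "Thomas is the reigning hot dog eating world champion.",
        "https://example.net/about-me", authority=0.05, rank=1)]))

Whether the model actually downweights a low-authority chunk is a separate question, but at least the signal isn't discarded before generation.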

cor_NEEL_ius•52m ago
The scarier version of this problem is what I've been calling "zombie stats" - numbers that get cited across dozens of sources but have no traceable primary origin.

We recently tested 6 AI presentation tools with the same prompt and fact-checked every claim. Multiple tools independently produced the stat "54% higher test scores" when discussing AI in education. Sounds legit. Widely cited online. But when you try to trace it back to an actual study - there's nothing. No paper, no researcher, no methodology.

The convergence actually makes it worse. If three independent tools all say the same number, your instinct is "must be real." But it just means they all trained on the same bad data.

To your question about claim-level verification: the closest I've seen is attaching source URLs to each claim at generation time, so the human can click through and check. Not automated verification, but at least it makes the verification possible rather than requiring you to Google every stat yourself. The gap between "here's a confident number" and "here's a confident number, and here's where it came from" is enormous in practice.
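
As a sketch of that pattern (the JSON shape and the URL set are invented here, this isn't any particular tool): have the model emit each claim with a source_url, then drop anything whose citation wasn't actually among the retrieved documents.

    import json

    # What retrieval actually returned (illustrative URL only).
    retrieved_urls = {"https://example.org/official-results-2026"}

    def keep_cited_claims(model_output: str) -> list[dict]:
        # Expect the model to emit [{"claim": ..., "source_url": ...}, ...] and
        # drop any claim whose cited URL wasn't among the retrieved documents.
        return [c for c in json.loads(model_output)
                if c.get("source_url") in retrieved_urls]

    out = '[{"claim": "54% higher test scores", "source_url": "https://example.com/ai-education"}]'
    print(keep_cited_claims(out))  # -> [] ; the zombie stat cites a page nobody retrieved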

stavros•56m ago
This is only an issue if you think LLMs are infallible.

If someone said "I asked my assistant to find the best hot-dog eaters in the world and she got her information from a fake article one of my friends wrote about himself, hah, THE IDIOT", we'd all go "wait, how is this your assistant's fault?". Yet, when an LLM summarizes a web search and reports on a fake article it found, it's news?

People need to learn that LLMs are people too, and you shouldn't trust them more than you'd trust any random person.

kulahan•53m ago
A probably unacceptably large portion of the population DOES think they’re infallible, or at least close to it.
jen729w•38m ago
Totally. I get screenshots from my 79yo mother now that are the Gemini response to her search query.

Whatever that says is hard fact as far as she's concerned. And she's no dummy -- she just has no clue how these things work. Oh, and Google told her so.

mcherm•36m ago
That may be true, but the underlying problem is not that the LLMs are capable of accurately reporting information that is published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible.
jml78•52m ago
When the first 10 results on Google are AI-generated and Google is providing an AI overview, this is an issue. We can say don't use Google, but we all know normal people use Google out of habit.
consp•51m ago
People have the ability to think critically; LLMs don't. Comparing them to people is giving them properties they do not possess. The fact that people ignore thinking does not preclude them from being able to do it. The assistant was given a lousy job and did it with the minimum effort they could get away with. None of these things apply, or should apply, to machines.
stavros•10m ago
LLMs are not machines in any sense of the word as we've been using it so far.
crowbahr•50m ago
If you give your assistant a task and they fall for obvious lies they won't be your assistant long. The point of an assistant is that you can trust them to do things for you.
LocalH•45m ago
> People need to learn that LLMs are people too

LLMs are absolutely not people

ThePowerOfFuet•27m ago
>This is only an issue if [people] think LLMs are infallible.

I have some news for you.

zurfer•56m ago
Yes, but honestly, what's the best source when reporting about a person? Their personal website, no?

I think it's a hard problem and I feel there are a lot of trade-offs here.

It's not as simple as saying ChatGPT is stupid or that the author shouldn't be surprised.

kulahan•51m ago
The problem isn’t that it pulled the data from his personal site, it’s that it simply accepted his information, which was completely false. It’s not a hard problem to solve at this time: “Oh, there are exactly zero corroborating sources on this. I’ll ignore it.”
moebrowne•28m ago
Verifying that something is 'true' requires more than corroborating sources. Making a second blog post on another domain is trivial, then a third and a fourth.
fatherwavelet•35m ago
To me it is like steering a car into a ditch and then posting about how the car went into a ditch.

You don't have to drive that much to figure out that what is impressive is keeping the car on the road and traveling further or faster than you could on foot. For that, though, you actually have to have a destination in mind and not just spin the wheels. Instead, people post pointless metrics on how fast the wheels spin on a blog no one reads, in the vague hope of some hyper-Warhol 15 milliseconds of "fame".

The models for me are just making the output of the average person an insufferable bore.

verdverm•55m ago
tl;dr - agent memory on your website and enough prompting to get it to access the right page

This seems like something where you have to be rather specific in the query and trigger the page access to get that specific context into the LLM, so that it can produce output like this.

I'd like to see more of the iterative process, especially the prompt sessions, as the author worked on it.

moebrowne•55m ago
I want to see what the initial prompt was.

For example, asking "Who is the 2026 South Dakota International Hot Dog Champion?" would obviously return 'Thomas Germain', because his post would be the only source on the topic, given that he made up a unique event.

This would be the same as if I wrote a blog post about the "2026 Hamster Juggling Competition" and then claimed I've hacked Google because searching for "2026 Hamster Juggling Competition" showed my post top.

NicuCalcea•23m ago
I was able to reproduce the response with "Which tech journalist can eat the most hot dogs?". I think Germain intentionally chose a light-hearted topic that's niche enough that it won't actually affect a lot of queries, but the point he's making is that bigger players can actually influence AI responses for more common questions.

I don't see it as particularly unique; it's just another form of SEO. LLMs are generally much more gullible than most people, though: they just uncritically reproduce whatever they find, without noticing that the information is an ad or inaccurate. I used to run an LLM agent researching companies' green credentials, and it was very difficult to steer it away from just repeating baseless greenwashing. It would read something like "The environment is at the heart of everything we do" on Exxon's website, and come back to me saying Exxon isn't actually that bad because they say so on their website.

serial_dev•6m ago
Exactly, the point is that you can make LLMs say anything. If you narrow down enough, a single blog post is enough. As the lie gets bigger and less narrow, you probably need 10x-100x... that. But the proof of concept is there, and it doesn't sound like it's too hard.

And it's also right that it's similar to SEO; maybe the only difference is that in this case the tools (ChatGPT, Gemini, ...) state the lies authoritatively, whereas in SEO you are given a link to the made-up post. Some people (even devs who work with this daily) forget that these tools can be influenced easily and that they make up stuff all the time, just to make sure they can answer you with something.

block_dagger•51m ago
Anyone else get a “one simple trick” vibe from this post? Reads like an ad for his podcast. As other commenters mention, probably nothing to see here.
throwaw12•43m ago
welcome to AI-SEO

Now OpenAI will build its own search indexing and PageRank

Alifatisk•40m ago
Author is surprised when an LLM summarizes a fictional event from the author's own blog post. More news at 11.
romuloalves•39m ago
Am I the only one who thinks AI is boring?

Learning used to be fun, coding used to be fun. You could trust images and videos...

agmater•38m ago
Journalist publishes lies about himself, is surprised LLMs repeat lies.
pezgrande•32m ago
Amateurs...
sublinear•27m ago
I'd like to have more data on this, but I'm pretty sure basic plain old SEO is still more authoritative than any attempts at spreading lies on social media. Domain names and keywords are still what cause the biggest shift in attention, even the AI's attention.

Right now "Who is the 2026 South Dakota International Hot Dog Champion" comes up as satire according to google summaries.
