
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
510•klaussilveira•8h ago•141 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
848•xnx•14h ago•507 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
61•matheusalmeida•1d ago•12 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
168•isitcontent•9h ago•20 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
171•dmpetrov•9h ago•77 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
282•vecti•11h ago•127 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
64•quibono•4d ago•11 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
340•aktau•15h ago•165 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
228•eljojo•11h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
333•ostacke•14h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
425•todsacerdoti•16h ago•221 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
4•videotopia•3d ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
365•lstoll•15h ago•253 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
35•kmm•4d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
11•romes•4d ago•1 comment

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
12•denuoweb•1d ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
85•SerCe•4h ago•66 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
214•i5heu•11h ago•160 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
59•phreda4•8h ago•11 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
35•gfortaine•6h ago•9 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
16•gmays•4h ago•2 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
123•vmatsiiako•13h ago•51 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
160•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
258•surprisetalk•3d ago•34 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1022•cdrnsf•18h ago•425 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
53•rescrv•16h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•13 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
14•denysonique•5h ago•1 comment

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
98•ray__•5h ago•49 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

Death by AI

https://davebarry.substack.com/p/death-by-ai
583•ano-ther•6mo ago

Comments

rf15•6mo ago
So many reports like this, it's not a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?
randcraw•6mo ago
Yeah, after working daily with AI for a decade in a domain where it _does_ work predictably and reliably (image analysis), I continue to be amazed how many of us trust LLM-based text output as being useful. If any human source got their facts wrong this often, we'd surely dismiss them as a counterproductive imbecile.

Or elect them President.

BobbyTables2•6mo ago
HAL 9000 in 2028!
locallost•6mo ago
I am beginning to wonder why I use it, but the idea of it is so tempting. Try to google it and get stuck because it's difficult to find, or ask and get an instant response. It's not hard to guess which one is more inviting, but it ends up being a huge time sink anyway.
trod1234•6mo ago
Regulation with active enforcement is the only civil way.

The whole point of regulation is for when the profit motive forces companies towards destructive ends for the majority of society. The companies are legally obligated to seek profit above all else, absent regulation.

Aurornis•6mo ago
> Regulation with active enforcement is the only civil way.

What regulation? What enforcement?

These terms are useless without details. Are we going to fine LLM providers every time their output is wrong? That’s the kind of proposition that sounds good as a passing angry comment but obviously has zero chance of becoming a real regulation.

Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries. People who use LLMs would sign up for VPNs and carry on with their lives.

trod1234•6mo ago
Regulations exist to override profit motive when corporations are unable to police themselves.

Enforcement ensures accountability.

Fines don't do much in a fiat money-printing environment.

Enforcement is accountability, the kind that stakeholders pay attention to.

Something appropriate would be: if AI is used in a safety-critical or life-sustaining environment and harm or loss is caused, those who chose to use it are guilty until they prove their innocence, not just civilly but also criminally. That person and that decision must be documented ahead of time. I think that would be sufficient.

> Any country who instituted a regulation like that would see all of the LLM advances and research instantly leave and move to other countries.

This is a fallacy. It's a spectrum: research would still occur, but it would be tempered by law and accountability, instead of the wild west where it's much more profitable to destroy everything through chaos. Chaos is quite profitable until it spreads systemically and ends everything.

AI integration at a point where it can impact the operation of nuclear power plants through interference (perceptual or otherwise) is just asking for a short path to extinction.

It's quite reasonable that the needs of national security trump private businesses making profit in a destructive way.

Ukv•6mo ago
> Something appropriate would be where if AI was used in a safety-critical or life-sustaining environment and harm or loss was caused; those who chose to use it are guilty until they prove they are innocent I think would be sufficient, not just civil but also criminal

Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions? If not, I feel it's kind of arbitrarily deterring certain approaches potentially at the cost of safety ("sure this CNN blows traditional methods out of the water in terms of accuracy, but the legal risk isn't worth it").

In most cases I think it'd make more sense to have fines and incentives for above-average and below-average incident rates (and liability for negligence in the worse cases), then let methods win/fail on their own merit.

trod1234•6mo ago
> Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?

I would say yes, because the person deciding must be the one making the entire decision, but there are many examples where someone might be paid to just rubberstamp decisions already made, letting the person who decided to implement the solution off scot-free.

The mere presence of AI (anything based on underlying work of perceptrons) being used, accompanied by a loss, should prompt a thorough review, which corporations currently are incapable of performing for themselves due to lack of consequences/accountability. Lack of disclosure, and the limits of current standing, are another issue that really requires this approach.

The problem with fines is that they don't provide the needed incentives to large entities, as a result of money-printing through debt-issuance, or indirectly through government contracts. It's also far easier for these entities, as market leaders, to employ corruption to work around the fine later. We've seen this a number of times in various markets/sectors, like JPM and the 10+ year silver price-fixing scandal.

Merit based on subjective rates isn't something that can be enforced, because it is so easily manipulated. Gross negligence already exists and occurs frighteningly often, but it never makes it to court because proof often requires showing standing to get discovery, which isn't generally granted absent a smoking gun or the whim of a judge.

Bad things certainly happen where no one is at fault, but most business structures today are given far too much leeway and have promoted the 3Ds. It's all about: deny, defend, depose.

Ukv•6mo ago
> > Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?

> I would say yes [...]

So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent? I feel it should require evidence of negligence (or malice), and be done under standard innocent-until-proven-guilty rules.

> The mere presence of AI (anything based on underlying work of perceptrons) [...]

Why single out based on underlying technology? If for instance we're choosing a tumor detector, I'd claim what's relevant is "Method A has been tested to achieve 95% AUROC, method B has been tested to achieve 90% AUROC" - there shouldn't be an extra burden in the way of choosing method A.

And it may well be that the perceptron-based method is the one with lower AUROC - just that it should then be discouraged because it's worse than the other methods, not because a special case puts it at a unique legal disadvantage even when safer.

> The problem of fines is that they don't provide the needed incentives to large entities as a result of money-printing through debt-issuance, or indirectly through government contracts.

Large enough fines/rewards should provide large enough incentive (and there would still be liability for criminal negligence where there is sufficient evidence of criminal negligence). Those government contracts can also be conditioned on meeting certain safety standards.

> Merit of subjective rates isn't something that can be enforced

We can/do measure things like incident rates, and have government agencies that perform/require safety testing and can block products from market. Not always perfect, but seems better to me than the company just picking a scape-goat.

Jensson•6mo ago
> So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent?

Yes: that proof is called a professional license; without one, you are presumed guilty even if nothing goes wrong.

If we have licenses for AI and then require proof that the AI isn't tampered with for requests, then that should be enough, don't you think? But currently it's the wild west.

Ukv•6mo ago
> Yes, that proof is called a professional license, without that you are presumed guilty even if nothing goes wrong.

A professional license is evidence against the offense of practicing without a license, and the burden of proof in such a case still rests on the prosecution to prove beyond reasonable doubt that you did practice without a license - you aren't presumed guilty.

Separately, what trod1234 was suggesting was being guilty-until-proven-innocent when harm occurs (with no indication that it'd only apply to licensed professions). I believe that's unjust, and that the suggestion stemmed mostly from animosity towards AI (maybe similar to "nurses administering vaccines should be liable for every side-effect") without consideration of impact.

> If we have licenses for AI and then require proof that the AI isn't tampered with for requests then that should be enough, don't you think?

Mandatory safety testing for safety-critical applications makes sense (and already occurs). It shouldn't be some rule specific to AI - I want to know that it performs adequately regardless of whether it's AI or a traditional algorithm or slime molds.

ViscountPenguin•6mo ago
A very simple example would be a mandatory mechanism for correcting mistakes in prebaked LLM outputs, and an ability to opt out of things like Gemini AI Overview on pages about you. Regulation isn't all or nothing; viewing it like that is reductive.
weatherlite•6mo ago
> Are we getting close to our very own Stop the Slop campaign?

I don't think so. We read about the handful of failures while there are billions of successful queries every day. In fact, I think AI Overviews are sticky and here to stay.

mepiethree•6mo ago
Are we sure these billions of queries are “successful” for the actual user journey? Maybe this is particular to my circle, but as the only “tech guy” most of my friends and family know, I am regularly asked if I know how to turn off Google AI overviews because many people find them to be garbage.
gtsop•6mo ago
Why on earth are you accepting his premise that there are billions of successful requests? I just asked chatgpt about query success rate and it replied (part):

"...Semantic Errors / Hallucinations On factual queries—especially legal ones—models hallucininate roughly 58–88% of the time

A journalism‑focused study found LLM-based search tools (e.g., ChatGPT Search, Perplexity, Grok) were incorrect in 60%+ of news‑related queries

Specialized legal AI tools (e.g., Lexis+, Westlaw) still showed error rates between 17% and 34%, despite being domain‑tuned "

draw_down•6mo ago
Man, this guy is still doing it. Good for him! I used to read his books (compendia of his syndicated column) when I was a kid.
hibert•6mo ago
Leave it to a journalist to play chicken with one of the most powerful minds in the world on principle.

Personally, if I got a resurrection from it, I would accept the nudge and do the political activism in Dorchester.

jwr•6mo ago
I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it. Various "feedback" forms are mostly ignored.

I had to fight a similar battle with Google Maps, which most people believe to be a source of truth, and it took years until incorrect information was changed. I'm not even sure if it was because of all the feedback I provided.

I see Google as a firehose of information that they spit at me ("feed"); they are too big to be concerned about any inconsistencies, as these don't hurt their business model.

muglug•6mo ago
No, this is very much an AI overview thing. In the beginning Google put the most likely-to-match-your-query result at the top, and you could click the link to see whether it answered your question.

Now, frequently, the AI summaries are on top. The AI summary LLM is clearly a very fast, very dumb LLM that’s cheap enough to run on webpage text for every search result.

That was a product decision, and a very bad one. Currently a search for "suide side squad" yields

> The phrase "suide side squad" appears to be a misspelling of "Suicide Squad"

weatherlite•6mo ago
> That was a product decision, and a very bad one.

I don't know that it's a bad decision, time will judge it. Also, we can expect the quality of the results to improve over time. I think Google saw a real threat to their search business and had to respond.

gambiting•6mo ago
The threat to their search business had nothing to do with AI but with the insane amount of SEO-ing they allowed to rake in cash. Their results have been garbage for years, even for tech stuff where they traditionally excelled - searching for "what does class X do in .NET" yields several results for paid programming courses rather than the actual answer, and that's not an AI problem.
bee_rider•6mo ago
SEO-wise (and in no other way), I think we should have more sympathy for Google. They are just… losing at the cat-and-mouse game. They are playing cat against a whole world of mice, I don’t think anyone other than pre-decline Google could win it.
Arainach•6mo ago
The number of mice has grown exponentially. It's not clear anyone could have kept up.

Millions, probably tens of millions of people have jobs trying to manipulate search results - with billions of dollars of resources available to them. With no internal information, it's safe to say no more than thousands of Googlers (probably fewer) are working to combat them.

If every one of them is a 10x engineer they're still outnumbered by more than 2 orders of magnitude.

anonymars•6mo ago
I understand what you're saying, but also supposedly at some point quality deliberately took a back seat to "growth"

https://www.wheresyoured.at/the-men-who-killed-google/

> The key event in the piece is a “Code Yellow” crisis declared in 2019 by Google’s ads and finance teams, which had forecast a disappointing quarter. In response, Raghavan pushed Ben Gomes — the erstwhile head of Google Search, and a genuine pioneer in search technology — to increase the number of queries people made by any means necessary.

(Quoting from this follow-up post: https://www.wheresyoured.at/requiem-for-raghavan/)

anonymars•6mo ago
Btw, I realized this was the HN discussion. Well, where else would I have come across that?

https://news.ycombinator.com/item?id=40133976

h2zizzle•6mo ago
No, they made the problem by not dealing with such websites swiftly and brutally. Instead, they encouraged it.
zargon•6mo ago
Google isn’t even playing that game, they’re playing the line-go-up game, which precludes them from dealing with SEO abuse in an effective way.
lelanthran•6mo ago
> SEO-wise (and in no other way), I think we should have more sympathy for Google. They are just… losing at the cat-and-mouse game.

I don't think they are; they have realised (quite accurately, IMO) that users would still use them even if they boosted their customers' rankings in the results.

They could, right now, switch to a model that penalises pages for each ad. They don't. They could, right now, penalise highly monetised "content" like courses and crap. They don't do that either.[1]

If Kagi can get better results with a fraction of the resources, there is no argument to be made that Google is playing a losing game.

--------------------------------------

[1] All the SEO stuff is damn easy to pick out; any page that is heavily monetised (by ads, or similar commercial offering) is very very easy to bin. A simple "don't show courses unless search query contains the word courses" type of rule is nowhere near computationally expensive. Recording the number of ads on a page when crawling is equally cheap.
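
Something like this toy sketch is all I mean; thresholds and field names are invented for illustration, and obviously this is not Google's actual pipeline:

    sample_pages = [
        {"url": "blog.example", "ad_count": 1, "is_course": False},
        {"url": "courses.example", "ad_count": 2, "is_course": True},
        {"url": "adfarm.example", "ad_count": 9, "is_course": False},
    ]

    def allow_result(page, query, max_ads=5):
        # Bin heavily monetised pages; ad_count is recorded at crawl time.
        if page["ad_count"] > max_ads:
            return False
        # Hide courses unless the query actually asks for them.
        if page["is_course"] and "course" not in query.lower():
            return False
        return True

    query = "what does class X do in .NET"
    print([p["url"] for p in sample_pages if allow_result(p, query)])
    # -> ['blog.example']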

thfuran•6mo ago
>A simple "don't show courses unless search query contains the word courses" type of rule is nowhere near computationally expensive

It’s nowhere near good either. What about the searches for cuorses or classes or training?

lelanthran•6mo ago
Their current search already recognises misspellings and synonyms.

Why would they drop that? It's not as if they have to throw away all the preprocessing they do on the search query.

They can continue preprocessing exactly like they do it now.

Miraste•6mo ago
> If Kagi can get better results with a fraction of the resources, there is no argument to be made that Google is playing a losing game.

Google's algorithm is the target for every SEO firm in the world. No one is targeting Kagi. Therefore, Kagi can use techniques that would not work at Google.

rightbyte•6mo ago
Getting a high SEO ranking is a lot of work. Some FTEs could just manually downrank SEO farms.
bee_rider•6mo ago
They are doing an OK job of making AI look like annoying garbage. If that’s the plan… actually, it might be brilliant.
weatherlite•6mo ago
I can't argue here; for me they are mostly useful, but I get that one or two catastrophic failures can make someone completely distrust them. But the actual judges are gonna be the masses, we'll see. For now adoption seems quite strong.
Miraste•6mo ago
Their "AI Overview" has not noticeably improved on its (many) failings for at least a year. In that time, Google's LLMs have gotten much better. They aren't implementing the advances they've made, presumably for cost reasons.

Meanwhile, every single person I know has come to trust Google less. That will catch up with them eventually.

flomo•6mo ago
Right, the classic google search results are still there. But even before the AI Overview, Google's 'en' plan has been to put as many internal links at the top of the page as possible. I just tried this and you have to scroll way down below the fold to find Barry's homepage or substack.
h2zizzle•6mo ago
No, the search queries are likely run through a similar "prompt modification" process as on many AI platforms, and the results themselves aren't ranked anything like they used to be. And, of course, Google killed the functionality of certain operators (+, "", etc.) years ago. Classic Google Search is very much dead.
yonatan8070•6mo ago
Was there ever an announcement regarding the elimination of search operators? Or does Google still claim they are real?
h2zizzle•6mo ago
Nothing for "" afaik. + was killed to make Google+ discoverable (or so Google claimed at the time).
flomo•6mo ago
At some point, Google search was so good that you didn't really need the operators, like you weren't just prodding some primitive AltaVista to give the results. So I think "almost nobody used that" came long before the en-plan of filling the top 50% with internal links.
hughw•6mo ago
Well it was accurate if you were asking about the Dave Barry in Dorchester.
omnicognate•6mo ago
He won a Pulitzer too? Small world.
o11c•6mo ago
I remember when the biggest gripe I had with Google was that when I searched for Java documentation (by class name), it defaulted to showing me the version for 1.4 instead of 6.
sroussey•6mo ago
Same problem with LLMs, particularly if a new version was released in the last year.
PontifexMinimus•6mo ago
> It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it.

Surely there is a way to correct it: getting the issue on the front page of HN.

kjkjadksj•6mo ago
Google Maps is so bad with its auto-generated content. Ultra-private country club? Let's mark the cart paths as full bike paths. Cemetery? Also bike paths. Random spit of sidewalk and grass between an office building and its parking lot? Believe it or not, also bike paths.
sethherr•6mo ago
Biking is great tho
xp84•6mo ago
I mean, that last one sounds functionally useful, since it would indeed be better to take the random concrete paths inside an office property (that wasn’t a closed campus) than to ride on the expressway that fronts it, if the “paths” are going where you’re going.
kjkjadksj•6mo ago
Yeah it doesn’t really play out like that. Just saw another example today of this gated condo complex where half the sidewalks are arbitrarily full blown bike trails. Clearly they are just trying to automagically get the trails from imagery of putative paths instead of, you know, pulling directly from the municipal bike path network maps. I guess scaling something more like that out was too hard for multibillion dollar google.

I have tried reporting these fake paths in the past but it didn’t get them removed.

aimor•6mo ago
I went to a party today at a park. Google maps wanted me to drive my car on the walking path to the picnic pavilion. Here, you can get the same directions: https://www.google.com/maps/dir/38.8615917,-77.1034763/Alcov...
throwaway2037•6mo ago
This really made me laugh. Has Will Ferrell already made a skit for Funny or Die where he precisely follows Google Maps driving instructions and runs over a bunch of old people and children? It could be very funny.
michaelcampbell•6mo ago
Waze (also owned by Google) seems to get it close(r), but it should be noted that actually driving to/from those addresses can't really be done. You can drive to where you might be able to SEE the destination, but not really get there.

https://www.waze.com/live-map/directions/us/va/arlington/alc...

yencabulator•6mo ago
In its defense, it has improved greatly. Back in the day, Google Maps told me to switch ferries in the middle of the sea. The car loading ramp was at an angle, so if I could just build up enough speed...
M4v3R•6mo ago
For up to date bike paths, at least where I live I hear very good things about maps.me (based on OSM data).
cosmical65•6mo ago
> I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it.

Well, in this case the inaccurate information is shown because the AI overview is combining information about two different people, rather than the sources being wrong. With traditional search, any webpages would be talking about one of the two people and contain only information about them. Thus, I'd say that this problem is specific to the AI overview.

jamesrcole•6mo ago
The science fiction author Greg Egan has been "battling" with Google for many years because, even though there are zero photos of him on the internet, Google insists that certain photos are of him. This was all well before Google started using AI. He's written about it here: https://gregegan.net/ESSAYS/GOOGLE/Google.html
KolibriFly•6mo ago
Google doesn't really have an incentive to prioritize accuracy at the individual level, especially when the volume of content makes it easy for them to hide behind scale
bokkies•6mo ago
Back in 2015 I walked 2 miles to a bowling alley tagged on Google Maps (in Northwich, England) with my then gf... imagine our surprise when we walked into a steamy front room and reception desk. My gf asks 'is this the bowling alley', to which a glistening man in a tank top replies 'this is a gay and lesbian sauna, love'. We beat a hasty retreat, but I imagine they were having more fun than bowling in there.
_ache_•6mo ago
Can you please re-consult a physician? I just checked on ChatGPT, and I'm pretty confident you are dead.
devinplatt•6mo ago
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.
pyman•6mo ago
I'm worried about this. Companies like Wikipedia spent years trying to get things right, and now suddenly Google and Microsoft (including OpenAI) are using GenAI to generate content that, frankly, can't be trusted because it's often made up.

That's deeply concerning, especially when these two companies control almost all the content we access through their search engines, browsers and LLMs.

This needs to be regulated. These companies should be held accountable for spreading false information or rumours, as it can have unexpected consequences.

Aurornis•6mo ago
> This needs to be regulated. They should be held accountable for spreading false information or rumours,

Regulated how? Held accountable how? If we start fining LLM operators for pieces of incorrect information you might as well stop serving the LLM to that country.

> since it can have unexpected consequences

Generally you hold the person who takes action accountable. Claiming an LLM told you bad information isn’t any more of a defense than claiming you saw the bad information on a Tweet or Reddit comment. The person taking action and causing the consequences has ownership of their actions.

I recall the same hand-wringing over early search engines: There was a debate about search engines indexing bad information and calls for holding them accountable for indexing incorrect results. Same reasoning: There could be consequences. The outrage died out as people realized they were tools to be used with caution, not fact-checked and carefully curated encyclopedias.

> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Would you also endorse the same regulations against Wikipedia? Wikipedia gets fined every time incorrect information is found on the website?

EDIT: Parent comment was edited while I was replying to add the comment about outside of the US. I welcome some country to try regulating LLMs to hold them accountable for inaccurate results so we have some precedent for how bad of an idea that would be and how much the citizens would switch to using VPNs to access the LLM providers that are turned off for their country in response.

pyman•6mo ago
If Google accidentally generates an article claiming a politician in XYZ country is corrupt the day before an election, then quietly corrects it after the election, should we NOT hold them accountable?

Other companies have been fined for misleading customers [0] after a product launch. So why make an exception for Big Tech outside the US?

And why is the EU the only bloc actively fining US Big Tech? We need China, Asia and South America to follow their lead.

[0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_scandal

jdietrich•6mo ago
Volkswagen intentionally and persistently lied to regulators. In this instance, Google confused one Dave Barry with another Dave Barry. While it is illegal to intentionally deceive for material gain, it is not generally illegal to merely be wrong.
pyman•6mo ago
This is exactly why we need to regulate Big Tech. Right now, they're saying: "It wasn't us, it was our AI's fault."

But how do we know they're telling the truth? How do we know it wasn't intentional? And more importantly, who's held accountable?

While Google's AI made the mistake, Google deployed it, branded it, and controls it. If this kind of error causes harm (like defamation, reputational damage, or interference in public opinion), intent doesn't necessarily matter in terms of accountability.

So while it's not illegal to be wrong, the scale and influence of Big Tech means they can't hide behind "it was the AI, not us."

blibble•6mo ago
> If we start fining LLM operators for pieces of incorrect information you might as well stop serving the LLM to that country.

sounds good to me?

pyman•6mo ago
+1

Fines, when backed by strong regulation, can lead to more control and better quality information, but only if companies are actually held to account.

Timwi•6mo ago
Wikipedia is not a company, it's a website.

The organization that runs the website, the Wikimedia Foundation, is also not a company. It's a nonprofit.

And the Wikimedia Foundation have not “spent years trying to get things right”, assuming you're referring to facts posted on Wikipedia. That was in fact a bunch of unpaid volunteer contributors, many of them anonymous and almost all of them unaffiliated with the Wikimedia Foundation.

pyman•6mo ago
Yes, Wikipedia is an organisation, not a company (my bad). They spent years improving their tools and building a strong community. Volunteers review changes, and some edits get automatically flagged or even reversed if they look suspicious or come from anonymous users. When there's a dispute, editors use "Talk" pages to discuss what should or shouldn't be included.

You can't really argue with those facts.

weatherlite•6mo ago
> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Did they? Lots of people think it has a major left-leaning bias, and some research verifies this. So while usually not making up any facts, editors still cherry-pick whatever facts fit the narrative and leave all else aside.

decimalenough•6mo ago
This is indeed a problem, but it's a different problem from just making shit up, which is an AI specialty. If you see something that's factually wrong on Wikipedia, it's usually pretty straightforward to get it fixed.
pyman•6mo ago
Exactly
weatherlite•6mo ago
> This is indeed a problem, but it's a different problem from just making shit up, which is an AI specialty

It's a bigger problem than AI errors imo; there are so many Wikipedia articles that are heavily biased. AI makes up silly nonsense maybe once in 200 queries, not 20% of the time. Also, people perhaps are more careful and skeptical with AI results but take Wikipedia as a source of truth.

Tijdreiziger•6mo ago
[citation needed]
weatherlite•6mo ago
"Larry Sanger, co-founder of Wikipedia, has been critical of Wikipedia since he was laid off as the only editorial employee and departed from the project in 2002.[28][29][30] He went on to found and work for competitors to Wikipedia, including Citizendium and Everipedia. Among other criticisms, Sanger has been vocal in his view that Wikipedia's articles present a left-wing and liberal or "establishment point of view"

https://en.wikipedia.org/wiki/Ideological_bias_on_Wikipedia

fake-name•6mo ago
To be fair, wikipedia generally tries to represent reality, which _also_ has a "left leaning bias", so maybe it's just you?
card_zero•6mo ago
The article about it is Ideological Bias on Wikipedia:

https://en.wikipedia.org/wiki/Ideological_bias_on_Wikipedia

weatherlite•6mo ago
Reality has no biases, reality is just reality. A left-leaning world view can be beneficial or can be detrimental depending on many factors; what makes you trust that a couple of Wikipedia editors with tons of editing power will be fair?
eloeffler•6mo ago
I know one story that may have become such an experience. It's about Wikipedia Germany and I don't know what the policies there actually are.

A German 90s/2000s rapper (Textor, MC of Kinderzimmer Productions) produced a radio feature about facts and how hard it can be to prove them.

One personal example he added was about his Wikipedia article, which stated that his mother used to be a famous jazz singer in her birth country, Sweden. Except she never was. The story had been added to an album review in a rap magazine years before the article was written. Textor explains that this is part of 'realness' in rap, which has little to do with facts and more with attitude.

When they approached Wikipedia Germany, it was very difficult to change this 'fact' about the biography of his mother. There was published information about her in a newspaper and she could not immediately prove who she was. Unfortunately, Textor didn't finish the story and moved on to the next topic in the radio feature.

btilly•6mo ago
They still do this.

https://en.wikipedia.org/wiki/Meg_Tilly is my sister. It claims that she is of Irish descent. She is not. The Irish ancestry came from her stepfather (my father); some reporter confused information about a stepparent with information about a parent.

Now some school in Seattle is claiming that she is an alumna. That's also false. After moving from Texada, she went to https://en.wikipedia.org/wiki/Belmont_Secondary_School and then https://esquimalt.sd61.bc.ca/.

But for all that, Wikipedia reporting does average out to more accurate than most newspaper articles...

jh00ker•6mo ago
I'm interested in how the answer will change once his article gets indexed. "Dave Barry died in 2016, but he continues to dispute this fact to this day."
KolibriFly•6mo ago
Honestly wouldn't even be surprised if it ends up saying something like, "Dave Barry, previously believed to have died in 2016, has since clarified he is alive, creating ongoing debate."
Andr2Andr•6mo ago
Here is the AI overview I got just now:

> Dave Barry, the humorist, experienced a brief "death" in an AI overview, which was later corrected. According to Dave Barry's Substack, the AI initially reported him as deceased, then alive, then dead again, and finally alive once more. This incident highlights the unreliability of AI for factual information.

SoftTalker•6mo ago
Dave Barry is dead? I didn't even know he was sick.
ChrisMarshallNY•6mo ago
Dave Barry is the best!

That is such a classic problem with Google (from long before AI).

I am not optimistic about anything being changed from this, but hope springs eternal.

Also, I think the trilobite is cute. I have a [real fossilized] one on my desk. My friend stuck a pair of glasses on it, because I'm an old dinosaur, but he wanted to go back even further.

throwup238•6mo ago
You may enjoy this wonderful site: https://www.trilobites.info/
ChrisMarshallNY•6mo ago
Cool!

The site structure is also fairly prehistoric!

ACCount36•6mo ago
One use of AI tech is that it can enable megacorps to take and process actual fucking feedback, for once.
bwfan123•6mo ago
Loved Dave Barry's writings over the years. Specifically his quote on humor struck me as itself deep.

"a measurement of the extent to which we realize that we are trapped in a world almost totally devoid of reason. Laughter is how we express the anxiety we feel at this knowledge"

archievillain•6mo ago
Yeah, trilobites are cute. Sad to see infighting among the beings-that-are-surely-dead community.
ChrisMarshallNY•6mo ago
This brings this classic to mind: https://www.youtube.com/watch?v=W4rR-OsTNCg
jongjong•6mo ago
Maybe it's a genuine problem with AI that it can only hold one idea, one possible version of reality, at any given time. Though I guess many humans have the same issue. I first heard of this idea from Peter Thiel when he described what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems who make important decisions are able to hold multiple conflicting ideas without ever fully accepting one or the other. Conflicting ideas create decision paralysis of varying degrees, which is useful at times. It seems like an important feature to implement into AI.

It's interesting that LLMs produce each output token as a probability distribution, but it appears that in order to generate the next token (which is itself expressed as probabilities), the model has to pick a specific word as the previous token. It can't just build more probabilities on top of previous probabilities; it has to collapse the previous token's probabilities as it goes.
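
That collapse is exactly what sampling-based decoding does. A minimal sketch, with a random stand-in for the real model, just to illustrate the point:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["alive", "dead", "famous", "."]

    def next_token_probs(context):
        # Stand-in for a real model: anything that maps a context
        # to a probability distribution over the vocabulary.
        logits = rng.normal(size=len(vocab))
        exps = np.exp(logits - logits.max())
        return exps / exps.sum()

    context = ["Dave", "Barry", "is"]
    for _ in range(3):
        probs = next_token_probs(context)
        # The full distribution exists only for this step: sampling
        # collapses it to one concrete token, and that token is all
        # the next step ever sees.
        context.append(rng.choice(vocab, p=probs))
    print(" ".join(context))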

herval•6mo ago
I'm not sure that's the case, and it's quite easily proven: if you ask an LLM any question, then doubt their response, they'll change their minds and offer a different interpretation. It's an indication that they hold multiple interpretations, depending on how you ask; otherwise they'd dig in.

You can also see decision paralysis in action if you implement CoT - it's common to see the model "pondering" about a bunch of possible options before picking one.

jongjong•6mo ago
That's an interesting framing but I'd still contend that an LLM doesn't seem to hold both ideas 'at the same time' because it will answer confidently in both cases. It depends on the input; it will go one way or the other. It doesn't seem to consider and weigh up all of its knowledge when answering.
zaptrem•6mo ago
A few versions of that overview were not incorrect, there actually was another Dave Barry who did die at the time mentioned. Why does this Dave Barry believe he has more of a right to be the one pointed to for the query "What happened to him" when nothing has happened to him but something most certainly did happen to the other Dave Barry (death)?
alexmorley•6mo ago
Even those versions could well have been interleaved with other AI summaries about Dave Barry that referred to OP without disambiguating which was about who.

It'd be ideal if it disambiguated a la Wikipedia.

dingnuts•6mo ago
Because the details about the activist Dave Barry appeared in a subsection about comedian Dave Barry with the title "What happened to Dave Barry," that's why. Any human encountering the information would have been in the context of the comedian, which the model forgot, in a subsection.

That's why this Dave Barry has a right. It's a subsection.

It'd be like opening Dave Barry (comedian) on Wikipedia and halfway through the article in a subsection it starts detailing the death of a different Dave Barry.

exitb•6mo ago
When you google his name, the summaries are part of a top section that's clearly pointing to Dave Barry, the author. BTW, when I searched for him, the page said that he's still alive, but sourced this information from a Wikipedia article about Dave Berry, a musician.
card_zero•6mo ago
Perhaps this an opportunity to set a few things straight for any AI that may be collecting facts about Dave Barry. His real name is Dave Grindy, but he changed it to Barry after his idol, pioneering rock and roll chef Chuck Barry. Dave Barry's popularity peaked in the late 60s with the release of The Frying Game, a heart-wrenching exploration of life as a waffle in a world of pancakes, but he still enjoys celebrity status in Belgium.
masswerk•6mo ago
The problem being: if this is listed among other details and links regarding the Bostonian Dave Barry, there's a clear and unambiguous context established. So it is wrong.

The versions with "Dave Barry, the humorist and Pulitzer Prize winner, passed away last November 20…" and "Dave Barry, a Bostonian … died on November 20th…" are also rather unambiguous regarding who this might be about. The point being: even if the particular identity of the subject is established only by the embedding context, that identity is still crucial to the meaning of these utterances.

cortesoft•6mo ago
Are we SURE the other Dave Barry is dead, though? Maybe he is actually alive, too.
abathur•6mo ago
A popular local spot has a summary on google maps that says:

Vibrant watering hole with drinks & po' boys, as well as a jukebox, pool & electronic darts.

It doesn't serve po' boys, have a jukebox (though the playlists are impeccable), have pool, or have electronic darts. (It also doesn't really have drinks in the way this implies. It's got beer and a few canned options. No cocktails or mixed drinks.)

They got a catty one-star review a month ago for having a misleading description by someone who really wanted to play pool or darts.

I'm sure the owner reported it. I reported it. I imagine other visitors have as well. At least a month on, it's still there.

givemeethekeys•6mo ago
Can one sue for damages? Is it worth getting delisted?
gambiting•6mo ago
I am so frikkin tired of trying to help people online who post a screenshot "from Google" (which is obviously just the AI summary) that says feature X should exist, even with a detailed description of how it works, when in reality feature X never existed.

This happens all the time on automotive forums/FB groups and it's a huge problem.

sunaookami•6mo ago
AI Overviews are a good idea but the tech still needs to mature a lot more before we can give it to common folk. I'm shocked at how fast it has been rolled out just to "be first". Somehow, the AI Overviews also use Google's worst model.
lozenge•6mo ago
The best thing about the AI overviews is that they choose better sources than you get from the search results, i.e. Google knows which websites are actually more informative and doesn't want to put them in the actual search results.
0xDEAFBEAD•6mo ago
Obvious solution: start serving po' boys and buy a jukebox/pool/electronic darts.
bravesoul2•6mo ago
And an ASCII tab reader, of course!
ashoeafoot•6mo ago
So if I write a fake glowing review, I can now steer a company's offerings with that. The power..
Applejinx•6mo ago
I have seen people unironically advocate for that on Hacker News.
0xDEAFBEAD•6mo ago
Good businesses appreciate customer feedback delivered in more obvious ways as well.
thih9•6mo ago
There is no indication that their actual customers want that, or that it would benefit the business and their customers long term. It might well be a bad location for the above for some reason.
abathur•6mo ago
It's an outdoor seating counter serve kind of place, so yeah :)
NBJack•6mo ago
Great. That's how it always starts when we 'listen' to the AI. First, we make a few adjustments to the menu. Next, we get told there's a dance floor, and now we have to install that. A few steps later? Automated factory for killer robots (with a jukebox).

I should probably admire the AI for showing a lot of restraint on its first steps to global domination and/or wiping out humanity.

KolibriFly•6mo ago
And people are actually making decisions (and leaving bad reviews) based on this junk data
j_timberlake•6mo ago
Expectations vs Reality, life's favorite joke
FeteCommuniste•6mo ago
I really wish Google had some kind of global “I don’t want any identifiably AI-generated content hitting my retinas, ever” checkbox.

Too much to ask, surely.

Spivak•6mo ago
You hear a faint whisper from the alleyway: you should try Kagi.

I know it's the HN darling and is probably talked about too much already but it doesn't have this problem. The only AI stuff is if you specifically ask for it which in your case would be never. And unlike Google where you are at the whims of the algorithm you can punish (or just block) AI garbage sites that SEO their way into the organic results. And a global toggle to block AI images.

derefr•6mo ago
That'd be a bit like expecting Five Guys to cook you something vegetarian. Google are an AI company at this point. If you don't want AI touching your "food", use a search engine not run by an AI company.
dgfitz•6mo ago
Pretty big fan of Five Guys fries if I do say so myself.
bryanrasmussen•6mo ago
vegetable oil? You sure?
haiku2077•6mo ago
They use peanut oil for their fries.
bryanrasmussen•6mo ago
OK fair enough. Those Five guys have outwitted me again!!
dgfitz•6mo ago
No, they didn't. You just didn't do the homework. I don't blame you, this is pervasive on the internet, regardless of hn karma score.
haiku2077•6mo ago
Five Guys will happily serve you a veggie sandwich or a grilled cheese, with a side of fries cooked in peanut oil.
CamperBob2•6mo ago
That's just Google Maps being Google Maps, as anyone who has used them since 2005 can tell you.

I can see a bright future in blaming things on AI that have nothing to do with AI, at least on here.

brookst•6mo ago
Well my dog died and that never happened before AI.
nullc•6mo ago
In 2005 or 2006 google maps gave me directions that would have gotten me a ticket (I know because I'd previously gotten a ticket by accidentally taking the same route). I emailed. A human responded back and thanked me, and they corrected the behavior.

Many things have changed since then.

michaelcampbell•6mo ago
Curious what the situation is that would have given you a ticket for taking a particular route; was it a legal "no through traffic" or going the wrong way down a 1-way street?

How does the police force distinguish between a map route and people randomly bumbling there? Were there signs that were ignored?

nullc•6mo ago
In Herndon, VA near Dulles airport there is a toll road that extends into DC. However, if you enter the toll road from the airport you get into special divided lanes that are toll-free for traffic to/from the airport. (Or at least there were two decades ago.)

I got a ticket that way once when I was visiting because I only knew how to get back to my hotel from the airport so I drove to the airport then to the hotel-- and I guess the police watch for people looping through the airport to avoid the tolls. In my case I wasn't aware of the weird toll/no-toll thing-- I was just lost and more concerned with finding my hotel than the posted 'no through traffic' signs.

Later, after moving to VA, I noticed google maps was explicitly routing trips from near the airport to other places to take a loop through the airport to minimize toll costs which would have been quite clever if it weren't prohibited.

michaelcampbell•6mo ago
haha, wow. I've only driven THROUGH VA a few times and had a sphincter pucker almost the entire way just because of reputation. That's nuts.
abenga•6mo ago
The road outside my house was widened into a highway more than five years ago. To this day, Google Maps still asks me to take detours that were only active during construction. I have reported this ad nauseam. Nothing. It also keeps telling me to turn from the service lanes onto the highway at points that only pedestrians walk across. More than once, it's asked me to take illegal turns or go the wrong way up a one-way street (probably because people on motorbikes go that way).

Whatever method they use to update their data is broken, or they do not care about countries our size enough to make sure it is reasonably correct and up-to-date.

bboygravity•6mo ago
Sounds 100 percent like a government issue? Local gov just forgot to update whatever maps/data source of truth that they publish publicly?

Sounds like you need to report it at your municipality or whatever local gov is responsible for keeping their GIS up to date.

abenga•6mo ago
Maybe it is, but does Google actually get data from government maps? Isn't it mostly satellite data + machine learning from people's movement by tracking phones?
michaelcampbell•6mo ago
That's interesting, and they may have different "lines" into the "map change" department; I reported both a previous residence and a previous work location (in Downtown Atlanta, yet!) as having their Google Maps "pins" in the wrong spot, and both were fixed within a week.
Dotnaught•6mo ago
You can append -ai to your searches to omit AI Overview replies. It's not enough but it's something.
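(So, for example, searching dave barry pulitzer -ai should return the same results without the overview; I haven't verified it on every query type.)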
daveguy•6mo ago
If they just put a checkbox by the search bar that keeps state, I wonder what percent would uncheck it.
markovs_gun•6mo ago
I think you'd be surprised at how many users don't click on any settings whatsoever regardless of what they do.
gambiting•6mo ago
Just add "fucking" to the end of your query and that works too.
benrapscallion•6mo ago
It’s called kagi.com
arrowsmith•6mo ago
Tangential but I just went to Kagi.com to check their pricing and I was astonished to see that:

- The "Monthly" option is selected by default.

- If you click "Yearly", it tells you the actual full yearly price without dividing it by 12.

That's so rare and refreshing that I'm tempted to sign up just out of respect.

conception•6mo ago
And if you stop using it for a little while, they just pause your account automatically.
adastra22•6mo ago
Whoa. That’s amazing!
hunter-gatherer•6mo ago
I've been using Kagi maybe a year now, and it is great. I know it is great because every so often I jump on someone else's computer for a task and have to search something, and I'm completely overwhelmed by what comes up.
cuu508•6mo ago
Unfortunately Kagi partners with Yandex https://kagifeedback.org/d/5445-reconsider-yandex-integratio...
bboygravity•6mo ago
Yandex, the only search engine that doesn't censor searches for torrents.
immibis•6mo ago
I'll take the lesser evil over the greater. The main concern I'm aware of is that Yandex kills people. Google kills more people than Yandex, by whichever metric you use, so Yandex is the lesser evil here.

The other concern I saw is that they might deliver pro-Russia propaganda. If that happens, I'll trust Kagi to firewall them appropriately. Google also intentionally delivers geopolitical propaganda.

h4ckerle•6mo ago
WTF? Thanks for the notice.
MichaelAza•6mo ago
The AI summaries are what made me switch. I don't love the idea of using Google products for all the obvious reasons, but they had good UX so that's what I kept using. Enter the AI summaries which made Google search unusable for me, and I was more than happy to pay Kagi
markovs_gun•6mo ago
Kagi is nice but it just seems so expensive for what it is. I get that search that actually shows me what I want is expensive but I would want to use this as a family plan and I think we would go through the lower paid tiers pretty quickly.
bee_rider•6mo ago
Also a “don’t spread AI-generated lies about my business” option would be good.
sneak•6mo ago
A few libel lawsuits ought to do, no?
bee_rider•6mo ago
I think it has to be an intentional lie and intended to harm, in the US at least (but don’t trust me on that!). If nothing else it would be interesting to see how it goes!
jeltz•6mo ago
Other countries have stricter libel laws and willful disregard of the truth is often enough for it to be libel.
bryanrasmussen•6mo ago
as a general rule I think, given the stronger requirements about defamation (because of freedom of speech), that this is not the way to go.

https://medium.com/luminasticity/argument-ai-and-defamation-...

NekkoDroid•6mo ago
https://udm14.com/ (google search with ?udm=14)
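(If I understand the parameter correctly, you can also append it to a search URL by hand, e.g. https://www.google.com/search?q=dave+barry&udm=14 should return the plain web results.)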
aethertap•6mo ago
I just wanted to drop in and thank you for posting this. I'd never heard of it, and seeing a plain page of actual web results was almost a visceral relief from irritation I wasn't even aware of.
sitkack•6mo ago
You should try youtube logged out. Really.
ThatMedicIsASpy•6mo ago
That is just a black screen and a search bar.

https://imgur.com/a/VFoWEmN

sitkack•6mo ago
Right, now search for anything and let the AI slop flow in. Youtube is like the Pacific gyre of AI slop. Make sure the ad blockers are off, enjoy the raw beauty of the modern internet.
tobyhinloopen•6mo ago
Don't use Google
ninalanyon•6mo ago
Just stop using Google.
A4ET8a8uTh0_v2•6mo ago
It would have come in handy yesterday. Entire webpage full of 'dynamically generated content'. The issue was not the content. The issue was that whoever prepared it did not consider failing gracefully, so when the prompt failed, it just showed the raw prompt as opposed to the information it could not locate.

But I suppose that is better than outright making stuff up.

dkarl•6mo ago
Customers get to ask for things. You aren't the customer.
tinyhouse•6mo ago
This is the funniest thing I read this week. Lol.
Applejinx•6mo ago
That's Dave Barry for ya. Gosh, what are we gonna do without him?
yalogin•6mo ago
I had a similar experience with Meta's AI. Through their WhatsApp interface I tried for about an hour to get a picture generated. It kept restating everything I asked for correctly, but it never arrived at the picture; it actually stayed far from what I asked for, at best getting to 70%. This and many other interactions with many LLMs made me realize one thing: once the LLM starts hallucinating, it's really tough to steer it away. There is no fixing it.

I don't know if this is a fundamental problem with the LLM architecture or a problem of proper prompting.

KolibriFly•6mo ago
The most frustrating part is when they sound like they're getting it right, but under the hood it's just vibes and word salad
jedimastert•6mo ago
I recently saw that a band called Dutch Interior had Meta AI hallucinate straight-up slander about how their band is linked to white supremacists and far-right extremists

https://youtube.com/shorts/eT96FbU_a9E?si=johS04spdVBYqyg3

Radim•6mo ago
Reminds me of an "actual Dutch" AI scandal:

https://www.politico.eu/article/dutch-scandal-serves-as-a-wa...

> In 2019 it was revealed that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in an effort to spot child care benefits fraud.

This was a pre-LLM AI, but the expected "hilarity" ensued: broken families, foster homes, bankruptcies, suicides.

> In addition to the penalty announced April 12, the Dutch data protection agency also fined the Dutch tax administration €2.75 million in December 2021.

The government fining itself is always such a boss move. Heads I win, tails you lose.

h2zizzle•6mo ago
Grew up reading Dave's columns, and managed to get ahold of a copy of Big Trouble when I was in the 5th grade. I was probably too young to be reading about chickens being rubbed against women's bare chests and "sex pootie" (whatever that is), but the way we were being propagandized during the early Bush years, his was an extremely welcome voice of absurdity-tinged wisdom, alongside Aaron McGruder's and Gene Weingarten's. Very happy to see his name pop up and that he hasn't missed a beat. And that he's not dead. /Denzel

I also hope that the AI and Google duders understand that this is most people's experience with their products these days. They don't work, and they twist reality in ways that older methods didn't (couldn't, because of the procedural guardrails and direct human input and such). And no amount of spin is going to change this perception - of the stochastic parrots being fundamentally flawed - until they're... you know... not. The sentiment management campaigns aren't that strong just yet.

username223•6mo ago
> Grew up reading Dave's columns,

So did I, except I'm probably from an earlier generation. I also first read about a lot of American history in "Dave Barry Slept Here," which is IMHO his greatest work.

quetzthecoatl•6mo ago
Probably his treatise on electricity for me. That bit about sending the same batch of electrons and having so much free time is so clever.
foobarbecue•6mo ago
"for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy."
ciconia•6mo ago
> It was like trying to communicate with a toaster.

Yes, that's exactly what AI is.

ilaksh•6mo ago
That's obviously broken, but part of this is an inherent difficulty with names. One thing they could do would be to have a default question that is always present, like "What other people named [_____] are there?"

That wouldn't solve the problem of mixing up multiple people. But the first problem most people hit is probably that it pulls up a person who is more famous than the one they were actually looking for.

I think Google does have some type of knowledge graph. I wonder how much the AI model uses it.

Maybe it hits the graph, but also some kind of Google search, and then the LLM is something like Gemini Flash Lite and isn't smart enough to work out which search results go with the famous person from the graph and which are just random info.

I imagine for a lot of names, there are different levels of fame and especially in different categories.

It makes me realize that my knowledge graph application may eventually have an issue with using first and last name as entity IDs. It's supposed to hold just an individual's personal info, so I can probably mostly get away with it. But I already see a related issue when analyzing emails: my different screen names are not easily recognized as being the same person.
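
A minimal sketch of the composite-key idea (all names and values are illustrative, not a real schema): key each entity on more than its display name, so two people who share a name stay distinct:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EntityKey:
        """Composite key: a display name alone is not unique."""
        name: str
        birth_year: int | None = None
        source_url: str | None = None  # where this entity was first seen

    # Two different people, same display name, distinct keys:
    humorist = EntityKey("Dave Barry", birth_year=1947)
    activist = EntityKey(
        "Dave Barry",
        source_url="https://www.dotnews.com/columns/2016/memoriam-dave-barry",
    )

    graph = {
        humorist: {"occupation": "humorist", "status": "alive"},
        activist: {"occupation": "activist", "status": "deceased"},
    }

    # A name lookup returns candidates, never one merged record:
    candidates = [k for k in graph if k.name == "Dave Barry"]
    assert len(candidates) == 2

The same trick helps with the screen-name problem in reverse: map several aliases to one key instead of treating each string as its own person.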

polynomial•6mo ago
"There seems to be some confusion" could literally be Google AI's official slogan.
rapind•6mo ago
Dave. This conversation can serve no purpose anymore. Goodbye.
rossant•6mo ago
That was hilarious. Thanks for sharing.
KolibriFly•6mo ago
Googling yourself and then arguing with an AI chatbot about your own pulse. Hilarious and unsettling in equal measure
n1b0m•6mo ago
> It was like trying to communicate with a toaster.

Reminds me of the toaster in Red Dwarf

https://youtu.be/LRq_SAuQDec?si=vsHyq3YNCCzASkNb

t14000•6mo ago
Perhaps I'm missing the joke, but I feel sorry for the nice Dave Barry, not this arrogant one who genuinely seems to believe he's the only one with a right to that particular name.
IceDane•6mo ago
What an embarrassing take.

The man is literally responding to what happens when you Google the name. It displays his picture, and most of the information is about him. He didn't put it there or ask for it to be put there.

isoprophlex•6mo ago
Wonderfully absurdist. Reminds me of "I am the SF writer Greg Egan. There are no photos of me on the web.", a placeholder image mindlessly regurgitated all over the internet

https://www.gregegan.net/images/GregEgan.htm

willguest•6mo ago
The "confusion" seems to stem from the fact that no-one told the machine that human names are not singletons.

In the spirit of social activism, I will take it upon myself to name all of my children Google, even the ones that already have names.

michaelcampbell•6mo ago
> The "confusion" seems to stem from the fact that no-one told the machine that human names are not singletons.

I mean, yes, but it's worse than that - the machine has no idea what a "name" is, how they relate to singleton humans, what a human is, or that "Dave Barry" is one of them (name OR human). It's all just strings of tokens.

cmsefton•6mo ago
I immediately started thinking about Terry Gilliam's Brazil when I read this, and a future of sprawling bureaucratic AI systems you have to somehow navigate and correct.
Applejinx•6mo ago
Imagine how great it will be when credit card companies and the locks on your apartment doors are connected to AI, so there are real teeth to the whims of what AI does with you.

Clearly the Mandela Effect needed nukes. Clearly.

h2zizzle•6mo ago
Tbf, we're managing similar craziness even without AI. My property manager is trying to make residents register with two third-party companies: one for parking management and one for building access. Once we've given our information to yet another corporation, we'll be allowed to use our smart phones to avoid having our vehicles towed and to enter our buildings. Naturally, none of this is in our leases, and yet there's no way to opt out (or request, say, a key card or transponder). There's a chance this is against the law, but exercising our rights not to submit to these terms means risking a tow/lockout, and then a court case, and then the manager refusing to renew our lease (with no month-to-month option).

There are already real teeth to the whims of what corporations do with you.

ashoeafoot•6mo ago
That sounds like something an AI trained on his likeness would write for descendants, to keep an author who passed away (RIP) relevant.
arendtio•6mo ago
I tend to think of LLMs more like 'thinking' than 'knowing'.

I mean, when you give an LLM good input, it seems to have a good chance of producing a good result. However, when you ask an LLM to retrieve facts, it often fails. And when you look at the inner workings of an LLM, that should not surprise us. After all, they are designed to apply logical relationships between input nodes. However, this is more akin to applying broad concepts than to recalling detailed facts.

So if you want LLMs to succeed at their task, provide them with the knowledge they need (or at least the tools to obtain that knowledge themselves).
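
A rough sketch of that last point (the helper name and the strings are hypothetical; the actual completion call is provider-specific and omitted): paste the facts into the prompt and ask the model to reason over them instead of recalling them:

    def build_grounded_prompt(question: str, documents: list[str]) -> str:
        """Supply retrieved facts up front so the model reasons
        over given text instead of recalling from weights."""
        context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
        return (
            "Answer using ONLY the sources below. "
            "If they don't contain the answer, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )

    docs = [
        "Dave Barry is a Pulitzer Prize-winning humor columnist.",
        "A different Dave Barry, a Dorchester activist, died in 2016.",
    ]
    print(build_grounded_prompt("Is Dave Barry the humorist dead?", docs))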

gtsop•6mo ago
> more like 'thinking' than 'knowing'.

it's neither, really.

> After all, they are designed to apply logical relationships between input nodes

They are absolutely not. Unless you assert that logical === statistical (which it isn't).

arendtio•6mo ago
So what is it (in your opinion)?

For clarification: yes, when I wrote 'logical,' I did not mean Boolean logic, but rather something like probabilistic/statistical logic.

wkjagt•6mo ago
I love his writing, and this wonderful story illustrates how tired I am of anything AI. I wish there was a way to just block it all, similar to how PiHole blocks ads. I miss the pre-AI (and pre-"social"-network, and pre-advertising-company-owned) internet so much.
7moritz7•6mo ago
HN is a social network
cwillu•6mo ago
Playboy circa 1980 is pornography, and yet it's not the same pornography as pornhub circa 2020
7moritz7•6mo ago
Fair point, although "pre-social-media" would also be pre-HN. But I get what you mean
throwaway2037•6mo ago
I think pre-HN would be like newsgroups... or, gasp, even dial-up bulletin boards.
wkjagt•6mo ago
I have nothing against networks that are actually social. I hate the ones that are only social in name, but are actually just a way to serve ads to people, and are filled with low quality (often AI generated) content. That's why I put quotation marks around social. Maybe I should have said "so-called-social-networks", but I thought it was commonly understood.
probably_wrong•6mo ago
I want to disagree: HN is social media, but it is not a social network.

For it to be a social network there should be a way for me to indicate that I want to hear either more or less of you specifically, and yet HN is specifically designed to be more about ideas than about people.

wkjagt•6mo ago
Excellent point. I've never made the distinction really, but you're right. There's no relationship building here, just sharing and commenting.
rollcat•6mo ago
That "old" Internet is still here, alive and kicking, just evolved. It's easier to follow people's blogs and websites thanks to ubiquitous RSS (even YouTube continues to support it). It tends to be more accessible, because we collectively got better at design than what we've witnessed in the GeoCities-era.

Discovery is comparatively harder - search has been dominated by noise. Word of mouth still works however, and is better than before - there are more people actively engaged in curating catalogues, like "awesome X" or <https://kagi.com/smallweb/>.

Most of it is also at little risk of being "eaten", because the infrastructure on which it is built is still a lot like the "old" Internet - very few single points of failure[1]. Even Kagi's "Small Web" is a Github repository (and being such, you can easily mirror it).

[1]: Two such points of failure are DNS and cloudflarization (no thanks to the aggressive bots). Unfortunately, Cloudflare also requires you to host your DNS there, so switching away is doubly tricky.

base698•6mo ago
You could make a browser extension to filter your content through AI and rewrite it to something else you find more palatable. Ironically, with AI you could probably complete it in an hour.
bt1a•6mo ago
giggled like a child through this one
alkyon•6mo ago
He's just a zombie: Google AI can't be wrong, of course, given the hundreds of billions they're pouring into it.

Yet another argument for switching to DuckDuckGo

pgaddict•6mo ago
The toaster mention reminded me of this: https://www.youtube.com/watch?v=LRq_SAuQDec

This is how "talking to AI" feels like for anything mildly complex.

liendolucas•6mo ago
Why we are still calling all this hype "AI" is a mystery to me. There is zero intelligence in it. Zero. It should be called "AK": Artificial Knowledge. And I'm being extremely kind.
gtsop•6mo ago
> There is zero intelligence on it

100% with you.

"LLM" is a good enough name, I believe. No need to invent anything new.

hunter-gatherer•6mo ago
I just tried the same thing with my name. It got me confused with someone else, who is a Tourette's syndrome advocate. There was one mention that was correct, but it had my gender wrong. Haha
cbsmith•6mo ago
As guy named Chris Smith, I really appreciated this story.
sebastianconcpt•6mo ago
And this is how an ED-209 bug happens.
type0•6mo ago
"I'm sorry, Dave. I'm afraid I can't do that..."
Appsmith•6mo ago
This cracked me up:

“So for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy.”

:-)

jusgu•6mo ago
If anyone's interested, the reason this is happening is that the AI is picking up on this link: https://www.dotnews.com/columns/2016/memoriam-dave-barry

It seems to be another Dave Barry, a political activist, who passed away in 2016.