Plwm – An X11 window manager written in Prolog

https://github.com/Seeker04/plwm
126•jedeusus•5h ago•24 comments

Ask HN: What are you working on? (May 2025)

87•david927•3h ago•262 comments

Path to a free self-taught education in Computer Science

https://github.com/ossu/computer-science
141•saikatsg•6h ago•68 comments

Lottie is an open format for animated vector graphics

https://lottie.github.io/
231•marcodiego•8h ago•99 comments

Writing your own CUPS printer driver in 100 lines of Python (2018)

https://behind.pretix.eu/2018/01/20/cups-driver/
126•todsacerdoti•7h ago•14 comments

Lisping at JPL (2002)

https://flownet.com/gat/jpl-lisp.html
92•adityaathalye•3d ago•19 comments

Claude 4 System Card

https://simonwillison.net/2025/May/25/claude-4-system-card/
517•pvg•17h ago•205 comments

Koog, a Kotlin-based framework to build and run AI agents in idiomatic Kotlin

https://github.com/JetBrains/koog
33•prof18•3d ago•4 comments

Show HN: Zli – A Batteries-Included CLI Framework for Zig

https://github.com/xcaeser/zli
52•caeser•6h ago•21 comments

Design Pressure: The Invisible Hand That Shapes Your Code

https://hynek.me/talks/design-pressure/
122•NeutralForest•9h ago•32 comments

Show HN: DaedalOS – Desktop Environment in the Browser

https://github.com/DustinBrett/daedalOS
99•DustinBrett•7h ago•19 comments

Writing a Self-Mutating x86_64 C Program (2013)

https://ephemeral.cx/2013/12/writing-a-self-mutating-x86_64-c-program/
62•kepler471•6h ago•19 comments

Denmark to raise retirement age to 70

https://www.telegraph.co.uk/world-news/2025/05/23/denmark-raise-retirement-age-70/
242•wslh•6h ago•585 comments

CAPTCHAs are over (in ticketing)

https://behind.pretix.eu/2025/05/23/captchas-are-over/
95•pabs3•22h ago•123 comments

Tariffs in American History

https://imprimis.hillsdale.edu/tariffs-in-american-history/
70•smitty1e•1d ago•106 comments

Martin (YC S23) Is Hiring Founding AI/Product Engineers to Build a Better Siri

https://www.ycombinator.com/companies/martin/jobs
1•darweenist•6h ago

Wrench Attacks: Physical attacks targeting cryptocurrency users (2024) [pdf]

https://drops.dagstuhl.de/storage/00lipics/lipics-vol316-aft2024/LIPIcs.AFT.2024.24/LIPIcs.AFT.2024.24.pdf
84•pulisse•11h ago•63 comments

Trading with Claude (and writing your own MCP server)

https://dangelov.com/blog/trading-with-claude/
16•dangelov•3d ago•5 comments

Is TfL losing the battle against heat on the Victoria line?

https://www.swlondoner.co.uk/news/16052025-is-tfl-losing-the-battle-against-heat-on-the-victoria-line
68•zeristor•14h ago•106 comments

'Strange metals' point to a whole new way to understand electricity

https://www.science.org/content/article/strange-metals-point-whole-new-way-understand-electricity
92•pseudolus•9h ago•29 comments

Can a corporation be pardoned?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5202339
43•megamike•6h ago•73 comments

Show HN: SVG Animation Software

https://expressive.app/expressive-animator/
160•msarca•11h ago•75 comments

On File Formats

https://solhsa.com/oldernews2025.html#ON-FILE-FORMATS
105•ibobev•4d ago•70 comments

Fanaka – a handbook for African success in the international software industry

https://fanaka.readthedocs.io
9•DanieleProcida•2d ago•0 comments

Dependency injection frameworks add confusion

http://rednafi.com/go/di_frameworks_bleh/
95•ingve•15h ago•108 comments

We broke down that weird 9-minute Sam Altman and Jony Ive video

https://sfstandard.com/2025/05/23/sam-altman-jony-ive-video/
6•herbertl•25m ago•1 comment

Now you can watch the Internet Archive preserve documents in real time

https://www.theverge.com/news/672682/internet-archive-microfiche-lo-fi-beats-channel
105•LorenDB•2d ago•10 comments

Programming on 34 Keys (2022)

https://oppi.li/posts/programming_on_34_keys/
51•todsacerdoti•10h ago•75 comments

Show HN: Wall Go – browser remake of a Devil's Plan 2 mini-game

https://schaoss.github.io/wall-go/
25•sychu•8h ago•8 comments

The Newark airport crisis

https://www.theverge.com/planes/673462/newark-airport-delay-air-traffic-control-tracon-radar
107•01-_-•6h ago•86 comments

AI Hallucination Cases Database

https://www.damiencharlotin.com/hallucinations/
60•Tomte•7h ago

Comments

irrational•6h ago
I still think confabulation is a better term for what LLMs do than hallucination.

Hallucination - A hallucination is a false perception where a person senses something that isn't actually there, affecting any of the five senses: sight, sound, smell, touch, or taste. These experiences can seem very real to the person experiencing them, even though they are not based on external stimuli.

Confabulation - Confabulation is a memory error consisting of the production of fabricated, distorted, or misinterpreted memories about oneself or the world. It is generally associated with certain types of brain damage or a specific subset of dementias.

bluefirebrand•6h ago
You're not wrong in a strict sense, but you have to remember that most people aren't that strict about language

I would bet that most people define the words like this:

Hallucination - something that isn't real

Confabulation - a word that they have never heard of

static_void•6h ago
We should not bend over backwards to use language the way ignorant people do.
add-sub-mul-div•5h ago
"Bending over backwards" is a pretty ignorant metaphor for this situation, it describes explicit activity whereas letting people use metaphor loosely only requires passivity.
furyofantares•5h ago
I like communicating with people using a shared understanding of the words being used, even if I have an additional, different understanding of the words, which I can use with other people.

That's what words are, anyway.

dingnuts•4h ago
I like calling it bullshit[0] because it's the most accurate, most understandable, and the most fun to use with a footnote

[0] (featured previously on HN) https://link.springer.com/article/10.1007/s10676-024-09775-5

rad_gruchalski•3h ago
Ignorance is easy to hide behind many words.
static_void•2h ago
I'm glad we can agree. I also like communicating with people using a shared understanding of the words being used, i.e. their definitions.
AllegedAlec•3h ago
We should not bend over backwards to use language the way anally retentive people demand we do.
rad_gruchalski•3h ago
Ignorance clusters easily. You’ll have no problem finding like-minded company.
vkou•3h ago
> Ignorance clusters easily.

So do pedantry and prickliness.

Intelligence is knowing that a tomato is a fruit; wisdom is not putting it in a fruit salad. It's fine to want to do your part to steer language, but this is not one of those cases where it's important enough for anyone to be an asshole about it.

rad_gruchalski•3h ago
It also becomes apparent that ignorance leads to a weird aggressive asshole fetish.

Hey… here’s a fruit salad with tomatoes: https://www.spoonabilities.com/stone-fruit-caprese-salad/.

AllegedAlec•2h ago
Sure bud.
blooalien•3h ago
Problem is that in some fields of study/work, and in some other situations, absolute clarity and accuracy are super important to avoid dangerous or harmful mistakes. Many of the sciences are that way, and A.I. is absolutely one of those sciences where communicating accurately can matter quite a lot. Otherwise you end up with massive misunderstandings about the technology being spread around as gospel truth by people who are quite simply misinformed (like you see happening right now with all the A.I. hype).
static_void•2h ago
Just in case you're talking about descriptivism vs. prescriptivism:

I'm a descriptivist. I don't believe language should have arbitrary rules, like which kinds of words you're allowed to end a sentence with.

However, to be an honest descriptivist, you must acknowledge that words are used in certain ways more frequently than others. Definitions attempt to capture the canonical usage of a word.

Therefore, if you want to communicate clearly, you should use words the way they are commonly understood to be used.

resonious•3h ago
I would go one step further and suppose that a lot of people just don't know what confabulation means.
maxbond•6h ago
I think "apophenia" (attributing meaning to spurious connections) or "pareidolia" (the form of aphonenia where we see faces where there are none) would have been good choices, as well.
cratermoon•6h ago
anthropoglossic systems.
Terr_•5h ago
Largely Logorrhea Models.
rollcat•5h ago
There's a simpler word for that: lying.

It's also equally wrong. Lying implies intent. Stop anthropomorphising language models.

sorcerer-mar•4h ago
Lying is different from confabulation. As you say, lying implies intent. Confabulation does not necessarily, ergo it's a far better word than either lying or hallucinating.

A person with dementia confabulates a lot, which entails describing reality "incorrectly", but it's not quite fair to describe it as lying.

bandrami•3h ago
A liar seeks to hide the truth; a confabulator is indifferent to the truth entirely. It's an important distinction. True statements can still be confabulations.
matkoniecz•5h ago
And why is confabulation the better of those?
bee_rider•5h ago
It seems like these are all anthropomorphic euphemisms for things that would otherwise be described as bugs, errors (in the “broken program” sense), or error (in the “accumulation of numerical error” sense), if LLMs didn’t have the easy-to-anthropomorphize chat interface.
diggan•5h ago
Imagine you have a function called "is_true" but it only gets it right 60% of the time. We're doing this within CS/ML, so let's call that "correctness" or something fancier. In order for that function to be valuable, would we need to hit 100% correctness? I mean probably most of the time, yeah. But sometimes, maybe even rarely, we're fine with it being less than 100%, but still as high as possible.

So from this point of view, it's not a bug or error that it currently sits at 60%, but if we manage to find a way to hit 70%, it would be better. But in order to figure this out, we need to call this "correct for the most part, but could be better" concept something. So we look at what we already know and are familiar with, and try to draw parallels, maybe even borrow some names/words.
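
To make that concrete, here's a minimal Python sketch; the is_true predicate, its ground truth, and the 60% figure are hypothetical stand-ins, not any real system:

    import random

    def is_true(statement: str) -> bool:
        """Hypothetical noisy predicate: agrees with ground truth ~60% of the time."""
        truth = len(statement) % 2 == 0  # stand-in ground truth
        return truth if random.random() < 0.6 else not truth

    def correctness(fn, cases) -> float:
        """Fraction of (input, expected) pairs the function gets right."""
        return sum(fn(x) == expected for x, expected in cases) / len(cases)

    cases = [(s, len(s) % 2 == 0) for s in ["a", "ab", "abc", "abcd"] * 250]
    print(f"measured correctness: {correctness(is_true, cases):.2f}")  # ~0.60

Raising that number from 0.60 to 0.70 is movement along a metric, not a bug fix, which is why it gets its own name.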

bee_rider•5h ago
This doesn’t seem too different from my third thing, error (in the “accumulation of numerical error” sense).
timewizard•4h ago
> but if we manage to find a way to hit 70%, it would be better.

Yet still absolutely worthless.

> "correct for most part, but could be better" concept something.

When humans do that we just call it "an error."

> so lets call that "correctness" or something

The appropriate term is "confidence." These LLM tools all could give you a confidence rating with each and every "fact" it attempts to relay to you. Of course they don't actually do that because no one would use a tool that confidently gives you answers based on a 70% self confidence rating.

We can quibble over terms, but more appropriately this is just "garbage." It's a giant waste of energy and resources that produces flawed results. All of that money and effort could be better used elsewhere.

vrighter•4h ago
and even those confidence ratings are useless, imo. If trained with wrong data, it will report high confidence for the wrong answer. And curating a dataset is a black art in the first place.
furyofantares•2h ago
> These LLM tools all could give you a confidence rating with each and every "fact" it attempts to relay to you. Of course they don't actually do that because no one would use a tool that confidently gives you answers based on a 70% self confidence rating.

Why do you believe they could give you a confidence rating? They can't, at least not a meaningful one.

georgemcbay•3h ago
They aren't really bugs in the traditional sense, though, because all LLMs ever do is "hallucinate"; seeing what we call a hallucination as something fundamentally different from what we consider a correct response is further anthropomorphising the LLM.

We just label it with that word when it statistically generates something we know to be wrong, but functionally what it did in that case is no different from when it statistically generates something we know to be correct.

skybrian•2h ago
It’s a metaphor. A hardware “bug” is occasionally due to an actual insect in the machinery, but usually it isn’t, and for software bugs it couldn’t be.

The word “hallucination” was pretty appropriate for images made by DeepDream.

https://en.m.wikipedia.org/wiki/DeepDream

anshumankmr•6h ago
Can we submit ChatGPT convo histories??
Flemlo•6h ago
So what's the number of cases where it was wrong but no one checked?
add-sub-mul-div•6h ago
Good point. People putting the least amount of effort into their job that they can get away with is universal; judges are no more immune to it than lawyers.
mullingitover•3h ago
This seems like a perfect use case for a legal MCP server that can provide grounding for citations. Protomated already has one[1].

[1] https://github.com/protomated/legal-context
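
For the curious, a rough sketch of what grounding a citation against such a server could look like, using the official MCP Python SDK; the server command and the verify_citation tool name are hypothetical placeholders, not taken from the Protomated repo:

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def check_citation(citation: str):
        # Launch the MCP server as a subprocess speaking stdio.
        params = StdioServerParameters(command="legal-context-server")  # hypothetical command
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Ask the server to verify the citation against real sources.
                result = await session.call_tool(
                    "verify_citation",  # hypothetical tool name
                    {"citation": citation},
                )
                return result.content

    print(asyncio.run(check_citation("Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)")))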

0xDEAFBEAD•1h ago
These penalties need to be larger. Think of all the hours of work that using ChatGPT could save a lawyer. An occasional $2500 fine will not deter the behavior.

And this matters, because this database is only the fabrications which got caught. What happens when a decision is formulated based on AI-fabricated evidence, and that decision becomes precedent?

Here in the US, our legal system is already having its legitimacy assailed on multiple fronts. We don't need additional legitimacy challenges.

How about disbarring lawyers who present confabulated evidence?