
FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•51s ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•4m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•4m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•5m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•5m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•7m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•8m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•8m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•8m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•9m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•9m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•10m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•10m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•12m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•12m ago•1 comment

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•18m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•19m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•20m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•22m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•22m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•22m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•23m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
7•samasblack•25m ago•2 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•26m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•27m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•28m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•30m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•30m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•30m ago•1 comment

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•31m ago•0 comments

AI Hallucination Legal Cases Database

https://www.damiencharlotin.com/hallucinations/
86•Tomte•8mo ago

Comments

irrational•8mo ago
I still think confabulation is a better term for what LLMs do than hallucination.

Hallucination - A hallucination is a false perception where a person senses something that isn't actually there, affecting any of the five senses: sight, sound, smell, touch, or taste. These experiences can seem very real to the person experiencing them, even though they are not based on external stimuli.

Confabulation - Confabulation is a memory error consisting of the production of fabricated, distorted, or misinterpreted memories about oneself or the world. It is generally associated with certain types of brain damage or a specific subset of dementias.

bluefirebrand•8mo ago
You're not wrong in a strict sense, but you have to remember that most people aren't that strict about language.

I would bet that most people define the words like this:

Hallucination - something that isn't real

Confabulation - a word that they have never heard of

static_void•8mo ago
We should not bend over backwards to use language the way ignorant people do.
add-sub-mul-div•8mo ago
"Bending over backwards" is a pretty ignorant metaphor for this situation, it describes explicit activity whereas letting people use metaphor loosely only requires passivity.
furyofantares•8mo ago
I like communicating with people using a shared understanding of the words being used, even if I have an additional, different understanding of the words, which I can use with other people.

That's what words are, anyway.

dingnuts•8mo ago
I like calling it bullshit[0] because it's the most accurate, most understandable, and the most fun to use with a footnote

0 (featured previously on HN) https://link.springer.com/article/10.1007/s10676-024-09775-5

rad_gruchalski•8mo ago
Ignorance is easy to hide behind many words.
static_void•8mo ago
I'm glad we can agree. I also like communicating with people using a shared understanding of the words being used, i.e. their definitions.
furyofantares•8mo ago
You might be interested to learn that the people who write down the definitions in dictionaries consider themselves to be in the business of documenting usage, not bringing commandments on stone tablets down from the mountain.
static_void•8mo ago
Lol. https://news.ycombinator.com/item?id=44091037
AllegedAlec•8mo ago
We should not bend over backwards to use language the way anally retentive people demand we do.
rad_gruchalski•8mo ago
Ignorance clusters easily. You’ll have no problem finding like-minded company.
vkou•8mo ago
> Ignorance clusters easily.

So do pedantry and prickliness.

Intelligence is knowing that a tomato is a fruit; wisdom is not putting it in a fruit salad. It's fine to want to do your part to steer language, but this is not one of those cases where it's important enough for anyone to be an asshole about it.

AllegedAlec•8mo ago
Sure bud.
blooalien•8mo ago
The problem is that in some fields of study and work, and in some other situations, absolute clarity and accuracy are super important to avoid dangerous or harmful mistakes. Many of the sciences are that way, and A.I. is absolutely one of those sciences where communicating accurately can matter quite a lot. Otherwise you end up with massive misunderstandings about the technology being spread around as gospel truth by people who are quite simply misinformed (like you see happening right now with all the A.I. hype).
static_void•8mo ago
Just in case: you're talking about descriptivism vs. prescriptivism.

I'm a descriptivist. I don't believe language should have arbitrary rules, like which kinds of words you're allowed to end a sentence with.

However, to be an honest descriptivist, you must acknowledge that words are used in certain ways more frequently than others. Definitions attempt to capture the canonical usage of a word.

Therefore, if you want to communicate clearly, you should use words the way they are commonly understood to be used.

furyofantares•8mo ago
> However, to be an honest descriptivist, you must acknowledge that words are used in certain ways more frequently than others. Definitions attempt to capture the canonical usage of a word.

True. And that's generally how they order the definitions in the dictionary, in order of usage.

For example, "an unfounded or mistaken impression or notion" is indeed the 2nd definition in M-W for "hallucination", not the first.

trehalose•8mo ago
A dictionary entry's second definition isn't necessarily an uncommonly used one. It could be up to 49% of the word's usage (assuming the dictionary has such precise statistics).
resonious•8mo ago
I would go one step further and suppose that a lot of people just don't know what confabulation means.
maxbond•8mo ago
I think "apophenia" (attributing meaning to spurious connections) or "pareidolia" (the form of aphonenia where we see faces where there are none) would have been good choices, as well.
cratermoon•8mo ago
anthropoglossic systems.
Terr_•8mo ago
Largely Logorrhea Models.
rollcat•8mo ago
There's a simpler word for that: lying.

It's equally wrong, though. Lying implies intent. Stop anthropomorphising language models.

sorcerer-mar•8mo ago
Lying is different from confabulation. As you say, lying implies intent. Confabulation does not necessarily, ergo it's a far better word than either lying or hallucinating.

A person with dementia confabulates a lot, which entails describing reality "incorrectly", but it's not quite fair to describe it as lying.

rollcat•8mo ago
I took note, and I agree. The problem is that (as a non-native English speaker), I had to look the word up. I'm concerned that this nuance could escape people, even when they know what the word stands for.

"Making things up" is precise but wordy. "Lying" is good enough, obvious, and concise.

bandrami•8mo ago
A liar seeks to hide the truth; a confabulator is indifferent to the truth entirely. It's an important distinction. True statements can still be confabulations.
matkoniecz•8mo ago
And why is confabulation the better of those?
bee_rider•8mo ago
It seems like these are all anthropomorphic euphemisms for things that would otherwise be described as bugs, errors (in the “broken program” sense), or error (in the “accumulation of numerical error” sense), if LLMs didn’t have the easy-to-anthropomorphize chat interface.
diggan•8mo ago
Imagine you have a function called "is_true", but it only gets it right 60% of the time. We're doing this within CS/ML, so let's call that "correctness" or something fancier. In order for that function to be valuable, would we need to hit 100% correctness? I mean, probably most of the time, yeah. But sometimes, maybe even rarely, we're fine with it being less than 100%, but still as high as possible.

So from this point of view, it's not a bug or an error that it currently sits at 60%, but if we manage to find a way to hit 70%, it would be better. But in order to figure this out, we need to call this "correct for the most part, but could be better" concept something. So we look at what we already know and are familiar with, try to draw parallels, and maybe even borrow some names/words.
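Concretely, the "is_true" thought experiment could be sketched in a few lines of Python (all names here are illustrative, invented for the example):

    import random

    # Toy labeled dataset: statement -> ground truth.
    LABELS = {
        "water boils at 100 C at sea level": True,
        "the moon is made of cheese": False,
        "2 + 2 = 4": True,
        "Paris is the capital of Spain": False,
    }

    def is_true(statement):
        # Stand-in for an unreliable oracle: answers correctly ~60% of the time.
        truth = LABELS[statement]
        return truth if random.random() < 0.6 else not truth

    def correctness(predicate, labeled):
        # Fraction of labeled statements the predicate gets right.
        hits = sum(predicate(s) == truth for s, truth in labeled.items())
        return hits / len(labeled)

    # Any single run over four statements is noisy; averaging many runs
    # converges to the oracle's underlying rate.
    trials = [correctness(is_true, LABELS) for _ in range(1000)]
    print(sum(trials) / len(trials))  # ~0.6

The point of the framing: "correctness" is a measurable rate you can try to push from 60% toward 70%, not a binary bug/no-bug property.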

bee_rider•8mo ago
This doesn’t seem too different from my third thing, error (in the “accumulation of numerical error” sense).
timewizard•8mo ago
> but if we manage to find a way to hit 70%, it would be better.

Yet still absolutely worthless.

> "correct for most part, but could be better" concept something.

When humans do that we just call it "an error."

> so lets call that "correctness" or something

The appropriate term is "confidence." These LLM tools could all give you a confidence rating with each and every "fact" they attempt to relay to you. Of course they don't actually do that, because no one would use a tool that confidently gives you answers based on a 70% self-confidence rating.

We can quibble over terms but more appropriately this is just "garbage." It's a giant waste of energy and resources that produces flawed results. All of that money and effort could be better used elsewhere.

vrighter•8mo ago
And even those confidence ratings are useless, IMO. If trained on wrong data, the model will report high confidence for the wrong answer. And curating a dataset is a black art in the first place.
furyofantares•8mo ago
> These LLM tools could all give you a confidence rating with each and every "fact" they attempt to relay to you. Of course they don't actually do that, because no one would use a tool that confidently gives you answers based on a 70% self-confidence rating.

Why do you believe they could give you a confidence rating? They can't, at least not a meaningful one.
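The closest thing most LLM APIs expose is per-token log-probabilities, and a naive "confidence" built from them might look like the sketch below (plain Python with made-up numbers, no real API call). The catch is that such a score measures fluency of the token sequence, not factual accuracy, so a smooth fabrication can out-score a hesitant true answer:

    import math

    def sequence_confidence(token_logprobs):
        # Geometric mean of per-token probabilities (a perplexity-style score).
        mean_logprob = sum(token_logprobs) / len(token_logprobs)
        return math.exp(mean_logprob)

    # Hypothetical per-token logprobs for two answers.
    fluent_fabrication = [-0.1, -0.2, -0.1, -0.3]   # smooth, plausible-sounding
    hesitant_truth     = [-1.2, -0.9, -1.5, -1.1]   # halting but correct

    print(sequence_confidence(fluent_fabrication))  # ~0.84
    print(sequence_confidence(hesitant_truth))      # ~0.31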

diggan•8mo ago
> Yet still absolutely worthless.

Depends on the context, doesn't it? Nothing is usually 100% worthless or 100% "worthy"; there are grey areas in life where we're fine with "kind of right, most of the time". Are you saying these scenarios absolutely never exist in your world? I guess I'd be grateful if my life were always so easy.

georgemcbay•8mo ago
They aren't really bugs in the traditional sense, though, because all LLMs ever do is "hallucinate". Seeing what we call a hallucination as something fundamentally different from what we consider a correct response is further anthropomorphising the LLM.

We just label it with that word when it statistically generates something we know to be wrong, but functionally what it does in that case is no different from when it statistically generates something that we know to be correct.

skybrian•8mo ago
It’s a metaphor. A hardware “bug” is occasionally due to an actual insect in the machinery, but usually it isn’t, and for software bugs it couldn’t be.

The word “hallucination” was pretty appropriate for images made by DeepDream.

https://en.m.wikipedia.org/wiki/DeepDream

JimDabell•8mo ago
Confabulation is a good term for the majority of what is currently termed AI hallucinations, but there is still a good proportion that is accurately called hallucination.

For instance, if you give AI a photo and ask it to describe in detail what it sees, it will often report things that aren’t there. That’s not confabulation, that’s hallucination. But if you ask a general knowledge question with no additional context and it responds with something untrue, then that would be confabulation, I agree.

nurettin•8mo ago
IIRC, the reason it was originally called a hallucination was due to experiments like "clone github.com/bleh/blah", where it would give you a non-existent repository. You could list and edit the files and so on. Or you would ask it to list files in your home dir and go three directories deep, and it would keep on generating.
xyzal•8mo ago
Taking into account that the output usually has the structure of expert output (probably because it is correct language-wise and from a formal standpoint), I propose "bullshitting".
latexr•8mo ago
> I still think confabulation is a better term for what LLMs do than hallucination.

That battle is both lost and irrelevant. Don’t confuse accurate word usage with effective communication: most people don’t understand the nuance between hallucination and confabulation, nor do they care. Even if you convinced everyone in the world to start using “confabulation” right now, nothing would change.

You’re doing a disservice to your own cause by insisting on this pointless weak distinction. If you truly want to make a point about the issue with LLMs in a way anyone outside of HN might pay attention, suggest a simpler and stronger word people already know: “lying”, “bullshitting”, …

You can surely object to those suggestions: “LLMs don’t lie. That would require active deception which they are incapable of, being statistical token generators”. Which is true, but:

1. There are plenty of people who both believe LLMs are intelligent and capable of deception.

2. For everyone else, “bullshitted” is no more inaccurate than “hallucinated”, yet it conveys a stronger sense of urgency and of the care required in operation.

anshumankmr•8mo ago
Can we submit ChatGPT convo histories??
Flemlo•8mo ago
So what's the number of cases where it was wrong but no one checked?
add-sub-mul-div•8mo ago
Good point. People putting the least amount of effort into their job that they can get away with is universal, judges are no more immune to it than lawyers.
mullingitover•8mo ago
This seems like a perfect use case for a legal MCP server that can provide grounding for citations. Protomated already has one[1].

[1] https://github.com/protomated/legal-context
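The grounding step itself can be sketched in a few lines (everything here is hypothetical: the index, the case names, the helper; a real server like the one linked above would query an actual case-law database):

    # Stand-in for an authoritative case-law index.
    KNOWN_CASES = {
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
        "Marbury v. Madison, 5 U.S. 137 (1803)",
    }

    def unverified_citations(citations):
        # Flag any cited case the index cannot confirm.
        return [c for c in citations if c not in KNOWN_CASES]

    draft_citations = [
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
        "Smith v. Imaginary Corp., 999 U.S. 123 (2021)",  # fabricated
    ]
    print(unverified_citations(draft_citations))
    # -> ['Smith v. Imaginary Corp., 999 U.S. 123 (2021)']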

0xDEAFBEAD•8mo ago
These penalties need to be larger. Think of all the hours of work that using ChatGPT could save a lawyer. An occasional $2500 fine will not deter the behavior.

And this matters, because this database contains only the fabrications that got caught. What happens when a decision is formulated based on AI-fabricated evidence, and that decision becomes precedent?

Here in the US, our legal system is already having its legitimacy assailed on multiple fronts. We don't need additional legitimacy challenges.

How about disbarring lawyers who present confabulated evidence?

steve_gh•8mo ago
I'm presuming that if my case gets dismissed because my lawyer submits fabricated AI-generated materials to the court, then I would have a very good case against my lawyer for professional misconduct. And that the damages could be very high.