frontpage.

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
1•tanelpoder•36s ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•1m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
1•elsewhen•4m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•5m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•9m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
1•mooreds•9m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•10m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•10m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•10m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•10m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•11m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•12m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
2•nick007•13m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•14m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•14m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•17m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•18m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•18m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•18m ago•2 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•18m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•19m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•19m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•19m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•22m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•22m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
2•valyala•24m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•25m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•26m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
5•randycupertino•28m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•30m ago•0 comments

AI hallucinations will be solvable within a year (2024)

https://fortune.com/2024/04/16/ai-hallucinations-solvable-year-ex-google-researcher/
12•rvz•6mo ago

Comments

techpineapple•6mo ago
I wonder if it would be better to have 1 “perfect” LLM trying to solve problems or 5 intentionally biased LLMs.
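
(A minimal sketch of that ensemble idea, with hypothetical callables standing in for real LLM calls rather than any particular API: ask the same question to several differently-biased models and majority-vote, reporting disagreement instead of guessing.)

    from collections import Counter
    from typing import Callable

    # Sketch only: each "model" is a callable prompt -> answer; in practice each
    # would be an LLM given a different system prompt or fine-tune (hypothetical).
    Model = Callable[[str], str]

    def ensemble_answer(prompt: str, models: list[Model]) -> str:
        """Majority-vote over the models' answers; if no answer wins a strict
        majority, report the disagreement instead of picking a side."""
        answers = [m(prompt).strip().lower() for m in models]
        best, count = Counter(answers).most_common(1)[0]
        return best if count > len(models) // 2 else "no consensus"

    # Toy stand-ins so the sketch runs end to end; real models would go here.
    models = [lambda q: "paris", lambda q: "paris", lambda q: "lyon",
              lambda q: "paris", lambda q: "paris"]
    print(ensemble_answer("Capital of France?", models))  # -> paris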
d00mB0t•6mo ago
I'm so tired of these rich dweebs pontificating to everyone.
ipv6ipv4•6mo ago
There is ample evidence that hallucinations are incurable in the best extant model of intelligence - people.
add-sub-mul-div•6mo ago
Someday we'll figure out how to program computers to behave deterministically so that they can complement our human abilities rather than badly impersonate them.
Dylan16807•6mo ago
Getting down to the level of a median helpful human with the same knowledge would be a massive step forward.

Getting down to the level of a moderately humble expert taking the time to double check would be almost as good as solving it.

davesmylie•6mo ago
Obviously it's well over a year since this article was posted, and if anything I've anecdotally noticed hallucinations getting more, not less, common.

Possibly/probably, with another year's experience with LLMs, I'm just more attuned to noticing when they have lost the plot and are making shit up.

BoorishBears•6mo ago
RL for reasoning definitely introduces hallucinations, and sometimes it introduces a class of hallucinations that feels a lot worse than the classic ones.

I noticed OpenAI's models picked up a tendency to hold strong convictions on completely unknowable things.

"<suggests possible optimization> Implement this change and it will result in a 4.5% uplift in performance"

"<provides code> I ran the updated script 10 times and it completes 30.5 seconds faster than before on average"

It's bad enough that it convinces itself it did things it can't do, but then it goes further and hallucinates insights from the tasks it hallucinated itself doing in the first place!

I feel like lay people aren't ready for that. Normal hallucinations felt passive, like a slip-up. To the unprepared, this becomes more like someone actively trying to sell their slip-ups.

I'm not sure if it's a form of RL hacking making it through to the final model or what, but even OpenAI seems to have noticed it in testing based on their model cards.

alkyon•6mo ago
The more accurate word would be confabulation.
Wowfunhappy•6mo ago
You lost this battle, sorry. It's not going to happen.

Both terms are "inaccurate" because we're talking about a computer program, not a person. However, at this point "hallucination" has been firmly cemented in public discourse. I don't work in tech, but all of my colleagues know what an AI hallucination is, as does my grandmother. It's only a matter of time until the word's alternate meaning gets added to the dictionary.

peterashford•6mo ago
Correct. This is the way language works. It's annoying when you know what words mean but this is the way it is.
alkyon•6mo ago
Maybe I lost this battle, but also in science the terminology evolves. If you replace AI hallucination with AI confabulation, even your grandmother would get it right. I also don't agree that both terms are equally inaccurate.
Wowfunhappy•6mo ago
> Maybe I lost this battle, but also in science the terminology evolves.

Ah yes, science, where we have fixed stars that move, imaginary numbers that are real, and atoms that can be divided into smaller particles.

alkyon•6mo ago
All of these examples are many centuries old; let's compare apples to apples.
alkyon•6mo ago
Obviously, hallucination is by definition a perception, so it incorrectly anthropomorphizes AI models. On the other hand, the term confabulation involves filling in gaps with fabrication, exactly what LLMs do (aka bullshitting).
more_corn•6mo ago
What an absurd prediction.
DoctorOetker•6mo ago
In the article it is argued that the brainfarts could be beneficial for exploration of new ideas.

I don't agree. The "temperature" parameter should be used for this. Confabulation / bluff / hallucination / unfounded guesses are undesirable at low temperatures.
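
(For readers unfamiliar with the parameter being referenced: temperature rescales the model's next-token logits before sampling, so low values concentrate probability on the likeliest tokens and high values flatten the distribution for more exploratory output. A rough sketch with made-up logits, not tied to any real model:)

    import math
    import random

    def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
        """Softmax over logits / temperature, then sample one token.
        Low temperature -> near-greedy, conservative; high -> exploratory."""
        assert temperature > 0
        scaled = {t: v / temperature for t, v in logits.items()}
        m = max(scaled.values())  # subtract max for numerical stability
        exps = {t: math.exp(v - m) for t, v in scaled.items()}
        total = sum(exps.values())
        r, acc = random.random(), 0.0
        for t, e in exps.items():
            acc += e / total
            if r <= acc:
                return t
        return t  # floating-point fallback

    # Illustrative next-token logits only.
    logits = {"Paris": 5.0, "Lyon": 2.0, "Mars": 0.5}
    print(sample_with_temperature(logits, 0.2))   # almost always "Paris"
    print(sample_with_temperature(logits, 2.0))   # sometimes "Lyon", occasionally "Mars"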

Wowfunhappy•6mo ago
> “If you look at the models before they are fine-tuned on human preferences, they’re surprisingly well calibrated. So if you ask the model for its confidence to an answer—that confidence correlates really well with whether or not the model is telling the truth—we then train them on human preferences and undo this.”

Now that is really interesting! I didn't realize RLHF did that.
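
(What "well calibrated" means in that quote, as a rough sketch: bucket answers by the confidence the model reports and compare with how often those answers were actually right. The numbers below are illustrative, not from any real eval.)

    from collections import defaultdict

    def calibration_report(records: list[tuple[float, bool]], n_bins: int = 5) -> None:
        """Group (stated confidence, was correct) pairs into confidence buckets and
        print average confidence vs. observed accuracy; a well-calibrated model has
        accuracy roughly equal to confidence in every bucket."""
        bins: dict[int, list[tuple[float, bool]]] = defaultdict(list)
        for conf, correct in records:
            bins[min(int(conf * n_bins), n_bins - 1)].append((conf, correct))
        for idx in sorted(bins):
            pairs = bins[idx]
            avg_conf = sum(c for c, _ in pairs) / len(pairs)
            accuracy = sum(ok for _, ok in pairs) / len(pairs)
            print(f"confidence ~{avg_conf:.2f}: accuracy {accuracy:.2f} ({len(pairs)} answers)")

    # Illustrative data only: (model's stated confidence, whether the answer was right).
    calibration_report([(0.95, True), (0.90, True), (0.90, False), (0.85, True),
                        (0.60, True), (0.55, False), (0.30, False), (0.20, False)])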