frontpage.

Some Things Just Take Time

https://lucumr.pocoo.org/2026/3/20/some-things-just-take-time/
201•vaylian•3h ago•90 comments

Grafeo – A fast, lean, embeddable graph database built in Rust

https://grafeo.dev/
93•0x1997•3h ago•24 comments

Passengers who refuse to use headphones can now be kicked off United flights

https://www.cnn.com/2026/03/21/travel/travel-news-happiest-countries
86•edward•51m ago•72 comments

Invisalign Became the Biggest User of 3D Printers

https://www.wired.com/story/how-invisalign-became-the-worlds-biggest-3d-printing-company/
28•mikhael•2d ago•17 comments

OpenCode – Open source AI coding agent

https://opencode.ai/
1130•rbanffy•21h ago•554 comments

ZJIT removes redundant object loads and stores

https://railsatscale.com/2026-03-18-how-zjit-removes-redundant-object-loads-and-stores/
35•tekknolagi•2d ago•2 comments

Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
19•Anon84•3h ago•4 comments

Meta's Omnilingual MT for 1,600 Languages

https://ai.meta.com/research/publications/omnilingual-mt-machine-translation-for-1600-languages/?...
93•j0e1•3d ago•28 comments

404 Deno CEO not found

https://dbushell.com/2026/03/20/denos-decline-and-layoffs/
189•WhyNotHugo•3h ago•128 comments

Ubuntu 26.04 Ends 46 Years of Silent sudo Passwords

https://pbxscience.com/ubuntu-26-04-ends-46-years-of-silent-sudo-passwords/
185•akersten•13h ago•212 comments

Books of the Century by Le Monde

https://standardebooks.org/collections/le-mondes-100-books-of-the-century
53•zlu•2d ago•30 comments

Mamba-3

https://www.together.ai/blog/mamba-3
246•matt_d•3d ago•48 comments

A Japanese glossary of chopsticks faux pas (2022)

https://www.nippon.com/en/japan-data/h01362/
402•cainxinth•21h ago•317 comments

Blocking Internet Archive Won't Stop AI, but Will Erase Web's Historical Record

https://www.eff.org/deeplinks/2026/03/blocking-internet-archive-wont-stop-ai-it-will-erase-webs-h...
375•pabs3•11h ago•108 comments

FFmpeg 101 (2024)

https://blogs.igalia.com/llepage/ffmpeg-101/
184•vinhnx•15h ago•7 comments

Show HN: Joonote – A note-taking app on your lock screen and notification panel

https://joonote.com/
9•kilgarenone•3h ago•0 comments

Senior European journalist suspended over AI-generated quotes

https://www.theguardian.com/technology/2026/mar/20/mediahuis-suspends-senior-journalist-over-ai-g...
54•Brajeshwar•3h ago•35 comments

Molly guard in reverse

https://unsung.aresluna.org/molly-guard-in-reverse/
184•surprisetalk•1d ago•76 comments

I Built a Spy Satellite Simulator in a Browser. Here's What I Learned

https://www.spatialintelligence.ai/p/i-built-a-spy-satellite-simulator
6•cyrc•3d ago•2 comments

Fujifilm X RAW STUDIO webapp clone

https://github.com/eggricesoy/filmkit
129•notcodingtoday•2d ago•47 comments

Iran launched unsuccessful attack on UK's Diego Garcia

https://www.bbc.com/news/articles/c5yljdgwppzo
77•alephnerd•3h ago•160 comments

How we give every user SQL access to a shared ClickHouse cluster

https://trigger.dev/blog/how-trql-works
47•eallam•4d ago•56 comments

Ghostling

https://github.com/ghostty-org/ghostling
287•bjornroberg•20h ago•60 comments

An industrial piping contractor on Claude Code [video]

https://twitter.com/toddsaunders/status/2034243420147859716
105•mighty-fine•2d ago•69 comments

Linux Applications Programming by Example: The Fundamental APIs (2nd Edition)

https://github.com/arnoldrobbins/LinuxByExample-2e
144•teleforce•18h ago•18 comments

The worst volume control UI in the world (2017)

https://uxdesign.cc/the-worst-volume-control-ui-in-the-world-60713dc86950
208•andsoitis•3d ago•104 comments

Attention Residuals

https://github.com/MoonshotAI/Attention-Residuals
221•GaggiX•1d ago•29 comments

We rewrote our Rust WASM parser in TypeScript and it got faster

https://www.openui.com/blog/rust-wasm-parser
270•zahlekhan•20h ago•175 comments

The Story of Marina Abramovic and Ulay (2020)

https://www.sydney-yaeko.com/artsandculture/marina-and-ulay
42•NaOH•2d ago•34 comments

Cryptography in Home Entertainment (2004)

https://mathweb.ucsd.edu/~crypto/Projects/MarkBarry/
72•rvnx•2d ago•40 comments

Senior European journalist suspended over AI-generated quotes

https://www.theguardian.com/technology/2026/mar/20/mediahuis-suspends-senior-journalist-over-ai-generated-quotes
54•Brajeshwar•3h ago

Comments

Chinjut•1h ago
Good lord, even the apology is AI generated: "That was not just careless—it was wrong."

https://pressanddemocracy.substack.com/p/i-am-admitting-my-m...

intended•1h ago
I’m tempted to agree, but this is a case where I think there’s more human than AI. Maybe he used LLMs for a bit, and changed parts of it. Maybe he is patient zero for LLM speak?
rsynnott•59m ago
Particularly given that the dreaded em-dash is not commonly used in Irish or UK English; it’s mostly a US English thing.
hvb2•28m ago
I think his apology was actually written in Dutch, so this might be an automated translation?

Source: https://www.linkedin.com/posts/peter-vandermeersch-a4381b30_...

the_biot•15m ago
His non-apology apology even follows a familiar pattern: I wrote it myself but just used AI for some help, and it inserted false quotes! Bad tech! But I have now learned my lesson!

Very similar to what a rector recently wrote when she got busted giving an AI-generated speech in her inaugural speech in her new university job.

None of it is true, of course. These people are just sorry they got caught.

phreack•1h ago
> “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author. Of course, I should have verified them. The necessary ‘human oversight’, which I consistently advocate, fell short.”

What? Irresistible quotes? This betrays a terrible way of thinking as a journalist. Basically an admission of wanting to fake news that'd sound good. At that point just write fiction.

sofixa•1h ago
> Basically an admission of wanting to fake news that'd sound good

How did you read that? Something sounding good and making sense and you wanting it to be true doesn't mean you'd fake it.

Obscurity4340•55m ago
Can't you, like, ask or instruct it to create a bibliography with citations, or at least put the source of any quote next to it for review purposes?
abaieorro•1h ago
> I wrongly put words into people’s mouths, when I should have presented them as paraphrases

Journalists have been doing this for decades: stitching and editing words out of context to put words into people's mouths! I will take AI hallucinations over journalist hallucinations any time; at least the machine has no hostile intent and is making a genuine error!

hulitu•1h ago
> I will take AI hallucinations over journalist hallucinations any time; at least the machine has no hostile intent

Famous last words. What do you think is the main application for AI? Spreading propaganda.

garciansmith•1h ago
The idea that somehow AI is magically unbiased and not influenced by those making it is incorrect.
mmooss•1h ago
They said earlier that they didn't verify the quotes. I understand them to mean that the LLM outputted text that included quotes. They assumed the output was accurate and found it so appealing, on an emotional level, that they just went with it without checking.

The most valuable lesson here, by far, is not about other people but about ourselves. This person is trained, takes it seriously, and advocates for making sure the AI is supervised, and got caught in the emotional manipulation of LLM design [0].

We all are at risk. If we look at the other person and mock them, and think we are better than them, we are only exposing ourselves to more risk. If we think - oh my goodness, look what happened, this is perilous - then we gain from what happened and can protect ourselves.

(We might also ask why this valuable tool includes such a manipulative interface. Don't take it for granted; it's not at all necessary for LLMs to work, and they could just as easily sound like a-holes.)

[0] I mean that obviously they are carefully designed to sound appealing

PeterStuer•1h ago
"Journalism" over here seems to have died a long time ago. Most if not all of the former "quality newspapers" unfortunately seem to have devolved into what could be more accurately described as "pro regime activist blogs".
camillomiller•1h ago
I have witnessed in person what LLMs have done to the mind of seemingly intelligent people. It’s a disaster.
cinntaile•1h ago
Don't leave us hanging. What happened?
dude250711•56m ago
They stop thinking and they stop verifying output too.
camillomiller•45m ago
A CTO sent me a message that opened with:

“Here’s a friendly message that will perfectly convey what you want to say”.

A double-PhD friend says she has to talk to ChatGPT for all sorts of advice and can't feel safe not doing it, "because you know I'm single and don't have a companion to spitball my ideas with". She let ChatGPT decide which route to take to a certain island, and she got stranded because the suggested service didn't exist.

I have more examples. It’s a fucking mind virus.

sigseg1v•30m ago
How is the getting-stranded example different from asking on a travel forum how to get somewhere, where an active and well-intentioned user who isn't familiar with your area of travel answers, gives you wrong instructions, and you get lost?
kibwen•19m ago
Because the vast and overwhelming majority of the time, if you ask a question into the ether that nobody has a good answer to, most people will gloss over it and not bother answering, as attested by decades of relatable memes ( https://xkcd.com/979/ ). In contrast, the chatbot is trained to always attempt to give an answer, and is seemingly disincentivized via its training to just shrug and say "I don't know, good luck fam".
shahbaby•18m ago
Because they aren't probabilistic parrots? If they get it wrong, there's usually an understandable reason behind it.
intended•1h ago
Looking at the media ecosystem at large gives me a case of gallows humor.

In some sections of the ecosystem, firms still penalize journalists for errors. In other sections, checking reduces the velocity of attention grabbing headlines. The difference in treatment is… farcical.

We need more good journalists, and more good journalism - but we no longer have ways to subsidize such work. Ads / classifieds are dead, and revenue accrues to only a few.

I have no idea how we square this circle.

PeterStuer•1h ago
We can't square this circle. It's why they're all A/B flipping headlines (resulting in the most deranged partisan clickbait), have killed off their (too expensive) newsrooms (especially for international news), rely solely on (barely) rewriting AP, Reuters and PRNewswire, and fill their sites with opinion rather than factual reporting, while lobbying for government handouts to the sector.
ashwinnair99•50m ago
The tool didn't fail here; the person did. An experienced journalist should know better. Editorial review exists for exactly this reason: if you skip it, this is what happens.
maxrmk•47m ago
Ironic coming from the Guardian. One of their journalists consistently publishes AI slop and the paper is in denial about it.

https://x.com/maxwelltani/status/2023089526445371777?s=46

zarzavat•41m ago
It doesn't seem AI generated to me. Are we at the point where you have to write in a particularly outrageous style in order to not be accused of using AI?
maxrmk•36m ago
Fair enough. It reads as extremely AI generated to me. But that isn’t completely reliable.
gruez•28m ago
>Are we at the point where you have to write in a particularly outrageous style in order to not be accused of using AI?

I don't think we've gotten to the extent that all popular writing styles (eg. hamburger paragraphs) are considered suspect, but the "it's not just X, it's Y" construction[1] attracts particular scrutiny.

[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...

philipp-gayret•25m ago
This is either ChatGPT or the one journalist who influenced all of ChatGPT's writing style.
gruez•23m ago
If you look at the replies[1] to that tweet, many commenters point out his style was entirely different prior to chatgpt.

[1] https://xcancel.com/maxwelltani/status/2023089526445371777?

philipp-gayret•21m ago
I was giving this the benefit of the doubt as well, and was just looking at his older writings, which have a little "This article is more than 5 years old" banner above them. Looks totally different indeed.
crop_rotation•38m ago
HN is full of people saying ABCD should know better, and honestly I thought the same. But when I look at almost all of my friends working in critical domains, such as judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly. People get defensive when I point out to them that ChatGPT will make things up and that this is widely known; some even tell me it is the fault of "tech people" for not fixing it, and that they can't be expected to double-check every ChatGPT conversation. So I am very sure this problem is more prevalent than what we see, and that it is going to keep increasing.
doctorpangloss•15m ago
on the flip side, so much chatgpt usage, full of flaws, doesn't seem to really matter in various "critical domains." you can't generalize "critical."
joe_mamba•12m ago
>but when I look at almost all of my friends working in critical domains like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly

That's why I lost trust and faith in people who end up in positions of doctor, lawyer, or judge. When I was young I used to think they must be the smartest, highest-IQ people in the world, having read the most books and having the highest levels of critical thinking and debate skills ever. In fact they were only good at memorizing and regurgitating the information their schooling required to pass the exam that gave them that prestigious title, and that's it. It's a miracle society functions at all.

andrewflnr•11m ago
Your friends should know better. That their behavior is prevalent does not contradict that.
shahbaby•21m ago
> That was not just careless – it was wrong

lol