
Ars Technica fires reporter after AI controversy involving fabricated quotes

https://futurism.com/artificial-intelligence/ars-technica-fires-reporter-ai-quotes
98•danso•4h ago

Comments

ab_testing•1h ago
So they fired the author after he had publicly apologized on Bluesky.
coldtea•1h ago
"Apologized on Bluesky" is absolutely no reason to keep them. The author did the absolute worst things a journalist can do (short of actual corruption) and is unfit for the job:

- He didn't care about his story,
- he didn't care to verify his story,
- he published bullshit, made-up stuff,
- he put words in a real person's mouth,
- and he didn't even care to write the thing himself.

Why keep him and pay him? What mentality does all of the above show? What respect, either self-respect or respect for the job?

If they wanted stories from an LLM, they could pay for a subscription to one directly.

Hope this sends a message to journalist hacks who offload their writing or research to an LLM.

bingaweek•1h ago
What is the connection between these two statements? Are we supposed to presume that someone who apologizes on Bluesky should never be fired? Or did you read the article and think this was important information?
bigyabai•1h ago
Can you name any other way for Ars Technica to handle this situation without permanently soiling their reputation?
somenameforme•1h ago
He was supposed to be their "Senior AI Reporter." Him including basically anything from LLMs, without verifying it, in articles not only demonstrates a complete lack of credibility as a writer, but also a complete lack of understanding of AI. Even if they might have personally wanted to keep him on, you just can't after something like this.
danso•1h ago
Why would apologizing for plagiarism and fabrication preclude you from facing sanctions for plagiarism and fabrication?
bandrami•57m ago
That absolutely should be career-ending for a journalist, apology or no
landl0rd•53m ago
The raison d'être for the journalist, in AD 2026, is less to gather information than to verify it. The journalist who cannot be trusted is no journalist at all. He is a blogger.
add-sub-mul-div•1h ago
> senior AI reporter

A true "senior" AI reporter should be more skeptical of LLM output than anyone else.

zmmmmm•1h ago
I think that's the nail in the coffin. Most others could say it was a giant whoopsie, but here it goes to the heart of their credibility. How could they continue to write authoritatively about AI after doing this?
amarant•1h ago
I dunno. If AI doesn't write your articles, are you even an AI reporter?

Sorry, I never could resist a good dad joke

Revanche1367•1h ago
So the original blogger got slandered by an LLM agent, then got slandered again by a human journalist who used an LLM agent to write the article about him getting slandered by an LLM agent? How ironic.

But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.

amstan•41m ago
4 times, you forgot the owner of the bot that did the PR.
zarflax•16m ago
No, the journalist came in and slandered the LLM Twice and Jim Fell.
sparky_z•16m ago
He was only slandered once, by the LLM agent. The Ars Technica article presented paraphrases that it falsely attributed as direct quotes, and was therefore factually incorrect reporting. But it was not defamatory by any reasonable standard. Slander isn't just a synonym for "lie".
JumpCrisscross•1h ago
“Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had ‘no role in this error.’”

Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.

schiffern•1h ago
"I always have and always will abide by that rule to the best of my knowledge at the time a story is published."

Nothing suspicious about heavy use of qualifiers in a non-apology blanket denial. Where's the Polymarket for whether this guy has a job next month?

https://www.404media.co/ars-technica-pulls-article-with-ai-f...

JumpCrisscross•56m ago
> whether this guy has a job next month?

That's a problem. If he really hasn't apologized, neither he nor Ars has recognized that there is a problem, which means it will happen again.

slg•38m ago
Is there something to the story that I'm missing? Why does Orland need to apologize? Edwards fabricated the quotes via AI and seemingly presented them to Orland as authentic. Orland had no reason to suspect the quotes weren't real until after publishing.

When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.

You can say this is a failure of the editorial process for not including fact-checking, but that is an organizational issue with Ars; it's not Orland's fault for failing to duplicate work that he believed his coauthor did.

sl0pmaestro•59m ago
> while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him

Oh right, being ill is what caused the error. I'd bet that if you start verifying this author's past content, you'll find similar AI slop. Either that, or he has always been ill with very little sleep.

geerlingguy•56m ago
Context from earlier discussion of the article being pulled: https://news.ycombinator.com/item?id=47009949
dang•44m ago
Thanks! and indeed - here's the sequence (in the usual reverse order). If there are missing threads we can add them...

OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)

An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (82 comments)

Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)

An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (624 comments)

AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)

The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)

An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (951 comments)

AI agent opens a PR, writes a blog post to shame the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)

vadansky•54m ago
Good time to watch Shattered Glass.

Imagine what he could have gotten up to with LLMs.

jackyli02•54m ago
The title "reporter" deserves very little credence in AI coverage now. The public might be better off getting their information on AI from ChatGPT.
neya•52m ago
[flagged]
dang•41m ago
Would you please stop breaking the site guidelines? I just had to ask you this in a different context.

You may not owe your least favorite publications better, but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html

kittikitti•22m ago
"Don't feed egregious comments by replying; flag them instead."

You probably wish everyone would post as bots do, without em—dashes of course.

apparent•36m ago
Can you elaborate? Perhaps I haven't noticed that they push pro-sponsored content (what does this mean, exactly?). I do find their comment section to be pretty lousy, and very partisan. But the tech coverage always seemed fair enough. What am I missing?
sl0pmaestro•52m ago
Happy to see some accountability here, although it's unclear why the other co-author who put their name on that article was retained. Maybe they just stamped their name to meet an article quota. In any case, this follow-up action makes me take Ars Technica's standards a bit more seriously.
aizk•49m ago
I have a story with Benji.

Last year I went viral, and Benji was the first person to interview me. It was a really cool experience: we chatted via Twitter DMs, and he wrote a piece about my work. Overall he did a decent job.

Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response.

Then, TechCrunch wrote an article on our project.

I reached out to Benji again, saying, "Hey, would you like to chat again now that we have some coverage?" He finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)

I thought that was rather strange, especially since we already had built up a relationship.

I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.

Oh, one other tip for anyone reading this: if you ever get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.

areoform•28m ago
Sometimes people get busy and overwhelmed, but they don't know how to say no.
epistasis•26m ago
I know a lot of people that don't get through their email every week, for example. Even saying no takes too much time.
jmyeet•44m ago
The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie.

I wonder if these are the same people who, 3-4 years ago, were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion-dollar business.

Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.

rahimnathwani•37m ago
The headline says Ars fired the reporter, but AFAICT the article doesn't include any facts that indicate this. All we know is that he no longer works there, and that Ars refused to provide any additional information.
aidenn0•36m ago
I don't know that this is what happened here, but any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job, and from the outside, it looks like journalism has a push to do more with less.
raincole•27m ago
I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it went from "practically useless" to "the actual Google search" in less than two years.

I really don't know where the internet is heading or how any content site can survive.

SchemaLoad•15m ago
It's because the AI Overview is, most of the time, directly summarizing the search results rather than synthesizing an answer from internal model knowledge, which is why it can now hyperlink the sources for its facts. Even a very dumb, lightweight model can extract relevant text from articles.

I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.

Barrin92•26m ago
People have said enough about the ethics of all of it, but what I found even sadder is this: the story made me curious to look at the actual piece he "investigated" with AI. It's this one (https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...). It is, by the way, a bit more than 1k words, which takes the average American reader, never mind a senior journalist, ~5 minutes.

This whole story involved asking Claude to mine this text for quotes, which refused because it included harassment related content, then asking ChatGPT to explain that, and so on.

That entire ordeal probably generated more text from the chatbots than the blog post itself contains. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who reply "grok what does this mean" under every Twitter post. It's like a schoolchild who expends more energy cheating than it would take to just learn the material.

lich_king•24m ago
I clicked through the author's earlier stories when this first made waves. I obviously had no proof, but I was pretty certain that he's been using LLMs to generate stories for a good while.

When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?

nsxwolf•18m ago
Sad if true. I used to really enjoy reading his freelance articles in various publications pre-AI.
bragr•17m ago
The headline is a bit sensational considering all we know from the reporting is that he isn't working there anymore. Fired likely, sure, but not for a fact.
AnonC•12m ago
Journalists and bloggers usually write about others' mess-ups and apologies, dissecting which apologies are authentic and which are non-apologies.

In this incident, Aurich Lawson of Ars Technica deleted the original article (which had LLM-hallucinated quotes) instead of updating it to note the error. He then published a vague non-apology, just as large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica hasn't published even a snippet of an article about it.

There’s something to be said about the value of owning up to issues and being forthright with actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would’ve thought that Ars would be or could’ve been a beacon for how things should be talked about.

It’s sad to see Ars Technica at this level.

Meta’s AI smart glasses and data privacy concerns

https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-privacy-concerns-workers-say-we-see-e...
835•sandbach•6h ago•477 comments

British Columbia is permanently adopting daylight time

https://www.cbc.ca/news/canada/british-columbia/b-c-adopting-year-round-daylight-time-9.7111657
604•ireflect•8h ago•311 comments

Ars Technica fires reporter after AI controversy involving fabricated quotes

https://futurism.com/artificial-intelligence/ars-technica-fires-reporter-ai-quotes
98•danso•4h ago•43 comments

Simple Screw Counter

https://mitxela.com/projects/screwcounter
34•jk_tech•2d ago•8 comments

Show HN: I built a sub-500ms latency voice agent from scratch

https://www.ntik.me/posts/voice-agent
276•nicktikhonov•8h ago•76 comments

Intent-Based Commits

https://github.com/adamveld12/ghost
7•adamveld12•1h ago•2 comments

Guilty Displeasures

https://www.hopefulmons.com/p/what-are-your-guilty-displeasures
40•aregue•1d ago•48 comments

Seed of Might Color Correction Process (2023) [pdf]

https://andrewvanner.github.io/som/SoM_CC_Process_Day.pdf
80•haunter•6h ago•19 comments

New iPad Air, powered by M4

https://www.apple.com/newsroom/2026/03/apple-introduces-the-new-ipad-air-powered-by-m4/
352•Garbage•15h ago•564 comments

First in-utero stem cell therapy for fetal spina bifida repair is safe: study

https://health.ucdavis.edu/news/headlines/first-ever-in-utero-stem-cell-therapy-for-fetal-spina-b...
275•gmays•14h ago•51 comments

Physicists developing a quantum computer that’s entirely open source

https://physics.aps.org/articles/v19/24
63•tzury•6h ago•16 comments

Against Query Based Compilers

https://matklad.github.io/2026/02/25/against-query-based-compilers.html
49•surprisetalk•1d ago•20 comments

Launch HN: OctaPulse (YC W26) – Robotics and computer vision for fish farming

86•rohxnsxngh•12h ago•30 comments

Motorola announces a partnership with GrapheneOS

https://motorolanews.com/motorola-three-new-b2b-solutions-at-mwc-2026/
2113•km•22h ago•756 comments

Show HN: Govbase – Follow a bill from source text to news bias to social posts

https://govbase.com
176•foxfoxx•12h ago•72 comments

Moldova broke our data pipeline

https://www.avraam.dev/blog/moldova-broke-our-pipeline
10•almonerthis•2d ago•3 comments

The Cathode Ray Tube site

https://www.crtsite.com/didactic-crt.html
23•joebig•1d ago•2 comments

RCade: Building a Community Arcade Cabinet

https://www.frankchiarulli.com/blog/building-the-rcade/
64•evakhoury•4d ago•14 comments

The 185-Microsecond Type Hint

https://blog.sturdystatistics.com/posts/type_hint/
54•kianN•7h ago•7 comments

iPhone 17e

https://www.apple.com/newsroom/2026/03/apple-introduces-iphone-17e/
234•meetpateltech•15h ago•322 comments

Programmable Cryptography

https://0xparc.org/writings/programmable-cryptography-1
57•fi-le•2d ago•33 comments

Inside the M4 Apple Neural Engine, Part 1: Reverse Engineering

https://maderix.substack.com/p/inside-the-m4-apple-neural-engine
298•zdw•1d ago•79 comments

Ask HN: Who is hiring? (March 2026)

187•whoishiring•13h ago•232 comments

Elevated Errors in Claude.ai

https://status.claude.com/incidents/yf48hzysrvl5
41•LostMyLogin•1h ago•24 comments

Show HN: Visual Lambda Calculus – a thesis project (2008) revived for the web

https://github.com/bntre/visual-lambda
33•bntr•2d ago•4 comments

Welcome (back) to Macintosh

https://take.surf/2026/03/01/welcome-back-to-macintosh
287•Udo_Schmitz•8h ago•206 comments

Closure of the Weatheradio service in Canada

https://www.rac.ca/rac-responds-to-the-closure-of-the-weatherradio-service-in-canada/
112•da768•6h ago•51 comments

Reflex (YC W23) Is Hiring Software Engineers – Python

https://www.ycombinator.com/companies/reflex/jobs
1•apetuskey•12h ago

Ask HN: Who wants to be hired? (March 2026)

81•whoishiring•13h ago•204 comments

"That Shape Had None" – A Horror of Substrate Independence (Short Fiction)

https://starlightconvenience.net/#that-shape-had-none
85•casmalia•10h ago•16 comments