frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


With Apple: Fortify your app: Essential strategies to strengthen security

https://developer.apple.com/events/view/TUHA23T82K/dashboard
1•pjmlp•1m ago•0 comments

AI analysis for UK Parliament bills

https://ukparliament.vercel.app/
1•ArisC•3m ago•0 comments

iPhotron 4.1.0 Is Released

https://github.com/OliverZhaohaibin/iPhotron-LocalPhotoAlbumManager/releases/tag/v4.1.0
1•main-protect•5m ago•0 comments

Court orders Acer and Asus to stop selling PCs in Germany over H.265 patents

https://videocardz.com/newz/acer-and-asus-are-now-banned-from-selling-pcs-and-laptops-in-germany-...
2•ledoge•5m ago•0 comments

The Prompt of Babel

https://joemclean.github.io/writing/the-prompt-of-babel.html
1•jjjjjjjjoe•7m ago•3 comments

How Can Something Fall Faster Than Gravity? [video]

https://www.youtube.com/watch?v=dosAbCCKXLs
1•zahlman•9m ago•0 comments

Top AI SDR tools analysis

https://revenuesystemslab.substack.com/p/ai-sdr-tools
1•Atbech•9m ago•0 comments

Pentagon threatens to cut off Anthropic in AI safeguards dispute

https://www.reuters.com/technology/pentagon-threatens-cut-off-anthropic-ai-safeguards-dispute-axi...
1•MKais•10m ago•0 comments

Baseband, Bessel and Beyond

https://www.youtube.com/watch?v=0GjWRQMFVA8
1•michh•10m ago•0 comments

Addicted to your phone? Try "bricking" it

https://economist.com/culture/2026/02/15/addicted-to-your-phone-try-bricking-it
1•andsoitis•11m ago•0 comments

Codeberg is why developers are broke

https://sharemygit.com/
2•onesandofgrain•19m ago•1 comment

Show HN: Claude-relais – A plan/build/judge loop mixing Claude with Cursor

https://github.com/clementrog/claude-relais
1•crog•19m ago•0 comments

Can agentic coding raise the quality bar?

https://lpalmieri.com/posts/agentic-coding-raises-quality/
2•LukeMathWalker•20m ago•1 comment

Learning Kubernetes with the official docs and NotebookLM

https://randomwrites.com/
1•mutahirs•20m ago•0 comments

List of Sports Clichés

https://en.wikipedia.org/wiki/List_of_sports_clich%C3%A9s
1•carlos-menezes•21m ago•0 comments

State Attorneys General Want to Tie Online Access to ID

https://reclaimthenet.org/40-attorneys-general-back-ids-online-safety-act
19•computerliker•22m ago•4 comments

Python Fiddle – Online Python IDE, Compiler, and Interpreter

https://python-fiddle.com
2•Curiositry•23m ago•0 comments

Large Language Model Reasoning Failures

https://arxiv.org/abs/2602.06176
1•kawera•24m ago•0 comments

When Your Ally Turns Narcissistic: Manual for Navigating Transatlantic Relations

https://gppi.net/2025/10/12/when-your-ally-turns-narcissistic
3•rendx•25m ago•1 comment

The Second Half of the Chessboard

https://joshs.bearblog.dev/the-second-half-of-the-chessboard/
2•psychedare•30m ago•1 comment

Peter Steinberger: I need AI that scans every PR and Issue and de-dupes

https://twitter.com/steipete/status/2023057089346580828
1•vibeprofessor•31m ago•0 comments

Show HN: VOOG – Moog-style polyphonic synthesizer in Python with tkinter GUI

https://github.com/gpasquero/voog
5•gpasquero•31m ago•1 comment

Accessibility Is All You Need – Why agent protocols for the web are redundant

https://github.com/webmachinelearning/webmcp/issues/91
2•lulzx•31m ago•1 comment

Miller – CLI tool for querying, shaping, and reformatting data in many formats

https://miller.readthedocs.io/en/6.16.0/
3•smartmic•32m ago•0 comments

What Happened in El Paso? – By James Fallows

https://fallows.substack.com/p/what-happened-in-el-paso
2•MaysonL•33m ago•0 comments

I made a real BMO local AI agent with a Raspberry Pi and Ollama

https://www.youtube.com/watch?v=l5ggH-YhuAw
1•emigre•37m ago•0 comments

Show HN: Let AI agents try things without consequences

https://github.com/multikernel/branching
2•wang_cong•37m ago•0 comments

California Drastically Reduces Creditor Exemptions for Qualified Accounts (2024)

https://www.forbes.com/sites/jayadkisson/2024/10/07/california-drastically-reduces-creditor-exemp...
2•dataflow•41m ago•0 comments

Show HN: A tool to keep dotfiles and system configs in sync with a Git repo

https://github.com/senotrusov/etcdotica
1•senotrusov•42m ago•0 comments

Show HN: Design Memory – Extract design systems from live websites via CLI

https://github.com/memvid/design-memory
1•saleban1031•44m ago•0 comments

Editor's Note: Retraction of article containing fabricated quotations

https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
66•bikenaga•1h ago

Comments

usefulposter•1h ago
tl;dr: We apologize for getting caught. Ars Subscriptors in the comments thank Ars for their diligence in handling an editorial fuckup that wasn't identified by Ars.
icegreentea2•1h ago
The comments are trending towards being more critical as of my posting. A lot more asking what they're going to do about the authors, and what the hell happened.
anonymous908213•55m ago
> Greatly appreciate this direct statement clarifying your standards, and yet another reason that I hope Ars can remain a strong example of quality journalism in a world where that is becoming hard to find

> Kudos to ARS for catching this and very publicly stating it.

> Thank you for upholding your journalistic standards. And a note to our current administration in DC - this is what transparency looks like.

> Thank you for upholding the standards of journalism we appreciate at ars!

> Thank you for your clarity and integrity on your correction. I am a long time reader and ardent supporter of Ars for exactly these reasons. Trust is so rare but also the bedrock of civilization. Thank you for taking it seriously in the age of mass produced lies.

> I like the decisive editorial action. No BS, just high human standards of integrity. That's another reason to stick with ARS over news feeds.

There is some criticism, but there is also quite a lot of incredible glazing.

icegreentea2•50m ago
Yeah, the initial comments are pretty glazey, but go to the second and third pages of comments (Ars sorts comments by time by default). I'll pull some quotes:

> If there is a thread for redundant comments, I think this is the one. I, too, will want to see substantially more followup here, ideally this week. My subscription is at stake.

> I know Aurich said that a statement would be coming next week, due to the weekend and a public holiday, so I appreciate that a first statement came earlier. [...] Personally, I would expect Ars to not work with the authors in the future

> (from Jim Salter, a former writer at Ars) That's good to hear. But frankly, this is still the kind of "isolated incident" that should be considered an immediate firing offense.

> Echoing others that I’m waiting to see if Ars properly and publicly reckons with what happened here before I hit the “cancel subscription” button

arduanika•27m ago
No reason to trust that the comment section is any more genuine than the deleted fake article. If an Ars employee used genAI to astroturf these comments, they clearly would not be fired for it or even called out by name.
malfist•1h ago
I don't know how you could possibly have that takeaway from reading this. They did a review of their content to confirm this was an isolated incident and reaffirmed that it did not follow the journalistic standards they have set for themselves.

They admit wrongdoing here and point to multiple policy violations.

add-sub-mul-div•53m ago
It's embarrassing for them to put out such a boilerplate "apology" but even more embarrassing to take it at its word.

It's such a cliché. They should have apologized in a human enough way that it didn't sound like the apology was AI-generated as well. It's one way they could have earned back a small bit of credibility.

misnome•41m ago
> That rule is not optional, and it was not followed here.

It’s not optional, but wasn’t followed, with zero repercussions.

Sounds optional.

throw3e98•28m ago
Reading between the lines, this is corporate-speak for "this is a terminable offense for the employees involved." It's a holiday weekend in the US so they may need to wait for office staff to return to begin the process.
lapcat•23m ago
> It's a holiday weekend in the US so they may need to wait for office staff to return to begin the process.

That's not how it works. It's standard op nowadays to lock out terminated employees before they even walk in the door.

Sometimes they just snail mail the employee's personal possessions from their desk.

Moreover, Ars Technica publishes articles every day. Aside from this editor's note, they published one article today and three articles yesterday. So "holiday weekend" is practically irrelevant in this case.

g947o•20m ago
They might as well wait till business hours to sort things out before publishing a statement. Nobody needs to see such hollow corpo speak on a Sunday.
maxbond•3m ago
No, admitting fault as soon as possible makes a big difference. It's essential to restoring credibility.

If they had waited until Monday the thread would be filled with comments criticizing them for waiting that long.

anonymous908213•1h ago
Zero repercussions for the senior editor involved in fabricating quotations (they neglect even to name the culprit). This is essentially an open confession that Ars has zero (really, negative) journalistic integrity and will continue to blatantly fabricate articles rather than even pretend to do journalism, so long as they don't get caught. Getting to the stage where an editor who has been at the company for 14 years is allowed to publish fraudulent LLM output, which is both plagiarism (claiming the output as his own) and the spread of disinformation by fabricating stories wholesale, indicates a deep cultural rot within the organisation that should warrant a response deeper than "oopsie". The publication of that article was not an accident.
maxbond•45s ago
What is the evidence that led you to believe there have been no repercussions?
add-sub-mul-div•1h ago
> We have covered the risks of overreliance on AI tools for years

If the coverage of those risks brought us here, of what use was the coverage?

Another day, another instance of this. Everyone who warned that AI would be used lazily without the necessary fact-checking of the output is being proven right.

Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today if you point out a typo.

esseph•1h ago
> Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today if you point out a typo.

I think this track is unavoidable. I hate it.

j0057•53m ago
Odd that there's no link to the retracted article.

Thread on Arstechnica forum: https://arstechnica.com/civis/threads/editor%E2%80%99s-note-...

The retracted article: https://web.archive.org/web/20260213194851/https://arstechni...

andrewflnr•50m ago
People put a lot of weight on blame-free post-mortems and not punishing people who make "mistakes", but I believe that has to stop at the level of malice. Falsifying quotes is malice. Fire the malicious party or everything else you say is worthless.
anonymous908213•46m ago
Yes. This is being treated as though it were a mistake, and oh, humans make mistakes! But it was no mistake. Possibly whoever was responsible for reviewing the article before publication made a mistake in not catching it. But plagiarism and fabrication require malicious intent, and the authors responsible engaged in both.
blell•45m ago
There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention.
anonymous908213•44m ago
Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.
furyofantares•32m ago
It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes.
blactuary•8m ago
He's on the AI beat, if he is unaware that a chatbot will fabricate quotes and didn't verify them that is a level of reckless incompetence that warrants firing
andrewflnr•43m ago
They're expected by policy to not use AI. Lying about using AI is also malice.
furyofantares•35m ago
It's a reckless disregard for the readers and the subjects of the article. Still not malice though, which is about intent to harm.
andrewflnr•27m ago
Lying is intent to deceive. Deception is harm. This is not complicated.
maxbond•17m ago
I think you're reading more intentionality into the situation than may actually be present; I have not seen information confirming or even suggesting that it is. Did someone challenge them, "was AI used in the creation of this article?" and they denied it? I see no evidence of that.

Seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior.

That's not a defence, to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened.

roxolotl•33m ago
The issues with such tools are highly documented though. If you’re going to use a tool with known issues you’d better do your best to cover for them.
kermatt•32m ago
Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.
InsideOutSanta•18m ago
I don't think the article was written by an LLM; it doesn't read like it, it reads like it was written by actual people.

My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.

They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.

I also think this "apology" article sucks, I want to know specifically what happened and what they are doing to fix it.

lapcat•28m ago
> Using a flawed tool doesn’t count as intention.

"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."

They aren't allowed to use the tool, so there was clearly intention.

gdulli•7m ago
The tool when working as intended makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.
skybrian•44m ago
I don’t see how you could know that without more information. Using an AI tool doesn’t imply that they thought it would make up quotes. It might just be careless.

Assuming malice without investigating is itself careless.

anonymous908213•42m ago
we are fucking doomed holy shit

we're really at the point where people are just writing off a journalist passing off their job to a chatgpt prompt as though that's a normal and defensible thing to be doing

maxbond•36m ago
No one said it was defensible. They drew a distinction between incompetence and malice. Let's not misquote each other here in the comments.
anonymous908213•31m ago
Even if it didn't fabricate quotes wholesale, taking an LLM's output and claiming it as your own writing is textbook plagiarism, which is malicious intent. Then, if you know that LLMs are next-token-prediction-engines that have no concept of "truth" and are programmed solely to generate probabilistically-likely text with no specific mechanism of anchoring to "reality" or "facts", and you use that output in a journal that (ostensibly) exists for the reason of presenting factual information to readers, you are engaging in a second layer of malicious intent. It would take an astounding level of incompetence for a tech journal writer to not be aware of the fact that LLMs do not generate factual output reliably, and it beggars belief given that one of the authors has worked at Ars for 14 years. If they are that incompetent, they should probably be fired on that basis anyways. But even if they are that incompetent, that still only covers one half of their malicious intent.
maxbond•24m ago
The article in question appears to me to be written by a human (excluding what's in quotation marks), but of course neither of us has a crystal ball. Are there particular parts of it that you would flag as generated?

Honestly, I'm just not astounded by that level of incompetence. I'm not saying I'm impressed or that it's okay. But I've heard much worse stories of journalistic malpractice. It's a topical, disposable article. Again, that doesn't justify anything, but it doesn't surprise me that a short summary of a series of forum exchanges and blog posts was low effort.

jemmyw•23m ago
They don't actually say it's a blame-free post-mortem, nor is it worded as such. They do say it's their policy not to publish anything AI-generated unless it's specifically labelled. So the assumption would be that someone didn't follow policy and there will be repercussions.

The problem is that people on the Internet, HN included, always howl for maximalist repercussions, i.e. that someone should be fired. I don't see that as a healthy or proportionate response; I hope they just reinforce the policy and everyone keeps their jobs and learns a little.

unethical_ban•45m ago
Who got fired?
netsharc•37m ago
The bylines are known; check in 4-5 months whether either or both names still appear on new articles.
maxbond•31m ago
They're both still on the staff page presently. https://arstechnica.com/staff-directory/

It is definitely not a good look for a "Senior AI Reporter."

throw3e98•30m ago
This is a US holiday weekend and lots of people are going to be on weekend vacations. Check back on Wednesday.
g947o•23m ago
Then they should take their time publishing this statement.

Nobody is in a hurry.

mzajc•38m ago
What are they changing to prevent this from happening in the future? Why was the use of LLMs not disclosed in the original article? Do they host any other articles covertly generated by LLMs?

As far as I can tell, the pulled article had no obvious tells and was caught only because the quotes were entirely made up. Surely it's not the only one, though?

g947o•24m ago
My read is, "Oops someone made a mistake and got caught. That shouldn't have happened. Let's do better in the future." and that's about it.
mrandish•23m ago
When an article is retracted, it's standard to at least mention the title and what specific information was incorrect, so that anyone who may have read, cited, or linked it knows what was inaccurate. That's the entire point of a retraction, and without it this non-standard retraction has no utility except as a fig leaf to keep external reporting from becoming a bigger story.

In the comments I found a link to the retracted article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje.... Now that I know which article, I know it's one I read. I remember the basic facts of what was reported but I don't recall the specifics of any quotes. Usually quotes in a news article support or contextualize the related facts being reported. This non-standard retraction leaves me uncertain if all the facts reported were accurate.

It's also common to provide at least a brief description of how the error happened and the steps the publication will take to prevent future occurrences. I assume any info on how it happened is missing because none of it looks good for Ars, but why no details on policy changes?

qnleigh•20m ago
Yes, I just read the retracted article and I can't find anything in it that I knew to be false. What were the fabricated quotes?
trevwilson•12m ago
This blog post from the person who was falsely quoted has screenshots and an archive link: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...