
I am building a cloud

https://crawshaw.io/blog/building-a-cloud
249•bumbledraven•3h ago•105 comments

Alberta startup sells no-tech tractors for half price

https://wheelfront.com/this-alberta-startup-sells-no-tech-tractors-for-half-price/
1701•Kaibeezy•15h ago•541 comments

Apple fixes bug that cops used to extract deleted chat messages from iPhones

https://techcrunch.com/2026/04/22/apple-fixes-bug-that-cops-used-to-extract-deleted-chat-messages...
558•cdrnsf•11h ago•137 comments

We found a stable Firefox identifier linking all your private Tor identities

https://fingerprint.com/blog/firefox-tor-indexeddb-privacy-vulnerability/
662•danpinto•14h ago•184 comments

Ars Technica: Our newsroom AI policy

https://arstechnica.com/staff/2026/04/our-newsroom-ai-policy/
34•zdw•2h ago•21 comments

5x5 Pixel font for tiny screens

https://maurycyz.com/projects/mcufont/
576•zdw•3d ago•121 comments

A True Life Hack: What Physical 'Life Force' Turns Biology's Wheels?

https://www.quantamagazine.org/what-physical-life-force-turns-biologys-wheels-20260420/
59•Prof_Sigmund•1d ago•10 comments

The Onion to Take over InfoWars

https://www.nytimes.com/2026/04/20/business/infowars-alex-jones-the-onion.html
118•lxm•2d ago•18 comments

Over-editing refers to a model modifying code beyond what is necessary

https://nrehiew.github.io/blog/minimal_editing/
352•pella•14h ago•202 comments

Tempest vs. Tempest: The Making and Remaking of Atari's Iconic Video Game

https://tempest.homemade.systems
65•mwenge•7h ago•22 comments

Website streamed live directly from a model

https://flipbook.page/
256•sethbannon•14h ago•74 comments

Technical, cognitive, and intent debt

https://martinfowler.com/fragments/2026-04-02.html
257•theorchid•15h ago•66 comments

Plexus P/20 Emulator

https://spritetm.github.io/plexus_20_emu/
15•hggh•3d ago•1 comment

Borrow-checking without type-checking

https://www.scattered-thoughts.net/writing/borrow-checking-without-type-checking/
53•jamii•5h ago•13 comments

OpenAI's response to the Axios developer tool compromise

https://openai.com/index/axios-developer-tool-compromise/
67•shpat•7h ago•35 comments

Ping-pong robot beats top-level human players

https://www.reuters.com/sports/ping-pong-robot-ace-makes-history-by-beating-top-level-human-playe...
115•wslh•16h ago•129 comments

Parallel agents in Zed

https://zed.dev/blog/parallel-agents
221•ajeetdsouza•14h ago•120 comments

Verus is a tool for verifying the correctness of code written in Rust

https://verus-lang.github.io/verus/guide/
49•fanf2•2d ago•9 comments

An amateur historian's favorite books about the Silk Road

https://bookdna.com/best-books/silk-road
6•bwb•1d ago•1 comment

Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model

https://qwen.ai/blog?id=qwen3.6-27b
824•mfiguiere•18h ago•383 comments

Scoring Show HN submissions for AI design patterns

https://www.adriankrebs.ch/blog/design-slop/
305•hubraumhugo•17h ago•218 comments

Your hex editor should color-code bytes

https://simonomi.dev/blog/color-code-your-bytes/
3•tobr•1d ago•0 comments

Ultraviolet corona discharges on treetops during storms

https://www.psu.edu/news/earth-and-mineral-sciences/story/treetops-glowing-during-storms-captured...
229•t-3•18h ago•65 comments

Arch Linux Now Has a Bit-for-Bit Reproducible Docker Image

https://antiz.fr/blog/archlinux-now-has-a-reproducible-docker-image/
31•maxloh•6h ago•3 comments

Flow Map Learning via Nongradient Vector Flow [pdf]

https://openreview.net/pdf?id=C1bkDPqvDW
23•E-Reverance•5h ago•0 comments

Bodega cats of New York

https://bodegacatsofnewyork.com
190•zdw•5d ago•70 comments

Workspace Agents in ChatGPT

https://openai.com/index/introducing-workspace-agents-in-chatgpt/
132•mfiguiere•14h ago•50 comments

Windows 9x Subsystem for Linux

https://social.hails.org/@hailey/116446826733136456
943•sohkamyung•22h ago•224 comments

The handmade beauty of Machine Age data visualizations

https://resobscura.substack.com/p/the-handmade-beauty-of-machine-age
33•benbreen•17h ago•1 comment

What killed the Florida orange?

https://slate.com/business/2026/04/florida-state-orange-food-houses-real-estate.html
151•danso•2d ago•135 comments

Ars Technica: Our newsroom AI policy

https://arstechnica.com/staff/2026/04/our-newsroom-ai-policy/
34•zdw•2h ago

Comments

gnabgib•2h ago
Doesn't need Ars Technica added to the title
ares623•1h ago
Trust, reputation, and credibility will become (even more of) a premium.
legitster•1h ago
AI is in danger of peeing in its own water source. It's unbelievably useful at imitating and generating content, but it needs enough original content to scrape and train on.

Google got one thing wrong and nearly destroyed the internet - people need to have an incentive to contribute content online, and that incentive should not be to game the system for advertising.

This in particular dawned on me when asking Claude for instructions on taking apart my dryer. There was literally only one webpage left on the internet with instructions for my particular dryer, and the page was more or less unusable, full of rotten links and riddled with adware. Claude did its best but filled in the missing diagrams with hallucinations.

I was imagining whether LLMs could finally solve the micropayments problem people have always proposed for the internet. Part of my monthly payment gets split between all of the sites the LLM scraped knowledge from. Paid out like Spotify pays out artists.

It might not be a lot of money, but it would certainly be more than the pitiful ad revenue you get from posting content online right now. And if I want to upload corrected instructions for repairing this dryer I would have reason to.
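The pro-rata split described could be sketched roughly like this (a purely hypothetical illustration: the sites, the fee, and the idea of counting how often each source is drawn on are all made up for the example, not any real payout scheme):

```python
# Hypothetical sketch: split one subscriber's monthly fee between the
# sites an LLM drew on, in proportion to usage -- like Spotify's
# per-stream model. Integer cents, with the last source absorbing
# rounding leftovers so the shares always sum to the full fee.

def split_subscription(monthly_fee_cents, usage_counts):
    """Divide a subscriber's fee across sources, proportional to usage.

    usage_counts: dict mapping source -> number of times the LLM
    drew on that source this month.
    """
    total = sum(usage_counts.values())
    if total == 0:
        return {}
    payouts = {}
    remainder = monthly_fee_cents
    items = sorted(usage_counts.items())
    for i, (source, count) in enumerate(items):
        if i == len(items) - 1:
            share = remainder  # last source gets the rounding remainder
        else:
            share = monthly_fee_cents * count // total
            remainder -= share
        payouts[source] = share
    return payouts

# Example: a $20/month subscription split across three (made-up) sites
print(split_subscription(2000, {
    "appliance-repair.example": 5,
    "recipes.example": 3,
    "forum.example": 2,
}))
```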

ares623•48m ago
> I was imagining whether LLMs could finally solve the micropayments problem people have always proposed for the internet. Part of my monthly payment gets split between all of the sites the LLM scraped knowledge from. Paid out like Spotify pays out artists.

As a software user I wish I could do the same for all the software I use.

defrost•1h ago
[1] AI-generated news is unhuman slop. Crikey is banning it (2024) - Crikey.com.au - https://www.crikey.com.au/2024/06/24/crikey-insider-artifici...

[2] Why Crikey retracted an article that we found out was written with AI help (2026) - https://www.crikey.com.au/2026/03/19/crikey-responds-to-ai-c...

  Yesterday, we published an article by a contributor who later confirmed they used AI in some aspects of its production.

  This goes against our editorial policies. As a result, we’ve taken down the story and the preceding three stories in the series.
[2] is an interesting follow-on from the policy set two years earlier [1], as the specific piece in question "used AI in some aspects of its production" but was largely a human-conceived, human-shaped, and human-written piece that was only "assisted" by AI.

The Australian Media Watch team looked at this tension closely and felt the retraction was unfair, pointing out that while slop is bad, assistance (subject to terms and conditions) can enhance a piece.

- Media Watch, likely geolocked to AU, might need a proxy - https://www.abc.net.au/mediawatch/episodes/ep-08/106487250

vintagedave•1h ago
> Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors...

This sounds like a direct callout to the incident earlier this year where an apparently sick staff member relied on an AI to reproduce quotes, and it did not. Ars retracted the article and the staff member was fired.

I have felt very ethically uneasy about this because the person was ill, and I emailed the Ars editorial team directly to express concern re labour conditions, and to note that it is the editorial team's responsibility to do things like check quotes.

Of course it is the journalist's responsibility: when you have a job you do your job by policy (I wonder if this policy existed in writing at the time of the firing?), plus, it is part of the job to be accurate. But I am also a firm believer that responsibility is greater at higher levels. This sounds like a direct abrogation of journalistic standards by the Ars editorial team.

lynx97•41m ago
> apparently sick staff member relied on an AI to reproduce quotes

"Apparently sick", you couldn't phrase it more accurately.

Kudos for firing them, the only valid course of action for a publisher.

applfanboysbgon•42m ago
Self-contradictory policy.

> Reporters may use AI tools vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets.

If this is their official policy, Ars Technica bears as much responsibility as the author they fired for the fabricated reporting. LLMs are terrible at accurately summarizing anything. They very randomly latch on to certain keywords and construct a narrative from them, with the result being something that is plausibly correct but in which the details are incorrect, usually subtly so, or important information is omitted because it wasn't part of the random selection of attention.

You cannot permit your employees to use LLMs in this manner and then tell them it's entirely their fault when the LLM makes mistakes, because you gave them permission to use something that will make mistakes, 100% without fail. My takeaway from this is to never trust anything that Ars reports, because their policy is to rely on plausible generated fictional research, and their solution to getting caught is to fire employees rather than taking accountability for doing actual research.

brey•29m ago
The next sentence after your quoted section:

“Even then, AI output is never treated as an authoritative source. Everything must be verified.”

applfanboysbgon•24m ago
Any verification process thorough enough to catch all LLM fabrications would take more work than simply not using the LLM in the first place. If anything, verifying what an LLM wrote is substantially more difficult than just reading the material it's "summarising", because you need to fully read and comprehend the material and then also keep in mind what the LLM generated in order to contrast, and at that point what the fuck are you even doing?

I believe this policy can never result in a positive outcome. The policy implicitly suggests that verification means taking shortcuts and letting fabrications slip through in the name of "efficiency", with the follow-up sentence existing solely so that Ars won't take accountability for enabling such a policy but instead place the blame entirely on the reporters it told to take shortcuts.

Paracompact•12m ago
> I believe this policy can never result in a positive outcome.

I get where you're coming from (I'm learning more and more over time that every sentence or line of code I "trust" an AI with, will eventually come back to bite me), but this is too absolutist. Really, no positive result, ever, in any context? We need more nuanced understanding of this technology than "always good" or "always bad."

applfanboysbgon•6m ago
I didn't say in any context. I'm specifically talking about this policy on journalistic research.
JumpCrisscross•11m ago
> Any verification process thorough enough to catch all LLM fabrications would take more work than simply not using the LLM in the first place

Sometimes you have a weak hunch that may take hours to validate. Putting an LLM to doing the preliminary investigation on that can be fruitful. Particularly if, as is often the case, you don't have a weak hunch but a small basket of them.

JumpCrisscross•12m ago
> the author they fired for the fabricated reporting

Didn't one of the magazine's editors share the byline?

fooker•10m ago
> LLMs are terrible at accurately summarizing anything.

I think you are perhaps stuck in 2023?

sharkjacobs•41m ago
> Our creative team may use AI tools in the production of certain visual material, but the creative direction and editorial judgment are human-driven.

As opposed to what? This is a little facetious, but what could it possibly mean to have creative direction and editorial judgement without human involvement?

Presumably we're talking about an image generated by a diffusion model or something, but further, an image which is generated without being edited by any human. The prompt used to generate the image isn't written by a human, and it can't really be based on the contents of the (human-authored and -edited) article either. No human may select the service or model used, and once generated, the image is published sight unseen, without being reviewed by any human.

If some kind of agentic AI does any of these things it is one which appears ex nihilo, spontaneously appearing without being created or directed by any human.

sharkjacobs•26m ago
There's a good post from Aurich in the comments of the article detailing the practical reality of how they (don't) use AI tools in their image work, but as a policy statement this sentence is 100% vibes, 0% actual guidance or restriction.
npodbielski•24m ago
It is nice to see, but I fear it will go the same way as printed papers did once news moved to the internet. I could buy a paper and read it, but why would I?

The same will most likely happen with human-written news versus cheap AI slop news. Why would anyone pay more for a higher-quality product when you can have a low-quality cheap one?

Look at food, for example: price is the most important factor in the choice of what you are going to buy. It will probably not happen now, in a few months, or even in a few years, but it will happen if models keep advancing.

riffraff•11m ago
> Our creative team may use AI tools in the production of certain visual material, but the creative direction and editorial judgment are human-driven.

How is this different from anyone else publishing AI slop images on their blog? Those people also direct the AI through prompting and evaluate the results.

I mean, use AI images, so long as they are not crap, but why keep up this charade of "we're authoring the slop"?

JumpCrisscross•8m ago
Context:

"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library.

...

I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.

...

Update: Ars Technica issued a brief statement admitting that AI was used to fabricate these quotes" [1].

[1] https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

Discussion: https://news.ycombinator.com/item?id=47009949

mellosouls•7m ago
Related discussions from a couple months ago:

Ars Technica fires reporter after AI controversy involving fabricated quotes (606 points, 394 comments)

https://news.ycombinator.com/item?id=47226608

Editor's Note: Retraction of article containing fabricated quotations (308 points, 211 comments)

https://news.ycombinator.com/item?id=47026071