frontpage.

The Joy of Playing Grandia, on Sega Saturn

https://www.segasaturnshiro.com/2025/11/27/the-joy-of-playing-grandia-on-sega-saturn/
36•tosh•1h ago•3 comments

LLM from scratch, part 28 – training a base model from scratch on an RTX 3090

https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch
45•gpjt•6d ago•0 comments

No ARIA is better than bad ARIA

https://www.w3.org/WAI/ARIA/apg/practices/read-me-first/
57•robin_reala•6d ago•24 comments

Epsilon: A WASM virtual machine written in Go

https://github.com/ziggy42/epsilon
49•ziggy42•1w ago•12 comments

Icons in Menus Everywhere – Send Help

https://blog.jim-nielsen.com/2025/icons-in-menus/
569•ArmageddonIt•15h ago•233 comments

Richard Stallman on ChatGPT

https://www.stallman.org/chatgpt.html
26•colesantiago•13m ago•12 comments

The universal weight subspace hypothesis

https://arxiv.org/abs/2512.05117
290•lukeplato•11h ago•103 comments

Kroger acknowledges that its bet on robotics went too far

https://www.grocerydive.com/news/kroger-ocado-close-automated-fulfillment-centers-robotics-grocer...
175•JumpCrisscross•11h ago•156 comments

Manual: Spaces

https://type.today/en/journal/spaces
66•doener•11h ago•7 comments

Jepsen: NATS 2.12.1

https://jepsen.io/analyses/nats-2.12.1
370•aphyr•16h ago•135 comments

Strong earthquake hits northern Japan, tsunami warning issued

https://www3.nhk.or.jp/nhkworld/en/news/20251209_02/
320•lattis•20h ago•148 comments

Microsoft increases Office 365 and Microsoft 365 license prices

https://office365itpros.com/2025/12/08/microsoft-365-pricing-increase/
393•taubek•21h ago•458 comments

Horses: AI progress is steady. Human equivalence is sudden

https://andyljones.com/posts/horses.html
421•pbui•10h ago•330 comments

AMD GPU Debugger

https://thegeeko.me/blog/amd-gpu-debugging/
253•ibobev•19h ago•45 comments

Launch HN: Nia (YC S25) – Give better context to coding agents

https://www.trynia.ai/
116•jellyotsiro•18h ago•75 comments

Let's put Tailscale on a jailbroken Kindle

https://tailscale.com/blog/tailscale-jailbroken-kindle
289•Quizzical4230•18h ago•67 comments

Has the cost of building software dropped 90%?

https://martinalderson.com/posts/has-the-cost-of-software-just-dropped-90-percent/
293•martinald•16h ago•436 comments

Trials avoid high risk patients and underestimate drug harms

https://www.nber.org/papers/w34534
128•bikenaga•16h ago•39 comments

IBM to acquire Confluent

https://www.confluent.io/blog/ibm-to-acquire-confluent/
404•abd12•21h ago•325 comments

Paramount launches hostile bid for Warner Bros

https://www.cnbc.com/2025/12/08/paramount-skydance-hostile-bid-wbd-netflix.html
326•gniting•21h ago•336 comments

The Lost Machine Automats and Self-Service Cafeterias of NYC (2023)

https://www.untappedcities.com/automats-cafeterias-nyc/
78•walterbell•10h ago•24 comments

Hunting for North Korean Fiber Optic Cables

https://nkinternet.com/2025/12/08/hunting-for-north-korean-fiber-optic-cables/
260•Bezod•18h ago•92 comments

A thousand-year-long composition turns 25 (2024)

https://longplayer.org/news/2024/12/31/a-thousand-year-long-composition-turns-25/
24•1659447091•4h ago•5 comments

Periodic Spaces

https://ianthehenry.com/posts/periodic-spaces/
20•surprisetalk•5d ago•6 comments

Cassette tapes are making a comeback?

https://theconversation.com/cassette-tapes-are-making-a-comeback-yes-really-268108
97•devonnull•5d ago•150 comments

Show HN: Fanfa – Interactive and animated Mermaid diagrams

https://fanfa.dev/
115•bairess•4d ago•26 comments

OSHW: Small tablet based on RK3568 and AMOLED screen

https://oshwhub.com/oglggc/rui-xin-wei-rk3568-si-ceng-jia-li-chuang-mian-fei-gong-yi
83•thenthenthen•5d ago•34 comments

Microsoft Download Center Archive

https://legacyupdate.net/download-center/
168•luu•3d ago•25 comments

AI should only run as fast as we can catch up

https://higashi.blog/2025/12/07/ai-verification/
167•yuedongze•17h ago•147 comments

A series of tricks and techniques I learned doing tiny GLSL demos

https://blog.pkh.me/p/48-a-series-of-tricks-and-techniques-i-learned-doing-tiny-glsl-demos.html
187•ibobev•18h ago•24 comments

The consumption of AI-generated content at scale

https://www.sh-reya.com/blog/consumption-ai-scale/
30•ivansavz•1w ago

Comments

bryanrasmussen•1w ago
Yeah, everything sounds like AI, and why is that? Well, it might be because everything is AI, but I think that writing style is more LinkedIn than LLM: the style of people who might get slapped down if they wrote something individual.

Much of the world has agreed to sound like machines.

Another thing I've noticed is that weird stuff that is perhaps off in some way, also gets accused of being LLMs because it doesn't feel right.

If you sound unique and weird you get accused of being a bad LLM that can't falsify humanity well enough, and if you sound boring and bland and boosterist, you get accused of being a good LLM.

You can't write like no one else, but you also can't write like everybody else.

1bpp•12h ago
Text feeling awkward or not flowing very well has ironically become a very strong signal for human-written text for me, and usually makes me pay more attention now
FarmerPotato•10h ago
When I encounter this LLM generated Mad Lib:

"We embody <adjective> <noun> through <adjective> <noun>, <adjective> <noun>, and <adjective> <noun>. "

my uncanny warning blares--so I test if it becomes more intelligible with the adjectives stripped out. These padded-out pabulums are the tells.

I hope Elements of Style is rediscovered.
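
A rough sketch of that strip-the-adjectives test (the puffery list below is an invented sample, not anyone's actual tooling, and Python is just one way to do it):

    # Toy sketch of the "strip the adjectives" test described above.
    # The puffery set is an invented sample, not an authoritative lexicon.
    PUFFERY = {
        "innovative", "seamless", "robust", "cutting-edge", "transformative",
        "holistic", "scalable", "world-class", "impactful", "dynamic",
    }

    def strip_puffery(text: str) -> str:
        """Drop puffery adjectives; keep everything else in order."""
        kept = [w for w in text.split() if w.lower().strip(",.") not in PUFFERY]
        return " ".join(kept)

    sentence = ("We embody innovative solutions through seamless integration, "
                "robust frameworks, and transformative partnerships.")
    print(strip_puffery(sentence))
    # -> We embody solutions through integration, frameworks, and partnerships.

If the sentence still parses once the adjectives are gone, it was mostly padding to begin with.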

chemotaxis•13h ago
The best part is that this article is almost certainly AI-generated or heavily AI-assisted too.

Before people get angry with me... there's plenty of small tells, starting with section headings, a lot of linguistic choices, and low information density... but more importantly, the author openly says she writes using LLMs: https://www.sh-reya.com/blog/ai-writing/#how-i-write-with-ll...

absoluteunit1•13h ago
Was thinking this as well.

Just skimming through the first two paragraphs felt like I was reading a ChatGPT response. That, and the fact that there are multiple em dashes in the intro alone.

spoiler•11h ago
Tangentially related, but I'm low key miffed that em dashes get a bad rep now because of AI.

They're a great way to "inject" something into a sentence, similar to how people speak in person. I feel like my writing style has now gotten worse because I have to dumb it down, or I'll be anxious that any writing/linguistic flourish will be interpreted as gen AI.

FarmerPotato•10h ago
I learned the em dash from The Mac Is Not a Typewriter. From now on I'll stick to the plain-ASCII -- to hopefully avoid the backlash.
112233•4h ago
I'm doubling down on em dashes. May even start using language-appropriate quotes too („aaa“ «bbb» 「ccc」 and so on). This meme about surface-level LLM tells is actively dangerous.
__del__•4h ago
i'll never give up the em dash. and i will continue to evangelize the en dash from now–forever (hint hint, ranges should use en dashes instead of hyphens).
phainopepla2•12h ago
I would think a decent LLM would know the difference between a metaphor and simile, unlike the author
DarkNova6•2h ago
Most likely. I got turned off instantly reading it but then realized that this is part of the joke.
SunshineTheCat•13h ago
What's crazy is you're starting to see an overreaction to this fact as well.

The other day I posted a short showcasing some artwork I made for a TCG I'm in the process of creating.

Comments poured in saying it was "doomed to fail" because it was just "AI slop"

In the video itself I explained how I made them, in Adobe Illustrator (even showing some of the layers, elements, etc).

Next I'm actually posting a recording of me making a character from start to finish, a timelapse.

It will be interesting to see if I get any more "AI slop" comments, but it's becoming increasingly difficult to share anything drawn now because people immediately assume it's generated.

p_l•12h ago
The people commenting about AI slop, or at least a considerable portion of them, do so because it allows them to feel morally superior at little effort.

Do not expect them to retract or stop if there's a way to not see the making of :P

nh23423fefe•12h ago
someone on the internet is wrong?
phainopepla2•12h ago
I have seen this as well. Any nicely formatted medium to long text without obvious errors immediately comes under suspicion, even without the obvious tells
raincole•8h ago
I feel you, but people nowadays go to extreme lengths to present AI-generated artwork as hand-drawn.

It's not even funny. You can google "asamiarts tracing over AI" and read the whole drama. They have not only a timelapse, but real-world footage as 'evidence.' And they are not the only case.

It's not a fight you can win. Either ignore the comments calling you AI or just use AI.

furyofantares•12h ago
Scroll through and read only the section headers. I would be shocked if this wasn't at the very least run through an LLM itself. For sure the section headers are; I'll skip the rest unless someone posts that it's worth a read for some reason.

It doesn't appear to be section headings glued together with bullet lists, so maybe the content really does retain the author's perspective, but at this point I'd rather skip stuff I know has been run through an LLM and miss a few gems than get slopped daily.

krupan•12h ago
"Now, with LLM-generated content, it’s hard to even build mental models for what might go wrong, because there’s such a long tail of possible errors. An LLM-generated literature review might cite the right people but hallucinate the paper titles. Or the titles and venues might look right, but the authors are wrong."

This is insidious and if humans were doing it they would be fired and/or cancelled on the spot. Yet we continue to rave about how amazing LLMs are!

It's actually the complete reverse with self-driving car AI. Humans crash cars and hurt people all the time. AI cars are already much safer drivers than humans. However, we all go nuts when a Waymo runs over a cat, but ignore the fact that humans do that on a daily basis!

Something is really broken in our collective morals and reasoning
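
A minimal sketch of the field-by-field checking that quote is pointing at, assuming a trusted bibliography to compare against (the trusted table and the cited entry below are stand-ins invented for illustration):

    # Minimal sketch: cross-check LLM-cited references against a trusted
    # bibliography, field by field. The trusted table is a stand-in for a
    # real bibliographic database; the cited entry's co-author is deliberately
    # wrong to show the kind of mismatch being described.
    TRUSTED = {
        "attention is all you need": {
            "authors": {"Vaswani", "Shazeer", "Parmar"},
            "venue": "NeurIPS",
        },
    }

    cited = [
        {"title": "Attention Is All You Need",
         "authors": {"Vaswani", "Hochreiter"},  # wrong co-author slipped in
         "venue": "NeurIPS"},
    ]

    for ref in cited:
        known = TRUSTED.get(ref["title"].lower())
        if known is None:
            print(f"unknown title: {ref['title']!r}")
        elif not ref["authors"] <= known["authors"]:
            print(f"author mismatch in {ref['title']!r}: {ref['authors'] - known['authors']}")
        elif ref["venue"] != known["venue"]:
            print(f"venue mismatch in {ref['title']!r}")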

carbarjartar•12h ago
> AI cars are already much safer drivers than humans.

I feel this statement should come with a hefty caveat.

"But look at this statistic" you might retort, but I feel the statistics people pose are weighted heavily in the autonomous service's favor.

The frontrunner in autonomous taxis only runs in very specific cities for very specific reasons.

I avoid using them in a feeble attempt to 'do my part,' but I was recently talking to a friend and was surprised to learn that they avoid these autonomous services because the cars drive what would be, to a human driver, very strange routes.

I wondered if these unconventional, often longer, routes were also taken in order to stick to well trodden and predictable paths.

"X deaths/injuries per mile" is a useless metric when the autonomous vehicles only drive in specific places and conditions.

To get the true statistic you'd have to filter the human driver statistics to match the autonomous services' data. Things like weather, cities, number of and location of people in the vehicle, and even which streets.

These service providers could do this; they have the data, compute, and engineering. But they are disincentivized to do so as long as everyone keeps parroting their marketing speak for them.
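
A rough illustration of that matched-conditions comparison (all numbers, cities, and column names below are invented; pandas is just one convenient way to show the shape of it):

    # Invented exposure/incident data: one row per (fleet, city, weather).
    # Everything here is made up purely to illustrate the comparison.
    import pandas as pd

    data = pd.DataFrame({
        "fleet":    ["av", "av", "human", "human", "human"],
        "city":     ["phoenix", "sf", "phoenix", "sf", "boston"],
        "weather":  ["clear", "clear", "clear", "clear", "snow"],
        "miles":    [2.0e6, 1.5e6, 9.0e8, 4.0e8, 3.0e8],
        "injuries": [1, 2, 950, 600, 700],
    })

    # Keep only human miles driven under conditions the AV fleet actually serves.
    av_domain = data[data.fleet == "av"][["city", "weather"]].drop_duplicates()
    human_matched = data[data.fleet == "human"].merge(av_domain, on=["city", "weather"])

    def injuries_per_million_miles(df: pd.DataFrame) -> float:
        return 1e6 * df.injuries.sum() / df.miles.sum()

    print("AV:             ", injuries_per_million_miles(data[data.fleet == "av"]))
    print("Human, overall: ", injuries_per_million_miles(data[data.fleet == "human"]))
    print("Human, matched: ", injuries_per_million_miles(human_matched))

The gap between the "overall" and "matched" human rates is exactly the part of the statistic the headline numbers gloss over.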

colonCapitalDee•10h ago
I don't know why that matters. The city selection and routing are part of the overall autonomous system. People get to where they need to be with fewer deaths and injuries, and that's what matters. I suppose you could normalize to "useful miles driven" to account for longer, safer routes, but even then the statistics are overwhelmingly clear that Waymo is at least an order of magnitude safer than human drivers, so a small tweak like that is barely going to matter.
carbarjartar•10h ago
> so a small tweak like that

Well, it would seem these autonomous driving service providers disagree with your claim that it is just a 'small tweak,' considering they only operate under these specific conditions when it would be to their substantial benefit to operate everywhere and at all times.

watwut•2h ago
> AI cars are already much safer drivers than humans.

Nothing like that has been shown. We have a bunch of very "motivated reasoning" kinds of studies, and the best you can conclude from them is that "some circumstances exist where AI cars are safer drivers." The common trick is to compare the overall human record with the AI car record in super-tailored circumstances.

They have the potential to be safer drivers one day, if they are produced by companies that are forced by regulation to care about safety.

tensegrist•12h ago
> There’s a frustration I can’t quite shake when consuming content now—

perhaps even a frustration you can't quite name

nh23423fefe•12h ago
gpt is eternal september for normies
pessimizer•12h ago
I'm pretty sure that the reason everything seems like AI is that AI produces stupid, pointless content at scale, and our "writers" have become people who generate stupid, pointless content at scale.

There's no reason for most things to have been written. Whatever point is being made is pointless. It's not really entertaining, it's meant to be identified with; it's not a call to any specific action; it doesn't create some new fertile interpretation of past events or ideas; it's not even a cry for help. It's just pointless fluff to surround advertising. From a high concept likely dictated by somebody's boss.

AI has no passion and no point. It is not trying to convince anyone of anything, because it does not care. If AI were trying to be convincing, it would try to conceal its own style. But it doesn't mean anything for an AI to try. It's just running through the motions of filling out an idea to a certain length. It's whatever the opposite of compression is.

A generation of writers raised on fanfiction and prestige TV who grew up to write Buzzfeed articles at the rate of five a day is indistinguishable from AI.

Why This Matters

FarmerPotato•10h ago
God help us if we give the bag of words a reason to live. It might try to be convincing.
SpaceManNabs•11h ago
> If something seems off, I can just regenerate and hope the next version is better. But that’s not the same as actually checking. It feels like a slot machine—pull the lever again, see if you get a better result—substitutes for the slower, harder work of understanding whether the output is correct.

What a great point. In some work loops I feel like I get addicted to seeing what pops up in the next generation.

One of the things I learned from moderating my internet usage is to not fall prey to recommendation systems. As in, when I am on the web, I only consume what I explicitly looked for, not what the algorithm thinks I should consume next.

Sites like Reddit and HN make this tricky.

Havoc•11h ago
I'm noticing this most in visual content rather than LLM text. The era when anyone young and perpetually online could spot AI via the uncanny valley was remarkably short-lived. [0]

>In the pre-LLM era, I could build mental models, rely on heuristics, or spot-check information strategically.

I wonder if this will be an enduring advantage of the current generation: building your formative world model in a pre-AI era. It seems plausible to me that anyone who built their foundations there has a much higher chance of having instincts that are more grounded, even if post-AI experiences are layered on later.

[0] https://www.reddit.com/r/antiai/comments/1p8z6y6/nano_banana...

nancyminusone•11h ago
In my opinion, this is the biggest (current) problem with AI. It is really good at that thing you used to do when you had to hit a word count in a school essay. How long until the world's hard drive space is filled up with filler words and paragraphs of text that goes nowhere, and how could you possibly search and find anything in such conditions?