frontpage.

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
2•sohimaster•1m ago•0 comments

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
2•harshalone•1m ago•1 comment

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•6m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•7m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•8m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
1•Brajeshwar•8m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•10m ago•1 comment

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•10m ago•0 comments

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
5•c420•10m ago•0 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•11m ago•0 comments

It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
1•HotGarbage•11m ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Back end)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•11m ago•1 comment

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•13m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
3•surprisetalk•16m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
3•TheCraiggers•17m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
2•birdculture•18m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
8•doener•19m ago•2 comments

MyFlames: View MySQL execution plans as interactive FlameGraphs and BarCharts

https://github.com/vgrippa/myflames
1•tanelpoder•20m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•20m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
3•tanelpoder•21m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•22m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
2•elsewhen•25m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•26m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•30m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
2•mooreds•30m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•31m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•31m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•31m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•31m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•32m ago•2 comments

The consumption of AI-generated content at scale

https://www.sh-reya.com/blog/consumption-ai-scale/
32•ivansavz•2mo ago

Comments

bryanrasmussen•2mo ago
Yeah, everything sounds like AI, and why is that? Well, it might be because everything is AI, but I think that writing style is more LinkedIn than LLM: the style of people who might get slapped down if they wrote something individual.

Much of the world has agreed to sound like machines.

Another thing I've noticed is that weird stuff, stuff that is perhaps off in some way, also gets accused of being an LLM because it doesn't feel right.

If you sound unique and weird you get accused of being a bad LLM that can't falsify humanity well enough, and if you sound boring and bland and boosterist, you get accused of being a good LLM.

You can't write like no one else, but you also can't write like everybody else.

1bpp•2mo ago
Text feeling awkward or not flowing very well has ironically become a very strong signal of human-written text for me, and it usually makes me pay more attention now.
FarmerPotato•2mo ago
When I encounter this LLM-generated Mad Lib:

"We embody <adjective> <noun> through <adjective> <noun>, <adjective> <noun>, and <adjective> <noun>."

my uncanny warning blares--so I test whether it becomes more intelligible with the adjectives stripped out. This padded-out pabulum is the tell.

I hope The Elements of Style is rediscovered.
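A minimal sketch of that strip-the-filler test (purely illustrative: the adjective list below is invented, and a real pass would want an actual part-of-speech tagger rather than a hard-coded word list):

    import re

    # Invented list of the padded-out adjectives described above; a real
    # version would tag parts of speech instead of matching fixed words.
    FILLER_ADJECTIVES = {
        "innovative", "robust", "scalable", "seamless", "transformative",
        "holistic", "dynamic", "comprehensive", "impactful", "cutting-edge",
    }

    def strip_filler(text: str) -> str:
        """Drop the buzzword adjectives and collapse the leftover whitespace."""
        kept = [w for w in text.split()
                if w.lower().strip(",.;:") not in FILLER_ADJECTIVES]
        return re.sub(r"\s+", " ", " ".join(kept))

    def padding_ratio(text: str) -> float:
        """Fraction of words that vanish when the filler is stripped."""
        before = len(text.split())
        after = len(strip_filler(text).split())
        return 0.0 if before == 0 else (before - after) / before

    sample = ("We embody innovative solutions through seamless integration, "
              "robust processes, and transformative partnerships.")
    print(strip_filler(sample))                       # the de-padded sentence
    print(f"padding ratio: {padding_ratio(sample):.0%}")

If the de-padded version reads just as well, the adjectives were doing no work.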

chemotaxis•2mo ago
The best part is that this article is almost certainly AI-generated or heavily AI-assisted too.

Before people get angry with me... there are plenty of small tells, starting with the section headings, a lot of the linguistic choices, and the low information density... but more importantly, the author openly says she writes using LLMs: https://www.sh-reya.com/blog/ai-writing/#how-i-write-with-ll...

absoluteunit1•2mo ago
Was thinking this as well.

Just skimming through the first two paragraphs felt like I was reading a ChatGPT response. That, and the fact that there are multiple em dashes in the intro alone.

spoiler•2mo ago
Tangentially related, but I'm low key miffed that em dashes get a bad rep now because of AI.

They're a great way to "inject" something into a sentence, similar to how people speak in person. I feel like my written style has now gotten worse because I have to dumb it down, or I'll be anxious that any writing/linguistic flourish will be interpreted as gen AI.

FarmerPotato•2mo ago
I learned the em dash from The Mac Is Not A Typewriter. From now on I'll stick with the plain ASCII -- to hopefully avoid the backlash.
112233•2mo ago
I'm doubling down on emdashes. May even start using language-appropriate quotes too („aaa“ «bbb» 「ccc」and so on). This meme about surface-level LLM tells is actively dangerous.
__del__•2mo ago
i'll never give up the em dash. and i will continue to evangelize the en dash from now–forever (hint hint, ranges should use en dashes instead of hyphens).
phainopepla2•2mo ago
I would think a decent LLM would know the difference between a metaphor and a simile, unlike the author.
DarkNova6•2mo ago
Most likely. I got turned off instantly reading it but then realized that this is part of the joke.
SunshineTheCat•2mo ago
What's crazy is you're starting to see an overreaction to this fact as well.

The other day I posted a short showcasing some artwork I made for a TCG I'm in the process of creating.

Comments poured in saying it was "doomed to fail" because it was just "AI slop".

In the video itself I explained how I made them, in Adobe Illustrator (even showing some of the layers, elements, etc).

Next I'm actually posting a recording of me making a character from start to finish, a timelapse.

It will be interesting to see if I get any more "AI slop" comments, but it's becoming increasingly difficult to share anything drawn now, because people immediately assume it's generated.

p_l•2mo ago
The people commenting about AI slop, at least a considerable portion of them, do so because it allows them to feel morally superior at little effort.

Don't expect them to retract or stop if there's a way for them to not see the making-of :P

nh23423fefe•2mo ago
someone on the internet is wrong?
phainopepla2•2mo ago
I have seen this as well. Any nicely formatted medium to long text without obvious errors immediately comes under suspicion, even without the obvious tells
raincole•2mo ago
I feel you, but people nowadays go to extreme lengths to present AI-generated artwork as hand-drawn.

It's not even funny. You can google "asamiarts tracing over AI" and read the whole drama. They have not only a timelapse but real-world footage as 'evidence.' And they are not the only case.

It's not a fight you can win. Either ignore the comments calling you AI, or just use AI.

officeplant•2mo ago
>You can google "asamiarts tracing over AI" and read the whole drama.

My life has come full circle now that gooner AI art tracer drama gets mentioned on HN. It's crazy the lengths they went to in trying to cover it all up, including faked Procreate shots with botched UI elements.

furyofantares•2mo ago
Scroll through and read only the section headers. I would be shocked if this wasn't at the very least run through an LLM itself. The section headers certainly were; I'll skip the rest unless someone posts that it's worth a read for some reason.

It doesn't appear to be section headings glued together with bullet lists, so maybe the content really does retain the author's perspective, but at this point I'd rather skip stuff I know has been run through an LLM and miss a few gems than get slopped daily.

krupan•2mo ago
"Now, with LLM-generated content, it’s hard to even build mental models for what might go wrong, because there’s such a long tail of possible errors. An LLM-generated literature review might cite the right people but hallucinate the paper titles. Or the titles and venues might look right, but the authors are wrong."

This is insidious, and if humans were doing it they would be fired and/or cancelled on the spot. Yet we continue to rave about how amazing LLMs are!

It's actually a complete reversal of the situation with self-driving car AI. Humans crash cars and hurt people all the time. AI cars are already much safer drivers than humans. However, we all go nuts when a Waymo runs over a cat, but ignore the fact that humans do that on a daily basis!

Something is really broken in our collective morals and reasoning.

carbarjartar•2mo ago
> AI cars are already much safer drivers than humans.

I feel this statement should come with a hefty caveat.

"But look at this statistic" you might retort, but I feel the statistics people pose are weighted heavily in the autonomous service's favor.

The frontrunner in autonomous taxis only runs in very specific cities for very specific reasons.

I avoid using them in a feeble attempt to 'do my part', but I was recently talking to a friend and was surprised that they avoid these autonomous services because they drive what would be, to a human driver, very strange routes.

I wondered if these unconventional, often longer, routes were also taken in order to stick to well-trodden and predictable paths.

"X deaths/injuries per mile" is a useless metric when the autonomous vehicles only drive in specific places and conditions.

To get the true statistic you'd have to filter the human driver statistics to match the autonomous services' data. Things like weather, cities, number of and location of people in the vehicle, and even which streets.

These service providers could do this; they have the data, compute, and engineering. But they are disincentivized from doing so as long as everyone keeps parroting their marketing speak for them.
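A rough sketch of the like-for-like comparison being described (everything here is invented for illustration: the field names, the matching conditions, and the cities; no real crash data is involved):

    from dataclasses import dataclass
    from typing import Iterable

    @dataclass
    class DrivingRecord:
        miles: float
        injuries: int
        city: str
        weather: str   # e.g. "clear", "rain"
        night: bool

    def injuries_per_million_miles(records: Iterable[DrivingRecord]) -> float:
        records = list(records)
        miles = sum(r.miles for r in records)
        injuries = sum(r.injuries for r in records)
        return 0.0 if miles == 0 else injuries / miles * 1_000_000

    def matched_to_av_conditions(records: Iterable[DrivingRecord],
                                 cities: set[str]) -> list[DrivingRecord]:
        """Keep only the human-driver records from the cities, weather,
        and hours in which the AV fleet actually operates."""
        return [r for r in records
                if r.city in cities and r.weather == "clear" and not r.night]

    # Compare like with like instead of fleet-wide averages:
    # human_rate = injuries_per_million_miles(
    #     matched_to_av_conditions(human_records, cities={"Phoenix", "San Francisco"}))
    # av_rate = injuries_per_million_miles(av_records)

The point is that the denominator and the operating conditions have to match before the per-mile rates mean anything.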

colonCapitalDee•2mo ago
I don't know why that matters. The city selection and routing are part of the overall autonomous system. People get to where they need to be with fewer deaths and injuries, and that's what matters. I suppose you could normalize to "useful miles driven" to account for longer, safer routes, but even then the statistics are overwhelmingly clear that Waymo is at least an order of magnitude safer than human drivers, so a small tweak like that is barely going to matter.
carbarjartar•2mo ago
> so a small tweak like that

Well, it would seem these autonomous driving service providers disagree with your claim that it is just a 'small tweak', considering they only operate under these specific conditions when it would be to their substantial benefit to operate everywhere and at all times.

watwut•2mo ago
> AI cars are already much safer drivers than humans.

Nothing like that has been shown. We have a bunch of very "motivated reasoning" kinds of studies, and the best you can conclude from them is that "some circumstances exist where AI cars are safer drivers". The common trick is to compare the overall human record with the AI car record in super-tailored circumstances.

They have the potential to be safer drivers one day, if they are produced by companies that are forced by regulation to care about safety.

tensegrist•2mo ago
> There’s a frustration I can’t quite shake when consuming content now—

perhaps even a frustration you can't quite name

nh23423fefe•2mo ago
gpt is eternal september for normies
pessimizer•2mo ago
I'm pretty sure that the reason everything seems like AI is that AI produces stupid, pointless content at scale, and our "writers" have become people who generate stupid, pointless content at scale.

There's no reason for most things to have been written. Whatever point is being made is pointless. It's not really entertaining, it's meant to be identified with; it's not a call to any specific action; it doesn't create some new fertile interpretation of past events or ideas; it's not even a cry for help. It's just pointless fluff to surround advertising. From a high concept likely dictated by somebody's boss.

AI has no passion and no point. It is not trying to convince anyone of anything, because it does not care. If AI were trying to be convincing, it would try to conceal its own style. But it doesn't mean anything for an AI to try. It's just running through the motions of filling out an idea to a certain length. It's whatever the opposite of compression is.

A generation of writers raised on fanfiction and prestige TV, who grew up to write Buzzfeed articles at the rate of five a day, is indistinguishable from AI.

Why This Matters

FarmerPotato•2mo ago
God help us if we give the bag of words a reason to live. It might try to be convincing.
SpaceManNabs•2mo ago
> If something seems off, I can just regenerate and hope the next version is better. But that’s not the same as actually checking. It feels like a slot machine—pull the lever again, see if you get a better result—substitutes for the slower, harder work of understanding whether the output is correct.

What a great point. In some work loops I feel like I get addicted to seeing what pops up in the next generation.

One of the things I learned from moderating my internet usage is to not fall prey to recommendation systems. As in, when I am on the web, I only consume what I explicitly looked for, and not what the algorithm thinks I should consume next.

Sites like Reddit and HN make this tricky.

Havoc•2mo ago
I'm noticing this most in visual content rather than LLM text. The era when anyone young and perpetually online could spot AI via the uncanny valley was remarkably short-lived. [0]

>In the pre-LLM era, I could build mental models, rely on heuristics, or spot-check information strategically.

I wonder if this will be an enduring advantage of the current generation - building your formative world model in a pre-AI era. It seems plausible to me that anyone who built their foundations there has a much higher chance of having instincts that are more grounded, even if post-AI experiences are layered on later.

[0] https://www.reddit.com/r/antiai/comments/1p8z6y6/nano_banana...

nancyminusone•2mo ago
In my opinion, this is the biggest (current) problem with AI. It is really good at that thing you used to do when you had to hit a word count in a school essay. How long until the world's hard drive space is filled up with filler words and paragraphs of text that goes nowhere, and how could you possibly search and find anything in such conditions?