
Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•2m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•3m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•3m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•4m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•5m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•7m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•7m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•9m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•10m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•11m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•11m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•11m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•11m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•12m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•13m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•14m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•20m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•21m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•21m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
24•bookofjoe•21m ago•9 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•22m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•23m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•24m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•24m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•24m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•24m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•25m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•26m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•26m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•27m ago•0 comments

Ask HN: What's behind the strong anti-AI sentiment on Hacker News?

14•cloudking•8mo ago
I've noticed that most AI-related posts here receive a lot of anti-AI commentary. Why is that? Are people not finding these tools useful, even with the significant investment and hype in the space?

Comments

actionfromafar•8mo ago
A lot of the hype is very short-term and unrealistic, such as AGI. On the other hand, it's easy to underestimate the impact on a million mundane things.
neom•8mo ago
On top of this, I'd add: for me personally, the writing is on the wall for many things (like AGI). Now that the tech is clear and people can sketch out timelines, it becomes grating to hear about every tiny incremental update.
gtirloni•8mo ago
How is the tech clear for AGI?
actionfromafar•8mo ago
I think you should read that something like, "now that it's clear what the tech does - and it's not AGI".
neom•8mo ago
Yeah, poorly worded on my part. We don't all agree on what AGI even is, let alone when we'll have it or what it will be, but there's no harm in focusing on the super, super good auto-complete we now have.
incomingpain•8mo ago
People are scared of the unknown. They are scared that their livelihoods might be impacted.

With my flavour of autism, I have a weakness in communication, and AI spits out better writing than I do. Personally, I love that it helps me. Autism is a disability, and AI helps me through it.

Imagine, however, if you're an expert in communication; then this is a new competitor that's undefeatable.

gtirloni•8mo ago
Experts in communication might disagree with you. Just as experts in software engineering don't think the current wave of AI tools is all it's made out to be.
kasey_junk•8mo ago
I’m an expert in software engineering and am pretty gobsmacked at how good the current wave of tools is.

I don’t have much of a prediction about whether LLMs will conquer AGI or other hyped summits. But I’m nearly 100% certain development tooling will be mostly AI-driven in 5 years.

incomingpain•8mo ago
>Experts in communication might disagree with you.

I'm quite downvoted; it would seem people disagree with what I posted, though I did preface it by saying it's a disability for me.

From my pov, AI is amazingly helpful to me.

usersouzana•8mo ago
With AI, humans aim to automate some forms of intelligent work. People who do this kind of work don't necessarily like that, for obvious reasons, and many HN participants are part of that cohort.
throwawayffffas•8mo ago
I think the hype is the reason. The performance of the tools is nowhere near the level implied by the hype.

Also, HN loves to hate things. Remember the welcome Dropbox got in 2007?

https://news.ycombinator.com/item?id=8863

andyjohnson0•8mo ago
Disgust at all the hype. Worry over being made obsolete. Lazy negativity ("merely token predictors") in an attempt to sound knowledgeable. Worry over not understanding the tech. Distress over dehumanising AI use in hiring etc. Herd psychology.
jqpabc123•8mo ago
Any result produced by current AI is suspect until proven otherwise.

Any result comes at very high relative cost in terms of computing time and energy consumed.

AI is the polar opposite of traditional logic based computing --- instead of highly accurate and reliable facts at low cost, you get unreliable opinions at high cost.

There are valid use cases for current AI, but it is not a universal replacement for the logic-based programming that we all know and love --- not even close. Suggesting otherwise smacks of snake oil and hype.

Legal liability for AI pronouncements is another ongoing concern that remains to be fully addressed in the courts. One example: an AI chatbot accused a pro basketball player of vandalism because of references to him "throwing bricks" during play.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4546063

skydhash•8mo ago
> AI is the polar opposite of traditional logic based computing --- instead of highly accurate and reliable facts at low cost, you get unreliable opinions at high cost.

In other words: Instead of buying a simple hammer for nailing a plank, all the marketing is about buying a bulldozer in a foreign country that they will ship to you, and in the process of using it for hammering the nail, you destroy the whole house.

tinthedev•8mo ago
Practical AI vs hype AI is what I see the biggest distinction on.

I haven't seen people negatively comment on simple AI tooling, or cases where AI creates real output.

I do see a lot of hate on hype-trains and, for what it's worth, I wouldn't say it's undeserved. LLMs are currently oversold as this be-all end-all AI, while there's still a lot of "all" to conquer.

31337Logic•8mo ago
Here's why (for me, at least):

https://www.humanetech.com/podcast

austin-cheney•8mo ago
The perception that I have of AI is two goals:

1) A keyword to game out investment capital from investors

2) A crutch for developers who should probably be replaced by AI

I do believe there is some utility and value behind AI, but it's still so primitive that it's just a smarter auto-complete.

paulcole•8mo ago
> but it's still so primitive that it's just a smarter auto-complete

Is it 10x smarter than auto-complete on your iPhone or 10000x smarter?

throwawayffffas•8mo ago
It's not comparable to your iPhone auto-complete, because it's code completion.

It's a mixed bag, because it often provides plausible but incorrect completions.

paulcole•8mo ago
OK, how mixed is that mixed bag?

Is it totally useless or is it the greatest thing ever? If neither, where in the middle do you put it?

How often does it provide plausible but incorrect completions? Is it every few minutes or is it a couple times a day?

This is my biggest issue with the AI complainers on here. It's always the broadest and most vague complaints. I'd rather somebody just say, "You know what I just don't like AI" rather than try to convince me it's bad through vagueness.

throwawayffffas•8mo ago
> Is it totally useless or is it the greatest thing ever? If neither, where in the middle do you put it?

For code completion, I think it's close to useless in my experience, traditional code completion feels much more useful.

> How often does it provide plausible but incorrect completions? Is it every few minutes or is it a couple times a day?

It varies with the workload but closer to every few minutes than a couple times a day. For example while writing rust the majority (like 95%) of the code completion suggestions are incorrect. When writing a python website it gets better but you still get bad suggestions that look good, several times a day.

The killer feature is generating code, not as completion but after an explicit prompt. Most models are okayish on that task. But still you have to pay attention.

That's all in my experience across like a year, your mileage may vary.

oulipo•8mo ago
AI has (some limited) benefits, and many huge and proven drawbacks (used in the Israel genocide, used to disrupt elections in the US and Europe, used to spy on people)

So yes, there's healthy criticism of blindly allowing a few multi-billionaires to own a tech that can rip apart the fabric of our societies.

hodder•8mo ago
Change is uncomfortable and scary, and AI represents a pretty seismic shift. It touches everything from jobs and creativity to ethics and control. There's also fatigue from the hype cycle, especially when some tools overpromise and underdeliver.
jqpabc123•8mo ago
It touches everything from jobs and creativity to ethics and control.

And the results from all that "touching" are mixed at best.

Example: IBM and McDonald's spent 3 years trying to get AI to take orders at drive-thru windows. As far as "jobs" go, this is pretty low-hanging fruit.

Here are the results:

https://apnews.com/article/mcdonalds-ai-drive-thru-ibm-bebc8...

AnimalMuppet•8mo ago
> Are people not finding these tools useful, even with the significant investment and hype in the space?

That sounds like there's a flawed assumption buried in there. Hype has very little correlation with usefulness. Investment has perhaps slightly more, but only slightly.

Investment tells you that people invested. Hype tells you that people are trying to sell it. That's all. They tell you nothing about usefulness.

jf22•8mo ago
There are a lot of people on HN who will be replaced by AI tools and that's hard to cope with.
palata•8mo ago
Something that I haven't seen in the other comments: whoever controls the AI has a lot of power. Now that people seem to move from Google to LLMs and blindly believe whatever they read, it feels scary to know that those who own the LLMs are often crazy and dangerous billionaires.
horsellama•8mo ago
Just give vibe coding a go on a moderately complex system and you'll realize that this is only hype, nothing concrete.

It's a shame that this "thing" has now monopolized tech discussions.

d--b•8mo ago
It's like Ozempic in Hollywood, everyone is using it secretly.
paulcole•8mo ago
Nobody likes their livelihood becoming a commodity. Especially not one of the most arrogant groups of people on the planet.
aristofun•8mo ago
I see 2 parts that contribute:

1. Failed expectations - hackers tend to dream big, and they felt we were that close to AGI. Then they faced the reality of a "dumb" (yet very advanced) auto-complete. It's very good, but not as good as they wanted.

2. Too many posts all over the internet from people who have zero idea how LLMs work or their actual pros/cons and limitations. Those posts provoke a natural compensating force.

I don't see fear of losing one's job as a serious tendency (only in junior developers and wannabes).

It's the opposite - senior devs secretly waited for something that would offload a big part of the stress and dumb work from their shoulders, but it happened only occasionally and in a limited form (see point 1 above).

ldjkfkdsjnv•8mo ago
Failed expectations? It's passing the Turing test.
aristofun•8mo ago
And yet it fails to fix any real bug end-to-end in a large enough codebase. It requires so much babysitting that the actual performance boost is very questionable.
ElectronCharge•8mo ago
The Turing Test didn't anticipate super-parrots that converse nicely, instead assuming that an AI would actually reason to participate in a conversation.

The "AI" we have now isn't actually "I".

ldjkfkdsjnv•8mo ago
I was going to make a post about this, any pro AI comment I make gets downvoted, and sometimes flagged. I think HN has people who:

1. Have not kept up with and actively experimented with the tooling, and so don't know how good it is.

2. Have some unconscious concern about the commoditization of their skill sets

3. Are not actively working in AI and so want to just stick their head in the sand

ferguess_k•8mo ago
I'm not really anti-AI. I use AI every day and am a ChatGPT Pro user.

My concerns are:

1) Regardless of whether AI can do this, corporate leaders are pushing for AI to replace humans. I don't care whether AI can do it or not, but multiple mega-corporations are talking about this openly. This is not going to bode well for us ordinary programmers;

2) Now, if AI actually could do that -- maybe not now, or in a couple of years, but 5-10 years from now -- even if it could ONLY replace junior developers, it's going to be hell for everyone. Just think about the impact on the industry. 10 years is actually fine for me, as I'm 40+, but hey, you guys are probably younger than me.

--> Anyone who is pushing AI openly && (is not in the leadership || is not financially free || is an ordinary, non-John-Carmack level programmer), if I may say so, is not thinking straight. You SHOULD use it, but you should NOT advocate it, especially to replace your team.

stephenr•8mo ago
> Are people not finding these tools useful, even with the significant investment and hype in the space?

How exactly would someone find hype useful?

Hell, even the investment part is questionable in an industry that's known for "fake it till you make it" and "thanks for the journey" messages when it's inevitably bought by someone else and changes dramatically or is shut down.

bradgranath•8mo ago
1) VC-driven hype. Stop claiming to have invented God, and people will stop making fun of you for saying so.

2) Energy/environment. This stuff is nearly as bad as crypto in terms of energy input and emissions per generated value.

3) A LOT of creatives are really angry at what they perceive as theft and "screwing over the little guy". Regardless of whether you agree with them, you can't just ignore them and expect their arguments to go away.

vuggamie•8mo ago
1. LLM use has made me a more productive programmer, especially when learning new technology. The ability to ask questions about documentation, code, and best practices is nice. It's like Stack Overflow without all the toxicity, outdated answers, and user login antipatterns. Also, one less cookie notification to dismiss.

2. Energy is an important issue. We need a sane energy policy and worldwide cooperation. Corporations should pay the full cost of their energy use, including pollution mitigation and carbon offsets. Pragmatism suggests that this is not likely to happen any time soon. The US will be out of any discussion of sane energy policy for the foreseeable future.

3. The training of many (all?) major LLMs included a step that was criminal: downloading Z-Library or Library Genesis. The issue of fair use for training models on copyrighted text is unsettled. The legality of downloading pirated ebooks is well-defined. These books were stripped of their DRM, which is itself illegal under the DMCA. It's a crime, and CEOs should be held accountable. Training an LLM on copyrighted works might be legal, but stealing those copyrighted works is not. At least buy a copy of the book.

hollerith•8mo ago
My main objection to AI is that sooner or later, one of the AI labs is going to create an entity much "better at reality" (capable) than people are, which maybe would turn out OK or not-too-bad if the lab would retain control over the entity, but no one has a plan that would enable a person or a group to retain control of such an entity. IMHO current AI models are controllable only because they're less cognitively capable than the people exercising control over them.

I don't claim to be able to predict when such an AI that is much more capable than people will be created beyond saying that if the AI labs are not stopped (i.e., banned by the major governments) it will probably happen some time in the next 45 years.

EmpireoftheSun•8mo ago
I am a pretty dumb dude, so take this with a grain of salt.

Most AI today can create/simulate a "moment" but not the whole "process". For example, you can create a short Hollywood movie clip but not a whole Hollywood movie. I am pretty sure my reasoning is incorrect, so I am commenting here to get valid feedback.

CM30•8mo ago
Well, the internet in general has a very strong anti-AI sentiment to be honest. If you even say anything positive about it on most social media sites (Twitter, Reddit, BlueSky, Mastodon, Threads, Instagram, etc) a large percentage of the audience will all but call for you to be burnt at the stake. In a sense, Hacker News is barely any different from the rest of the internet there.

The reactions basically seem to range from "AI is useless because it's inaccurate/can't do this" to "AI is evil because of how it takes jobs from humans, and should never have been invented".

Still, the former is probably the bigger reason here in particular. LLMs can be useful if you're working within very, very general domains with a ton of source material (like say, React programming), but they're usually not as good as a standard solution to the issue would be, especially when said issue isn't as set in stone as programming might be. So most of these solutions just come across as a worse way to solve an already solved problem, except with AI added as a buzzword.

kalleboo•8mo ago
AI sucks the fun out of everything.

It's even worse when you've made that fun your livelihood. Now it's sucked the fun out of everything and put you out of a job.