
I spent 15 years developing a tool to make sense of software version numbers

2•a1tern•53m ago•0 comments

Ask HN: Way to build AI voice agents

7•warthog•2h ago•0 comments

Tell HN: The Hetzner Experience - Invisible Outages

24•AmazingTurtle•11h ago•10 comments

Ask HN: What's behind the strong anti-AI sentiment on Hacker News?

10•cloudking•6h ago•36 comments

Ask HN: Do the languages we speak affect the way we think?

2•Genius_um•7h ago•6 comments

Ask HN: Don't You Mind That LLMs Are Mostly Proprietary?

11•dakiol•20h ago•12 comments

Ask HN: Do people actually pay for small web tools?

52•scratchyone•2d ago•59 comments

I'm Peter Roberts, immigration attorney, who does work for YC and startups. AMA

253•proberts•4d ago•451 comments

Ask HN: When do you just give up and ship it?

13•90s_dev•16h ago•8 comments

How to Fix the Gaming Industry

2•azyLum•12h ago•5 comments

Big Beautiful Bill R&D Tax: Will tech go on a hiring spree again?

14•jbverschoor•1d ago•15 comments

Ask HN: How do you use AI for development in high security environments?

6•thesurlydev•22h ago•2 comments

Ask HN: What's your go-to message queue in 2025?

59•enether•5d ago•96 comments

Ask HN: Is there a Wikipedia or LLM wrapper for kids (with parental controls)?

4•soferio•17h ago•6 comments

What interesting things do low-spending people do that others know nothing about?

15•evolve2k•1d ago•35 comments

Ask HN: Conversational AI to Learn a Language

14•edweis•3d ago•6 comments

Ask HN: We built a travel app – a classic tarpit idea. What now?

4•kenforthewin•1d ago•6 comments

Ask HN: How do you handle licensing and revenue leaks for self-hosted software?

3•lexokoh•1d ago•2 comments

Ask HN: Tech Behind the "Magic" Company?

5•ChicagoBoy11•1d ago•2 comments

Ask HN: Anyone working in traditional ML/stats research instead of LLMs?

20•itsmekali321•3d ago•10 comments

Ask HN: Best on device LLM tooling for PDFs?

4•martinald•1d ago•1 comment

Ask HN: When will managers be replaced by AI?

53•GianFabien•15h ago•74 comments

Ask HN: Moving to London from California

12•siamese_puff•1d ago•13 comments

Is the current state of querying observability data broken?

12•pranay01•3d ago•0 comments

Ask HN: Email Provider for Main Account?

24•agent008t•5d ago•27 comments

Ask HN: Do you have a side project making more than $100 monthly?

12•leonagano•1d ago•4 comments

Ask HN: What newsletters do you follow?

5•cyndunlop•1d ago•6 comments

Ask HN: How do you store the knowledge gained in a day?

67•dennisy•1w ago•96 comments

Best AI editor for local models?

7•rocketbro•3d ago•0 comments

Xray: A full-behavior-chain anti-malware system built in Go by a student

6•tangtian•3d ago•1 comment

Ask HN: What's behind the strong anti-AI sentiment on Hacker News?

10•cloudking•6h ago
I've noticed that most AI-related posts here receive a lot of anti-AI commentary. Why is that? Are people not finding these tools useful, even with the significant investment and hype in the space?

Comments

actionfromafar•6h ago
A lot of the hype is very short-term and unrealistic, such as AGI. On the other hand, it's easy to underestimate the impact on a million mundane things.
neom•6h ago
On top of this, I'd also add: for me personally, the writing is on the wall for many things (like AGI) - now that the tech is clear and people can envision timelines, it becomes grating to hear about every tiny incremental update.
gtirloni•5h ago
How is the tech clear for AGI?
actionfromafar•4h ago
I think you should read that something like, "now that it's clear what the tech does - and it's not AGI".
neom•2h ago
Yeah, poorly worded on my part. We don't even all agree on what AGI is, let alone when we'll have it or what it will be, but there is no harm in focusing on the super, super good auto-complete we now have.
incomingpain•6h ago
People are scared of the unknown. They are scared that their livelihoods might be impacted.

With my flavour of autism, I have a weakness in communication, and AI spits out better writing than I do. Personally I love that it helps me. Autism is a disability, and AI helps me through it.

Imagine, however, if you're an expert in communication; then this is a new competitor that's undefeatable.

gtirloni•5h ago
Experts in communication might disagree with you. Just like experts in software engineering don't think the current wave of AI tools is all it's made out to be.
kasey_junk•5h ago
I’m an expert in software engineering and am pretty gobsmacked at how good the current wave of tools is.

I don’t have much of a prediction about whether LLMs will conquer AGI or other hyped summits. But I’m nearly 100% certain development tooling will be mostly AI-driven in 5 years.

incomingpain•3h ago
>Experts in communication might disagree with you.

I'm being downvoted quite a bit, so it would seem people disagree with what I posted. I did preface that it's a disability for me.

From my POV, AI is amazingly helpful.

usersouzana•6h ago
With AI, humans aim to automate some forms of intelligent work. People who do this kind of work don't necessarily like that, for obvious reasons, and many HN participants are part of that cohort.
throwawayffffas•6h ago
I think the hype is the reason. The performance of the tools is nowhere near the level implied by the hype.

Also, HN loves to hate things; remember the welcome Dropbox got in 2007?

https://news.ycombinator.com/item?id=8863

andyjohnson0•5h ago
Disgust at all the hype. Worry over being made obsolete. Lazy negativity ("merely token predictors") in an attempt to sound knowledgeable. Worry over not understanding the tech. Distress over dehumanising AI use in hiring etc. Herd psychology.
jqpabc123•5h ago
Any result produced by current AI is suspect until proven otherwise.

Any result comes at very high relative cost in terms of computing time and energy consumed.

AI is the polar opposite of traditional logic-based computing --- instead of highly accurate and reliable facts at low cost, you get unreliable opinions at high cost.

There are valid use cases for current AI, but it is not a universal replacement for the logic-based programming we all know and love --- not even close. Suggesting otherwise smacks of snake oil and hype.

Legal liability for AI pronouncements is another ongoing concern that remains to be fully addressed in the courts. One example: an AI chatbot accused a pro basketball player of vandalism because of references to him "throwing bricks" (badly missing shots) during play.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4546063

tinthedev•5h ago
Practical AI vs. hype AI is the biggest distinction I see.

I haven't seen people negatively comment on simple AI tooling, or cases where AI creates real output.

I do see a lot of hate on hype-trains and, for what it's worth, I wouldn't say it's undeserved. LLMs are currently oversold as this be-all end-all AI, while there's still a lot of "all" to conquer.

31337Logic•5h ago
Here's why (for me, at least):

https://www.humanetech.com/podcast

austin-cheney•5h ago
The perception I have is that AI serves two goals:

1) A keyword to game investment capital out of investors

2) A crutch for developers who should probably be replaced by AI

I do believe there is some utility and value behind AI, but it's still so primitive that it's a smarter auto-complete.

paulcole•2h ago
> but it's still so primitive that it's a smarter auto-complete

Is it 10x smarter than auto-complete on your iPhone or 10000x smarter?

throwawayffffas•1h ago
It's not comparable to your iPhone auto-complete, because it's code completion.

It's a mixed bag, because it often provides plausible but incorrect completions.

oulipo•4h ago
AI has (some limited) benefits, and many huge and proven drawbacks (used in the Israel genocide, used to disrupt elections in the US and Europe, used to spy on people)

So yes, there's healthy criticism of blindly allowing a few multi-billionaires to own a tech that can rip apart the fabric of our societies.

hodder•4h ago
Change is uncomfortable and scary, and AI represents a pretty seismic shift. It touches everything from jobs and creativity to ethics and control. There's also fatigue from the hype cycle, especially when some tools overpromise and underdeliver.
jqpabc123•4h ago
> It touches everything from jobs and creativity to ethics and control.

And the results from all that "touching" are mixed at best.

Example: IBM and McDonald's spent 3 years trying to get AI to take orders at drive-thru windows. As far as "jobs" go, this is pretty low-hanging fruit.

Here are the results:

https://apnews.com/article/mcdonalds-ai-drive-thru-ibm-bebc8...

AnimalMuppet•4h ago
> Are people not finding these tools useful, even with the significant investment and hype in the space?

That sounds like there's a flawed assumption buried in there. Hype has very little correlation with usefulness. Investment has perhaps slightly more, but only slightly.

Investment tells you that people invested. Hype tells you that people are trying to sell it. That's all. They tell you nothing about usefulness.

jf22•4h ago
There are a lot of people on HN who will be replaced by AI tools and that's hard to cope with.
palata•4h ago
Something that I haven't seen in the other comments: whoever controls the AI has a lot of power. Now that people seem to move from Google to LLMs and blindly believe whatever they read, it feels scary to know that those who own the LLMs are often crazy and dangerous billionaires.
horsellama•3h ago
just give vibe coding a go on a moderately complex system and you’ll realize that this is only hype, nothing concrete

it’s a shame that this “thing” has now monopolized tech discussions

d--b•3h ago
It's like Ozempic in Hollywood: everyone is using it secretly.
paulcole•2h ago
Nobody likes their livelihood becoming a commodity. Especially not one of the most arrogant groups of people on the planet.
aristofun•2h ago
I see 2 parts that contribute:

1. Failed expectations - hackers tend to dream big, and they felt like we were that close to AGI. Then they faced the reality of a "dumb" (yet very advanced) auto-complete. It's very good, but not as good as they wanted it to be.

2. Too many posts all over the internet from people who have zero idea how LLMs work or what their actual pros, cons, and limitations are. Those posts provoke a natural compensating force.

I don't see fear of losing one's job as a serious tendency (only in junior developers and wannabes).

It's the opposite - senior devs secretly waited for something that would offload a big part of the stress and dumb work from their shoulders, but it has happened only occasionally and in a limited form (see point 1 above).

ldjkfkdsjnv•52m ago
Failed expectations? It's passing the Turing test.
aristofun•15m ago
And yet it fails to fix any real bug end-to-end in a large enough codebase. It requires so much babysitting that the actual performance boost is very questionable.
ldjkfkdsjnv•2h ago
I was going to make a post about this: any pro-AI comment I make gets downvoted, and sometimes flagged. I think HN has people who:

1. Have not kept up with and actively experimented with the tooling, and so don't know how good it is.

2. Have some unconscious concern about the commoditization of their skill sets.

3. Are not actively working in AI and so want to just stick their heads in the sand.

ferguess_k•2h ago
I'm not really anti-AI. I use AI every day and am a ChatGPT Pro user.

My concerns are:

1) Regardless of whether AI can actually do this, corporate leaders are pushing to replace humans with AI. Whether AI can do it or not, multiple mega-corporations are talking about this openly, and that does not bode well for us ordinary programmers;

2) Now, if AI actually could do that -- maybe not now or in a couple of years, but 5-10 years from now -- even if it could ONLY replace junior developers, it's going to be hell for everyone. Just think about the impact on the industry. 10 years is actually fine for me, as I'm 40+, but hey, you guys are probably younger than me.

--> Anyone who is pushing AI openly && (is not in the leadership || is not financially free || is an ordinary, non-John-Carmack level programmer), if I may say so, is not thinking straight. You SHOULD use it, but you should NOT advocate it, especially to replace your team.

stephenr•1h ago
> Are people not finding these tools useful, even with the significant investment and hype in the space?

How exactly would someone find hype useful?

Hell, even the investment part is questionable in an industry that's known for "fake it till you make it" and for "thanks for the journey" messages when a product is inevitably bought by someone else and either changes dramatically or is shut down.

babyent•6m ago
I use it for quick research or reducing my emails down to 1-2 sentences if I've got some thoughts that are more than 3-4 sentences. I really like it for these tasks.

I'm not exactly impressed by the results when it comes to actual work around building software.

For example, instead of spending 15 minutes reading a wikipedia entry about someone or something that has absolutely zero effect on my life besides some curiosity, I can ask chatgpt and learn enough in 2-3 minutes.

However, it usually creates more work when it's about something that does have an effect on my life, like my work. After the second or third hallucination, it's like GTFO.

bradgranath•5m ago
1) VC-driven hype. Stop claiming to have invented God, and people will stop making fun of you for saying so.

2) Energy/environment. This stuff is nearly as bad as crypto in terms of energy input and emissions per unit of generated value.

3) A LOT of creatives are really angry at what they perceive as theft and "screwing over the little guy". Regardless of whether you agree with them, you can't just ignore them and expect their arguments to go away.

hollerith•1m ago
My main objection to AI is that sooner or later, one of the AI labs is going to create an entity much "better at reality" than people are, which I would be okay with if the lab would retain control over the entity, but no one has a plan that has much hope of keeping any person or group of person in control of such an entity. (Current AI models are controllable only because they're less capable than the people trying to control them.)