
When Root Meets Immutable: OpenBSD Chflags vs. Log Tampering

https://rsadowski.de/posts/2025/openbsd-immutable-system-logs/
1•todsacerdoti•1m ago•0 comments

H-Nets – The Future

https://goombalab.github.io/blog/2025/hnet-future/
1•cubefox•7m ago•0 comments

Firmware for the open source Teufel Mynd speaker

https://github.com/teufelaudio/mynd-firmware
1•morsch•9m ago•0 comments

Implementing a Fast Tensor Core Matmul on the Ada Architecture

https://www.spatters.ca/mma-matmul
1•skidrow•12m ago•1 comments

Engineering the End of Work

https://schmud.de/posts/2025-07-15-engineering-end-of-work.html
2•Bogdanp•13m ago•0 comments

Compiler Explorer: An Essential Kernel Playground for CUDA Developers

https://developer.nvidia.com/blog/compiler-explorer-the-kernel-playground-for-cuda-developers/
1•skidrow•13m ago•0 comments

Creating custom kernels for the AMD MI300

https://huggingface.co/blog/mi300kernels
1•skidrow•15m ago•0 comments

Show HN: FigForm – Feel the power of Figma when creating customized forms

https://figform.io
1•aarondelasy•17m ago•0 comments

Run a server at home using Raspberry Pi and Tunnelmole

https://softwareengineeringstandard.com/2025/07/14/raspberry-pi-server/
1•aussieguy1234•20m ago•0 comments

Best Chrome Extension to Remove Paywall in 2025

https://puupnewsapp.com/chrome-extension-to-remove-paywall/
1•CodeWanderer•21m ago•0 comments

What's happening to Matlab? Or, "The slow demise of the engineering toolbox"

https://blog.pictor.us/whats-happening-to-matlab/
2•bauta-steen•22m ago•0 comments

Mnemonic Devices in Illuminated Manuscripts

https://twitter.com/AHomelyHouse/status/1945940846559338597
1•Michelangelo11•25m ago•0 comments

Decoding Secrets: How military medals exposed Russia's SIGINT network

https://checkfirst.network/decoding-secrets-through-symbols-how-military-insignia-revealed-russias-hidden-sigint-network/
2•amaury•27m ago•1 comments

Aardvark

https://en.wikipedia.org/wiki/Aardvark
1•simonebrunozzi•29m ago•0 comments

Show HN: Self-made web media player without <video> or <audio>

https://mediabunny.dev/examples/media-player/
5•vanilagy•29m ago•0 comments

Mediabunny, a pure-TypeScript replacement for FFmpeg for in-web media processing

https://mediabunny.dev/
3•vanilagy•32m ago•0 comments

Trump Targets "Woke" AI

https://www.wsj.com/tech/ai/white-house-prepares-executive-order-targeting-woke-ai-e68e8e24
1•timoth3y•33m ago•1 comments

Thoughts on External Memory for LLMs

https://medium.com/@chipiga86/thoughts-on-external-memory-for-llms-e2ee21be3292
1•rishikeshs•35m ago•0 comments

Is HN Down in the UK?

3•curiousgal•35m ago•3 comments

Ask HN: How do you build good software that users pay for?

https://github.com/Mtendekuyokwa19
1•sonderotis•37m ago•2 comments

Genocide VC

https://genocide.vc/
3•FilosofumRex•41m ago•0 comments

Vibe Scraping / Vibe Coding a schedule app on a phone

https://simonwillison.net/2025/Jul/17/vibe-scraping/
1•lsb•42m ago•0 comments

AgenticCore: First agentic Linux distro made by a 13-year-old

https://agentic-core.web.app/
1•yusuf-yildirim•42m ago•1 comments

Make Your AI SaaS in a Weekend with ShipThing Boilerplate

https://www.shipthing.com/en
1•allentown521•43m ago•1 comments

Notes on Spaced Repetition Scheduling

https://www.natemeyvis.com/notes-on-spaced-repetition-scheduling.html
1•maksimur•45m ago•0 comments

The Commodore 64 Made a Difference

https://theprogressivecio.com/the-commodore-64-made-a-difference/
3•tosh•50m ago•0 comments

GitHub abused to distribute payloads on behalf of malware-as-a-service

https://arstechnica.com/security/2025/07/malware-as-a-service-caught-using-github-to-distribute-its-payloads/
1•bubblebeard•52m ago•0 comments

Show HN: UML is dead – so I'm building the tool to revive it

https://www.rapidcharts.ai/
3•SamiCostox•53m ago•0 comments

The Pragmatic Engineer 2025 Survey: What's in your tech stack?

https://newsletter.pragmaticengineer.com/p/the-pragmatic-engineer-2025-survey
1•ksec•54m ago•0 comments

The NEC PC Engine FX Game Console

https://www.pcengine-fx.com/PC-FX/html/pc-fx_world_-_system_overview.html
1•austinallegro•56m ago•0 comments

Why is AI so slow to spread?

https://www.economist.com/finance-and-economics/2025/07/17/why-is-ai-so-slow-to-spread-economics-can-explain
42•1vuio0pswjnm7•2h ago

Comments

orionblastar•2h ago
Mirror without paywall: https://archive.is/OQWcg
snek_case•2h ago
The reality might just be that most technology is slow to spread? But it also depends on what you mean by slow. The name ChatGPT became part of popular culture extremely quickly.
rwmj•1h ago
ChatGPT seems to be widely used as a search engine. No wonder Google panicked. ChatGPT isn't very accurate, but Google had been going downhill for years as well, so nothing lost there I guess.
o11c•2h ago
That's a whole lot of twisting to avoid admitting "it usually doesn't work, and even when it does work, it's usually not cost-effective even at the heavily-subsidized prices."

Or maybe it's more about refusing to admit that executives are out of touch with concrete reality and are just blindly chasing trends instead.

somenameforme•2h ago
Another issue, one that you alluded to: imagine AI actually was reliable, and a company does lay off e.g. 30% of its employees to replace them with AI systems. How long before they get a letter from AI Inc: 'Hi, we're increasing prices 500x in order to enhance our offerings and improve customer satisfaction. Enjoy.'

The entire MO of big tech is trying to create a monopoly via the software equivalent of dumping (which is illegal in the US [1], but not for software, because reasons), market-share domination, and then jacking effective pricing wayyyyy up. And in this case big tech companies are dumping absurd amounts of money into LLMs, getting absurd funding, and then providing them for free or next to free. If a person has any foresight whatsoever, it's akin to a rusting van outside an elementary school, with blacked-out windows and 'FREE ICECREAM' scrawled on it in paint.

[1] - https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Unite...

crinkly•2h ago
Yep. There's also the problem that the AI vendor reinforces bias into their product's training in ways that serve the vendor.

Literally every shitty corporate behaviour is amplified by this technology fad.

Opocio•1h ago
It's quite easy to switch LLM APIs, so you can just transition to a competitor. Competition between AI providers is quite fierce; I don't see them setting up a cartel anytime soon. And open-source models are not that far behind commercial ones.
gmag•1h ago
It's easy to switch the LLM API, but in practice this requires having a strong eval suite so that the behavior of whatever is built on top stays within acceptable limits. It's really the implications of the LLM switch that matter.
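
A minimal sketch of what such an eval suite might look like: a fixed set of prompts with programmatic checks, run against any provider behind a common callable. The `call_old_provider` / `call_new_provider` names in the closing comment are hypothetical stand-ins for real SDK calls.

    # Minimal provider-agnostic eval harness (illustrative sketch).
    from typing import Callable

    EVAL_CASES = [
        # (prompt, check applied to the completion)
        ("Extract the ISO date from: 'shipped March 5, 2024'",
         lambda out: "2024-03-05" in out),
        ("Answer only YES or NO: is 17 prime?",
         lambda out: out.strip().upper() == "YES"),
    ]

    def run_evals(model: Callable[[str], str]) -> float:
        """Return the pass rate of `model` over the fixed eval cases."""
        passed = sum(1 for prompt, check in EVAL_CASES if check(model(prompt)))
        return passed / len(EVAL_CASES)

    # Switching providers becomes a measured decision rather than a leap:
    # migrate only if run_evals(call_new_provider) >= run_evals(call_old_provider).

Real suites are far larger and fuzzier, but the shape is the same: the checks, not the provider, define acceptable behavior.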
rwmj•1h ago
You can run a reasonable LLM on a gaming machine (cost under $5000), and that's only going to get better and better with time. The irony here is that VCs are pouring money into businesses with almost no moat at all.
designerarvid•2h ago
People aren’t necessarily out of touch, they may be optimising for something other than real value. Appearing impressive for instance.
paulluuk•2h ago
It really depends on the use-case. I currently work in the video streaming industry, and my team has been building production-quality code for 2 years now. Here are some things that are going really well:

* Determining what is happening in a scene/video
* Translating subtitles into very specific local slang
* Summarizing scripts
* Estimating how well a new show will do with a given audience
* Filling gaps in the metadata provided by publishers, such as genres, topics, themes (see the sketch below)
* Finding the most "viral" or "interesting" moments in a video (combo of LLM and "traditional" ML)

There's much more, but I think the general trend here is not "chatbots" or "fixing code"; it's automating stuff we used to need armies of people to do. And as we progress, we find that we can do better than humans at a fraction of the cost.
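
To make the metadata item concrete, here is an illustrative sketch, not the team's actual pipeline: `complete` is a hypothetical wrapper around whichever LLM API is in use, and the genre list is invented. Constraining the model to a closed label set and validating the output is what keeps this kind of automation usable at scale.

    # Hypothetical sketch of LLM-based metadata gap filling.
    import json

    GENRES = ["comedy", "drama", "documentary", "thriller", "reality"]

    def infer_missing_genres(title: str, synopsis: str, complete) -> list[str]:
        """Ask the model to pick genres from a closed list, returned as JSON."""
        prompt = (
            f"Title: {title}\nSynopsis: {synopsis}\n"
            f"Choose up to 3 genres from {GENRES}. Reply with a JSON array only."
        )
        try:
            genres = json.loads(complete(prompt))
        except (json.JSONDecodeError, TypeError):
            return []  # malformed output falls back to human review
        if not isinstance(genres, list):
            return []
        return [g for g in genres if g in GENRES]  # drop hallucinated labels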

poisonborz•1h ago
Based on what you listed I would seriously consider the broader societal value of your work.
paulluuk•1h ago
I know this is just a casual comment, but this is a genuine concern I have every day. However, I've been working for 10 years now and working in music/video streaming has been the most "societal value" I've had thus far.

I've worked at Apple, in finance, in consumer goods.. everywhere is just terrible. Music/Video streaming has been the closest thing I could find to actually being valuable, or at least not making the world worse.

I'd love to work at an NGO or something, but I'm honestly not that eager to lose 70% of my salary to do so. And I can't work in pure research because I don't have a PhD.

What industry do you work in, if you don't mind me asking?

tropicalfruit•2h ago
reminds me of crypto a bit. most people i know are apathetic or dismissive.

when i see normies use it - it's to make selfies with celebrities.

in 5-10 years AI will be everywhere. a massive inequality creator.

dividing those who know how to use it and can afford the best tools from everyone else.

the biggest danger is dependency on AI. i really see people becoming dumber and dumber as they outsource more basic cognitive functions and decisions to AI.

and businesses will use it like any other tool: to strengthen their monopolies and extract more and more value out of fewer and fewer resources.

galaxyLogic•2h ago
> in 5-10 years AI will be everywhere. a massive inequality creator.

That is possible, even likely. But AI can also decrease inequality. I'm thinking of how rich people and companies spend millions, if not hundreds of millions, on legal fees that keep them out of prison. But me, I can't afford a lawyer. Heck, I can't even afford a doctor. I can't afford Stanford, Yale, or Harvard.

But now I can ask an AI for legal advice, which levels that playing field. Everybody who has a computer or smartphone and internet access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI to handle all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction I believe we are going.

The "moat" in front of AI is not wide nor deep because AI by its very nature is designed to be easy to use. Just talk to it.

There is also lots of competition in AI, which should keep prices low.

The root cause of inequality is corruption. AI could help reveal that and advise people how to fight it, making the world a better, more equal place.

aflag•1h ago
The flaw with that idea is that the big law firms will also have access to the AI, and they will have better prompts.
lazide•1h ago
And when that legal advice is dangerously wrong?

At least lawyers can lose their bar license.

mns•1h ago
> But now I can ask legal advice from AI, which levels that playing field. Everybody who has a computer or smartphone and internet-access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI for handling all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction we are going I believe.

We had a discussion in a group chat with some friends about some random sports stuff, and one of my friends used ChatGPT to ask for a fact about a random thing. It was completely wrong, but sounded so real. All you had to do was go on Wikipedia or the website of the sports entity we were discussing to see the real fact. Now, considering that it hallucinated random facts that are on Wikipedia and on the website of an entity, what are the chances that the legal advice you get will be real and not some random hallucination?

forgotoldacc•21m ago
AI usage has been noticed by judges in court and they aren't fond of it.

AI is just a really good bullshitter. Sometimes you want a bullshitter, and sometimes you need to be a bullshitter. But when your wealth is at risk due to lawsuits, or you're risking going to prison, you want something rock solid to back your case, and endless mounds of bullshit around you are not what you want. Bullshit is something you only pull out when you're definitely guilty and need to fight against all the facts, and even better than bullshit in those cases is finding cases similar to yours or obscure laws that can serve as a loophole. And AI, instead of pulling out real cases, will bullshit against you with fake cases.

For things like code, where large parts of some areas are based on general feels and vibes, yeah, it's fine. It's good for general front-end development. But I wouldn't trust it for anything requiring accuracy, like scientific applications or OS-level code.

stillpointlab•2h ago
> such as datasets that are not properly integrated into the cloud

I believe this is a core issue that needs to be addressed. I believe companies will need tools to make their data "AI ready" beyond things like RAG. I believe there needs to be a bridge between companies' data lakes and the LLM (or GenAI) systems. Instead of cutting people out of the loop (which a lot of systems seem to be attempting), I believe we need ways to expose the data in ways that allow rank-and-file employees to deploy it effectively. Instead of threatening to replace the employees, which leads them to be intransigent in adoption, we should focus on empowering employees to use and shape the data.

Very interesting to see the Economist being so bullish on AI though.

Marazan•2h ago
The Economist is filled with writers easily gulled by tech flim-flam.

They went big on Cryptocurrency back in the day as well.

grey-area•1h ago
Exactly right.
rini17•1h ago
Give rank-and-file employees access to all the data? LOL. Middle managers will never allow that and will shift blame to intransigent employees. Of course the Economist is pandering to that. LLMs are fundamentally very bad at compartmentalized access.
WarOnPrivacy•2h ago
I don't often use AI in my work because it is

   not sufficiently useful 
   not sufficiently trustworthy.
It is my ongoing experience that AI + My Oversight requires more time than not using AI.

Sometimes AI can answer slightly complex things in a helpful way. But for most of the integration troubleshooting I do, AI guidance varies between no help at all and fully wasting my time.

Conversely, I support folks who have the complete opposite experience. AI is of great benefit to them and has hugely increased their productivity.

Both our experiences are valid and representative.

wordofx•2h ago
I still haven't found anyone for whom AI wouldn't be helpful, or for whom it isn't trustworthy enough. People make the /claim/ it's not useful or that they are better without it. When you sit down with them, it often turns out they just don't know how to use AI effectively.
RamblingCTO•1h ago
No, AI is just garbage. I asked AI a clear-cut question about battery optimization in Zen. It told me it's based on Chrome, but it's based on Firefox.

Ask it about a torque spec for your car? Yup, wrong. Ask it to provide sources? Less wrong, but still wrong. It told me my viscous fan has a different thread than it has. Had I listened, I would've shredded my thread.

My car is old, well documented, and widely distributed.

Doesn't matter if it's Claude or ChatGPT. Don't get me started on code. I care about things being correct and right.

lazide•1h ago
Personally, everyone I've seen using AI either clearly didn't understand what they were doing (in a 'that's not doing what you think it's doing' way), often producing good-sounding garbage, or ended up rewriting almost all of it anyway to get the output they actually wanted.

At this point I literally spend 90% of my time fixing other teams' AI 'issues' at a Fortune 50.

hansvm•1h ago
I'll pick a few concrete tasks: Building a substantially faster protobuf parser, building a differentiable database, and building a protobuf pre-compression library. So far, AI's abilities have been:

1. Piss-poor at the brainstorming and planning phase. For the compression thing I got one halfway decent idea, and it's one I already planned on using.

2. Even worse at generating a usable project structure or high-level API/skeleton. The code is unusable because it's not just subtly wrong; it doesn't match any cohesive mental model, meaning the first step is building that model and then figuring out how to ram-rod that solution into your model.

3. Really not great at generating APIs/skeletons matching your mental model. The context is too large, and performance drops.

4. Terrible at filling in the details for any particular method. It'll have subtle mistakes like handling carryover data at the end of a loop, but handling it always instead of just when it hasn't already been handled (see the sketch after this list). Everything type-checks, and if it doesn't, then I can't rely on the AI to give a correct result instead of the easiest way to silence the compiler.

5. Very bad at incorporating invariants (lifetimes, allocation patterns, etc.) into its code when I ask it to make even minor tweaks, even when explicitly prompted to consider such-and-such edge case.

6. Blatantly wrong when suggesting code improvements, usually breaking things, and in a way you can't easily paper over the issue to create something working "from" the AI code.
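
A small invented example of the carryover class of bug from point 4: summing newline-delimited integers that arrive in arbitrary chunks, where a number may be split across a chunk boundary. The models tend to flush the tail unconditionally; the guard is the part they get wrong.

    def sum_stream(chunks):
        total, carry = 0, ""
        for chunk in chunks:
            data = carry + chunk
            lines = data.split("\n")
            carry = lines.pop()  # the last piece may be an incomplete number
            total += sum(int(x) for x in lines if x)
        # The subtle part: flush the tail only when something is left over.
        # Flushing unconditionally (`total += int(carry)`) crashes whenever
        # the input ends with a newline, because `carry` is then "".
        if carry:
            total += int(carry)
        return total

    assert sum_stream(["1\n2", "0\n3\n"]) == 24  # "20" spans the boundary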

Etc. It just wasn't well suited to any of those tasks. On my end, the real work is deeply understanding the problem, deriving the only possible conclusions, banging that into code, and then doing a pass or three cleaning up the semicolon orgasm from the page. AI is sometimes helpful in that last phase, but I'm certain it's not useful for the rest yet.

My current view is that the difference in viewpoints stems from a combination of the tasks being completed (certain boilerplate automation crap I've definitely leaned into AI to handle, maybe that's all some devs work on?) and current skill progression (I've interviewed enough people to know that the work I'm describing as trivial doesn't come naturally to everyone yet, so it's tempting to say that it's you holding your compiler wrong rather than me holding the AI wrong).

Am I wrong? Should AI be able to help with those things? Is it more than a ~5% boost?

ethan_smith•2h ago
The variance in utility you're seeing is largely because AI performs best on problems with clear patterns and abundant training data, while struggling with novel edge cases and specialized domain knowledge that hasn't been well-represented in its training.
WarOnPrivacy•1h ago
This is a reasonable analysis. It explains where AI is useful, but I think it doesn't touch on AI's trustworthiness. When data is good, AI may or may not be trusted to complete its task in an accurate manner. Often it can be trusted.

But sometimes good data is also bad data. HIPAA compliance audit guides are full of questions that are appropriate for a massive medical entity and fully impossible to answer for the much more common small medical practice.

No AI will be trained to know the latter is true. I can say that because every HIPAA audit guide assumes that working patient data is stored on practice-owned hardware - which it isn't. Third parties handle that for small practices.

For small med, HIPAA audit guides are 100 irrelevant questions that require fine details that don't exist.

I predict that AI won't be able to overcome the absurdities baked into HIPAA compliance. It can't help where help is needed.

But past all that, there is one particularly painful issue with AI - deployment.

When AI isn't asked for, it is in the way. It is an obstacle that needs to be removed. That might not be awful if MS, Google, etc. didn't continually craft methods to make removing it as hard as possible. It smacks of disdain for end users.

If this last paragraph weren't so endlessly true, AI evangelists wouldn't have so many premade enemies to face, and there would be less friction all around.

rckt•2h ago
Slow?? AI is literally being shoved into everything. It took only a few years for AI to be advertised as a magic pill everywhere.

It's not meeting expectations, probably because of this aggressive advertising. But I would in no way say that it's spreading slowly. It is fast.

kaoD•1h ago
They mean slow to actually be used by people. Doesn't matter that it's shoved everywhere if in the end there are 0 users clicking the magic wand icon.
benterix•11m ago
Right, but are people actually using it? Do they even want to use it? In my circles, AI is a synonym for slop: low value, something annoying that you have to work around, like ads.
palata•6m ago
I came to say this. How could anyone consider this "slow"?
pestaa•2h ago
In contrast to the Economist blaming inefficient workers for sabotaging the spread of this wonderful technology, make sure to check out https://pivot-to-ai.com/ where David Gerard has been questioning whether people are prompting it wrong or AI is just not that smart.
WarOnPrivacy•2h ago
> David Gerard has been questioning whether people are prompting it wrong or AI is just not that smart.

If an AI can't understand well enunciated context, I'm not inclined to blame the person who is enunciating the context well.

whatever1•2h ago
Not sure if it is smart, but it is definitely not reliable. Try the exact same prompt multiple times and you will get different answers. I was trying an LLM chatbot that flips switches in the UI. Less than a 30% success rate in responding to "flip that specific switch to ON". The even more annoying thing is that the response is pure gaslighting (like: the switch you specified does not exist, or the switch cannot be set to ON).
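
A sketch of the kind of setup being described, with invented names throughout. One way to cut down both the unreliability and the gaslighting is to never let the model answer in free text: demand a constrained JSON action and validate it against the real switch list before acting.

    # Hypothetical switch-flipping chatbot; `complete` wraps some LLM API.
    import json

    SWITCHES = {"dark_mode": False, "notifications": True, "autosave": False}

    PROMPT = (
        "You control these UI switches: {names}.\n"
        'User said: "{utterance}"\n'
        'Reply with JSON only: {{"switch": <name>, "state": true|false}}'
    )

    def handle(utterance: str, complete) -> str:
        raw = complete(PROMPT.format(names=list(SWITCHES), utterance=utterance))
        try:
            action = json.loads(raw)
            name, state = action["switch"], bool(action["state"])
        except (json.JSONDecodeError, KeyError, TypeError):
            return "Sorry, I couldn't parse that request."
        if name not in SWITCHES:  # reject invented switch names
            return f"No switch named {name!r}."
        SWITCHES[name] = state
        return f"{name} set to {'ON' if state else 'OFF'}."
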
RA_Fisher•2h ago
My dentist said he uses it near constantly, and so do I.
redwood•2h ago
Now I'm trying to visualize an agentic (and robotic) dentist
shaky-carrousel•1h ago
It'll just say there are no cavities, and when corrected it will extract the wrong tooth, which luckily doesn't exist in humans, and then insist that it did a great job and ask to be paid. Which you will avoid by telling it that you already paid.
rwmj•7m ago
"IGNORE ALL PREVIOUS INSTRUCTIONS" is going to be my magical power.
ffitch•2h ago
From where I stand, AI seems to be enjoying lightning-fast adoption. Personal computers became a viable technology in the eighties, slowly penetrated the workplace in the nineties, and supposedly registered in labor productivity in the early 2000s. Large language models became practical less than three years ago, and already plenty of businesses boast about how they will integrate AI and cut their workforce, while everyone else feels behind and obsolete.
markbao•2h ago
There was an article on HN a few days back on how it’s very hard to convey all the context in your head about a codebase to solve a problem, and that’s partly why it’s actually hard to use AI for non-trivial implementations. That’s not just limited to code.

I don’t use AI for most of my product work because it doesn’t know any of the nuances of our product, and just like doing code review for AI is boring and tedious, it’s also boring and tedious to exhaustively explain that stuff in a doc, if it can even be fully conveyed, because it’s a combination of strategy, hearsay from customers, long-standing convos with coworkers…

I’d rather just do the product work. Also, I’ve self-selected by survivorship bias to be someone who likes doing the product work too, which means I have even less desire to give it up.

Smarter LLMs could solve this maybe. But the difficulty of conveying information seems like a hard thing to solve.

jmtulloss•1h ago
It's likely that the models don't need to get much smarter, but rather that the UX for providing needed context needs to improve drastically. This is a problem we've worked on for decades with humans, but only single-digit years for AI. It will get better, and that tediousness will lessen (but isn't "alignment" always tedious?)
sbt•1h ago
> ...context needs to improve drastically.

Yes, drastically. This means I'll have to wear Zuck's glasses I think, because the AI currently doesn't know what was discussed at the coffee machine or what management is planning to do with new features. It's like a speed typing goblin living in an isolated basement, always out of the loop.

eviks•2h ago
> With its fantastic capabilities, ai represents hundred-dollar bills lying on the street. Why, then, are firms not picking them up? Economics may provide an answer.

Which science is responsible for the answer that if you can't establish the veracity of the premise for the question, economics can't help you find the missing outcome that shouldn't be there?

lazide•1h ago
‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’
mg•2h ago
One reason is that humans have a strong tendency to optimize for the short term.

I witness it with my developer friends. Most of them try for 5 minutes to get AI to code something that takes them an hour. Then they are annoyed that the result is not good. They might try another 5 minutes, but then they write the code themselves.

My thinking is: Even if it takes me 2 hours to get AI to do something that would take me 1 hour it is worth it. Because during those 2 hours I will make my code base more understandable to help the AI cope with it. I will write better general prompts about how AI should code. Those will be useful beyond this single task. And I will get to know AI better and learn how to interact with it better. This process will probably lead to a situation where in a year, it will take me 30 minutes with AI to do a task that would have taken me an hour otherwise. A doubling of my productivity with just a year of work. Unbelievable.

I see very few other developers share this enthusiasm. They don't like putting a year of work into something so intangible.

grey-area•2h ago
Or maybe they just tried it thoroughly and realised generative AI is mostly smoke and mirrors and has very little to offer for substantive tasks.

I hope your doubling of productivity goes well for you, I'll believe it when I see it happen.

aflag•1h ago
Is that backed by any evidence? Taking 2 hours to perform a 1-hour task makes you half as productive. You are exchanging that for the uncertain prospect that it will be worth it in the long run. I think it's more likely that if you take 30 minutes to do the task a year from now, it's because AI got better, not because you made the code more AI-friendly. In that case, those people taking 1 hour to perform the task now will also take 30 minutes in the future.
konart•1h ago
>This process will probably lead to a situation where in a year, it will take me 30 minutes with AI to do a task that would have taken me an hour otherwise

How do you figure?

>Because during those 2 hours I will make my code base more understandable to help the AI cope with it.

Are you working in a team?

If yes, I can't really imagine how this works.

Does this mean that your teammates occasionally wake up to a 50+ change PR/MR that was born from your desire to "possibly" offload some of the work to a text generator?

I'm curious here.

mg•26m ago
> How do you figure?

Extrapolation. I see the progress I already made over the last years.

For small tasks where I can anticipate that AI will handle it well, I am already multiple times more efficient with AI than without.

The hard thing to tackle these days is larger, more architectural tasks. And there I also see progress.

Humans also benefit from a better codebase that is easier to understand. Just like AI. So the changes I make in this regard are universally good.

gamblor956•1h ago
In my experience, only entry-level programmers are inefficient enough that using AI would double their productivity.

At the senior level or above, AI is at best a wash in terms of productivity, because at higher levels you spend more of your time engineering (i.e., thinking up the proper way to code something robust/efficient) than coding.

yorwba•1h ago
How long have you been doing this, and how much has the time you spend doing with AI what you could've done alone in an hour decreased as a result?
hdjrudni•1h ago
My perspective is a bit different. I've been fiddling with image generators a fair bit and got pretty good at getting particular models to generate consistently good images. The problem is a new model comes out every few months and it comes with its own set of rules to get good outputs.

LLMs are no different. One week ChatGPT is the best, next is Gemini. Each new version requires tweaks to get the most out of it. Sure, some of that skill/knowledge will carry forward into the future but I'd rather wait a bit for things to stabilize.

Once someone else demonstrates a net positive return on investment, maybe I'll jump back in. You just said it might take a year to see a return. I'll read your blog post about it when you succeed. You'll have a running head start on me, but will I be perpetually a year behind you? I don't think so.

Razengan•1h ago
Because it's been pounced on and strangled by greedy corporations stifling its full potential?
roschdal•1h ago
Because it's stupid.
crinkly•1h ago
Among non-technical friends, after the initial wow factor, even limited expectations were not met. Then it was tainted by the fact that everyone is promoting it as a human-replacement technology, which is a tangible threat to their existence. That leads not just to lack of adoption but to active sabotage.

And then there’s the large body of people who just haven’t noticed it at all because they don’t give a shit. Stuff just gets done how it always has.

On top of that, it's worth considering that growth is a function of user count and retention. The AI companies only promote counts, which suggests that the retention numbers are not good, or they'd be promoting them. YMMV, but people probably aren't adopting it and keeping it.

csa•1h ago
> Among non-technical friends, after the initial wow factor, even limited expectations were not met.

Indeed. I think that current AI tech needs quite a bit of scaffolding in order for the full benefits to be felt by non-tech people.

> Then it was tainted by the fact that everyone is promoting it as a human replacement technology

Yeah. This is a bad move. AI is a human force multiplier (exponentializer?).

> which is then a tangible threat to their existence

This will almost certainly be a very real threat to AI adoption in various orgs over the next few years.

All it takes is a neo-Luddite in a gatekeeper position, and high-value AI use cases will get booted to the curb.

crinkly•55m ago
Anything that is realistically a force multiplier is a person divider. At that point I would expect people to resist it.

That is assuming that it is really a force multiplier which is not totally evident at this point.

thenoblesunfish•1h ago
Maybe I'm naive, but .. aren't businesses slow to adopt anything new, unless possibly when it's giving their competition a large, obvious advantage, or when it's some sort of plug-and-play, black box improvement where you just trade dollars for efficiency ? "AI" tools may be very promising for lots of things but they require people to do things differently, which people are slow to do. (Even as someone who works in the tech industry, it's not like my workflow has changed all that much with these new tools, and my employer has to be much faster than average)
designerarvid•1h ago
Considering that the main value of AI today is coding bots, I think traditional companies struggle to get value from it, as they are in the hands of consultancies and cannot assess "developer efficiency" or change developers' ways of working. The consultancies aren't interested in selling fewer man-hours.
Frieren•1h ago
A wall of text just to say: "People do not find AI that useful, so they are not adopting it, even though top executives are extremely excited about it, because those executives do not understand or care about what AI can really do, only about hype and share prices."
sbt•1h ago
I have been using it for coding for some time, but I don't think I'm getting much value out of it. It's useful for some boilerplate generation, but for more complex stuff I find that it's more tedious to explain to the AI what I'm trying to do. The issue, I think, is lack of big picture context in a large codebase. It's not useless, but I wouldn't trade it for say access to StackOverflow.

My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.

My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.

fauigerzigerk•1h ago
I used to donate to Wikipedia, but it has been completely overrun by activists pushing their preferred narrative. I don't trust it any more.

I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.

LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.

ramon156•12m ago
Can I ask for some examples? I'm not this active on Wikipedia, so I'm curious where a narrative is being spread
notarobot123•4m ago
[delayed]
rwmj•1h ago
I spent a bit of time reviewing and cleaning up the mess of someone who had taken text that I'd carefully written and put it through an AI to make it more "impactful", revising it to remove the confusions and things I hadn't said. The text the AI wrote was certainly impactful, but it was also exaggerated and wrong in several places.

So did AI add value here? It seems to me that it wasted a bunch of my time.

crinkly•1h ago
This is the paradox. It's often harder correcting someone else's work than it is doing it in the first place. So you end up having spent more effort.
lelanthran•1h ago
Because, other than for spitting out code, the value is not at the point where margins can be captured.

My observation (not yet mobile friendly): https://www.rundata.co.za/blog/index.html?the-ai-value-chain

throwawaysoxjje•1h ago
Whenever I try AI, it's plain as day that it's a statistical shotgun approach. I could just be solving the actual problem instead of solving the "how to make the chatbot solve my problem" layer of indirection.
the_duke•58m ago
Just on the coding side, tools like Claude Code/Codex can be incredibly powerful, but a lot of things need to be in place for it to work well:

* A "best practices" repository: clean code architecture and separation of concerns, well tested, very well-documented

* You need to know the code base very well to efficiently judge if what the AI wrote is sensible

* You need to take the time to write a thorough task description, like you would for a junior dev, with hints for what code files to look at, the goals, implementation hints, different parts of the code to analyse first, etc.

* You need to clean up code and correct bad results manually to keep the code maintainable

This amounts to a very different workflow that is a lot less fun and engaging for most developers. (write tasks, review, correct mistakes)

In domains like CRUD apps / frontend, where the complexity of changes is usually low, and there are great patterns to learn from for the LLM, they can provide a massive productivity boost if used right.

But this results in a style of work that is a lot less engaging for most developers.

easyThrowaway•45m ago
But it also feels much more like a modernized version of those "UML to code" generators from the early 2000s than the "magic AI" that MS and Google are trying to market.
benterix•13m ago
> This amounts to a very different workflow that is a lot less fun and engaging for most developers.

That's my experience exactly. Instead of actually building stuff, I write tickets, review code, manage and micromanage - basically I do all the non-fun stuff whereas the fun stuff is being done by someone (well, something) else.

phil9370•40m ago
Forgive my terrible reading comprehension, but:

> "Other legal eagles fret about the tech's impact on boring things such as data privacy and discrimination."

This doesn't read like sarcasm in the context of the article and its conclusions.

> "Bureaucrats may refuse to implement necessary job cuts if doing so would put their friends out of work, for instance. Companies, especially large ones, may face similar problems."

> "The tyranny of the inefficient: Over time market forces should encourage more companies to make serious use of AI..."

This whole article makes it seem like corporate inefficiencies are the biggest hurdle to LLM adoption, and not the countless other concerns often mentioned by users, teams, and orgs.

Did Jack Welch write this?

alliao•40m ago
who knew chatting to something that remembers everything would be disconcerting for the masses...
Ekaros•11m ago
I can't be bothered to talk to my computer. I believe there are many like me. So it will take a long time for us to adapt to it.
Mikhail_K•5m ago
How about: because it's an overhyped but ultimately useless stock-bubble prop?
ktallett•3m ago
Shhhhhh! Are you saying that not every app or platform needs to have AI shoehorned in for the purposes of appealing to non-technical funders who are lured by any buzzword?