Also, when you use Google via an alternative frontend like Startpage, you get a slightly better version, without the crap, than the one currently served at google.com. If you use Kagi, which most of the time only shows results from the Google search index, it becomes the best-quality Google ever had.
I believe it is really only the "frontend" that is responsible for the current quality, and that this is very much an intentional choice. If Google wanted to be better, it could choose to be tomorrow. Actually, it wouldn't even need to wait, because it already is better today, just not on a modern browser at google.com.
Or maybe it's more about refusing to admit that executives are out of touch with concrete reality and are just blindly chasing trends instead.
The entire MO of big tech is trying to create a monopoly via the software equivalent of dumping (which is illegal in the US [1], but not for software, because reasons), dominating market share, and then jacking effective pricing wayyyyy up. And in this case big tech companies are dumping absurd amounts of money into LLMs, getting absurd funding, and then providing them for free or next to free. If a person has any foresight whatsoever, it's akin to a rusting van outside an elementary school, with blacked-out windows and some paint scrawled on it: 'FREE ICECREAM.'
[1] - https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Unite...
Literally every shitty corporate behaviour is amplified by this technology fad.
* Determining what is happening in a scene/video
* Translating subtitles into very specific local slang
* Summarizing scripts
* Estimating how well a new show will do with a given audience
* Filling gaps in the metadata provided by publishers, such as genres, topics, themes
* Finding the most "viral" or "interesting" moments in a video (combo of LLM and "traditional" ML)
There's much more, but I think the general trend here is not "chatbots" or "fixing code", it's automating stuff that we used armies of people to do. And as we progress, we find that we can do better than humans at a fraction of the cost.
I've worked at Apple, in finance, in consumer goods... everywhere is just terrible. Music/video streaming has been the closest thing I could find to actually being valuable, or at least not making the world worse.
I'd love to work at an NGO or something, but I'm honestly not that eager to lose 70% of my salary to do so. And I can't work in pure research because I don't have a PhD.
What industry do you work in, if you don't mind me asking?
Also, I only read the comment above; it's you who can judge what you contribute to and what you find fair. I just wish there were a mandatory "code of conduct" for engineers. The way AI is reshaping the field, I could imagine it becoming more like medicine or law, where that would be possible.
I work in IoT telemetrics. The company is rumored to be taking on military contracts at some future point; that would be my exit.
Can you elaborate on this point?
Most stuff is obvious: nobody needs to tell you what segment of society is drawn to soap operas or action movies, for example. But there's plenty of room for nuance in some areas.
This doesn't guarantee that it actually becomes a successful movie or show, though. That's a different project and, frankly, a lot harder. Things like which actors, which writers, which directors, which studio are involved, and how much budget the show has... it feels more like Moneyball but with more intangible variables.
When I see normies use it, it's to make selfies with celebrities.
In 5-10 years AI will be everywhere. A massive inequality creator.
The divide will be between those who know how to use it and can afford the best tools, and everyone else.
The biggest danger is dependency on AI. I really see people becoming dumber and dumber as they outsource more basic cognitive functions and decisions to AI.
And businesses will use it like any other tool: to strengthen their monopolies and extract more and more value out of fewer and fewer resources.
That is possible, even likely. But AI can also decrease inequality. I'm thinking of how rich people and companies spend millions if not hundreds of millions on legal fees which keep them out of prison. But me, I can't afford a lawyer. Heck, I can't even afford a doctor. I can't afford Stanford, Yale, or Harvard.
But now I can ask legal advice from AI, which levels that playing field. Everybody who has a computer or smartphone and internet-access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI for handling all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction we are going I believe.
The "moat" in front of AI is not wide nor deep because AI by its very nature is designed to be easy to use. Just talk to it.
There is also lots of competition in AI, which should keep prices low.
The root cause of inequality is corruption. AI could help reveal that and advise people how to fight it, making the world a better, more equal place.
At least lawyers can lose their bar license.
We had a discussion in a group chat with some friends about some random sports stuff, and one of my friends used ChatGPT to ask for a fact about a random thing. It was completely wrong, but sounded so real. All you had to do was go on Wikipedia or on the website of the sports entity we were discussing to see the real fact. Now, considering that it hallucinated random facts that are right there on Wikipedia and on the website of an entity, what are the chances that the legal advice you get will be real and not some random hallucination?
AI is just a really good bullshitter. Sometimes you want a bullshitter, and sometimes you need to be a bullshitter. But when your wealth is at risk due to lawsuits, or you're risking going to prison, you want something rock solid to back your case, and endless mounds of bullshit around you are not what you want. Bullshit is something you only pull out when you're definitely guilty and need to fight against all the facts, and even better than bullshit in those cases is finding cases similar to yours or obscure laws that can serve as a loophole. And AI, instead of pulling out real cases, will bullshit against you with fake cases.
For things like code, where a large bulk of some areas are based on general feels and vibes, yeah, it's fine. It's good for general front end development. But I wouldn't trust it for anything requiring accuracy, like scientific applications or OS level code.
I believe this is a core issue that needs to be addressed. I believe companies will need tools to make their data "AI ready" beyond things like RAG. I believe there needs to be a bridge between companies' data lakes and the LLM (or GenAI) systems. Instead of cutting people out of the loop (which a lot of systems seem to be attempting), I believe we need ways to expose the data in ways that allow rank-and-file employees to deploy the data effectively. Instead of threatening to replace the employees, which leads them to be intransigent about adoption, we should focus on empowering employees to use and shape the data.
Very interesting to see the Economist being so bullish on AI though.
They went big on Cryptocurrency back in the day as well.
In fact, giving RBAC functionality to middle managers would be a key component of any strategy for AI deployment. You want traceability/auditability. Give the middle managers charts, logs, visibility into who is using what data with which LLMs. Give them levers to grant and deny access. Give them metrics on outcomes from the use of the data. This could even make legal happy.
This missing layer will exist; it's just a matter of time.
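As a rough illustration (purely hypothetical, not an existing product or API): the layer could be as simple as a policy check plus an append-only audit log sitting between the data lake and the LLM. All the names below are invented for the sketch.

```python
# Hypothetical sketch of an access-check + audit layer between a data lake
# and an LLM. Nothing here refers to a real product; names are made up.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataPolicy:
    dataset: str
    allowed_roles: set[str]
    allowed_models: set[str]

@dataclass
class AuditEntry:
    user: str
    role: str
    dataset: str
    model: str
    granted: bool
    timestamp: str

@dataclass
class AccessGateway:
    policies: dict[str, DataPolicy]
    audit_log: list[AuditEntry] = field(default_factory=list)

    def request(self, user: str, role: str, dataset: str, model: str) -> bool:
        """Check the policy and record every request, granted or not."""
        policy = self.policies.get(dataset)
        granted = bool(policy and role in policy.allowed_roles
                       and model in policy.allowed_models)
        self.audit_log.append(AuditEntry(
            user, role, dataset, model, granted,
            datetime.now(timezone.utc).isoformat()))
        return granted

gateway = AccessGateway(policies={
    "sales_2024": DataPolicy("sales_2024", {"analyst", "manager"}, {"gpt-4o"}),
})
print(gateway.request("alice", "analyst", "sales_2024", "gpt-4o"))  # True, logged
print(gateway.request("bob", "intern", "sales_2024", "gpt-4o"))     # False, logged
```

The charts and metrics for managers would then just be views over that audit log.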
not sufficiently useful
not sufficiently trustworthy.
It is my ongoing experience that AI + My Oversight requires more time than not using AI. Sometimes AI can answer slightly complex things in a helpful way. But for most of the integration troubleshooting I do, AI guidance varies between no help at all and fully wasting my time.
Conversely, I support folks who have the complete opposite experience. AI is of great benefit to them and has hugely increased their productivity.
Both our experiences are valid and representative.
Ask it about a torque spec for your car? Yup, wrong. Ask it to provide sources? Less wrong, but still wrong. It told me my viscous fan has a different thread than it actually has. Had I listened, I would've stripped the threads.
My car is old, well documented and widely distributed.
Doesn't matter if it's Claude or ChatGPT. Don't get me started on code. I care about things being correct and right.
At this point I literally spend 90% of my time fixing other teams' AI 'issues' at a Fortune 50.
1. Piss-poor at the brainstorming and planning phase. For the compression thing I got one halfway decent idea, and it's one I already planned on using.
2. Even worse at generating a usable project structure or high-level API/skeleton. The code is unusable because it's not just subtly wrong; it doesn't match any cohesive mental model, meaning the first step is building that model and then figuring out how to ram-rod that solution into your model.
3. Really not great at generating APIs/skeletons matching your mental model. The context is too large, and performance drops.
4. Terrible at filling in the details for any particular method. It'll have subtle mistakes like handling carryover data at the end of a loop, but handling it unconditionally instead of only when it hasn't already been handled (see the sketch after this list). Everything type-checks, and if it doesn't, I can't rely on the AI to give a correct result instead of the easiest way to silence the compiler.
5. Very bad at incorporating invariants (lifetimes, allocation patterns, etc.) into its code when I ask it to make even minor tweaks, even when explicitly prompted to consider such-and-such edge case.
6. Blatantly wrong when suggesting code improvements, usually breaking things, and in a way you can't easily paper over the issue to create something working "from" the AI code.
Etc. It just wasn't well suited to any of those tasks. On my end, the real work is deeply understanding the problem, deriving the only possible conclusions, banging that into code, and then doing a pass or three cleaning up the semicolon orgasm from the page. AI is sometimes helpful in that last phase, but I'm certain it's not useful for the rest yet.
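To make point 4 concrete, here's a hypothetical sketch of the carryover pattern described (not the actual code in question): the remainder of a chunked read should be flushed exactly once, after the loop, but the generated version also "handles" it on every iteration.

```python
WORD = 4  # process data in 4-byte words, carrying partial words between chunks

def sum_words_buggy(chunks):
    total, carry = 0, b""
    for chunk in chunks:
        data = carry + chunk
        cut = len(data) - len(data) % WORD
        total += sum(data[:cut])
        carry = data[cut:]
        total += sum(carry)        # "handles" the remainder every iteration...
    return total + sum(carry)      # ...and again at the end: double counted

def sum_words_fixed(chunks):
    total, carry = 0, b""
    for chunk in chunks:
        data = carry + chunk
        cut = len(data) - len(data) % WORD
        total += sum(data[:cut])
        carry = data[cut:]
    return total + sum(carry)      # remainder handled exactly once, at the end

assert sum_words_fixed([b"\x01\x02\x03", b"\x04\x05"]) == 15
```

Everything in the buggy version compiles and type-checks fine; the defect only shows up in the totals.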
My current view is that the difference in viewpoints stems from a combination of the tasks being completed (certain boilerplate automation crap I've definitely leaned into AI to handle, maybe that's all some devs work on?) and current skill progression (I've interviewed enough people to know that the work I'm describing as trivial doesn't come naturally to everyone yet, so it's tempting to say that it's you holding your compiler wrong rather than me holding the AI wrong).
Am I wrong? Should AI be able to help with those things? Is it more than a ~5% boost?
If I digress a bit, I wonder what it is with some hypes - in tech or broader society - where something reaches a critical mass of publicity and suddenly a surprisingly large portion of people become just convinced, regardless of whether they have any actual knowledge of the subject. Like, anecdotally, watching completely unrelated sports streams and there's a really sincere off-topic comment that AI will change everything now and "society isn't ready" or similar. But I guess it shouldn't be surprising: when supposedly intelligent tech gurus and media folks are eating everything up and start peddling the narrative to the general public, why shouldn't one believe it? It's like a confluence of incentives that becomes a positive feedback loop until it all blows up.
or in the case of AI, the majority of people are “wow this is useful and helpful” and then HN is like “it didn’t 1 shot answer from the first prompt I gave it so it’s useless”.
You only need to read every discussion on HN about AI. The majority here who are against AI are also the same people who really have no idea how to use it.
So in the end there is no conversation to engage with because HN has tunnel vision. “Elon bad” “ai bad” “science bad” “anything not aligned with my political view bad”
A good problem to throw at AI, I thought. I handed the tools to a SOTA model and asked it to generate me some files. Garbage. Some edits to the prompts and I still get garbage. Okay, that's pretty hard to generate a binary with complex internal structure directly. Let's ask it to tell me how to make the toolchain generate these for me. It gives me back all sorts of CLI examples. None work. I keep telling it what output I am getting and how it differs from what I want. Over and over it fails.
I finally reach out to somebody on the toolchain team and they tell me how to do it. Great, now I can generate some valid files. Let's try to generate some invalid ones to test error paths. I've got a file. I've got the spec. I ask the LLM to modify the file to break the spec in a single way each time and tell me which part of the spec it broke each time. Doesn't work. Okay. I ask it to write me a python program that does this. Works a little bit, but not consistently and I need to inspect each output carefully. Finally I throw my files into a coverage guided fuzzer corpus and over a short period of time it's generated inputs that have excellent branch coverage for me.
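(For anyone curious, the corpus-seeded, coverage-guided setup that finally worked needs only a tiny harness. This is a generic sketch using Atheris; parse_binary_format is a made-up stand-in for whatever parser the real toolchain exercises.)

```python
# Hypothetical harness for a coverage-guided fuzz run seeded from a corpus of
# valid files. parse_binary_format stands in for the real parser under test.
import sys
import atheris

def parse_binary_format(data: bytes) -> None:
    # Placeholder; replace with the actual parser entry point.
    if len(data) < 4 or data[:4] != b"HDR0":
        raise ValueError("bad magic")

def test_one_input(data: bytes) -> None:
    try:
        parse_binary_format(data)
    except ValueError:
        pass  # malformed input is expected; crashes and hangs are the interesting part

if __name__ == "__main__":
    # e.g. python fuzz_harness.py corpus_dir/   (corpus_dir holds the valid seed files)
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()
```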
What would "effective" have looked like to you in this situation?
This is the classic example of “I want it to do everything. But it didn’t do what I wanted. So obviously it’s not helpful.”
It doesn’t solve /all/ problems. And some models are better than others at certain tasks. You see people talk about PRDs and they say “I got Claude to create a PRD and it sucked” but you sit them down with o3 and generate the same PRD and they are like “oh wow this is actually pretty decent”.
But it's difficult to help over HN. As a sort of related example: back in May I had a requirement to ingest files from a new POS system we didn't currently support. The exports we get are CSV, but the first character of each line determines the type of line to be parsed and how many commas that data line will contain.
I used o3 and explained everything I could about how the file worked and how it should be parsed into a generic list, etc., and got it to generate a basic PRD with steps and assertions along the way.
I then fed this into Cursor using Claude Sonnet 4, along with the CSV files, asking it to look at the files and the PRD and tell me if there was anything that didn't make sense. Then I asked it to begin implementing the steps one by one, letting me check before moving on to the next step. A couple of times it misunderstood and did things slightly wrong, but I just corrected it or asked Claude to correct it. But it essentially wrote code. Wrote tests. Verified. I verified. It moved on.
Typically in the past these tasks take a few days to implement and roll out. This whole thing took about 1 hour. The code is in the style of the other parsers with the exception that it optimised some parts and despite being a bit more complicated runs faster than some of our older parsers.
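For context, the parsing shape being described is roughly a first-character dispatch like the sketch below. The record codes and field counts here are invented for illustration; the actual POS format differs.

```python
import csv
import io

# Hypothetical dispatch table: first character of a line -> (record type, field count).
LINE_TYPES = {
    "H": ("header", 3),   # e.g. H,store_id,business_date
    "S": ("sale", 5),     # e.g. S,sku,qty,unit_price,tax_code
    "T": ("total", 2),    # e.g. T,grand_total
}

def parse_export(text: str) -> list[dict]:
    records = []
    for row in csv.reader(io.StringIO(text)):
        if not row or not row[0]:
            continue
        kind, expected = LINE_TYPES.get(row[0][0], ("unknown", len(row)))
        if len(row) != expected:
            raise ValueError(f"{kind} line has {len(row)} fields, expected {expected}")
        records.append({"type": kind, "fields": row[1:]})
    return records
```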
While most of my usage is around programming, we also use AI for marketing and customer leads, and I have scheduled tasks that give me summaries of tickets from the support system so I know what should be prioritised for the day. So even though AI doesn't solve all programming issues, we get value in almost all aspects of the business.
It’s like you’re asking a junior to go build a house. So he goes off and uses bricks to create 4 walls. And you turn around and get angry because he didn’t use wood. So he uses wood and you get angry cos he didn’t add windows, roof, door.
I’m sorry that you picked up a career where you struggle to use new tools and technologies.
Unbelievable.
But sometimes good data is also bad data. HIPAA compliance audit guides are full of questions that are appropriate for a massive medical entity and fully impossible to answer for the much more common small medical practice.
No AI will be trained to know the latter is true. I can say that because every HIPAA audit guide assumes that working patient data is stored on practice-owned hardware - which it isn't. Third parties handle that for small practices.
For small med, HIPAA audit guides are 100 irrelevant questions that require fine details that don't exist.
I predict that AI won't be able to overcome the absurdities baked into HIPAA compliance. It can't help where help is needed.
But past all that, there is one particularly painful issue with AI - deployment.
When AI isn't asked for, it is in the way. It is an obstacle that needs to be removed. That might not be awful if MS, Google, etc. didn't continually craft methods to make removing it as impossible as possible. It smacks of disdain for end users.
If this one last paragraph wasn't endlessly true, AI evangelists wouldn't have so many premade enemies to face - and there would be less friction all around.
It's not meeting expectations, probably because of this aggressive advertising. But I would in no way say that it's spreading slowly. It is fast.
It is shoved everywhere but nobody really needs it.
Being better than Atlassian's search isn't a difficult hurdle to clear, but it's one of the only areas where I've seen a noticeable improvement caused by an "AI" product. I'm not sure the search actually uses AI or just happens to be a decent cross-product search tool; there's a separate interface to chat with it and have it summarize things, so the search might not really be using the AI.
The thrashing wails of people who've spent billions on something that they should have done more due diligence on and don't want to write off their investment.
And what does this mean ? "Around 62% of ChatGPT's social media traffic comes via YouTube."
The Economist should look for businesses that have actually used AI to get measurable benefits, or tried and failed to do so, rather than falling back on niche studies of organizational behavior.
If an AI can't understand well enunciated context, I'm not inclined to blame the person who is enunciating the context well.
I don’t use AI for most of my product work because it doesn’t know any of the nuances of our product, and just like doing code review for AI is boring and tedious, it’s also boring and tedious to exhaustively explain that stuff in a doc, if it can even be fully conveyed, because it’s a combination of strategy, hearsay from customers, long-standing convos with coworkers…
I’d rather just do the product work. Also, I’ve self-selected by survivorship bias to be someone who likes doing the product work too, which means I have even less desire to give it up.
Smarter LLMs could solve this maybe. But the difficulty of conveying information seems like a hard thing to solve.
Yes, drastically. This means I'll have to wear Zuck's glasses I think, because the AI currently doesn't know what was discussed at the coffee machine or what management is planning to do with new features. It's like a speed typing goblin living in an isolated basement, always out of the loop.
LLMs won't make it on their own because there isn't enough data with their concerns in mind for them to learn from, just a few happy accidents where an LLM can excel in that specific code base.
Agile processes only work when the developers can be self-guiding, which LLMs aren't.
Which science is responsible for the answer that if you can't establish the veracity of the premise for the question, economics can't help you find the missing outcome that shouldn't be there?
I witness it with my developer friends. Most of them try for 5 minutes to get AI to code something that takes them an hour. Then they are annoyed that the result is not good. They might try another 5 minutes, but then they write the code themselves.
My thinking is: Even if it takes me 2 hours to get AI to do something that would take me 1 hour it is worth it. Because during those 2 hours I will make my code base more understandable to help the AI cope with it. I will write better general prompts about how AI should code. Those will be useful beyond this single task. And I will get to know AI better and learn how to interact with it better. This process will probably lead to a situation where in a year, it will take me 30 minutes with AI to do a task that would have taken me an hour otherwise. A doubling of my productivity with just a year of work. Unbelievable.
I see very few other developers share this enthusiasm. They don't like putting a year of work into something so intangible.
I hope your doubling of productivity goes well for you, I'll believe it when I see it happen.
How do you figure?
>Because during those 2 hours I will make my code base more understandable to help the AI cope with it.
Are you working in a team?
If yes - I can't really imagine how does this work.
Does this mean that your teammates occasionally wake up to a 50+ change PR/MR that was born out of your desire to "possibly" offload some of the work to a text generator?
I'm curious here.
Extrapolation. I see the progress I already made over the last years.
For small tasks where I can anticipate that AI will handle it well, I am already multiple times more efficient with AI than without.
The hard thing to tackle these days is larger, more architectural tasks. And there I also see progress.
Humans also benefit from a better codebase that is easier to understand. Just like AI. So the changes I make in this regard are universally good.
At the senior level or above, AI is at best a wash in terms of productivity, because at higher levels you spend more of your time engineering (i.e., thinking up the proper way to code something robust/efficient) than coding.
LLMs are no different. One week ChatGPT is the best, next is Gemini. Each new version requires tweaks to get the most out of it. Sure, some of that skill/knowledge will carry forward into the future but I'd rather wait a bit for things to stabilize.
Once someone else demonstrates a net positive return on investment, maybe I'll jump back in. You just said it might take a year to see a return. I'll read your blog post about it when you succeed. You'll have a running head start on me, but will I be perpetually a year behind you? I don't think so.
And then there’s the large body of people who just haven’t noticed it at all because they don’t give a shit. Stuff just gets done how it always has.
On top of that, it's worth considering that growth is a function of user count and retention. The AI companies only promote the count, which suggests that the retention numbers are not good, or they'd be promoting those too. YMMV, but people probably aren't adopting it and keeping it.
Indeed. I think that current AI tech needs quite a bit of scaffolding in order for the full benefits to be felt by non-tech people.
> Then it was tainted by the fact that everyone is promoting it as a human replacement technology
Yeah. This is a bad move. AI is a human force multiplier (exponentializer?).
> which is then a tangible threat to their existence
This will almost certainly be a very real threat to AI adoption in various orgs over the next few years.
All it takes is a neo-Luddite in a gatekeeper position, and high-value AI use cases will get booted to the curb.
That is assuming that it is really a force multiplier which is not totally evident at this point.
I really think that this is a lack of imagination at this point for people who actually think this way.
There are two easy wins for almost anyone:
1. Optional tasks that add value but never reach the top of the priority list.
2. Writing routine communication and documentation.
> Anything that is realistically a force multiplier is a person divider. At that point I would expect people to resist it.
The CEOs who are using AI as an excuse to reduce head count are not helping this narrative.
AI will not solve their problems for large staff cuts. It’s just an unrelated excuse to walk back from over hiring in the past (esp. during Covid).
That said, I think that framing AI as a “person divider” is baseless fear-mongering for most job categories.
If it's a multiplier you need to either increase the requested work to keep the same humans or reduce the humans needed if you keep the same workload. It's not straightforward which way each business will follow.
I guess what you’re saying is technically true while being somewhat misleading.
“Increase the requested work” is one way of saying “reduce the amount of scutwork that needs to be done”. Personally, I’m ok having less scutwork. I’m also ok letting AI do optional scutwork that falls into the “nice to have” category (e.g., creating archival information).
On a personal level, I have used AI to automate a lot of required scutwork while freeing up my time to engage in higher added-value tasks. In terms of time, the biggest areas have been preliminary research, summaries, and writing drafts.
Additionally, in one specific use case that I can talk about, I have helped a medical billing office prioritize and organize their work based on estimated hourly value of items processed as well as difficulty (difficult stuff was prioritized for certain times of the day). This work had been eye-balled in the past, and could be done with a decent degree of accuracy with quite a bit of time, but AI did it with higher accuracy and almost no dedicated human time. The whole office appreciated the outcomes.
There are many wins like this that are available with AI, and I think many folks just haven’t found them yet.
Most of my social circle is non-technical. A lot of people have had a difficult time with work recently, for various reasons.
The global economic climate feels very precarious, politics is ugly, people feel powerless and afraid. AI tends to come up in the "state of the world" conversation.
It's destroying friends' decade old businesses in translation, copywriting and editing. It's completely upturned a lot of other jobs, I know a lot of teachers and academics, for example.
Corporate enthusiasm for AI is seen for what it actually is, a chance to cut head count.
I'm into AI, I get value out of it, but decision makers need to read the room a bit better. The vibe in 2025 is angry and scared.
I mean, it is the reason why the usual suspects push it so aggressively. The underclasses must always be pushed down.
It's mostly bullshit; in most areas LLMs cannot reliably replace humans. But they embrace it because the chance that it might undermine labor is very seductive.
This is just bad management. AI might be the buzzword of the day that comes out of their mouths, but the issue is bad management rather than AI.
Said another way, if it wasn’t AI, it would be something else.
Are all of the items in the long list of something elses in the past and in the future the actual villain here? I don’t think so.
> Then you get these types on HN calling people luddites for having strong opinions and anxieties as if it’s only ever about the technology itself and not the effect it has on actual people
Why are you blaming the tech rather than the source of the problem? Point your finger at the people who are creating the problems — hint, it’s the management and not tools.
AI is just a tool. Just like any tool, it can be used for good or for bad. It can be used skillfully or unskillfully.
Right now, we are certainly in a high variance stage of quality of implementation.
That said, I think AI will do a good job of shining a bright light on the pretenders… it will be much tougher for them to hide.
> in a cutthroat capitalist system.
Unchecked capitalism certainly can take some of the blame here. The problem is that the only system that is better than the current system is almost certainly some other form of capitalism (maybe with some humanity checks and balances).
> That’s exactly the sort of thing that brought the term “tech bro” into the limelight.
Thank you for this. I’m pretty sure that this is the first time I’ve been called a “tech bro” — certainly the first time directly. My friends and family will get a kick out of this, since I have routinely outed tech bros who are more style than substance.
I’m fairly certain that I don’t fall into that category.
It’s also worth noting that while our modern use of Luddite is simply “anti-technology”, there was a lot more going on there. The Napoleonic wars were savaging the economy and Luddism wasn’t just a reaction to the emergence of a new technology but even more the successful class warfare being waged by the upper class who were holding the line against weavers attempts to negotiate better terms and willing to deploy the army against the working class. High inflation and unemployment created a lot of discontent, and the machines bore the brunt of it because they were a way to strike back at the industrialists being both more exposed and a more acceptable target for anyone who wasn’t at the point of being willing to harm or kill a person.
Perhaps most relevant to HN is that the weavers had previously not joined together to bargain collectively. I can’t help but think that almost everyone who said “I’m too smart to need a union” during the longest run of high-paying jobs for nerds in history is going to regret that decision, especially after seeing how little loyalty the C-suite has after years of pretending otherwise.
I think there is a massive amount of work that's currently not being done, industry-wide. Everywhere I've worked has had something like 4-10X more work to do than staff to do it. The feature and bug backlogs just endlessly grow because there is no capacity to keep up. Companies should be able to adopt a force multiplier without losing staff: It could bring that factor down to 2-5X. The fact that layoffs are happening industry-wide shows that leadership might not even know what their workers could be doing.
Given that most of my recent experiences with the tier 1 versions of these folks are people in developing countries who can barely read off of a script, I don’t think that I will miss them.
Put any of the good ones (and there are some good ones) into tier 2 positions and give them a raise.
Thank you!
This breathlessly-positive performance art shows up all the time on HN, whether or not the article is even about AI. I can believe it when someone points out specific tasks that the tool is good at and why, but when someone shows up and extolls AI as the greatest human achievement since the steam engine and how it's going to change life as we know it, it just feels like they are performing for their manager or something.
My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.
My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.
I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.
LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.
> I'm currently unable to fetch data directly from YouTube due to a tool issue. This means I can't access the transcript, timestamps, or metadata from the video at this time.
I understand that this is unsatisfactory, but the only way to "prove" that the motivations of the people contributing to Wikipedia have shifted would be to run a systematic study for which I have neither the time nor the skills nor indeed the motivation.
Perhaps I should say that I am a politically centrist person whose main interests are outside of politics.
Yes, network effects and hyper scale produce perverse incentives. It sucks that Wikipedia can be gamed. Saying that, you'd need to be actively colluding with other contributors to maintain control.
Imagining that AI is somehow more neutral or resistant to influence is incredibly naive. Isn't it obvious that they can be "aligned" to favor the interests of whoever trains them?
The point is well taken. I just feel that at this point in time the reliance on Wikipedia as a source of objective truth is disproportionate and increasingly undeserved.
As I said, I don't think AI is a panacea at all. But the way in which LLMs can be influenced is different. It's more like bias in Google search. But I'm not naive enough to believe that this couldn't turn into a huge problem eventually.
Yeah, you can download the entirety of Wikipedia if you want to. What's the single point of failure?
(Most AI will simply find where Twitter disagrees with Wikipedia and spout ridiculous conspiracy junk.)
I personally didn't use it so much (meaning: writing content) because it always felt a bit over-engineered. From what I remember, the only possible entry point is writing questions that get upvoted. You're not even allowed to write comments or vote, and of course not allowed to answer questions. Maybe that's not correct, but that has always been my impression.
In general, Stack Exchange seems to be a great platform. I think "it's dying" has an unfortunate connotation. It's not like content just vanishes; it's just that the amount of new stuff is shrinking.
I stopped contributing when the question quality fell off a cliff. Existing contributors got annoyed by the low-effort questions, new users got annoyed because their questions got immediately closed, and it was no longer fun. There were a lot of discussions on meta about how to handle the situation, but I just left.
So admittedly things might have changed again, I do not really know much about the development in the last ten or so years.
If you're logged out it's just a list, but if you're logged in it'll grey out and put a checkmark next to the ones you have.
So makes sense that SO like users will use AI, not to mention they get the benefit of avoiding the neurotic moderator community at SO.
But yes, AI put the nail in its coffin. Sadly, ironically, AI trained off it. I mean AI quite literally stole from Stackoverflow and others and somehow got away with it.
That's why I don't really admire people like Sam Altman or the whack job doomer at Anthropic whatever his name is. They're crooks.
I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.
I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need saving.
I get more out of the LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write and suggest improvements, try to spot bugs, and so on (which I also find useful).
Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.
Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute, and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I need it to do, the less precise it tends to be. I achieve the most success by being more granular in what I ask of it.
My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.
Of course, you can also "vibe code" (what an awful terminology) and not inspect the output. I find it unacceptable in professional settings, where you are expected to release code with some minimum quality.
Yep but this is much less time than writing the code, compiling it, fixing compiler errors, writing tests, fixing the code, fixing the compilation, all that busy-work. LLMs make mistakes but with Gemini 2.5 Pro at least most of these are due to under-specification, and you get better at specification over time. It's like the LLM is a C compiler developer and you're writing the C spec; if you don't specify something clearly, it's undefined behaviour and there's no guarantee the LLM will implement it sensibly.
I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.
How many iterations does it normally take to get a feature correctly implemented? How much manual code cleanup do you do?
It's always the easy cop out for whoever wants to hype AI. I can preface it with "I'd go so far as to say", but that is just a silly cover for the actual meaning.
Properly reviewing code, if you are reviewing it meaningfully instead of just glancing through it, takes time. Writing good prompts that cover all the ground you need in terms of specificity, also takes time.
Are there gains in terms of speed? Yeah. Are they meaningful? Kind of.
If you do software engineering the way you learned you were supposed to do it long, long ago, the process actually works pretty well with LLMs.
That's the thing, I do. At work we keep numerous diagrams, dashboards, and design documents. It helps me and the other developers understand and have a good mental model of the system, but it does not help LLMs all that much. The LLMs won't understand our dashboards or diagrams. They could read the design documents, but that wouldn't keep them from making the mistakes they make when coding, and it definitely would not reduce my need to review the code they produce.
I said it before and I'll say it again: I find it unacceptable in a professional setting not to properly review the code LLMs produce, because I have seen the sort of errors they make (and I have access to the latest Claude and Gemini models, which I understand to be the top models as of now).
Are they useful? Yeah. Do they speed me up? Sort of, especially when I have to write a lot of boring code (as mentioned before, glue code for APIs, unit tests, scaffolding for new modules, etc). Are the productivity gains massive? Not really, due to the nature of how it generates output, and mainly due to the fact that writing code is only part of my responsibilities, and frequently not the one that takes up most of my time.
However, it doesn't mean AI will go away. AI is really useful. It can do a lot, actually. Adoption is slow because it's somehow not the most intuitive thing to use. I think that may have a lot to do with tooling and human communication style - or the way we use it.
Once people learn how to use it, I think it'll just become ubiquitous. I don't see it taking anyone's job. The doomers who like to say that are people pushing their own agenda, trolling, or explaining away mass layoffs that were happening BEFORE AI. The layoffs are a result of losing a tax credit for R&D, over-hiring, and the economy. Forgetting the tax thing for a moment, is anyone really surprised that companies over-hired?? I mean, come on. People BARELY do any work at all at large companies like Google, Apple, Amazon, etc. OK, that's not quite fair. Don't get me wrong, SOME people there do. They work their tails off and do great things. That's not all of the company's employees, though. So what do you expect is going to happen? Eventually the company prunes. They go and mass hire again years later, see who works out, and they prune again. This strategy is why hiring is broken. It's a horrible grind.
Sorry, back to AI adoption. AI is now seen by some caught in this grind as the "enemy." So that's another reason for slow adoption. A big one.
It does work, though. I can see how it'll help, and I think it's great. If you know how everything gets put together, then you can provide the instructions for it to work well. If you don't, then you're not going to get great results. Sorry, but that's how it is if you don't know how software is built, what good code looks like, AND how to "rub it the right way" - or, as people say, "prompt engineering."
I think for writing blog posts and getting info, it's easier. Though there are EXTREME dangers with it for other use cases. It can give incredibly dangerous medical advice. My wife is a psychiatrist and she's been keeping an eye on it, testing it, etc. To date, AI has done more to harm people than to help them in terms of mental health. It's also too inaccurate to use for mental health. So that field isn't adopting it so quickly, BUT they are trying and experimenting. It's just going to take some time, and rightfully so. They don't want to rush into using something that hasn't been tested and validated. That's an understaffed field though, so I'm sure they will love any productivity gain and help they can get.
All said, I don't know what "slow" means for adoption. It feels like it's progressing quickly.
So did AI add value here? It seems to me that it wasted a bunch of my time.
For boilerplate code we need an AI that is less creative and more secure and predictable. I have fun creating a system design with the right tables, and I have fun implementing logic and interaction design. I don't have much fun writing down DTOs and entities.
I would need an AI that can scan an image and just build me the right Lego bricks. We are just getting back to a machine that can do UML from less precise sources.
For me, I can now skip the step of handing something to a marketing person/editor who wants to make it more impactful, because I use AI up front to do that - including making sure it's still correct and says what I want it to say.
My observation (not yet mobile friendly): https://www.rundata.co.za/blog/index.html?the-ai-value-chain
* A "best practices" repository: clean code architecture and separation of concerns, well tested, very well-documented
* You need to know the code base very well to efficiently judge if what the AI wrote is sensible
* You need to take the time to write a thorough task description, like you would for a junior dev, with hints for which code files to look at, the goals, implementation hints, different parts of the code to analyse first, etc.
* You need to clean up code and correct bad results manually to keep the code maintainable
This amounts to a very different workflow that is a lot less fun and engaging for most developers. (write tasks, review, correct mistakes)
In domains like CRUD apps / frontend, where the complexity of changes is usually low, and there are great patterns to learn from for the LLM, they can provide a massive productivity boost if used right.
But this results in a style of work that is a lot less engaging for most developers.
It's hit or miss at the moment, but it's definitely way more than "UML code generators".
Typical prose in the late '90s/early '00s: designing the right UML meta-schema and UML diagram will generate bug-free source code for the program, enabling even non-programmers to create applications and business logic. Programs can check the UML diagram beforehand for logic errors, prove security, and more.
You couldn't point it at an existing codebase and get anything of value from it, or get it to review pull requests, or generate docs.
And even for what it was meant for, it didn't help you with the design and architecture. You still had to be the architect and tell the tool what to do vs the opposite with LLMs, where you tell it what you want but not how to shape it (in theory).
That's my experience exactly. Instead of actually building stuff, I write tickets, review code, manage and micromanage - basically I do all the non-fun stuff whereas the fun stuff is being done by someone (well, something) else.
However, I think the applicability goes beyond frontend (and to be fair, you said "like CRUD apps / frontend"). There are a lot of domains where patterns are clearly established and easily reproducible. For example, I had a basic XML doc type, and with a couple of simple prompts in Claude Code I was able to: add namespaces, add an XSD, add an XSL to style it into HTML, and add full unit tests. That isn't rocket science, but it isn't code I really want to write. In 5 minutes I leveled up my XML doc significantly with a few hundred lines of XSD/XSL/tests.
This example is a case where I happily pass the task to an LLM. It is a bit like eating my vegetables. I find almost no joy in writing schema docs or tests, even when I recognize the value in having them. The XSL to HTML is a nice added bonus that cost me nothing.
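For anyone who hasn't touched this stack in a while, exercising those generated artifacts takes only a few lines with lxml; the file names below are placeholders, not the actual project.

```python
# Rough sketch of validating a doc against a generated XSD and rendering it
# to HTML via a generated XSLT, using lxml. File names are placeholders.
from lxml import etree

doc = etree.parse("invoice.xml")

# Validate against the XSD
schema = etree.XMLSchema(etree.parse("invoice.xsd"))
if not schema.validate(doc):
    for error in schema.error_log:
        print(error.message)

# Render to HTML via the XSLT stylesheet
transform = etree.XSLT(etree.parse("invoice-to-html.xsl"))
html = transform(doc)
print(etree.tostring(html, pretty_print=True).decode())
```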
This doesn't read like sarcasm in the context of the article and its conclusions.
> "Bureaucrats may refuse to implement necessary job cuts if doing so would put their friends out of work, for instance. Companies, especially large ones, may face similar problems."
> "The tyranny of the inefficient: Over time market forces should encourage more companies to make serious use of AI..."
This whole article makes it seem like corporate inefficiencies are the biggest hurdle against LLM adoption, and not the countless other concerns often mentioned by users, teams, and orgs.
Did Jack Welch write this?
Using LLMs feels like the drawbacks of CLI, GUI and calculation by a human combined.
Thereby limiting the number of people who can experiment with it.
(1) Reduces the value that people place on others, thereby creating a more narcissistic society
(2) Takes people's livelihoods away
(3) Is another method by which big tech concentrates the most wealth in their hands and the west becomes even more of a big tech oligarchy.
Programmers tend to stay in bubbles as they always have and happily promote it, not really noticing how dangerous and destructive it is.
To accelerate mass AI adoption in workplaces, it may be necessary to expose the average worker to the risks and rewards of business ownership. However, it might be the case that the average worker simply doesn't want that risk.
If there's no risk, there can't be a reward. So I can't see a way AI adoption could be sped up.
As a manager you have to strike a balance between absolute productivity/efficiency and accountability diffusion: lose enough people to spread accountability around, and it ends up on your plate when things go wrong. "The AI f*cked it up" does not sound like as good an argument for why your chain of command messed things up as "Joe Doe made a mistake."
Also, AI agents don't make great office politics partners.
As much as AI is being portrayed as a human replacement, there are many social aspects of organizations that AI cannot replace. AI doesn't vote, AI doesn't get promotions for siding with different parties in office politics, and AI can't get fired.
I've found it works quite well as a variant of a Google search, but trying to exchange a shock absorber that had been ordered incorrectly by dealing with an AI assistant was quite a mediocre experience. It was able to ask "do you want to return your order" but couldn't understand that I had a left one instead of a right one. The hype seems to have run ahead of the reality.