Or maybe it's more about refusing to admit that executives are out of touch with concrete reality and are just blindly chasing trends instead.
The entire MO of big tech is trying to create a monopoly by the software equivalent of dumping (which is illegal in the US [1], but not for software, because reasons), market-share domination, and then jacking effective pricing wayyyyy up. And in this case big tech companies are dumping absurd amounts of money into LLMs, getting absurd funding, and then providing them for free or next to free. If a person has any foresight whatsoever, it's akin to a rusting van outside an elementary school, with blacked-out windows and some paint scrawled on it: 'FREE ICE CREAM.'
[1] - https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Unite...
Literally every shitty corporate behaviour is amplified by this technology fad.
* Determine what is happening in a scene/video
* Translating subtitles to very specific local slang
* Summarizing scripts
* Estimating how well a new show will do with a given audience
* Filling gaps in the metadata provided by publishers, such as genres, topics, themes
* Finding the most "viral" or "interesting" moments in a video (combo of LLM and "traditional" ML)
There's much more, but I think the general trend here is not "chatbots" or "fixing code", it's automating stuff that we used armies of people to do. And as we progress, we find that we can do better than humans at a fraction of the cost.
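To make the metadata example concrete, here is a minimal sketch, assuming a generic `complete(prompt)` helper that stands in for whichever LLM API is actually used; the field names and prompt are illustrative, not the real pipeline:

```python
import json

def fill_metadata_gaps(title, synopsis, known, complete):
    """Ask an LLM to propose values for missing catalogue fields.

    `complete` is a hypothetical stand-in for any LLM completion call;
    its output should still be validated/reviewed before being written
    back to the catalogue.
    """
    missing = [k for k in ("genres", "topics", "themes") if not known.get(k)]
    if not missing:
        return known
    prompt = (
        f"Title: {title}\nSynopsis: {synopsis}\n"
        f"Known metadata: {json.dumps(known)}\n"
        f"Return a JSON object containing only these missing keys: {missing}"
    )
    suggested = json.loads(complete(prompt))
    return {**known, **{k: suggested.get(k) for k in missing}}
```

The point is less the code than the shape of the work: a task that previously needed a person per title becomes a prompt, a parse, and a review step.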
I've worked at Apple, in finance, in consumer goods.. everywhere is just terrible. Music/Video streaming has been the closest thing I could find to actually being valuable, or at least not making the world worse.
I'd love to work at an NGO or something, but I'm honestly not that eager to lose 70% of my salary to do so. And I can't work in pure research because I don't have a PhD.
What industry do you work in, if you don't mind me asking?
when i see normies use it - it's to make selfies with celebrities.
in 5-10 years AI will be everywhere. a massive inequality creator: a divide between those who know how to use it and can afford the best tools, and everyone else.
the biggest danger is dependency on AI. i really see people becoming dumber and dumber as they outsource more basic cognitive functions and decisions to AI.
and business will use it like any other tool. to strengthen their monopolies and extract more and more value out of less and less resources.
That is possible, even likely. But AI can also decrease inequality. I'm thinking of how rich people and companies spend millions if not hundreds of millions on legal fees which keep them out of prison. But me, I can't afford a lawyer. Heck I can't even afford a doctor. I can't afford Stanford, Yale nor Harvard.
But now I can ask legal advice from AI, which levels that playing field. Everybody who has a computer or smartphone and internet-access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI for handling all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction we are going I believe.
The "moat" in front of AI is not wide nor deep because AI by its very nature is designed to be easy to use. Just talk to it.
There is also lots of competition in AI, which should keep prices low.
The root cause of inequality is corruption. AI could help reveal that and advise people how to fight it, making the world a better, more equal place.
At least lawyers can lose their bar license.
We had a discussion in a group chat with some friends about some random sports stuff, and one of my friends used ChatGPT to ask for a fact about a random thing. The answer was completely wrong, but sounded so real. All you had to do was go on Wikipedia or on the website of the sports entity we were discussing to see the real fact. Now, considering that it hallucinated facts that are readily available on Wikipedia and on that entity's own website, what are the chances that the legal advice you get will be real and not some random hallucination?
AI is just a really good bullshitter. Sometimes you want a bullshitter, and sometimes you need to be a bullshitter. But when your wealth is at risk due to lawsuits or you're risking going to prison, you want something rock solid to back your case, and endless mounds of bullshit around you is not what you want. Bullshit is something you only pull out when you're definitely guilty and need to fight against all the facts, and even better than bullshit in those cases is finding cases similar to yours or obscure laws that can serve as a loophole. And AI, instead of pulling out real cases, will bullshit against you with fake cases.
For things like code, where a large bulk of some areas are based on general feels and vibes, yeah, it's fine. It's good for general front end development. But I wouldn't trust it for anything requiring accuracy, like scientific applications or OS level code.
I believe this is a core issue that needs to be addressed. Companies will need tools to make their data "AI ready" beyond things like RAG: a bridge between company data lakes and the LLM (or GenAI) systems. Instead of cutting people out of the loop (which a lot of systems seem to be attempting), we need ways to expose the data so that rank-and-file employees can deploy it effectively. Instead of threatening to replace employees, which makes them intransigent about adoption, we should focus on empowering them to use and shape the data.
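As a rough sketch of what such a bridge could look like (everything here is hypothetical: `run_query` stands in for the company's data-lake client, `complete` for the LLM API, and the view and columns are made up):

```python
import json

def answer_from_lake(question, run_query, complete):
    """Answer an employee's question from a curated, governed data-lake view
    instead of letting the model guess."""
    rows = run_query(
        "SELECT region, product, revenue "
        "FROM sales_summary_v "  # a curated view, not raw tables
        "ORDER BY revenue DESC LIMIT 50"
    )
    prompt = (
        "Answer the question using ONLY the data below. "
        "If the data is insufficient, say so explicitly.\n\n"
        f"Data: {json.dumps(rows, default=str)}\n\n"
        f"Question: {question}"
    )
    return complete(prompt)
```

The employee stays in the loop: they ask the question, can see the underlying view, and judge the answer, rather than being replaced by the pipeline.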
Very interesting to see the Economist being so bullish on AI though.
They went big on Cryptocurrency back in the day as well.
not sufficiently useful
not sufficiently trustworthy.
It is my ongoing experience that AI + my oversight requires more time than not using AI. Sometimes AI can answer slightly complex things in a helpful way, but for most of the integration troubleshooting I do, AI guidance varies between no help at all and fully wasting my time.
Conversely, I support folks who have the complete opposite experience. AI is of great benefit to them and has hugely increased their productivity.
Both our experiences are valid and representative.
Ask it about a torque spec for your car? Yup, wrong. Ask it to provide sources? Less wrong, but still wrong. It told me my viscous fan has a different thread than it actually has. Had I listened, I would've shredded my threads.
My car is old, well documented and widely distributed.
Doesn't matter if claude or chatgpt. Don't get me started on code. I care about things being correct and right.
At this point I literally spend 90% of my time fixing other teams' AI ‘issues’ at a Fortune 50.
1. Piss-poor at the brainstorming and planning phase. For the compression thing I got one halfway decent idea, and it's one I already planned on using.
2. Even worse at generating a usable project structure or high-level API/skeleton. The code is unusable because it's not just subtly wrong; it doesn't match any cohesive mental model, meaning the first step is building that model and then figuring out how to ram-rod that solution into your model.
3. Really not great at generating APIs/skeletons matching your mental model. The context is too large, and performance drops.
4. Terrible at filling in the details for any particular method. It'll have subtle mistakes, like handling carryover data at the end of a loop but handling it unconditionally instead of only when it hasn't already been handled (see the sketch after this list). Everything type checks, and if it doesn't, then I can't rely on the AI to give a correct fix rather than the easiest way to silence the compiler.
5. Very bad at incorporating invariants (lifetimes, allocation patterns, etc) into its code when I ask it to make even minor tweaks, even when explicitly prompted to consider such-and-such edge case.
6. Blatantly wrong when suggesting code improvements, usually breaking things, and in a way you can't easily paper over the issue to create something working "from" the AI code.
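To illustrate the point-4 failure mode, here is a minimal Python sketch (the function is hypothetical, not from the project above): a chunked reader where the leftover carry must be emitted only if the loop hasn't already consumed it.

```python
def read_records(stream, delimiter=b"\n", chunk_size=4096):
    """Yield delimiter-separated records from a byte stream."""
    carry = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        carry += chunk
        # split keeps any trailing partial record in `carry` for the next pass
        *records, carry = carry.split(delimiter)
        yield from records
    # Correct: emit the leftover only if something is actually left.
    # The subtle bug described above is yielding `carry` unconditionally,
    # which adds a spurious empty record whenever the input ends with the
    # delimiter, i.e. when the carryover was already handled.
    if carry:
        yield carry
```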
Etc. It just wasn't well suited to any of those tasks. On my end, the real work is deeply understanding the problem, deriving the only possible conclusions, banging that into code, and then doing a pass or three cleaning up the semicolon orgasm from the page. AI is sometimes helpful in that last phase, but I'm certain it's not useful for the rest yet.
My current view is that the difference in viewpoints stems from a combination of the tasks being completed (certain boilerplate automation crap I've definitely leaned into AI to handle, maybe that's all some devs work on?) and current skill progression (I've interviewed enough people to know that the work I'm describing as trivial doesn't come naturally to everyone yet, so it's tempting to say that it's you holding your compiler wrong rather than me holding the AI wrong).
Am I wrong? Should AI be able to help with those things? Is it more than a ~5% boost?
But sometimes good data is also bad data. HIPAA compliance audit guides are full of questions that are appropriate for a massive medical entity and fully impossible to answer for the much more common small medical practice.
No AI will be trained to know the latter is true. I can say that because every HIPAA audit guide assumes that working patient data is stored on practice-owned hardware - which it isn't. Third parties handle that for small practices.
For small med, HIPAA audit guides are 100 irrelevant questions that require fine details that don't exist.
I predict that AI won't be able to overcome the absurdities baked into HIPAA compliance. It can't help where help is needed.
But past all that, there is one particularly painful issue with AI - deployment.
When AI isn't asked for, it is in the way. It is an obstacle that needs to be removed. That might not be awful if MS, Google, etc didn't continually craft methods to make removing it as close to impossible as possible. It smacks of disdain for end users.
If this one last paragraph wasn't endlessly true, AI evangelists wouldn't have so many premade enemies to face - and there would be less friction all around.
It’s not meeting expectations, probably because of this aggressive advertising. But I would in no way say that it’s spreading slowly. It is fast.
If an AI can't understand well enunciated context, I'm not inclined to blame the person who is enunciating the context well.
I don’t use AI for most of my product work because it doesn’t know any of the nuances of our product, and just like doing code review for AI is boring and tedious, it’s also boring and tedious to exhaustively explain that stuff in a doc, if it can even be fully conveyed, because it’s a combination of strategy, hearsay from customers, long-standing convos with coworkers…
I’d rather just do the product work. Also, I’ve self-selected by survivorship bias to be someone who likes doing the product work too, which means I have even less desire to give it up.
Smarter LLMs could solve this maybe. But the difficulty of conveying information seems like a hard thing to solve.
Yes, drastically. This means I'll have to wear Zuck's glasses I think, because the AI currently doesn't know what was discussed at the coffee machine or what management is planning to do with new features. It's like a speed typing goblin living in an isolated basement, always out of the loop.
Which science is responsible for the answer that if you can't establish the veracity of the premise for the question, economics can't help you find the missing outcome that shouldn't be there?
I witness it with my developer friends. Most of them try for 5 minutes to get AI to code something that takes them an hour. Then they are annoyed that the result is not good. They might try another 5 minutes, but then they write the code themselves.
My thinking is: Even if it takes me 2 hours to get AI to do something that would take me 1 hour it is worth it. Because during those 2 hours I will make my code base more understandable to help the AI cope with it. I will write better general prompts about how AI should code. Those will be useful beyond this single task. And I will get to know AI better and learn how to interact with it better. This process will probably lead to a situation where in a year, it will take me 30 minutes with AI to do a task that would have taken me an hour otherwise. A doubling of my productivity with just a year of work. Unbelievable.
I see very few other developers share this enthusiasm. They don't like putting a year of work into something so intangible.
I hope your doubling of productivity goes well for you, I'll believe it when I see it happen.
How do you figure?
>Because during those 2 hours I will make my code base more understandable to help the AI cope with it.
Are you working in a team?
If yes - I can't really imagine how this works.
Does this mean that your teammates occasionally wake up to a 50+ change PR/MR that was born from your desire to "possibly" offload some of the work to a text generator?
I'm curious here.
Extrapolation. I see the progress I already made over the last years.
For small tasks where I can anticipate that AI will handle it well, I am already multiple times more efficient with AI than without.
The hard thing to tackle these days is larger, more architectural tasks. And there I also see progress.
Humans also benefit from a better codebase that is easier to understand. Just like AI. So the changes I make in this regard are universally good.
At the senior level or above, AI is at best a wash in terms of productivity, because at higher levels you spend more of your time engineering (i.e., thinking up the proper way to code something robust/efficient) than coding.
LLMs are no different. One week ChatGPT is the best, next is Gemini. Each new version requires tweaks to get the most out of it. Sure, some of that skill/knowledge will carry forward into the future but I'd rather wait a bit for things to stabilize.
Once someone else demonstrates a net positive return on investment, maybe I'll jump back in. You just said it might take a year to see a return. I'll read your blog post about it when you succeed. You'll have a running head start on me, but will I be perpetually a year behind you? I don't think so.
And then there’s the large body of people who just haven’t noticed it at all because they don’t give a shit. Stuff just gets done how it always has.
On top of that, it's worth considering that growth is a function of user count and retention. The AI companies only promote user count, which suggests that the retention numbers are not good, or they’d be promoting them. YMMV, but people probably aren’t adopting it and keeping it.
Indeed. I think that current AI tech needs quite a bit of scaffolding in order for the full benefits to be felt by non-tech people.
> Then it was tainted by the fact that everyone is promoting it as a human replacement technology
Yeah. This is a bad move. AI is a human force multiplier (exponentializer?).
> which is then a tangible threat to their existence
This will almost certainly be a very real threat to AI adoption in various orgs over the next few years.
All it takes is a neo-Luddite in a gatekeeper position, and high-value AI use cases will get booted to the curb.
That is assuming that it is really a force multiplier which is not totally evident at this point.
My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.
My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.
I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.
LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.
So did AI add value here? It seems to me that it wasted a bunch of my time.
My observation (not yet mobile friendly): https://www.rundata.co.za/blog/index.html?the-ai-value-chain
* A "best practices" repository: clean code architecture and separation of concerns, well tested, very well-documented
* You need to know the code base very well to efficiently judge if what the AI wrote is sensible
* You need to take the time to write a thorough task description, like you would for a junior dev, with hints for which code files to look at, the goals, implementation hints, different parts of the code to analyse first, etc
* You need to clean up code and correct bad results manually to keep the code maintainable
This amounts to a very different workflow that is a lot less fun and engaging for most developers. (write tasks, review, correct mistakes)
In domains like CRUD apps / frontend, where the complexity of changes is usually low, and there are great patterns to learn from for the LLM, they can provide a massive productivity boost if used right.
But this results in a style of work that is a lot less engaging for most developers.
That's my experience exactly. Instead of actually building stuff, I write tickets, review code, manage and micromanage - basically I do all the non-fun stuff whereas the fun stuff is being done by someone (well, something) else.
This doesn't read like sarcasm in the context of the article and its conclusions.
> "Bureaucrats may refuse to implement necessary job cuts if doing so would put their friends out of work, for instance. Companies, especially large ones, may face similar problems."
> "The tyranny of the inefficient: Over time market forces should encourage more companies to make serious use of AI..."
This whole article makes it seem like corporate inefficiencies are the biggest hurdle against LLM adoption, and not the countless other concerns often mentioned by users, teams, and orgs.
Did Jack Welch write this?