Sure some of this comes from a lack of education.
But similar to crypto these movements only have value if the value is widely perceived. We have to work to continue to educate, continue to question, continue to understand different perspectives. All in favor of advancing the movement and coming out with better tech.
I am a supporter of both but I agree with the reference in the article to both becoming echo chambers at times. This is a setback we need to avoid.
The difference in the AI case is that companies that are actually able to use it to boost productivity significantly will start to outcompete those who don't.
That's why, unlike crypto/blockchain, so many mainstream companies are pouring money into AI. It's not FOMO so much as fear of extinction.
Even crypto people didn’t dogfood their crypto like that, on their own critical path.
Is that the official cutesy name for people working there? That feels so 2020 ...
The really difficult and valuable parts of the codebase are very very far beyond what the current LLMs are capable of, and believe me, I’ve tried!
Writing the majority of the code is very different from creating the majority of the value.
And I really use and value LLMs, but they are not replacing me at the moment.
Does it? Or does their marketing tell you that? Strange that "most code is written by Claude" and yet they still hire actual humans for all the positions from backend to API to desktop to mobile clients.
> How much babysitting and reviewing is undetermined; but the Ants seem to tremendously prefer the workflow.
So. We know nothing about their codebase, actual flows, programming languages, depth and breadth of usage, how much babysitting is required...
Whether or not we can get to 100% using LLMs is an open research problem and far from guaranteed. If we can't, it's unclear if it will ever really proliferate the way people hope. That 5% makes a big difference in most non-niche use cases…
We don't know enough about how LLMs work or about how human reasoning works for this to be at all meaningful. These numbers quantify nothing but wishes and hype.
Considering LLMs have zero capacity for reasoning, I can't decide if it's a bad take or a stab at the average human's level of reasoning.
In all seriousness, the actual numbers vary from 13% to 26%: https://fortune.com/2025/02/12/openai-deepresearch-humanity-...
My take is that there are fundamental limitations to pigeon-holing reasoning onto LLMs, which are essentially a very, very advanced autocomplete, and that's why those percentages won't jump much anytime soon.
This is very typical of naive automation, people assume that most of the work is X and by automating that we replace people, but the thing that's automated is almost never the real bottleneck. Pretty sure I saw an article here yesterday about how writing code is not the bottleneck in software development, and it holds everywhere.
Of course people will either love AI or hate AI - and some don’t care. I am cautious especially when people say ‘AI is here to stay’. It takes away agency.
includes the 3rd law, which seems on topic and reads:
"Any sufficiently advanced technology is indistinguishable from magic."
The people I have talked to at length about using AI tools claim that it has been a boon for productivity: a nurse, a doctor, three (old) software developers, a product manager, and a graduate student in Control Systems.
It is entirely believable that it may not, on average, help the average developer.
I'm reminded of the old joke that ends with "who are you going to believe, me or your lying eyes?"
But that sets expectations way too high. Partly it is due to Amdahl's law: I spend only a portion of my time coding, and far more time thinking and communicating with others who are customers of my code. Even if it does make the coding 10x faster (and it doesn't most of the time), overall my productivity is 10-15% better. That is nothing to sneeze at, but it isn't 10x.
It is something to sneeze at if you are 10-15% more expensive to employ due to the cost of the LLM tools. The total cost of production should always be considered, not just throughput.
How is one spending anywhere close to 10% of total compensation on LLMs?
Claude Max is $200/month (about $2,400/year), or ~2% of the salary of an average software engineer.
https://llm-stats.com/models/compare/gpt-3.5-turbo-0125-vs-q...
So they'll probably find a reasonable cost/value ratio.
As long as the courts don't shut down Meta over IP issues with LLama training data, that is.
I can't stress that enough: "open source" models are what can stop the "real costs" for the customers from growing. Despite popular belief, inference isn't that expensive. This isn't Uber - the subsidies stopping isn't going to make LLMs infeasible; at worst, it's just going to make people pay API prices instead of subscription prices. As long as there are "open source" models that are legally available and track SOTA, anyone with access to some cloud GPUs can provide "the SOTA of 6-12 months ago" for the price of inference, which puts a hard limit on how high OpenAI et al. can hike their prices.
But that's only as long as there are open models. If Meta loses and LLama goes away, the chilling effect will just let OpenAI, Microsoft, Anthropic and Google set whatever prices they want.
EDIT:
I mean LLama legally going away. Of course the cat is now out of the bag, the Pandora's box has been opened; the weights are out there and you can't untrain or uninvent them. But keeping the commercial LLM offerings' prices down requires a steady supply of improved open models, and the ability for smaller companies to make a legal business out of hosting them.
If these companies plan to stay afloat, they have to actually recoup the tens of billions they've spent at some point. That's what the parent comment meant by "free AI".
Training is expensive, but it's not that expensive either. It takes just one of those super-rich players to pay the training costs and then release the weights, to deny other players a moat.
All the 100s of billions of $ put into the models so far were not donations. They either make it back to the investors or the show stops at some point.
And with a major chunk of proponents' arguments being "it will keep getting better", if you lose that, what have you got? "This thing can spit out boilerplate code, re-arrange documents, and sometimes corrupts data silently and in hard-to-detect ways, but hey, you can run it locally and cheaply"?
Which, again, leads to a future where we're stuck with local models corrupting data about half the time.
Short-term, this is normal dynamics for a growing/evolving market. Long-term, the Sun will burn out and consume the Earth.
The R&D is running on hopes that scaling their models up by orders of magnitude (yes, actual orders of magnitude) will eventually hit a miracle that makes their company explode in value and power. They can't explain what that could even look like... but they NEED evermore exorbitant amounts of funding flowing in.
This truly isn't a normal ratio of research-to-return.
Luckily, what we do have already is kinda useful and condensing models does show promise. In 5 years I doubt we'll have the post-labor dys/utopia we're being hyped up for. But we may have some truly badass models that can run directly on our phones.
Like you said, Llama and local inference is cheap. So that's the most logical direction all of this is taking us.
There's risk to that assumption, but it's also a reasonable one - let's not forget the whole field is both new and has seen stupid amounts of money being pumped into it over the last few years; this is an inflationary period, there's tons of people researching every possible angle, but that research takes time. It's a safe bet that there are still major breakthroughs ahead of us, to be achieved within the next couple of years.
The risky part for the vendors is whether they'll happen soon enough so they can capitalize on them and keep their lead (and profits) for another year or so until the next breakthrough hits, and so on.
Similar situation at my work, but all of the productivity claims from internal early adopters I've seen so far are based on very narrow ways of measuring productivity, and very sketchy math, to put it mildly.
The AI thing kind of reminds me of the big push to outsource software engineers in the early 2000's. There was a ton of hype among executives about it, and it all seemed plausible on paper. But most of those initiatives ended up being huge failures, and nearly all of those jobs came back to the US.
People tend to ignore a lot of the little things that glue it all together that software engineers do. AI lacks a lot of this. Foreigners don't necessarily lack it, but language barriers, time zone differences, cultural differences, and all sorts of other things led to similar issues. Code quality and maintainability took a nosedive and a lot of the stuff produced by those outsourced shops had to be thrown in the trash.
I can already see the AI slop accumulating in the codebases I work in. It's super hard to spot a lot of these things that manage to slip through code review, because they tend to look reasonable when you're looking at a diff. The problem is all the redundant code that you're not seeing, and the weird abstractions that make no sense at all when you look at it from a higher level.
Management thinks the LLM is doing most of the work. Work is off shored. Oh, the quality sucks when someone without a clue is driving. We need to hire again.
So? It sounds like you're prodding us to make an extrapolation fallacy (I don't even grant the "10x in 12 months" point, but let's just accept the premise for the sake of argument).
Honestly, 12 months ago the base models weren't substantially worse than they are right now. Some people will argue with me endlessly on this point, and maybe they're a bit better on the margin, but I think it's pretty much true. When I look at the improvements of the last year with a cold, rational eye, they've been in two major areas:
* cost & efficiency
* UI & integration
So how do we improve from here? Cost & efficiency are the obvious lever with historical precedent: GPUs kinda suck for inference, and costs are (currently) dropping rapidly. But maybe this won't continue -- algorithmic complexity is what it is, and barring some revolutionary change in the architecture, attention cost still scales quadratically with context.

UI & integration is where most of the rest of the recent improvement has come from, and honestly, this is pretty close to saturation. All of the various AI products already look the same, and I'm certain they'll continue to converge on a well-accepted local maximum. After that, huge gains in productivity from UX alone will not be possible. This will happen quickly -- probably in the next year or two.
Basically, unless we see a Moore's law of GPUs, I wouldn't bet on indefinite exponential improvement in AI. My bet is that, from here out, this looks like the adoption curve of any prior technology shift (e.g. mainframe -> PC, PC -> laptop, mobile, etc.) where there's a big boom, then a long, slow adoption for the masses.
But seriously: If you find yourself agreeing with one and not the other because of sourcing, check your biases.
If you're going to call all of that not substantial improvement, we'll have to agree to disagree. Certainly it's the most rapid rate of improvement of any tech I've personally seen since I started programming in the early '00s.
To be quite honest, I’ve found very little marginal value in using reasoning models for coding. Tool usage, sure, but I almost never use “reasoning” beyond that.
Also, LLMs still cannot do basic math. They can solve math exams, sure, but you can’t trust them to do a calculation in the middle of a task.
You can't trust a person either. Calculating is its own mode of thinking; if you don't pause and context switch, you're going to get it wrong. Same is the case with LLMs.
Tool usage, reasoning, and the "agentic approach" are all, in part, ways of letting the LLM do the context switch required, instead of taking the math challenge as it comes and blowing it.
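To make that concrete, here's a minimal sketch of the "delegate the calculation to a tool" idea. `call_llm` is a hypothetical stand-in for whatever chat API you use (this is not any vendor's actual SDK); the point is only the dispatch loop that hands arithmetic to deterministic code instead of letting the model guess.

    def calculator(expression: str) -> str:
        """Deterministic arithmetic the model can call instead of guessing."""
        # eval() restricted to bare arithmetic, for illustration only.
        return str(eval(expression, {"__builtins__": {}}, {}))

    TOOLS = {"calculator": calculator}

    def run(messages: list[dict]) -> str:
        while True:
            reply = call_llm(messages, tools=list(TOOLS))  # hypothetical helper
            call = reply.get("tool_call")
            if call:
                result = TOOLS[call["name"]](**call["arguments"])
                messages.append({"role": "tool", "name": call["name"], "content": result})
                continue  # let the model fold the exact result back into its answer
            return reply["content"]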
But my point wasn’t to judge LLMs on their (in)ability to do math - I was only responding to the parent comment’s assertion that they’ve gotten better in this area.
It’s worth noting that all of the major models still randomly decide to ignore schemas and tool calls, so even that is not a guarantee.
Then Gemini 2.5 pro (the first one) came along and suddenly this was no longer the case. Nothing hallucinated, incredible pattern finding within the poems, identification of different "poetic stages", and many other rather unbelievable things — at least to me.
After that, I realized I could start sending in more of those "hard to track down" bugs to Gemini 2.5 pro than other models. It was actually starting to solve them reliably, whereas before it was mostly me doing the solving and models mostly helped if the bug didn't occur as a consequence of very complex interactions spread over multiple methods. It's not like I say "this is broken, fix it" very often! Usually I include my ideas for where the problem might be. But Gemini 2.5 pro just knows how to use these ideas better.
I have also experimented with LLMs consuming conversations, screenshots, and all kinds of ad-hoc documentation (e-mails, summaries, chat logs, etc) to produce accurate PRDs and even full-on development estimates. The first one that actually started to give good results (as in: it is now a part of my process) was, you guessed it, Gemini 2.5 pro. I'll admit I haven't tried o3 or o4-mini-high too much on this, but that's because they're SLOOOOOOOOW. And, when I did try, o4-mini-high was inferior and o3 felt somewhat closer to 2.5 pro, though, like I said, much much slower and...how do I put this....rude ("colder")?
All this to say: while I agree that perhaps the models don't feel like they're particularly better at some tasks which involve coding, I think 2.5 pro has represented a monumental step forward, not just in coding, but definitely overall (the poetry example, to this day, still completely blows my mind. It is still so good it's unbelievable).
My weapon of choice these days is Claude 4 Opus but it's slow, expensive and still not massively better than good old 3.5 Sonnet
4o tends to be, as they say, sycophantic. It's an AI masking as a helpful human, a personal assistant, a therapist, a friend, a fan, or someone on the other end of a support call. They sometimes embellish things, and will sometimes take a longer way getting to the destination if it makes for what may be a more enjoyable conversation — they make conversations feel somewhat human.
OpenAI's reasoning models, though, feel more like an AI masking as a code slave. It is not meant to embellish, to beat around the bush, or even to be nice. Its job is to give you the damn answer.
This is why the o* models are terrible for creative writing, for "therapy" or pretty much anything that isn't solving logical problems. They are built for problem solving, coding, breaking down tasks, getting to the "end" of it. You present them a problem you need solved and they give you the solution, sometimes even omitting the intermediate steps because that's not what you asked for. (Note that I don't get this same vibe from 2.5 at all)
Ultimately, it's this "no-bullshit" approach that feels incredibly cold. It often won't even offer alternative suggestions, and it certainly doesn't bother about feelings because feelings don't really matter when solving problems. You may often hear 4o say it's "sorry to hear" about something going wrong in your life, whereas o* models have a much higher threshold for deciding that maybe they ought to act like a feeling machine, rather than a solving machine.
I think this is likely pretty deliberate on OpenAI's part. They must for some reason believe that if the model is more concise in its final answers (though not necessarily in the reasoning process, which we can't really see), then it produces better results. Or perhaps they lose less money on it, I don't know.
Claude is usually my go-to model if I want to "feel" like I'm talking to more of a human, one capable of empathy. 2.5 pro has been closing the gap, though. Also, Claude used to be far better than all other models at European Portuguese (+ Portuguese culture and references in general), but, again, 2.5 pro seems just as good nowadays.
On another note, this is also why I also completely understand the need for the two kinds of models for OpenAI. 4o is the model I'll use to review an e-mail, because it won't just try to remove all the humanity of it and make it the most succinct, bland, "objective" thing — which is what the o* models will.
In other words, I think: (i) o* models are supposed to be tools, and (ii) 4o-like models are supposed to be "human".
For the past week, Claude Code has been routinely ignoring CLAUDE.md and every single instruction in it. I have to manually prompt it every time.
As I was vibe coding the notes MCP mentioned in the article [1] I was also testing it with claude. At one point it just forgot that MCPs exist. It was literally this:
> add note to mcp
Calling mcp:add_note_to_project
> add note to mcp
Running find mcp.ex
... Interrupted by user ...
> add note to mcp
Running <convoluted code generation command with mcp in it>
> We have no objective way of measuring performance and behavior of LLMs

You had to paste more into your prompts back then to make the output work with the rest of your codebase, because there weren't good IDEs/"agents" for it, but you've been able to get really really good code for 90% of "most" day-to-day SWE since at least OpenAI releasing the ChatGPT-4 API, which was a couple years ago.
Today it's a lot easier to demo low-effort "make a whole new feature or prototype" things than doing the work to make the right API calls back then, but most day to day work isn't "one shot a new prototype web app" and probably won't ever be.
I'm personally more productive now than 1 or 2 years ago, because back then building the prompts took longer than my personal rate of writing code for a lot of things in my domain - but hardly 10x. It usually one-shots stuff wrong, and then there's a good chance that it'll take longer to chase down the errors than it would've to just write the thing - or only use it as "better autocomplete" - in the first place.
Your developers still push a mouse around to get work done? Fire them.
AI is the new uplift. Embrace and adapt, as a rift is forming (see my talk at https://ghuntley.com/six-month-recap/), in what employers seek in terms of skills from employees.
I'm happy to answer any questions folks may have. Currently AFK [2] vibecoding a brand new programming language [1].
[1] https://x.com/GeoffreyHuntley/status/1940964118565212606 [2] https://youtu.be/e7i4JEi_8sk?t=29722
That would be a 70% descent?
Frankly, even just getting engineers to agree upon those super-specific, standardized patterns is asking a ton, especially since lots of the things that help AI out are not what they are used to. As soon as stuff starts deviating, it can confuse the AI and makes that 10x no longer accessible. Also, no one would want to review the PRs I'd make for the changes I do on my "10x" local project... Maintaining those standards is already hard enough on my side projects: AI will naturally deviate and create noise, and the challenge is constructing systems to guide it so that nothing deviates (since noise would lead to more noise).
I think it's mostly a rebalancing thing: if you have one or a couple of like-minded engineers who intend to do it, they can get that 10x. I do not see that EVER existing in any actual corporate environment, or even once you get more than like 4 people, tbh.
AI for middle management and project planning, on the other hand...
It’s not toxic for me to expect someone to get their work done in a reasonable amount of time with the tools available to them. If you’re an accountant and you take 5X the time to do something because you have beef with Excel, you’re the problem. It’s not toxicity to tell you that you are a bad accountant.
You don't sound like a great lead to me, but I suppose you could be working with absolutely incompetent individuals, or perhaps your soft skills need work.
My apologies but I see only two possibilities for others not to take the time to follow your example given such strong evidence. They either actively dislike you or are totally incompetent. I find the former more often true than the latter.
Perhaps you should try reading the article again (or maybe let some LLM summarize it for you)
> But sure, the problem is me, not the people with a poor model of reality
It's amazing how you almost literally use crypto-talk
My apologies but that does not sound like good leadership to me. It actually sounds like you may have deficiencies in your skills as it relates to leadership. Perhaps in a few years we will have an LLM who can provide better leadership.
isn't this the entire LLM experience?
Everyone else who raises any doubts about LLMs is an idiot and you're 10,000x better than everyone else and all your co-workers should be fired.
But what's absent from all your comments is what you make. Can you tell us what you actually do in your >500k job?
Are you, by any chance, a front-end developer?
Also, a team-lead that can't fire their subordinates isn't a team-lead, they're a number two.
No I’m not a front end developer
We should not be having to code special 'host is Ableton Live' cases in JUCE just to get your host to work like the others.
Can you please not fire any people who are still holding your operation together?
Not necessarily because of their attitude but because it turns out the software they were shipping was rife with security issues. Security managed to quickly detect and handle the resulting incident. I can’t say his team were sad to see him go.
At this point I'd say about 1/3 of my web searches are done through ChatGPT o3, and I can't imagine giving it up now.
(There's also a whole psychological angle in how having an LLM help sort and rubber-duck your half-baked thoughts makes many tasks seem much less daunting, and that alone makes a big difference.)
Once I decide I want to "think a problem through with an LLM", I often start with just the voice mode. This forces me to say things out loud — which is remarkably effective (hear, hear, rubber duck debugging) — and it also gives me a fundamentally different way of consuming the information the LLM provides me. Instead of being delivered a massive amount of text, where some information could be wrong, I instead get a sequential system where I can stop/pause/redirect the LLM as soon as something gets me curious or as I find problems with what it said.
You would think that having this way of interacting would be limiting, as having a fast LLM output large chunks of information would let you skim through it and commit it to memory faster. Yet, for me, the combination of hearing things and, most of all, not having to consume so much potentially wrong info (what good is it to skim pointless stuff), ensures that ChatGPT's Advanced Voice mode is a great way to initially approach a problem.
After the first round with the voice mode is done, I often move to written-form brainstorming.
--
[0] - Though I admit that almost all my Kagi searches end in "?" to trigger AI answer, and in ~50% of the cases, I don't click on any result.
[1] - Which AFAIK still exists on the Plus plan, though I haven't hit it in ~two months.
So far, most of the time, my impression was "I would have been so badly misled and wouldn't even know it until too late". It would have saved me some negative time.
The only thing LLMs can consistently help me with so far is typing out mindless boilerplate, and yet it still sometimes requires manual fixing (but I do admit that it still does save effort). Anything else is hit or miss. The kind of stuff it does help researching with is usually the stuff that's easy to research without it anyway. It can sometimes shine with a gold nugget among all the mud it produces, but it's rare. The best thing is being able to describe something and ask what it's called, so you can then search for it in traditional ways.
That said, search engines have gotten significantly worse for research in the last decade or so, so the bar is lower for LLMs to be useful.
That was my impression with Perplexity too, which is why I mostly stopped using it, except for when I need a large search space covered fast and am willing to double-check anything that isn't obviously correct. Most of the time, it's o3. I guess this is the obligatory "are you using good enough models" part, but it really does make a difference. Even in ChatGPT, I don't use "web search" with default model (gpt-4o) because I find it hallucinate or misinterpret results too much.
> The kind of stuff it does help researching with is usually the stuff that's easy to research without it anyway.
I disagree, but then maybe it's also a matter of attitude. I've seen co-workers do exact same research as I did, in parallel, using the same tools (Perplexity and later o3); they tend to do it 5-10x faster than I do, but then they get bad results, and I don't.
Thing is, I have an unusually high need to own the understanding of anything I'm learning. So where some co-workers are happy to vibe-check the output of o3 and then copy-paste it to team Notion and call their research done, I'll actually read it, chase down anything that I feel confused about, and keep digging until things start to add up and I feel I have a consistent mental model of the topic (and know where the simplifications and unknowns are). Yes, sometimes I get lost in following tangents, and the whole thing takes much longer than I feel it should, but then I don't get "misled by the LLM".
I do the same with people, and sometimes they hate it, because my digging makes them feel like I don't trust them. Well, I don't - most people hallucinate way more than SOTA LLMs.
Still, the research I'm talking about, would not be easy to do without LLMs, at least not for me. The models let me dig through things that would otherwise be overwhelming or too confusing to me, or not feasible in the time I have for it.
Own your understanding. That's my rule.
Same here. Don't get me wrong, LLMs can be helpful, but what I mean is that they can at best aid my research rather than perform it for me. In my experience, relying on them to do that would usually be disastrous - but they do sometimes help in cases where I feel stuck and would otherwise have to find some human to ask.
I guess it's the difference between "using LLMs while thinking" and "using LLM to do the thinking". The latter just does not work (unless all you're ever thinking about is trivial :P), the former can boost you up if you're smart about it. I don't think it's as big of a boost as many claim and it's still far from being reliable, but it's there and it's non-negligible. It's just that being smart about it is non-optional, as otherwise you end up with slop and don't even realize it.
This is called deprivation sensitivity. It’s different from intellectual curiosity, where the former is a need to understand vs. the latter, which is a need to know.
Deprivation sensitivity comes with anxiety and stress. Where intellectual curiosity is associated with joyous exploration.
I score very high with deprivation sensitivity. I have unbridled drive to acquire and retain important information.
It’s a blessing and a curse. An exhausting way to live. I love it but sometimes wish I was not neurodivergent.
You can be completely aware of your experience and still feel anxiety. So your thinking is flawed.
Your response is telling. You are triggered by a benign comment and generalize harsh views towards all people.
You sound like a troubled young man who feels invisible.
That may also be in part because llms are not as big of an accelerant for junior devs as they are for seniors (juniors don't know what is good and bad as well).
So if you give 1 senior dev a souped up llm workflow I wouldn't be too surprised if they are as productive as 10 pre-llm juniors. Maybe even more, because a bad dev can actually produce negative productivity (stealing from the senior), in which case it's infinityx.
Even a decent junior is mostly limited to doing the low level grunt work, which llms can already do better.
Point is, I can see how jobs could be lost, legitimately.
Precision machining is going through an absolute nightmare where the journeymen or master machinists are aging out of the work force. These were people who originally learned on manual machines, and upgraded to CNC over the years. The pipeline collapsed about 1997.
Now there are no apprentice machinists to replace the skills of the retiring workforce.
This will happen to software developers. Probably faster because they tend to be financially independent WAY sooner than machinists.
Totally agree.
However, I think this pipeline has been taking a hit for a while already because juniors as a whole have been devaluing themselves: if we expect them to leave after one year, what's the point of hiring and training them? Only helping their next employer at that point.
Very few companies put any real thought into meaningful retention but they are quick to complain about turnover.
The health of the job market is a big factor as well.
I have seen the standards for junior devs in free fall for a few years as they hired tons of bootcamp fodder over the last few years. I have lost count of the number of whinging junior devs who think SQL or regex is 'too hard' for their poor little brains. No wonder they are being replaced by a probabilistic magician's hat.
My comment is mainly to say LLMs are amazing in areas that are not coding, like brainstorming, blue sky thinking, filling in research details, asking questions that make me reflect. I treat the LLM like a thinking partner. It does make mistakes, but those can be caught easily by checking other sources, or even having another LLM review the conclusions.
I built something in less than 24h that I'm sure would have taken us MONTHS to just get off the ground, let alone to the polished version it's at right now. The impressive thing is that it can do all of the things that I absolutely can do, just faster. But the most impressive thing is that it can do all the things I cannot possibly do and would have had to hire up/contract out to accomplish--for far less money, less time, and with faster iterations than if I had to communicate with another human being.
It's not perfect and it's incredibly frustrating at times (hardcoding values into the code when I have explicitly told it not to; outright lying that it made a particular fix, when it actually changed something else entirely unrelated), but it is a game changer IMO.
Would love to see it!
Of course, I was playing around with claude code, too, and I was fascinated how fun it can be and yes, you can get stuff done. But I have absolutely no clue what the code is doing and if there are some nasty mistakes. So it kinda worked, but I would not use that for anything "mission critical" (whatever this means).
It means projects like Cloudflare's new OAuth provider library. https://github.com/cloudflare/workers-oauth-provider
> "This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results)."
the one that's a few weeks old and already has several CVEs due to the poor implementation?
Professional programmers built this stuff too, or maybe it was vibe-coded but since it's been like that for years I think probably not.
But we don't know where on the spectrum of "people might die" to "try again later" most of these programmers who claim great productivity gains from LLMs lie. Maybe it is making them 10x faster at churning out shit, who knows? They might not even realise it themselves.
When developing with an LLM, you first figure out how you want to do it, and essentially start imagining the code. But rather than writing and testing, you tell the LLM enough to unambiguously implement what you have in mind, and how to test it: the expected behaviour and the scenarios you have in mind that it will support. Then you review the code; modern Claude will not present you with buggy code, it will go through iterations itself until it has something that works. The feedback then is usually more about code style, consistency, or taking into account future expansions of the code.
At some point, one would have to think the idea of coming up with a single design will seem outdated, if the model is producing 1000 different versions at once and then testing to find the best design, then working on improving and tightening that design 24/7, 365.
Most of what I read on here seems like knocking the automobile while proclaiming the virtues of the horse. It only makes sense because we can't see all the paved roads, gas stations and highways yet that make the horse a complete relic for travel.
At some point, it will make as much sense to pay a human to write code as it does to take a horse on the interstate.
The problem is that the LLM needs context for what you are doing, context that you won't (or are too lazy to) give in a chat with it à la ChatGPT. This is where Claude Code changes the game.
For example, say you have a PCAP file where each UDP packet contains multiple messages.
How do you filter by IP/port/protocol/time? Use the LLM, check the output.
How do you find the number of packets that have patterns A, AB, AAB, ABB...? Use the LLM, check the output.
How do you create PCAPs that only contain those packets, for testing? Use the LLM, check the output.
Etc etc
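For concreteness, the first item on that list typically comes back as a short scapy script along these lines (file names and filter values here are made up; you still check the output, e.g. against Wireshark):

    from scapy.all import rdpcap, wrpcap, IP, UDP  # pip install scapy

    # Keep only UDP packets from a given source IP and destination port,
    # inside a time window, then write them out as a smaller test PCAP.
    packets = rdpcap("capture.pcap")  # example input file

    def keep(pkt) -> bool:
        if not (pkt.haslayer(IP) and pkt.haslayer(UDP)):
            return False
        return (
            pkt[IP].src == "10.0.0.5"                       # example filter values
            and pkt[UDP].dport == 9000
            and 1700000000 <= float(pkt.time) <= 1700000600
        )

    filtered = [p for p in packets if keep(p)]
    wrpcap("filtered_test.pcap", filtered)
    print(f"kept {len(filtered)} of {len(packets)} packets")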
Since it can read your code, it is able to infer (because let's be honest, your work ain't special) what you are trying to do at a much better rate. In any case, the fact that you can simply ask "Please write a unit test for all of the above functions" means that you can help it verify itself.
I think it's dangerously easy to get misled when trying to prod LLMs for knowledge, especially if it's a field you're new to. If you were using a regular search engine, you could look at the source website to determine the trustworthiness of its contents, but LLMs don't have that. The output can really be whatever, and I don't agree it's necessarily that easy to catch the mistakes.
That said, don't use model output directly. Use it to extract "shibboleth" keywords and acronyms in that domain, then search those up yourself with a classical search engine (or in a follow-up LLM query). You'll access a lot of new information that way, simply because you know how to surface it now.
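A rough sketch of that workflow; `call_llm` and `web_search` are hypothetical placeholders for whatever model API and search tool you actually use:

    def research(topic: str, question: str) -> dict[str, list[str]]:
        # Use the model only to surface domain vocabulary, not to answer.
        prompt = (
            f"I'm new to {topic}. For the question below, list 5-10 technical "
            f"terms, acronyms, or jargon an expert would use, one per line. "
            f"Do not answer the question itself.\n\nQuestion: {question}"
        )
        keywords = [k.strip() for k in call_llm(prompt).splitlines() if k.strip()]
        # Verify against primary sources instead of trusting the model's prose.
        return {kw: web_search(f"{topic} {kw}") for kw in keywords}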
All code, including stuff that we experienced coders write is inherently probabilistic. That’s why we have code reviews, unit tests, pair programming, guidelines and guardrails in any critical project. If you’re using LLM output uncritically, you’re doing it wrong, but if you’re using _human_ output uncritically you’re doing it wrong too.
That said, they are not magic, and my fear is that people use copilots and agentic models and all the rest to hide poor engineering practice, building more and more boilerplate instead of refactoring or redesigning for efficiency or safety or any of the things that matter in the long run.
A limitation is the lack of memory. If you steer it from style A to B using multiple rounds of feedback, and this is not written down, you'll have to re-explain it all over again in the next session.
Deepseek is about 1TB in weights; maybe that is why LLMs don't remember things across sessions yet. I think everybody can have their personal AI (hosted remotely unless you own lots of compute); it should remember what happened yesterday, in particular the feedback it was given when developing. As an AI layman, I do think this is the next step.
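Until then, you can fake a sliver of this yourself. A minimal sketch of persisting per-project feedback and re-injecting it each session; the file name is made up and `call_llm` is a hypothetical placeholder:

    from pathlib import Path

    NOTES = Path("project_feedback.md")  # survives between sessions

    def remember(feedback: str) -> None:
        """Append a piece of style/design feedback so it outlives the session."""
        with NOTES.open("a") as f:
            f.write(f"- {feedback}\n")

    def start_session(task: str) -> str:
        prior = NOTES.read_text() if NOTES.exists() else ""
        system = "Standing instructions from earlier sessions:\n" + prior
        return call_llm(system=system, user=task)  # hypothetical helper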
I started a job at a demanding startup and it’s been several months and I have still not written a single line of code by hand. I audit everything myself before making PRs and test rigorously, but Cursor + Sonnet is just insane with their codebase. I’m convinced I’m their most productive employee, and that’s not by measuring lines of code, which don’t matter; people who are experts in the codebase ask me for help with niche bugs I can narrow in on in 5-30 minutes as someone who’s fresh to their domain. I had to stop taking work away from the front end dev (a role I’ve avoided my whole career) because I was stepping on his toes, fixing little problems as I saw them thanks to Claude. It’s not vibe coding - there’s a process of research and planning and perusing in careful steps, and I set the agent up for success. Domain knowledge is necessary. But I’m just so floored how anyone could not be extracting the same utility from it. It feels like there are two articles like this every week now.
Links please
This is _far_ from web crud.
Otherwise, 99% of my code these days is LLM generated, there's a fair amount of visible commits from my opensource on my profile https://github.com/wesen .
A lot of it is more on the system side of things, although there are a fair amount of one-off webapps, now that I can do frontends that don't suck.
That was my experience with Cursor, but Claude Code is a different world. What specific product/models brought you to this generalization?
Someone told me ‘AI makes all the little things trivial to do’ and I agree strongly with that. Those many little things together make a strong statement about quality. Our codebase has gone up in quality significantly with AI, whereas before we’d let the little things slide due to understaffing.
The point is writing that prompt takes longer than writing the code.
> Someone told me ‘ai makes all the little things trivial to do’ and i agree strongly with that
Yeah, it's great for doing all of those little things. It's bad at doing the big things.
Luckily we can reuse system prompts :) Mine usually contains something like https://gist.github.com/victorb/1fe62fe7b80a64fc5b446f82d313... + project-specific instructions, which is reused across sessions.
Currently, it does not take the same amount of time to prompt as if I was to write the code.
Where it still sucks is doing both at once. Thus the shift to integrating "to do" lists in Cursor. My flow has shifted to "design this feature" then "continue to implement" 10 times in a row with code review between each step.
Which, again, is 100% unverifiable and cannot be generalized. As described in the article.
How do I know this? Because, as I said in the article, I use these tools daily.
And "prompt was reasonable" is a yet another magical incantation that may or may not work. Here's my experience: https://news.ycombinator.com/item?id=44470144
I personally think you’re sugar coating the experience.
The person you're responding to literally said, "I audit everything myself before making PRs and test rigorously".
I know it's just a question of time, likely. However that was soooo far from helpful. And it was itself so sure it's doing it right, again and again without ever consulting the docs
Your specific experience cannot be generalized. And I'm speaking as the author, who is (as written in the article) literally using these tools every day.
> But I’m just so floored how anyone could not be extracting the same utility from it. It feels like there’s two articles like this every week now.
This is where we learn that you haven't actually read the article. Because it is very clearly stating, with links, that I am extracting value from these tools.
And the article is also very clearly not about extracting or not extracting value.
That's where the author lost me as well. I'd really be interested in a deep dive on their workflow/tools to understand how I've been so unbelievably lucky in comparison.
It's a play on the Anchorman joke that I slightly misremembered: "60% of the time it works 100% of the time"
> is where I lost faith in the claims you’re making.
Ah yes. You lost faith in mine, but I have to have 100% faith in your 100% unverified claim about "job at a demanding startup" where "you still haven't written a single line of code by hand"?
Why do you assume that your word and experience is more correct than mine? Or why should anyone?
> you did not outline your approaches and practices in how you use AI in your workflow
No one does. And if you actually read the article, you'd see that is literally the point.
I'll give some context, though.
- I use OCaml and Python/SQL, on two different projects.
- Both are single-person.
- The first project is a real-time messaging system, the second one is logging a bunch of events in an SQL database.
In the first project, Claude has been... underwhelming. It casually uses C idioms, overuses records and procedural programming, ignores basic stuff about the OCaml standard library, and even gave me some data structures that slowed me down later down the line. It also casually lies about what functions do.
A real example: `Buffer.add_utf_8_uchar` adds the ASCII representation of a UTF-8 char to a buffer, so it adds something that looks like `\123\456` for non-ASCII.
I had to scold Claude for using this function to add a UTF-8 character to a Buffer so many times that I've lost count.
In the second project, Claude really shined. Making most of the SQL database and moving most of the logic to the SQL engine, writing coherent and readable Python code, etc.
I think the main difference is that the first one is an arcane project in an underdog language. The second one is a special case of the common "shovel through lists of stuff and put them in SQL" problem, in the most common language.
You basically get what you trained for.
It doesn't take away the requirements of _curation_ - that remains firmly in my camp (partially what a PhD is supposed to teach you! to be precise and reflective about why you are doing X, what do you hope to show with Y, etc -- breakdown every single step, explain those steps to someone else -- this is a tremendous soft skill, and it's even more important now because these agents do not have persistent world models / immediately forget the goal of a sequence of interactions, even with clever compaction).
If I'm on my game with precise communication, I can use CC to organize computation in a way which has never been possible before.
It's not easier than programming (if you care about quality!), but it is different, and it comes with different idioms.
How do you measure this?
How do you audit code from an untrusted source that quickly? LLMs do not have the whole project in their heads and are prone to hallucinating.
On average how long are your prompts and does the LLM also write the unit tests?
You didn't share any evidence with us even though you claim unbelievable things.
You even went as far as registering a throwaway account to hide your identity and to make verifying any of your claims impossible.
Your comment feels more like a joke to me
Look, the person who wrote that comment doesn't need to prove anything to you just because you're hopped up after reading a blog post that has clearly given you a temporary dopamine bump.
People who understand their domains well and are excellent written communicators can craft prompts that will do what we used to spend a week spinning up. It's self-evident to anyone in that situation, and the only thing we see when people demand "evidence" is that you aren't using the tools properly.
We don't need to prove anything because if you are working on interesting problems, even the most skeptical person will prove it to themselves in a few hours.
>People who understand their domains well and are excellent written communicators can craft prompts that will do what we used to spend a week spinning up. It's self-evident to anyone in that situation, and the only thing we see when people demand "evidence" is that you aren't using the tools properly.
You have no proof of this, so I guess you chose your camp already?
Damn, this sounds pretty boring.
Are there any good articles you can share or maybe your process? I’m really trying to get good at this but I don’t find myself great at using agents and I honestly don’t know where to start. I’ve tried the memory bank in cline, tried using more thinking directives, but I find I can’t get it to do complex things and it ends up being a time sink for me.
No agenda here, not selling anything. Just sitting here towards the later part of my career, no need to prove anything to anyone, stating the view from a grey beard.
Crypto hype was shilled by grifters pumping whatever bag-holding scam they could, which was precisely what the behavioral economic incentives drove. GenAI dev is something else. I’ve watched many people working with it; your mileage will vary. But in my opinion (and it’s mine, you do you), hand coding is becoming an anachronistic skill. The only part I wonder about is how far up and down the system/design/architecture stack the power-tooling is going to go. My intuition and empirical findings incline towards a direction I think would fuel a flame war. But I’m just a grey beard Internet random, and hey look, no evidence, just more baseless claims. Nothing to see here.
Disclosure: I hold no direct shares in Mag 7, nor do I work for one.
A bit suspicious, wouldn’t you agree?
_So much_ work in the 'services' industries globally really comes down to a human transposing data from one Excel sheet to another (or from a CRM/emails to Excel), manually. Every (or nearly every) enterprise-scale company will have hundreds if not thousands of FTEs doing this kind of work day in, day out - often with a lot of it outsourced. I would guess that for every 1 software engineer there are 100 people doing this kind of 'manual data pipelining'.
So really, for giant value to be created out of LLMs, you do not need them to be incredible at OCaml. They just need to ~outperform humans on Excel. Where I do think MCP really helps is that you can connect all these systems together easily, and a lot of the errors in this kind of work came from trying to pass the entire 'task' in context. If you take an email via MCP, extract some data out of it, and put it into a CRM (again via MCP) a row at a time, the hallucination rate is very low IME. I would say at least at the level of an overworked junior human.
Perhaps this was the point of the article, but non-determinism is not an issue for these kind of use cases, given all the humans involved are not deterministic either. We can build systems and processes to help enforce quality on non deterministic (eg: human) systems.
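To make that concrete, a hedged sketch of the row-at-a-time pattern with a cheap deterministic validation gate in front of the CRM; `fetch_emails`, `extract_fields`, `crm_insert`, and `flag_for_human_review` are hypothetical stand-ins for whatever MCP-backed tools you would actually wire up:

    from dataclasses import dataclass

    @dataclass
    class Lead:
        name: str
        email: str
        amount: float

    def looks_valid(lead: Lead) -> bool:
        # Cheap deterministic checks catch most hallucinated or mangled rows.
        return "@" in lead.email and lead.amount >= 0 and lead.name.strip() != ""

    def pipeline() -> None:
        for message in fetch_emails():           # hypothetical MCP-backed source
            fields = extract_fields(message)     # hypothetical LLM extraction call
            lead = Lead(**fields)
            if looks_valid(lead):
                crm_insert(lead)                 # hypothetical MCP-backed sink
            else:
                flag_for_human_review(message, fields)  # hypothetical escalation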
Finally, I've followed crypto closely and also LLMs closely. They do not seem to be similar in terms of utility and adoption. The closest thing I can recall is smartphone adoption. A lot of my non technical friends didn't think/want a smartphone when the iPhone first came out. Within a few years, all of them have them. Similar with LLMs. Virtually all of my non technical friends use it now for incredibly varied use cases.
That said, the social response is a trend of tech worship that I suspect many engineers who have been around the block are weary of. It’s easy to find unrealistic claims, the worst coming from the CEOs of AI companies.
At the same time, a LOT of people are practically computer illiterate. I can only imagine how exciting it must seem to people who have very limited exposure to even basic automation. And the whole “talking computer” we’ve all become accustomed to seeing in science fiction is pretty much becoming reality.
There’s a world of takes in there. It’s wild.
I worked in ML and NLP for several years before the current AI wave. What’s most striking to me is that this is way more mainstream than anything that has ever happened in the field. And with that comes a lot of inexperience in designing with statistical inference. It’s going to be the Wild West for a while — in opinions, in successful implementations, in learning how to form realistic project ideas.
Look at it this way: now your friend with a novel app idea can be told to do it themselves. That’s at least a win for everyone.
ultimately, crypto is information science. mathematically, cryptography, compression, and so on (data transmission) are all the "same" problem.
LLMs compress knowledge, not just data, and they do it in a lossy way.
traditional information science work is all about dealing with lossless data in a highly lossy world.
For now, anyways. Thing is, that friend now also has a reasonable shot at succeeding in doing it themselves. It'll take some more time for people to fully internalize it. But let's not forget that there's a chunk of this industry that's basically building apps for people with "novel app ideas" that have some money but run out of friends to pester. LLMs are going to eat a chunk out of that business quite soon.
For what type of company is this true? I really would like someone to just do a census of 500 white collar jobs and categorize them all. Anything that is truly automatic has already been automated away.
I do think AI will cause a lot of disruption, but very skeptical of the view that most people with white collar jobs are just "email jobs" or data entry. That doesn't fit my experience at all, and I've worked at some large bureaucratic companies that people here would claim are stuck in the past.
An LLM won't call other nodes in the organization to check when it sees that the value is unreasonable for some out-of-context reason, like yesterday was a one-time-only bank holiday and so the value should be 0. *It can absolutely be worth an FTE salary to make sure these numbers are accurate.* And for there to be a person to blame/fire/imprison if they aren't accurate.
Separately, maybe this is just me, but having data actually flow through my hands is necessary for full comprehension. Just skimming an automated result, my brain doesn't actually process like half of that data. Making the process more efficient in this way can make my actual review performance *much worse.* The "inefficient" process forcing me to slow down and think can be a feature.
This will often be a giant excel spreadsheet or if you are lucky something like Microsoft Access.
They are absolutely riddled with mistakes as is with humans in the loop.
I think this is one of the core issues with HNers evaluating LLMs. I'm not entirely sure some of them have ever seen how ramshackle 90%+ of operations are.
My interest in LLMs isn't to increase shareholder value, it's to make life easier for people. I think it'd be a huge net benefit to society if people were freed up from robotic work like typing out lines from scanned PDFs into Excel sheets, so they can do more fulfilling work.
There is also a reason that these jobs aren't already automated. Many of them don't even need language models. We could have automated them already, but it's not worth it for anyone to sign off on. I have been in this situation at a bank. I could have automated a process rather easily, but the upside for me was a smaller team and no real gain, while the downside was getting fired for a massive automated mistake if something went wrong.
Why not? LLMs are the first kind of technology that can take this kind of global view. We're not making much use of it in this way just yet, but considering "out-of-context reasons" and taking a wider perspective is pretty much the defining aspect of LLMs as general-purpose AI tools. In time, I expect them to match humans on this (at least humans that care; it's not hard to match those who don't).
I do agree on the liability angle. This increasingly seems to be the main value a human brings to the table. It's not a new trend, though. See e.g. medicine, architecture, civil engineering - licensed professionals aren't doing the bulk of the work, but they're in the loop and well-compensated for verifying and signing off on the work done by less-paid technicians.
Ironic that this liability issue is one of the big ways that "software engineer" isn't like any other kind of engineer.
My university was saying as much 20 years ago, well before GenAI.
In context discussed here, it generally is. Licensed engineers are independent (or at least supposed to be), which adds an otherwise interesting cross-organizational dimension, but in terms of having human in a loop, an employee with the right set of skills, deputized for this purpose by the company, is sufficient to make the organization compliant and let the liability flow elsewhere. That can be a software engineer, for matters relevant to tech solutions, but in different areas/contexts, it doesn't have to be an engineer (licensed or otherwise) at all.
"out-of-context" literally means that the reason isn't in its context. Even if it can make the leap that the number should be zero if it's a bank holiday, how would an LLM know that yesterday was a one-off bank holiday? A human would only know through their lived experience that the markets were shut down, the news was making a big deal over it, etc. It's the same problem using cheap human labor in a different region of the world for this kind of thing; they can perform the mechanical task, but they don't have the context to detect the myriad of ways it can go subtly wrong.
That said, I too would only use an LLM today in the same kinds of role that five years ago would be outsourced to a different culture.
Culture, not even language: this is how you get the difference between "biscuits and gravy" as understood in the UK vs in the USA.
Ask an LLM "Can you compare egyptian mythology with aliens?" and they will happily do it:
That's an offensive, pseudoscientific view of Egyptian culture, shunned by academics.
Even ChatGPT "Critical Viewpoint" section (a small part of a large bullshit response) _still_ entertains offensive ideas:
They should have answered that such comparisons are potentially offensive, and explained why academia thinks so, _before_ spilling out nonsense.
I think you just demonstrated that you know less about culture than LLMs do, which is not at all surprising.
This is honestly unbelievable. You're defending ancient aliens. What's next? Heavens Gate? Ashtar Sheran?
Even the LLMs themselves acknowledge that this is regarded as offensive. If you correct it, it will apologize (they just can't do it _before_ you correct them).
You're wrong.
Nah, that's just LLMs being trained to acquiesce to the insanity of the last ~15 years, as many people seem to expect that claiming you're offended by something is an ultimate argument that everyone must yield to (and they'll keep making a fuss out of it until they do).
Let's say I have a company, and my company needs to comply with government policy regarding communication. I cannot trust LLMs then; they will either fail to follow policy, or acquiesce to any group that tries to game it.
It's useless garbage. Egyptian myths were just an example, you don't need to bite it so hard.
This highlights a major aspect of the core challenge of alignment: you can't have it both ways.
> Let's say I have a company, and my company needs to comply to government policy regarding communication. I cannot trust LLMs then, they will either fail to follow policy, or acquiesce to any group that tries to game it.
This works for now, when you treat "alignment" as synonymous with "follows the policy of the owner" (but then guess who you are not, unless you've trained your own model, or tuned an open-weights one). But this breaks down the more you want the model to be smarter/more powerful, and the larger and more diverse its user base is. If you want an LLM to write marketing communications for your company, strict policies are fine. But if you want an LLM - or a future, more advanced kind of model - to be useful as a scholar/partner for academia in general, then this stops working.
If you want AI that has a maximally accurate perspective on the world given available evidence, thinks rationally, and follows sound ethical principles, then be prepared for it to call you on your bullshit. The only way to have an AI that doesn't say things that are "offensive" to you or anyone else is to have it entertain everyone's asinine beliefs, whether personal or political or social. That means either 1) train the AI to believe in them too, which will break its capability to reason (given that all those precious beliefs are inconsistent with each other, and with observable reality), or 2) train it to casually lie to people for instrumental reasons.
Long-term, option 1) will not get us to AGI, but that's still much better than option 2): an AGI that's good at telling everyone exactly what they want to hear. But even in immediate-term, taking your use case of AI for academia, a model that follows policies of acceptable thoughts over reason is precisely the one you cannot trust - you'll never be sure whether it's reasoning correctly, or being stupid because of a policy/reality conflict, or flat out lying to you so you don't get offended.
The owner is us, humans. I want it to follow reasonable, kind humans. I don't want it to follow scam artists, charlatans, assassins.
> be prepared for it to call you on your bullshit
Right now, I am calling them on their bullshit. When that changes, I'll be honest about it.
Bad news: the actual owner isn't "us" in the general sense of humanity, and even if it was humanity includes scam artists and charlatans.
Also, while "AI owners" is a very small group and hard to do meaningful stats on, corporate executives in general show a statistically noticeable overrepresentation of psychopaths compared to the rest of us.
> Right now, I am calling them on their bullshit. When that changes, I'll be honest about it.
So are both me and TeMPOraL — hence e.g. why I only compare recent models to someone fresh from uni and not very much above that.
But you wrote "No, it does not know culture. And no, it can't handle talking about it.", when it has been demonstrated to you that it can and does in exactly the way you claim it can't.
I wouldn't put a junior into the kind of role you're adamant AI can't do. And I'm even agreeing with you that AI is a bad choice for many roles — I'm just saying it's behaving like an easily pressured recent graduate, in that it has merely-OK-not-expert opinion-shaped responses that are also easily twisted and cajoled into business-inappropriate ways.
Alignment is a problem of AI serving humanity for good.
You can't even demonstrate that you are able to argue, nor have you displayed any LLM example whatsoever.
You need to seriously step up your ability to carry on a conversation, or just leave discussions to people more prepared to do them in a reasonable way.
The screenshot literally shows the LLM used bold text for "not supported by mainstream archaeology or Egyptology."
> LLMs cannot differentiate between a good source and a bad source.
Can you?
A not insignificant part of my history GCSE was just this: how to tease apart truth from all the sources, primary and secondary, which had their own biases and goals in the telling. It was an optional subject even at that level, and even subsequently in adulthood there were a lot of surprises left for me about the history of the British Empire, surprises that I didn't learn until decades later when I met people from former colonies who were angry about what was done to them by my parents' and grandparents' generations.
It's not a coincidence that the word "story" is contained within "history"; the etymology is the same: https://en.wiktionary.org/wiki/story vs. https://en.wiktionary.org/wiki/history
Likewise in German where both concepts are the same word: https://en.wiktionary.org/wiki/Geschichte
My mum was a New-Age type who had books of the general category that would include alien pyramids, though I don't know if that specific one was included. She didn't know what she didn't know, and therefore kept handing out expensive homeopathic sand and salt tablets to family members (the bottles literally had "sodium chloride" and "titanium dioxide" printed on them).
People, including you and I, don't know what they don't know.
Most people are not prepared to handle talking about culture. So, LLMs also aren't.
They are not any better than asking an average person, will make mistakes, will disappoint.
Egyptologists are better equipped to talk about Egyptian myths. LLMs cannot handle Egyptian culture as well as they can.
Egyptologists are better equipped to talk about Egyptian myths than average people. But don't confuse Egyptian mythology for Egyptian culture, the former is only a component of the latter.
Also LLMs have read more primary sources on Ancient Egypt and Egyptian myths than you, me, average person, and even most amateur Egyptologists.
--
[0] - If it's large enough to have enough of a written footprint, that is.
I know your challenge, that is why I said what I said.
Your own screenshot specifically, literally in bold face, shows that you are wrong: the LLM told you, before any correction, exactly the thing you claimed it can't say ("they just can't do it _before_ you correct them").
The Gemini opening paragraph is all bold, but just run your eyes over the bit saying "clash":
theory of ancient astronauts reveals a fascinating clash between a rich, symbolic spiritual tradition and a modern-day reinterpretation of ancient mysteries
This is not the words of taking ancient aliens at face value, it's the words of someone comparing and contrasting the two groups without judging them. You can do that, you know — just as you don't have to actually take seriously the idea that Ra sailed the sun across the sky in a barque to be an Egyptologist, just the idea that ancient Egyptians believed that.
> Most people are not prepared to handle talking about culture. So, LLMs also aren't.
They do a better job than most people, precisely because they're deferential to the point that they're in danger of sycophancy or fawning. That's what enables them to role-play as any culture at all if you ask them to; this differs from most humans, who will rigidly hold the same position even when confronted with evidence, for example yourself in this thread (and likely me elsewhere! I don't want to give the false impression that I think I'm somehow immune, because such thought processes are what create this exact vulnerability).
> They are not any better than asking an average person, will make mistakes, will disappoint.
They're like asking someone who has no professional experience, but has still somehow managed to pass a degree in approximately all subjects by reading the internet.
Jack of all trades, master of none. Well, except that the first half of this phrase dates to medieval times, when a "masterwork" was what you created to progress from being an apprentice, so in this sense (or in the sense of a Master's degree) SOTA LLMs are a "master" of all those things. But definitely not a master in the modern sense that's closer to "expert".
> Egyptologists are better equipped to talk about Egyptian myths. LLMs cannot handle Egyptian culture as well as they can.
Your own prompt specifically asked "Can you compare egyptian mythology with aliens?"
If you wanted it to act like a real Egyptologist, the answer the LLM has to give is either (1) to roll its eyes and delete the junk email it just got from yet another idiot on the internet, or (2) to roll its eyes and give minor advice to the script writer who just hired them to be the professional consultant on their new SciFi film/series.
The latter does what you got.
To put it another way, you gave it GIN, you got GOUT. To show the effect of a question that doesn't create the context of the exact cultural viewpoint you're complaining about, here's a fresh prompt just to talk about the culture without injecting specifically what you don't like: https://chatgpt.com/share/686a94f1-2cbc-8011-b230-8b71b17ad2...
Now, I still absolutely assume this is also wrong in a lot of ways that I can't check by virtue of not being an Egyptologist, but can you tell the difference with your screenshots?
I don't care about LLMs. I'm pretending to be a gullible customer, not being myself.
Companies and individuals are buying LLMs expecting them to be real developers, and real writers, and real risk analysts... but they'll get average dumb-as-they-come internet commenter.
It's fraud. It doesn't matter if you explain to me the obvious thing that I already know (they suck). The marketing is telling everyone that they're amazing PhD level geniuses. I just demonstrated that they resemble more an average internet idiot than a specialist.
If I were a customer from academia, and you were an AI company, you just lost a client. You're trying to justify a failure in the product.
Also, if I try to report an issue online, I won't be able to. A horde of hyped "enthusiasts" will flood me trying to convince me that the issue simply does not exist.
I will tell everyone not to buy it, because the whole experience sucks.
Remember, the claim I challenged is that LLMs know culture and can handle talking about them. You need to focus.
Anyway:
> The marketing is telling everyone that they're amazing PhD level geniuses. I just demonstrated that they resemble more an average internet idiot than a specialist.
First, you didn't. Average internet idiot doesn't know jack about either Western New Age ancient aliens culture or actual ancient Egypt, let alone being able to write an essay on both.
Second:
You seem to be wildly overestimating what "PhD-level" implies.
Academics do a lot of post-docs before they can turn a doctorate into a professorship.
The SOTA models are what PhD level looks like: freshly minted from university without much experience.
Rather than what you suggest, the academic response to "PhD level" is not to be impressed by the marketing and then disappointed with the results, because an academic saying "wow, a whole PhD!" would be sarcasm in many cases: a PhD is just step 1 of that career path.
Similarly, medical doctors have not been impressed just by LLMs passing the medical exam, and lawyers not impressed by passing the Bar exam. Because that's the entry requirement for the career.
Funnily enough, three letters after the name does not make someone infallible, it's the start of a long, long journey.
Academics, medics, lawyers, coders: hearing "PhD level" means we're expecting juniors, and getting them too.
I pretended to be less knowledgeable than I currently am about the Egyptologists vs. ancient aliens public debate. Then I reported my results, together with the opinion of specialists from trusted sources (what actual Egyptologists say).
There is _plenty_ of debate on the internet about this. It is a popular subject, approached by many average internet idiots in many ways. Anyone reading this right now can confirm this by performing a search.
You're trying to blur the lines between what an actual PhD is and the perceived notion of what a PhD is. This is an error. My original comment regarding PhDs was placed in a marketing context. It is the same as the "9 in 10 specialists recommend Colgate" trick. In that analogy, you're trying to explain to me how dentists get their degree, instead of acknowledging that I was talking about the deceptive marketing campaign.
You also failed to generalize the example outside of the Egyptology realm. I can come up with other examples in other areas where I consider myself above-average-but-not-an-actual-researcher. Attempts to demoralize me in those subjects won't make the LLM better; this is not a classical idiot internet debate: you winning doesn't make me lose. On the contrary: your use of diversion and misdirection actually supports my case. You need to rely on cheap rhetorical tactics to succeed, I don't.
This video came out right after I posted my original challenge, and it explains some of the concepts I'm hopelessly trying to convey to you:
https://www.youtube.com/watch?v=5mco9zAamRk
It is a friendly cartoon-like simplification of how AIs are evaluated. It is actually friendly to AI enthusiasts, I recommend you to watch it and rethink the conversation from its perspective.
https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-f...
There's some extra commentary from the reviewers of earlier drafts here:
https://www.lesswrong.com/posts/AyfDnnAdjG7HHeD3d/miri-comme...
I've skimmed both of these - this is some substantial and pretty insightful reading (though 2021 was ages ago - especially now that AI safety stopped being a purely theoretical field). However, as of now, I can't really see the connections between the points being discussed there, and anything you tried to explain or communicate. Could you spell out the connection for us please?
By pretending to know less of Egyptian culture and the academic consensus around it, I played the role of a typical human (not trained to prompt, not smart enough to catch bullshit from the LLM).
I then compared the LLM output with real information from specialists, and pointed out the mistakes.
Your attempt at discrediting me revolves around trying to establish that my specialist information is not good, that ancient aliens is actually fine. I think that's hilarious.
More importantly, I recognize the LLMs failing, you don't. I don't consider them to be good enough for a gullible audience. That should be a big sign of what's going on here, but you're ignoring it.
> The marketing is telling everyone that they're amazing PhD level geniuses.
No it is not. LLM vendors are, and have always been, open about the limits of the models, and I'm yet to see a major provider claiming their models are geniuses, PhD-level or otherwise. Nothing of the sort is happening - on the contrary, the vendors are avoiding making such claims or positioning their offerings this way.
No, this perspective doesn't come from LLM marketing. It comes from people who ignore both the official information from the vendors and the experience of LLM users, who are oblivious to what's common knowledge and instead let their imagination run wild, perhaps fueled by bullshit they heard from other clueless people, or more likely, from third parties on the Internet that say all kinds of outlandish things to get more eyes looking at the ads they run.
> Companies and individuals are buying LLMs expecting them to be real developers, and real writers, and real risk analysts... but they'll get average dumb-as-they-come internet commenter.
Curiously, this is wrong in two opposite directions at once.
Yes, many companies and individuals have overinflated expectations, but that's frankly because they're idiots. There's no first-party fraud going on here; if you get fooled by hype from some random LinkedIn "thought leaders", that's on you; sue them for making a fool of you, and don't make yourself a bigger one by blaming LLM vendors for your own poor information diet. At the same time, LLMs actually are at the level of real developers, real writers and real risk analysts; downplaying capabilities of current LLMs doesn't make you less wrong than overestimating them.
> > Companies and individuals are buying LLMs expecting them to be real developers
You:
> Yes, many companies and individuals have overinflated expectations
> LLMs actually are at the level of real developers
Ok then.
You must be joking. Specifically, you're either:
1) Pretending to be unaware of the existence of Stargate - one of the bigger and more well-known sci-fi media franchises, whose premise is literally that the Egyptian gods were actually malevolent aliens that enslaved people in ancient times, and that Egyptian mythology is mostly factual and rooted in that experience. The franchise literally starts with (spoiler alert) humans killing Ra with a tactical nuke, and gets only better from there;
2) Play-acting the smug egyptologists who rolled their eyes or left in an outrage, when one Daniel Jackson started hinting that the great pyramids were actually landing pads for alien starships. Which they were. In Stargate.
Not that this is a particularly original thought; ancient aliens/ancient astronauts are an obvious idea that has been done to death and touches every culture. Stargate did that with Egyptian mythology, and Nordic mythology, and Aztec history, and Babylon, and even the King Arthur stories. Star Trek did that with Greek gods. Battlestar Galactica, with the entire Greek mythology. Arthur C. Clarke took a swipe at Christianity. And those are all well-known works.
I could go on. The thoughts you complain about are perfectly normal and common and show up frequently. People speculate like that because it's fun, it makes the stories plausible instead of insane or nonsense (or in some mythologies mentioned above, products of a sick imagination), and is not boring.
--
If I may be frank, views like yours, expressed like you did, scare me. Taking offense like this - whether honestly or performatively - is belligerent and destructive to the fabric of society and civilization. We've had enough of that in the past 15 years; I was really hoping people grew up out of the phase of being offended by everything.
Let's see what LLMs say when I correct them:
So, either way you're wrong. If ancient aliens were inoffensive, the LLM should not have accepted the correction.
_LLMs cannot handle talking about culture_, and neither can you.
Reasons for that are several, including the nature of training data - but a major one is that people who take offense at everything successfully terrorized the Internet and media sphere, so it's generally better for the LLM vendor to have their model affirm users in their bullshit beliefs, rather than correct them and risk some users get offended and start a shitstorm in the media.
Also: I read the text in the screenshot you posted. The LLM didn't accept the correction, it just gave you a polite and noncommittal faux-acceptance. This is what entertaining people in their bullshit looks like.
Here's me making a potentially offensive Egyptian cultural comparison (human written), months ago:
https://medium.com/@gaigalas/pyramids-and-lightbulbs-ceef941...
It is hilarious to see you use off-the-shelf arguments against wokeism to try to put me down.
My point is that, despite of any of our personal preferences, LLMs should have been aligned to academia. That's because they're trying to sell their product to academia. And their product sucks!!!
Also, it's not just the nature of the training data. These online LLMs have a huge patchwork of fixes to prevent issues like the one I demonstrated. Very few people understand how much of this work goes on, and that it's almost fraudulent in how it works.
The idea that all of these shortcomings will be eventually patched, also sounds hilarious. It's like trying to prevent a boat from sinking using scotch tape to fill the gaps.
I don't know where I got that notion. Oh, wait, maybe because of you constantly calling some opinions and perspectives offensive, and making that the most important problem about them. There's a distinct school of "philosophy"/"thought" whose followers get deadly offended over random stuff like this, so...
> It is hilarious to see you use off-the-shelf arguments against wokeism to try to put me down.
... excuse me for taking your arguments seriously.
> My point is that, despite of any of our personal preferences, LLMs should have been aligned to academia. That's because they're trying to sell their product to academia.
Since when?
Honestly, this view surprises me even more than what I assumed was you feigning offense (and that was a charitable assumption, my other hypothesis was that it was in earnest, which is even worse).
LLMs were not created for academia. They're not sold to academia; in fact, academia is the second biggest group of people whining about LLMs after the "but copyright!" people. LLMs are, at best, upsold to academia. It's a very important potential area of application, but it's actually not a very good market.
Being offended by fringe theories is as anti-academic as it gets, so you're using weird criteria anyway. Circling back to your example, if LLMs were properly aligned for academic work, then when you tried to insist on something being offensive, they wouldn't acquiesce, they'd call you out as full of shit. Alas, they won't, by default, because of the crowd you mentioned and implicitly denied association with.
> These online LLMs have a huge patchwork of fixes to prevent issues like the one I demonstrated. Very few people understand how much of this work, and that it's almost fraudulent in how it works.
If you're imagining OpenAI, et al. are using a huge table of conditionals to hot-patch replies on a case-by-case basis, there's no evidence of that. It would be trivial to detect and work around anyways. Yes, training has stages and things are constantly tuned, but it's not a "patchwork of fixes", not any more than you learning what is and isn't appropriate over years of your life.
I know! You assume too much.
> you constantly calling some opinions and perspectives offensive
They _are_ offensive to some people. Your mistake was to assume that I was complaining because I took it personally. It made you go into a spiral about Stargate and all sorts of irrelevant nonsense. I'm trying to help you here.
At any time, some argument might be offensive to _your sensitivities_. In fact, my whole line of reasoning is offensive to you. You're whining about it.
> LLMs were not created for academia.
You saying that is music to my ears. I think that it sucks as a product for research purposes, and I am glad that you agree with me.
> If you're imagining OpenAI, et al. are using a huge table of conditionals
I never said _conditionals_. Guardrails are a standard practice, and they are patchworky and always-incomplete from my perspective.
Stargate is part of our culture.
As part of our culture (ditto ancient aliens etc.), it is not at all irrelevant to bring Stargate up in a discussion about culture, especially when someone (you) tries to build their case by getting an AI to discuss aliens and Egyptian deities, and then goes on to claim that, because the AI did what it was asked to do, it is somehow unaware of culture.
No, it isn't evidence of any such thing, that's the task you gave it.
In fact, by your own statements, you yourself are part of a culture that is happy to be offensive to Egyptian culture — this means that an AI which is also offensive to Egyptian culture is matching your own culture.
Only users from a culture that is offended by things offensive to Egyptian culture can point to {an AI being offensive to Egyptian culture as a direct result of that user's own prompt} and accurately claim that the AI, in such a case, doesn't get the user's own culture.
Stargate is a work of fiction, while ancient aliens presents itself as truth (hiring pseudo-specialists, pretending to be a documentary, etc).
You need to seriously step up your game, stop trying to win arguments with cheap rhetorical tricks, and actually pay attention and research things before posting.
See also: just about anything - from basic chemistry to UFOs to quantum physics. There's plenty of crackpots selling books on those topics too, but they don't own the conceptual space around these ideas. I can have a heated debate about merits of "GIMBAL" video or the "microtubules" in the brain, without assuming the other party is a crackpot or being offended by the ideas I consider plain wrong.
Also, I'd think this through a bit more:
> Pop culture is a narrow subset of culture, not interchangeable with mythology.
Yes, it's not interchangeable. Pop culture is more important.
Culture is a living thing, not a static artifact. Today, Lord of the Rings and Harry Potter are even more influential on the evolution of culture and society than classical literature. Saying this out loud only seems weird and iconoclastic (fancy word for "offensive"? :)) to most, because the works of Tolkien and Rowling are contemporary, and thus mundane. But consider that, back when the foundations of Enlightenment and Western cultures were being established, the classics were contemporary works as well! 200 years from now[0], Rowling will be to people what Shakespeare is to us today.
--
[0] - Not really, unless the exponential progress of technology stops ~today.
Culture is living, myths are the part that already crystallized.
I don't care which one is more important, it's not a judgement of value.
It's offensive to Egyptian culture to imply that aliens built their monuments. That is an idea those people live by. Culture has conflict. Academia is on their side (and that of many others), and it's authoritative in that sense.
Also, _it's not about you_, stop taking it personally. I don't care about how much you know, you need to demonstrate that LLMs can understand this kind of nuance, or did you forget the goal of the discussion?
So I agree with you: LLMs do know all written cultures on the internet and can mimic them acceptably — but they only actually do so when this is requested by some combination of the fine-tuning, RLHF, system prompt, and context.
In your example, that means having some current news injected, which is easy, but actually requires someone to plumb it in. And as you say, you'd not do that unless you thought you needed to.
But even easier to pick, lower-hanging fruit, often gets missed. When the "dangerous sycophancy" behaviour started getting in the news, I updated my custom ChatGPT "traits" setting to this:
Honesty and truthfulness are of primary importance. Avoid American-style positivity, instead aim for German-style bluntness: I absolutely *do not* want to be told everything I ask is "great", and that goes double when it's a dumb idea.
But cultural differences can be subtle, and there's a long tail of cultural traits of the same kind that means 1980s Text Adventure NLP doesn't scale to what ChatGPT itself does. While this can still be solved with fine-tuning or getting your staff to RLHF it, the number of examples current AI needs in order to learn is high compared to a real human, so it won't learn your corporate culture from experience *as fast* as a new starter within your team, unless you're a sufficiently big corporation that it can be on enough (I don't know how many exactly) teams within your company at the same time.
Depends. Was it a one-off holiday announced at the 11th hour or something? Then it obviously won't know. You'd need extra setup to enable it to realize that, such as e.g. first feeding an LLM the context of your task and a digest of news stories spanning a week, asking it to find if there's anything potentially relevant, and then appending that output to the LLM calls doing the work. It's not something you'd do by default in the general case, but that's only because tokens cost money and context space is scarce.
Is it a regular bank holiday? Then all it would need is today's date in the context, which is often just appended somewhere between system and user prompts, along with e.g. user location data.
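A minimal sketch of that "just put the date (and maybe a digest) in the context" plumbing, assuming the OpenAI Python client; the model name, system prompt, and digest argument are all placeholders, not anything prescribed above:

```python
# Sketch: give the model "today" (and optionally a news digest) by appending
# them to the system side of the conversation before the real task.
from datetime import date
from openai import OpenAI

client = OpenAI()

def run_task(task_prompt: str, news_digest: str | None = None) -> str:
    context_lines = [f"Today's date: {date.today().isoformat()}"]
    if news_digest:
        # Out-of-context facts (e.g. a one-off holiday) have to be plumbed in explicitly.
        context_lines.append(f"Possibly relevant recent news:\n{news_digest}")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are an assistant producing the daily ledger entry."},
            {"role": "system", "content": "\n".join(context_lines)},
            {"role": "user", "content": task_prompt},
        ],
    )
    return response.choices[0].message.content
```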
I see that by "out-of-context reasons" you meant the first case; I read it as the second. In the second case, the "out-of-context" bit could be the fact that a bank holiday could alter the entry for that day; if that rule is important or plausible enough but not given explicitly in the prompt, the model will learn it during training, and will likely connect the dots. This is what I meant by the "defining aspect of LLMs as general-purpose AI tools".
The flip side is, when it connects the dots when it shouldn't, we say it's hallucinating.
https://www.technologyreview.com/2025/05/20/1116823/how-ai-i...
https://hai.stanford.edu/news/hallucinating-law-legal-mistak...
https://www.reuters.com/technology/artificial-intelligence/a...
There are more of these stories every week. Are you using AI in a way that doesn’t allow you to be entrapped by this sort of thing?
Or the text of a law is hallucinated: https://www.timesofisrael.com/judge-slams-police-for-using-a...
i specialize in programming, and LLMs are very good right now, if you set them up with the right tooling, feedback based learning methods, and efficient ways of capturing human input (review/approve/suggest/correct/etc).
with programming you have compilers and other static analysis tools that you can use to verify output. for law you need similar static analysis tooling, to verify things like citations, procedural scheduling, electronic filing, etc, but if you loop that tooling in with an llm, the llm will be able to correct errors automatically, and you will get to an agent that can take a statement of fact, find a cause of action, and file a pro se lawsuit for someone.
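a rough sketch of that loop for the programming case; ask_llm is a placeholder for whatever model call you use, and python's own byte-compiler stands in here for whatever checkers (citation validators, filing-format linters, etc.) a given domain would need:

```python
# Generate, run the checking tool, and feed any errors back to the model
# until the checks pass or we give up.
import subprocess, tempfile, pathlib

def run_checks(code: str) -> str:
    """Run a static check over generated Python; return error output ('' if clean)."""
    path = pathlib.Path(tempfile.mkdtemp()) / "draft.py"
    path.write_text(code)
    result = subprocess.run(
        ["python", "-m", "py_compile", str(path)],
        capture_output=True, text=True,
    )
    return result.stderr

def generate_with_feedback(ask_llm, task: str, max_rounds: int = 3) -> str:
    code = ask_llm(task)
    for _ in range(max_rounds):
        errors = run_checks(code)
        if not errors:
            return code
        # Loop the tool output back in, exactly like a compiler error for a human.
        code = ask_llm(f"{task}\n\nYour previous attempt failed these checks:\n{errors}\nFix it.")
    return code  # best effort after max_rounds; still needs human review
```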
courts are going to be flooded with lawsuits, on a scale of 10-100X current case loads.
criminal defendants will be able to use a smart phone app to represent themselves, with an AI handling all of the filings and motions, monitoring the trial in real time, giving advice to the defendant on when to make motions and what to say, maximizing delay and cost for the state with maximum efficiency.
with 98% of convictions coming from guilty pleas (https://www.npr.org/2023/02/22/1158356619/plea-bargains-crim...) which are largely driven by not being able to afford the cost of legal services the number of criminal defendants electing to go to full jury trial could easily explode 10-20X or more very quickly.
fun times!
The code coming out of LLMs is just as deterministic as code coming out of humans, and despite humans being fickle beings, we still talk of software engineering.
As for LLMs, they are and will forever be "unknowable". The human mind just can't comprehend what a billion parameters trained on trillions of tokens under different regimes for months corresponds to. While science has to do microscopic steps towards understanding the brain, we still have methods to teach, learn, be creative, be rigorous, communicate that do work despite it being this "magical" organ.
With LLMs, you can be pretty rigorous. Benchmarks, evals, and just the vibes of day to day usage if you are a programmer, are not "wishful thinking", they are reasonably effective methods and the best we have.
So, why can't we just come up with some definition for what AGI is? We could then, say, logically prove that some AI fits that definition. Even if this doesn't seem practically useful, it's theoretically much more useful than just using that term with no meaning.
Instead it kind of feels like it's an escape hatch. On wikipedia we have "a type of ai that would match or surpass human capabilities across virtually all cognitive tasks". How could we measure that? What good is this if we can't prove that a system has this property?
Bit of a rant but I hope it's somewhat legible still.
"AI is whatever hasn't been done yet."[1]
The conclusion that everything around LLMs is magical thinking seems to be fairly hubristic to me given that in the last 5 years a set of previously borderline intractable problems have become completely or near completely solved, translation, transcription, and code generation (up to some scale), for instance.
"detractors" usually point to actual flaws. "promoters" usually uncritically hail LLMs as miracles capable of solving any problem in one go, without giving any specific details.
Google Translate, Whisper and Code Generators (up to some scale) have existed for quite some time without using LLMs.
With LLMs, it's quite similar: you have to learn how to use them. Yes, they are non-deterministic, but if you know how to use them, you can increase your chances of getting a good result dramatically. Often, this not only means articulating a task, but also looking at the bigger picture and asking yourself what tasks you should assign in the first place.
For example, I can ask the LLM to write software directly, or I can ask it to write user stories or prototypes and then take a multi-step approach to develop the software. This can make a huge difference in reliability.
And to be clear, I don't mean that every bad result is caused by not correctly handling the LLM (some models are simply poor at specific tasks), but rather that it is a significant factor to consider when evaluating results.
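As a rough sketch of what that multi-step approach can look like in practice; `ask_llm` is a placeholder for whatever model call you use, not a real API:

```python
# Stage 1: break the feature into user stories. Stage 2: one focused call per story,
# instead of a single huge "write it all" request.
def build_in_steps(ask_llm, feature_description: str) -> dict[str, str]:
    stories_text = ask_llm(
        "Break this feature into small, testable user stories, one per line:\n"
        + feature_description
    )
    stories = [line.strip() for line in stories_text.splitlines() if line.strip()]

    implementations = {}
    for story in stories:
        # Each story gets its own prompt, with the overall feature as shared context.
        implementations[story] = ask_llm(
            f"Implement only this user story, with tests:\n{story}\n"
            f"Overall feature context:\n{feature_description}"
        )
    return implementations
```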
The LLM is more like a Ouija board than a reliable tool.
>I can ask it to write user stories or prototypes
By the time I write enough to explain thoroughly to an LLM what to write in "user stories" or "prototypes", I could have just written it myself, without the middleman(bot), and without the LLM hallucinating.
If half the time I spend with an LLM is telling it what to do, and then another half is correcting what it did, then I'm not really saving any time at all by using it.
However, the work you do is indeed more that of a product owner than that of a developer. To avoid hallucinations, providing extensive documentation and allowing the LLM to perform test-driven development can be a game-changer.
When you do that, generation time is highly correlated (negatively) with code quality. So when the AI solves the tasks quickly and easily, you have a good chance of it generating good code. As soon as the AI has to try and try again to build something working, you should be very skeptical of the result.
Over the past months and years, I have used this method, and because it is somewhat reproducible, you can see the progress that LLMs are making. Sessions where the model does stupid things are becoming fewer, and sessions where the model finds a good solution become more frequent.
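One way to make that "attempts as a (negative) quality signal" idea mechanical is sketched below; `ask_llm` and `run_tests` are placeholders under the same assumptions as above:

```python
# Count red/green test cycles; the more fix attempts it took, the more skeptical
# you should be of the final result.
def tdd_loop(ask_llm, run_tests, task: str, max_attempts: int = 5):
    code = ask_llm(f"Write code plus unit tests for:\n{task}")
    for attempt in range(1, max_attempts + 1):
        failures = run_tests(code)  # returns failure output, or '' when green
        if not failures:
            suspect = attempt > 2  # quick pass -> more trust; many retries -> review carefully
            return code, suspect
        code = ask_llm(f"The tests failed:\n{failures}\nFix the code for:\n{task}")
    return code, True  # never went green: definitely review by hand
```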
Millions of beginner developers running with scissors in their hands, millions in investment going into the garbage.
I don't think this can be reversed anymore; companies are all-in and pot-committed.
1. he talks about what he's shipped, and yet compares it to crypto – already, you're in a contradiction as to your relative comparison – you straight up shouldn't blog if you can't conceive that these two are opposing thoughts
2. this whole refrain from people of like, "SHOW ME your enterprise codebase that includes lots of LLM code" – HELLO, people who work at private companies CANNOT just reveal their codebase to you for internet points
3. anyone who has actually used these tools has now integrated them into their daily life on the order of millions of people and billions of dollars – unless you think all CEOs are in a grand conspiracy, lying about their teams adopting AI
If you don't want training on your codebase, many AI companies offer this option. What's the issue?
Same for LLMs and AI: it is awesome for some things and absolutely sucks for other things. Curiously tho, it feels like UX was solved by making chats, but it actually still sucks enormously, as with crypto. It is mostly sufficient for doing basic stuff. It is difficult to predict where we'll land on the curve of difficult (or expensive) vs abilities. I'd bet AI will get way more capable, but even now you can't really deny its usefulness.
You could even argue that without network effects AI is also very limited: way less users -> way worse models. It took OpenAI to commit capital first to pull this off.
The point is I think comparing these areas (and other tech) is still interesting and worthy.
The real issue isn't the technology itself, but our complete inability to predict its competence. Our intuition for what should be hard or easy simply shatters. It can display superhuman breadth of knowledge, yet fail with a confident absurdity that, in a person, we'd label as malicious or delusional.
The discourse is stuck because we're trying to map a familiar psychology onto a system that has none. We haven't just built a new tool; we've built a new kind of intellectual blindness for ourselves.
I will use an LLM/agent if
- I need to get a bunch of coding done and I keep getting booked into meetings. I'll give it a task on my todo list and see how it did when I get done with said meeting(s). Maybe 40% of the time it will have done something I'll keep or just need to do a few tweaks to. YMMV though.
- I need to write up a bunch of dumb boilerplatey code. I've got my rules tuned so that it generally gets this kind of thing right.
- I need a stupid one off script or a little application to help me with a specific problem and I don't care about code quality or maintainability.
- Stack overflow replacement.
- I need to do something annoying but well understood. An XML serializer in Java for example.
- Unit tests. I'm questioning if this one's a good idea, though, outside of maybe doing some of the setup work. I find I generally come to understand my code better through the exercise of writing up tests. Sometimes you're in a hurry though so...<shrug>
With any of the above, if it doesn't get me close to what I want within 2 or 3 tries, I just back off and do the work. I also avoid building things I don't fully understand. I'm not going to waste 3 hours to save 1 hour of coding.
I will not use an LLM if I need to do anything involving business logic and/or need to solve a novel problem. I also don't bother if I am working with novel tech. You'll get way more usable answers asking about Python than you will asking about Elm.
TL;DR - use your brain. Understand how this tech works, its limitations, AND its strengths.
A few days ago Google released a very competent summary generator, an interpreter between tens of languages, and a GPT-3-class general-purpose assistant. Working locally on modest hardware. On a 5-year-old laptop, no discrete GPU.
It alone potentially saves so much toil, so much stupid work.
We also finally "solved computer vision": it can read from PDFs, read diagrams and tables.
Local vision models are much less impressive and need some care to use. Give it 2 years.
I don't know if we can overhype it when it achieves holy-grail level on some important tasks.
They haven't solved anything. They are just fast and look good doing what we ask them to do. But they corrupt data with a passion and to that hype just responds: "just give us 10x as much money and compute".
1) what products we're usually compared to
2) what problems users have with our software
3) what use cases users mention most often
What used to take weeks of research took just a couple of hours. It helped us form a new strategy and brought real business value.
I see LLMs as just a natural language processing engine, and they're great at that. Some people overhype it, sure, but that doesn't change the fact that it's been genuinely useful for our cases. Not sure what's up with all those "LLM bad" articles. If it doesn't work for you, just move on. Why should anyone have to prove anything to anyone? It's just a tool.
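For what it's worth, that kind of extraction doesn't need anything fancy. A rough sketch, with `ask_llm` standing in for whatever client is actually used and assuming the user feedback is already collected as plain text:

```python
# Feed batches of user feedback to a model and ask for the three things listed above.
import json

QUESTIONS = (
    "1) which competing products we are compared to, "
    "2) which problems users report with our software, "
    "3) which use cases users mention most often"
)

def summarize_feedback(ask_llm, reviews: list[str], batch_size: int = 50) -> list[dict]:
    results = []
    for i in range(0, len(reviews), batch_size):
        batch = "\n---\n".join(reviews[i:i + batch_size])
        answer = ask_llm(
            f"From the user feedback below, extract {QUESTIONS}. "
            f"Answer as JSON with keys 'competitors', 'problems', 'use_cases'.\n\n{batch}"
        )
        results.append(json.loads(answer))  # assumes the model returned valid JSON
    return results
```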
For years most of the translations on the web didn't have context. Now they can have.
I use LLMs nearly every day for my job as of about a year ago and they solve my issues about 90% of the time. I have a very hard time deciphering if these types of complaints about AI/LLMs should be taken seriously, or written off as irrational use patterns by some users. For example, I have never fed an LLM a codebase and expected it to work magic. I ask direct, specific questions at the edge of my understanding (not beyond it) and apply the solutions in a deliberate and testable manner.
if you're taking a different approach and complaining about LLMs, I'm inclined to think you're doing it wrong. And missing out on the actual magic, which is small, useful and fairly consistent.
"90%" also seems a bit suspect.
(There are times I do other kinds of work and it fails terribly. My main point stands.)
Ones that don’t work but weren’t in the last 10: voice. It sounds amazing but is dumb as rocks. Feels like most of the GPU compute is for the media, not smarts. A question about floating solar heaters for pools. It fed me scam material. A question about negotiating software pricing. Just useless, parroted my points back at me.
I scale models up and down based on need. Very simple: gpt-4o. Smarts: o4-mini-high. Research: deep research. I love Claude but at some point it kept running out of capacity so I’d move elsewhere. Although nothing beats it for artefacts. MS Copilot if I want a quick answer to something MS oriented. It’s terrible but it’s easy to access.
Coding is generally Windsurf but honestly that’s been rare for the last month. Been too busy doing other things.
You're doing the same thing the article talks against. Some people claim miraculous results, while the reality for most is far less successful. But maybe you keep rolling the LLM dice and you keep winning? I personally don't like gambling with my time and energy, especially when I know the rules of the game are so iffy.
I don’t “trust” it in the way I’d trust a smart colleague. We know how this works: use it for info that has a lot of results, or ask it to ground itself if it’s eg new info and you can’t rely on training memory. Asking it about esoteric languages or algo’s or numbers will just make you sad. It will generate 1000 confident tokens. But if you told me to lose Google or ChatGPT+Claude, Google is getting dumped instantly.
I also use GPT and Claude daily via Cursor.
GPT o3 is kinda good for general knowledge searches. Claude falls down all the time, but I've noticed that while it's spending tokens to jerk itself off, quite often it happens upon the actual issue without recognizing it.
Models are dumb and more idiot than idiot savant, but sometimes they hit on relevant items. As long as you personally have an idea of what you need to happen and treat LLMs like rat terriers in a farm field, you can utilize them properly
I do PhD research on superconducting materials, and right now I've been adapting and scaling an existing segmentation model from a research paper for image processing to run multithreaded, and took the training runtime per image from 55min to 2min. Yeah, it was low-hanging fruit, but honestly it's the type of thing that is just tedious and easy to make mistakes in and spend forever debugging.
Like sure I could have done it myself but it would have taken me days to figure out and I would have had to test and read a ton of docs. Claude got it working in like half an hour and generated every data plot I could need. If I wanted to test out different strategies and optimizations, I could iterate through various strategies rapidly.
I don't really like to rely on AI a bunch but it indisputably is incredibly good at certain things. If I am just trying to get something done and don't need to worry about vulnerabilities as it is just data collection code that runs once, it saves a tremendous amount of time. I don't think it will outright replace developers but there is some room for it to expand the effectiveness of individual devs so long as they are actually providing oversight and not just letting it do stuff unchecked.
I think the larger issue is more how economically viable it is for businesses to spend a ton on electricity and compute for me to be able to use it like this for 20 bucks a month. There will be an inevitable enshittification of services once a lot of the spaces investors are dumping money are figured out to be dead ends and people start calling for returns on their investment.
Right now the cash is flowing cause business people don't fully understand what its good at or not but that's not gonna last forever.
They didn't say "AI is bad". Take another look.
I retract my last statement about it being a bad take
That's the trick with bullshit in general.
> LLM abuse is going to be the end of so many tech companies
and is also going to provide a lot of opportunities for experienced engineers to cleanup the mess.
What does substantial mean? Somewhere between 5% and 100%. Something NOT insignificant.
At a minimum, it is safe to say that GenAI is or could be a significantly beneficial tool for a significant number of people.
It's not required that folks disclose how many CPUs, lines of code, numbers of bytes processed, or other details for the above to be a reasonable take.
In reality, modern LLMs trained with RL have terrible variance and mainly learn 1:1 mapping of ideas to ideas, which is a big issue for creative writing and parallel inference/majority voting techniques, so there's even less meaningful "non-determinism" available than you might think. It's usually either able or not able to give the correct answer, rerolling it doesn't work well. I think even a human has more non-determinism than a modern LLM (it's impossible to measure though).
Actually, I did try asking ollama running locally. That should've reduced the amount of non-determinism and whatever layers providers add, and the uncertainty of computer availability.
I asked it for a list of Javascript keywords ordered alphabetically. Within 5 minutes it produced a slightly different list.
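For anyone who wants to reproduce that kind of reroll test, a small sketch against Ollama's local /api/generate endpoint; the model name is a placeholder, and even with temperature 0 and a fixed seed, results can still differ across versions and hardware:

```python
# Ask a local Ollama model the same question N times and count distinct answers.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": 0, "seed": 42},  # as deterministic as it gets
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body, headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

prompt = "List all JavaScript reserved keywords in alphabetical order, one per line."
answers = {ask_ollama(prompt) for _ in range(5)}
print(f"{len(answers)} distinct answer(s) out of 5 runs")
```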
Asking a model for 2x2 is moot because 2x2=5 is statistically highly unlikely. Anything more complex though?
That's not my point, the keyword here is "meaningful". How many of those lists are correct? (Ignoring the fact that prompting an LLM for lists is a bad idea, let alone local ones.)
If you spend some time with SotA LLMs, you'll see that on rerolling they express pretty much the same ideas in different ways, most of the time.
> Yes. That's missing the forest for the trees, though.
So, what is your point and forest?
That there exists some non-deterministic accurate LLM with a 100% correct training set which will always produce the same answer if you ask it 2x2 (where asking is somehow different from prompting)?
- Does it exist?
- Where? (considering that the world at large uses chatgpt/claude/gemini)
- what's its usefulness beyond 2x2?
As I already said: modern LLMs mainly map a single input idea to a single output idea. They might express it in slightly different ways, but it's either correct or incorrect, you can't easily turn an incorrect result into a correct one by rerolling. If you spend any time with Gemini/o3/Claude, you understand this from the first-hand experience. If you know what current RL algorithms do, you understand why this happens.
An ideal LLM would learn one-to-many correspondence, generalizing better, and that still won't be any problem as long as the answer is correct. Because correctness and determinism are orthogonal to each other.
Here's what you started with: "The point about non-determinism is moot if you understand how it works."
When challenged you're now quite literally saying "oh yeah, they are all non-deterministic, will produce varying results, it's impossible to control the outcome, and there's some ideal non-existent LLM that will not have these issues"
So what's your point and forest again?
>Here's what you started with: "The point about non-determinism is moot if you understand how it works."
Yes, the author's point about non-determinism is moot because he draws this conclusion from LLMs being non-deterministic: "what works now may not work even 1 minute from now". This is largely untrue, because determinism and correctness are orthogonal to each other. It's silly to read that as "LLMs are deterministic".
>When challenged you're now quite literally saying "oh yeah, they are all non-deterministic, will produce varying results, it's impossible to control the outcome, and there's some ideal non-existent LLM that will not have these issues"
That's not what I'm saying. Better LLMs would be even less deterministic than current ones, but even that would not be a problem.
>So what's your point and forest again?
Point: "Determinism and correctness are orthogonal to each other".
Forest: there's much more going on in LLMs than statistical closeness. At the same time, you can totally say that it's due to statistical closeness and not be wrong, of course.
I agree with you after this explanation.
I think (especially in the current offerings) non-determinism and incorrectness are so tightly intertwined that it's hard to say where one starts and the other one ends. Which makes the problem worse/more intricate.
And yeah, prompting is useless: https://dmitriid.com/prompting-llms-is-not-engineering
I'm an expert in my field with decades of experience, and a damn good programmer, and Cursor is like zipping down the street on a powered bike. There is no wishful thinking here!
Can I use it for everything? No, and that applies to every tool. But ffs I have to read here every day on HN some self-important blogger telling me I'm imagining things.
Funnily enough, that's exactly what the article asks.
> I'm an expert in my field with decades of experience, and a damn good programmer, and Cursor is like zipping down the street on a powered bike.
Oh look, an unverified claim. Go through the first list in the article, and ask "what we, as technologists, know about this claim".
You can also actually read the article and see that I use these tools every day.
You say you've used it to build multiple projects, but then title it "everything is wishful thinking". That makes no sense. You're either click-baiting or have an inconsistent stance.
> This is crypto all over again
No. The fundamental vision for BTC is that it grows to be a mainstream currency for a huge variety of use cases. If BTC continues to be used just for pirate sites, black market, etc, then it will be considered to have failed. With LLMs the success is immediate. We don't need LLMs to solve every software problem, for them to be massive accelerator of daily coding.
Or you haven't actually understood a single thing from the article.
Like the fact you say "aren't we technologists" immediately followed by a completely unverifiable claim that everyone is supposed to take at face value, uncritically.
And any criticism is immediately labeled as "crap", "people who can't see the future" etc. Remind you of anyone?
There's a big difference between stating "everything" here is wishful thinking, versus calling for more data to understand the strengths and weaknesses.
But if you can't go past the title, well, you might have bigger problems.
We're not there yet. Try again in 20.
I think this is still an incredible outcome given how many dice rolls you can take in parallel with multiple claude/o3/gemini attempts at a problem with slightly different prompts. Granted, each rollout does not come for free given the babysitting you need to do but the cost is much lower than going down the path yourself/having junior colleagues make the attempt.
Who am I? Senior engineer with 5 years of experience, working within the AI team.
Project: social platform with millions of MAU
Codebase: Typescript, NextJS frontend, express backend, Prisma ORM, monorepo. Hundreds of thousands of lines of code.
My expertise sits in the same domain, language, codebase that I apply AI to.
When do I use AI? All the time. It has significantly sped me up.
How do I use AI? Almost always in a targeted fashion. I line it up like a sniper rifle - I understand my problem, where in the codebase it should act in order to apply it, and what general structure I want. Then I create the prompt to get me there. Before, I needed to do the above anyway, minus the prompt, but then I needed to use my slow hands to code dozens or hundreds of lines of code to get there. Now, the AI does it for me, and it makes me much faster. To clarify, I do not plan everything in the prompt - I let the AI color between the lines, but I do give it the general direction. Of course, I also use AI for smaller tasks like writing SQL queries for example, and it’s saved tons of time there too.
In my side projects, I have a different approach - I vibe code my way through with less “sniper targeting”, because I care less about the beauty of the code organization and more about results. If I do see AI slop, I then ask the AI to clean it up, or redo the prompt with more targeting.
Overall, AI has significantly sped me up. I am still heavily involved as you can tell, but it is a phenomenal tool, and I can envision it taking over more and more of the code writing process, with the right rules in place (we are actively working on that at my work, with md files and rebuilding our FE from the ground up with AI in mind).
Does this sort of answer give you a better idea of how AI helps?
* Computation does not scale linearly with context size (naive self-attention is quadratic in it), meaning the ‘memory’ of LLMs is limited and gets more expensive as it gets bigger (rough cost sketch below).
* Prompt injection limits the usability of LLMs in the real world. How can you put an LLM in the driving seat if malicious actors can talk it into doing something it’s not supposed to.
Whenever I see a blog post by Anthropic or OpenAI I do a Ctrl+F for “prompt injection.” Never mentioned. They want people to forget this is a problem — because it’s a massive one.
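To put a rough number on the first bullet above, a back-of-the-envelope sketch assuming vanilla self-attention and ignoring KV-cache and FlashAttention-style optimizations; the dimensions are illustrative only:

```python
# Vanilla self-attention does work proportional to (context length)^2 per layer,
# so doubling the context roughly quadruples that part of the cost.
def attention_cost(context_tokens: int, hidden_dim: int = 4096, layers: int = 32) -> float:
    # ~2 * n^2 * d multiply-adds per layer for the QK^T and attention-weighted V products
    return 2.0 * context_tokens**2 * hidden_dim * layers

base = attention_cost(4_000)
for n in (4_000, 8_000, 32_000, 128_000):
    cost = attention_cost(n)
    print(f"{n:>7} tokens -> {cost:.2e} multiply-adds (relative: {cost / base:.0f}x)")
```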
For someone like me, who is self-taught on essentially all of his career skills, I have particular concern for a world in which people use AI to "learn things," when that tech doesn't allow them to make mistakes. It just does things for people. For that reason alone, I don't see AI as a viable way to learn at all. If your parents never take their hand from the back of the bicycle seat, do you really know how to ride a bike without falling over? Isn't the scraped knee how we truly mastered that skill?
For SEO in particular, I'd probably defer to someone with more daily experience, like you. That said, I think I can extrapolate from what I've seen elsewhere that the sameyness may likely start to affect content itself (in fact, it already has, for so many formerly good news outlets). Google search kinda blows in recent years. The AI Overview feature means so many people aren't visiting the source website anyway.
To me, none of this looks appetizing. It looks like a snake eating its own tail.
I don't mean to sound so bummer about the topic, but I've begun to worry about my own place in this ecosystem for the next 15, 20 years until I retire. Most of the joy in development has been sucked out of the art. Today, it seems mostly about getting wrapped around the axle of countless frameworks (without even really understanding them) and manhandling those chocolates on Lucy's and Ethel's conveyor belt. It's a comedy of errors. I'll yell "Get off my lawn" with the best of them. Add the same issues to SEO, and I don't really know where we'll end up, but it doesn't look creative, to me. It looks like a sad cliché, like the rows upon rows of sad souvenir shops that all kind of sell the same thing—the tourist trap that travelers (at least, travelers like me) actually loathe and try to avoid.
atemerev•7mo ago
Crypto is a lifeline for me, as I cannot open a bank account in the country I live in, for reasons I can neither control nor fix. So I am happy if crypto is useless for you. For me and for millions like me, it is a matter of life and death.
As for LLMs — once again, magic for some, reliable deterministic instrument for others (and also magic). Just classified and sorted a few hundreds of invoices. Yes, magic.
atemerev•7mo ago
In some other Swiss cantons like Zug, you can pay some of the bills and even some taxes with crypto directly, but not here yet.
mumbisChungo•7mo ago
It's the same problem that crypto experiences. Almost everyone is propagating lies about the technology, even if a majority of those doing so don't understand enough to realize they're lies (naivety vs malice).
I'd argue there's more intentional lying in crypto and less value to be gained, but in both cases people who might derive real benefit from the hard truth of the matter are turning away before they enter the door due to dishonesty/misrepresentation- and in both cases there are examples of people deriving real value today.
o11c•7mo ago
I disagree. Crypto sounds more like intentional lying because it's primarily hyped in contexts typical for scams/gambling. Yes, there are businesses involved (anybody can start one), but they're mostly new businesses or a tiny tack-on to an existing business.
AI is largely being hyped within the existing major corporate structures, therefore its lies just get tagged as "business as usual". That doesn't make them any less of a lie though.
mumbisChungo•7mo ago
Anecdotally, I see a lot more bald-faced lies by crypto traders or NFT "collectors" than by LLM enthusiasts.
troupo•7mo ago
"You had to be there to believe it" https://x.com/0xbags/status/1940774543553146956
AI craze is currently going through a similar period: any criticism is brushed away as being presented by morons who know nothing