Correct me if I'm wrong, but those studies would not have looked at TypeScript itself, which for all I know could be complete garbage designed to lock you into MSFT products.
It's a formal acknowledgement that humans make mistakes, that implicit assumptions are dangerous, and that code should be validated before it runs. That's literally the whole point, and if developers as a group were the "YOLO, I'll push it anyway" type, TS wouldn't have gotten anywhere near the traction it has. Static typing is a monument to distrust.
It lets you chain data from one task to another based on knowing what's possible and available given the types. That information just straight up doesn't exist in JS code, unless you write JSDoc comments all over the place with that information as static analysis hints. But why not just embed that information in the language itself instead of tacking it onto the top of every chunk of code? It's much more elegant and allows way more power.
Even if you used JSDoc, you'd still need a static analysis tool to exist, and hey, that's exactly what TypeScript is. It's both a language definition for augmenting JS with types and a transpiler which takes TS code and spits out JS code, _and_ it will tell you if there are problems with the types. And since that engine exists, you can plug it into LSP and so on.
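To make that concrete, here's a rough sketch of the same type information expressed as JSDoc hints versus native TS syntax (illustrative names only, not from any real codebase):

    // JSDoc flavour: the types live in comments and are only checked if a
    // tool (e.g. the TypeScript compiler with checkJs) is pointed at the file.
    /**
     * @param {{ id: number, email: string }} user
     * @returns {string}
     */
    function formatUser(user) {
      return `${user.id}: ${user.email}`;
    }

    // TypeScript flavour: the same information is part of the language itself,
    // so the compiler and editor tooling can use it directly.
    interface User {
      id: number;
      email: string;
    }

    function formatUserTS(user: User): string {
      return `${user.id}: ${user.email}`;
    }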
Remember, LSP exists _because_ of TypeScript. VSCode exists because of TypeScript. The team that works on VSCode and came up with LSP did it because they wanted TypeScript to integrate with VSCode and they had the foresight to build something generalized that could be used for ANY language.
Yes, we have decades of such research, and the aggregate result of all those studies is that no productivity gain can be significantly demonstrated for static over dynamic, and vice-versa.
Where is the proof that Javascript is a better language than Typescript? How do you know if you should be writing in Java/Python/C#/Rust/etc.? You should probably wait to create your startup lest you fall into a psychological trap. That is the ultimate conclusion of this article.
It's ok to learn and experiment with things, and to build up your own understanding of the world based on your lived experiences. You need to be open minded and reevaluate your positions as more formalized understandings become available, but to say it's too dangerous to use AI because science hasn't tested anything is absurd.
This is an interesting question, really. It feels like it would be really hard to do a study on that. I guess the strength of TS would show up mainly as program complexity grows, such that you can't compare toy problems in student exams or whatever.
Well, that position is hopelessly circular, filled with sophistry and fallacy. Descartes proved that one wrong in the 1600s: "Cogito, ergo sum."
> Where is the proof that Javascript is a better language than Typescript?
Any 'best' question relies upon an answer to the question of "for what objective purpose?" Not providing that answer is the same as dissembling circularly, and any conclusion based on that is more likely to be false than true. False beliefs held in this manner are, by definition, delusion.
It seems messy. Just one example that I remember because it was on HN before: https://www.hillelwayne.com/post/this-is-how-science-happens...
We have one study on test driven development. Another study that attempted to reproduce the results but found flaws in the original. Nothing conclusive.
The field of empirical research in software development practices is... woefully underfunded and incomplete. I think all we can say is, "more data needed."
hwayne did a talk on this [0].
If you ever try to read the literature, it's spartan. We certainly haven't improved enough in recent years to make conclusions about the productivity of LLM-based coding tools. We have a study on Copilot by Microsoft employees who studied Microsoft employees using it (Microsoft owns and develops Copilot). There's another study that suggests Copilot increases error rates in code bases by 41%.
What the author is getting at is that you can't rely on personal anecdotes and blog posts and social media influencers to understand the effects of AI on productivity.
If we want to know how it affects productivity we need to fund more and better studies.
I seem to recall at least some research dating back to the 90s on the topic, which showed how much better (by some metric I can't remember) Ada was wrt most other languages of the time.
I'd be interested in the type of structured research the author is interested in. Could it also be researched whether Go or PHP is better for web development? In some sense, I guess. Both are probably more efficient than writing Apache extensions in assembler, but who knows?
The FUD to spread is not that AI is a psychological hazard, but that critical reasoning and training are much, much more important than they once were, that it's only going to get more difficult, and that a large percentage of white-collar workers, artists, and musicians will likely lose their jobs.
Not sure which side of the argument this statement is promoting.
There must be something for which humans are essential. Right? Hello? Anybody? It's not looking good for new college graduates.[1]
[1] https://www.usatoday.com/story/money/2025/06/05/ai-replacing...
The expense of an LLM prompt is cents; the expense of an entry-level programmer is at least $50k/yr.
Why would this be? It's simple economics.
There are firm requirements on both parties (employer/employee) in any given profession in an economy. You must make a profit above what it costs you to survive.
That puts a floor on the labor costs involved for every profession. If these costs fall below that floor in purchasing power, no potential entrants who are competent will enter that market. There is no economic benefit in doing so, given the opportunity cost. Worse, severe competition for jobs will also force the most competent out first (brain drain).
Not only are you losing people going into the pipeline, you are also losing mid-to-senior level people to burnout and to this brain drain, as adverse competition from the shrinking job pool takes its toll. AI effectively eliminates capital formation (through the time value of labor going to zero), which breaks the foundations of every market economy over the past thousand years or so. We have no suitable replacement, and we depend on production remaining at current yields to support our population level (food).
What happened to the advanced vacuum tube engineers after transistors were miniaturized? A lot of those processes and techniques became lost knowledge. The engineers who specialized in them retired and didn't pass that knowledge on, because there was no economic benefit in doing so.
White-collar jobs account for ~60% of the economy and will be replaced by AI. We've only seen a small percentage of the economy impacted so far, and it's created chaotic whipsaws.
What happens when those single digit percentage disruptions become over half?
I think a lot of senior people talking about the impacts of this technology on "junior developers" understand this, and are trying to talk their own book.
It is both.
The inconsistency of the distorted reflected appraisal in the responses of these things does harm people, but you don't realize it because it's subliminal.
Without knowing how cult programming and torture work, you don't recognize the danger, and to compete you are forced to expose yourself. It's a race to the bottom.
It also turns off all the entry-level sequential career pipelines, which results in a cascading catastrophic failure within 5-10 years, at a time when you have the least ability to correct the underlying issues.
Doctors, lawyers, engineers, IT: the entry-level parts of these careers can all be done by specially crafted prompts using AI, but the AI just won't ever improve beyond that point, because it's not artificial intelligence, it's pseudo-intelligence.
> But Cialdini’s book was a turning point because it highlighted the very real limitations to human reasoning. No matter how smart you were, the mechanisms of your thinkings could easily be tricked in ways that completely bypassed your logical thinking and could insert ideas and trigger decisions that were not in your best interest.
The author is an outspoken AI skeptic, who then spends the rest of the article arguing that, despite clear evidence, LLMs are not a useful tool for software engineering.
I would encourage them to re-read the first half of their article and question if maybe they are falling victim to what it describes!
Baldur calls for scientific research to demonstrate whether LLMs are useful programming productivity enhancements or not. I would hope that, if such research goes against their beliefs, they would choose to reassess.
(I'm not holding my breath with respect to good research: I've read a bunch of academic papers on software development productivity in the past and found most of them to be pretty disappointing: this field is notoriously difficult to measure.)
I took a template (very little of the code should be the same), added language support, and included RSS feeds. Here it is: http://news.expatcircle.com/
(Registration does not work; I have to upload the latest version. This thing is not really live yet. I will make it available on my GitHub.)
BTW, VS Code is the best software MS ever produced.
The better question should be whether long-term LLM use in software will make the overall software landscape better or worse. For example, LLM use could theoretically allow "better" software engineering by reducing bugs, making coding complex interfaces easier --- but in the long run, that could also increase complexity, making the overall user experience worse because everything is going to be rebuilt on more complex software/hardware infrastructures.
And the top 10% of coders' use of LLMs could also make their software better but make the bottom 90% worse due to shoddy coding. Is that an acceptable trade-off?
The problem is, if we only look at one variable, or "software engineering efficiency" measured in some operational way, we ignore the grander effects on the ecosystem, which I think will be primarily negative due to the bottom 90% effect (what people actually use will be nightmarish, even if a few large programs can be improved).
1. Attempt to prevent LLMs from being used to write software. I can't begin to imagine how that would work at this point.
2. Figure out things we can do to try and ensure that the software ecosystem gets better rather than worse given the existence of these new tools.
I'm ready to invest my efforts in 2, personally.
> Our only recourse as a field is the same as with naturopathy: scientific studies by impartial researchers. That takes time, which means we have a responsibility to hold off as research plays out, much like we do with promising drugs
Author in another article:
> Most of the hype is bullshit. AI is already full of grifters, cons, and snake oil salesmen, and it’s only going to get worse.
https://illusion.baldurbjarnason.com/
So I assume he has scientific research at hand to back up his claim that AI is full of grifters, cons, ... and that it will get worse.
I also have to point out that the author's maligning of the now famous Cloudflare experiment is totally misguided.
"There are no controls or alternate experiments" -- there are tons and tons of implementations of the OAuth spec done without AI.
"We also have to take their (Cloudflare’s) word for it that this is actually code of an equal quality to what they’d get by another method." -- we do not. It was developed publicly in Github for a reason.
No, this was not a highly controlled lab experiment. It does not settle the issue once and for all. But it is an excellent case study, and a strong piece of evidence that AI is actually useful, and discarding it based on bad vibes is just dumb. You could discard it for other reasons! Perhaps after a more thorough review, we will discover that the implementation was actually full of bugs. That would be a strong piece of evidence that AI is less useful than we thought. Or maybe you concede that AI was useful in this specific instance, but still think that for development where there isn't a clearly defined spec AI is much less useful. Or maybe AI was only useful because the engineer guiding it was highly skilled, and anything a highly skilled engineer works on is likely to be pretty good. But just throwing the whole thing out because it doesn't meet your personal definition of scientific rigor is not useful.
I do hear where the author is coming from on the psychological dangers of AI, but the author's preferred solution of "simply do not use it" is not what I'm going to do. It would be more useful if, instead of fearmongering, the author gave concrete examples of the psychological dangers of AI. A controlled experiment would be best of course, but I'd take a Cloudflare-style case study too. And if that evidence cannot be provided, then perhaps the psychological danger of AI is overstated?
If you think the shoddy code currently put into production is fine, you're likely to view LLM-generated code as miraculous.
If you think that we should stop reinventing variations on the same shoddy code over and over - and instead find ways of reusing existing solid code and generally improving quality (this was the promise of Object Orientation back in the nineties, which now looks laughable) - then you'll think LLMs are a cynical way to speed up the throughput of garbage code while employing fewer crappy programmers.
'kentonv said this best on another thread:
"It's not the typing itself that constrains, it's the detailed but non-essential decision-making. Every line of code requires making several decisions, like naming variables, deciding basic structure, etc. Many of these fine-grained decisions are obvious or don't matter, but it's still mentally taxing" [... they go on from here].
(Thread: https://news.ycombinator.com/item?id=44209249).
What does that look like on a scoreboard? I guess you'll have to wait a while. Most things that most people write, even when they're successful, aren't notable as code artifacts. A small fraction of successful projects do get that kind of notability; at some point in the next couple years, a couple of them will likely be significantly LLM-assisted, just because it's a really effective way of working. But the sparkliest bits of code in those projects are still likely to be mostly human.
...and yet, the author literally just published a book called "The Intelligence Illusion: Why generative models are bad for business": https://www.baldurbjarnason.com/2024/intelligence-illusion-2...
It seems they might have missed "motivated reasoning" in their study of human cognitive faults.
The key question is "For whom?" Because there are clearly winners and losers as in any social fad.
> Trusting your own judgement on 'AI' is a risk... The only sensible action to take... is to hold off... adoption of “AI” [until] extensive study [for] side effects [and] adverse interactions.
The adverse effects are already here, quite visible in the comment sections of most popular sites. Nobody is going to stop the world in order to be certified by a self-styled QC department, so study it in motion, a lot can be done that way too. Making unreasonable requests as a precondition for studying AI risks is actually quite wasteful and damaging.
> There is no winning in a debate with somebody who is deliberately not paying attention.
You shouldn't be engaging in a debate with that kind of person. Doing it repeatedly sounds like Stockholm syndrome; you read psychology books, you should know this better than I do.
Are there, though? We are just at the front of the curve. If we look at the whole picture: what if AI destroys the underlying mechanics of exchange society, and we rely on production that only occurs as a result of those mechanics for food, and by extension the disruption caused by AI causes this to fail, with everyone starving... where would the winners be if everyone ended up dying?
"There's only a small probability that an ignition would light the atmosphere on fire."
If only people treated AI with the same care.
> You shouldn't be engaging in a debate with that kind of person...
I agree. A debate requires that both participants follow certain principles. Play-acting, on the other hand, is where one pretends to engage in good faith while keeping a tantrum in reserve. It is the art of the toddler (and the infantile).
I fully agree that there are risks and the one you describe is quite real. However, there are important wrinkles which render campaigning against rushed AI impractical and possibly outright damaging for the cause.
For one, the presumptive winners define themselves as such because... they can! Power corrupts, good luck using persuasion to uncorrupt it. In other words, they'll take their chances, hide in their bunkers, etc. Human nature hasn't changed in the last 100 years except for the worse.
Next, the problem with AI is just another step in the complicated relations between man and machine. In other words, it's a political problem, and there's a history of successfully solving it by political means. The previous solutions involved workers' rights and a wider sharing of the fruits of productivity increases. It took some work to get it done, and it can be done again despite the concerted efforts of the would-be "winners" to prevent it and pull us back into the Gilded Age.
Except this was just a temporary measure, and it was largely and rapidly undone through concerted effort by the Fed and its partner money-printers (following the abandonment of the sound dollar policy/petrodollar), along with the legislative repeal allowing stock buybacks, the dark pool/PFOF takeover of the market, commodity warrant printing (money printing to suppress and yield-farm differences), etc.
The can has been kicked down the road so many times, and, as happens when you keep juggling all of them in the air, eventually several drop at once, and the resources needed to overcome that simply aren't available.
Many believe that we'll simply fall back into a Gilded Age, but a socio-economic collapse with an immediate Malthusian reversion, following Catton's updated model, is far more likely.
Man, people sure are taking my little project seriously. Honestly, I don't care whether you take my word for it. We don't sell an AI code assistant. I just thought the experience was interesting, so I published it.
But yes, there was a really dumb security bug. Specifically the authorization flow didn't check if the redirect URI was on the allowed redirect URI list for the client ID. OAuth 101 stuff. It's especially embarrassing because it was on my list to check for. When the bug was reported I literally said: "There's no way we have that bug, I checked for that." I hadn't, apparently, checked hard enough. My second thought was: "Ugh everyone is going to blame this on the AI." It's really not the AI's fault, it was my responsibility to check this and I somehow didn't. If I'd been reviewing a human's code and missed that it'd still be my fault. (And believe me, I've seen plenty of humans do worse.)
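For context, the missing check is conceptually simple; a hypothetical sketch (not the actual library code) looks something like this:

    // Hypothetical sketch, not the actual library code: before issuing an
    // authorization code, the server must confirm the requested redirect_uri
    // is one the client pre-registered (OAuth requires an exact match).
    interface ClientRegistration {
      clientId: string;
      allowedRedirectUris: string[];
    }

    function validateRedirectUri(client: ClientRegistration, redirectUri: string): void {
      if (!client.allowedRedirectUris.includes(redirectUri)) {
        // Reject rather than redirect: redirecting to an unregistered URI
        // would hand the authorization code to an attacker-controlled page.
        throw new Error("invalid redirect_uri for client " + client.clientId);
      }
    }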
But yeah here we are. Take from it what you will. Again I'm not trying to sell you an AI code assistant!
When you write a blog article about your "little project" and publish it on your very public company's public blog, then the obvious result is that people on the internet will take that article and your project seriously. It's disingenuous to say that you were just writing this stuff up because you found it interesting: by putting it on the Cloudflare blog and committing it to the Cloudflare GitHub org, it's undeniably something more than that.
> But yeah here we are. Take from it what you will. Again I'm not trying to sell you an AI code assistant!
Nobody is claiming that you're trying to sell anyone an AI code assistant. But you absolutely *were* trying to sell us on using AI to write production code as a process or development model or whatever.
> "Ugh everyone is going to blame this on the AI." It's really not the AI's fault, it was my responsibility to check this and I somehow didn't.
I don't think anyone would assign blame to the AI itself. I think the issue is in exactly how that AI was integrated into whatever development workflow was used to commit, push, and ship the relevant code to production. You can say "well, this particular bug would just as easily have happened even if I wasn't using AI," and that may be true, but it's not really an interesting or convincing argument. The project was pretty clearly documented and advertised, explicitly, as an exemplar of AI-driven development of production-quality code, so it's gonna be judged in that context.
More fundamentally, the underlying criticism is more along the lines that using AI to generate code on your behalf doesn't meet the same standards of quality as the code you should be expected to write yourself, as a senior domain expert and (presumably) well-compensated engineer on the project. And also that this gap in quality isn't actually adequately bridged by code review. Which seems to be a position supported by the available evidence.
Huh? What blog post? I never wrote any blog post about this, nor did anyone else at Cloudflare.
All I did was publish an OAuth library on GitHub. I wrote it because we needed one as part of a broader project. The goal of the project was never to demonstrate AI coding practices, the goal was to build a framework for MCP servers, and one thing we needed for that was an OAuth implementation.
It happens that I used AI to build it, so I made note of this in the readme, and also included the prompts in the commit history, as this seemed like the honest and appropriate thing to do.
Two months later, someone posted it to Hacker News, highlighting this note -- and then everyone got very excited and/or angry about it. Other people wrote a bunch of blog posts about it. Not me.
For example, given the summary of the evidence in the article, I see a risk that AI will be used to create some kind of hyper-meta-super AI that can audit/validate/certify the AI engines and/or their output. When that happens, the next obvious step is to protect the public by mandating that all software be validated by the hyper-meta-super AI. The owner of the hyper-meta-super AI will be a huge single winner, small software producers will not be able to afford compliance, and the only remaining exemptions from the resulting software monoculture will be exemptions granted to those too big to fail.