""" Remarkably, SQL has started dropping slowly recently. This month it is at position #12, which is its lowest position in the TIOBE index ever. SQL will remain the backbone and lingua franca of databases for decades to come. However, in the booming field of AI, where data is usually unstructured, NoSQL databases are often a better fit. NoSQL (which uses data interchange formats such as JSON and XML) has become a serious threat for the well-defined but rather static SQL approach. NoSQL's popularity is comparable to the rise of dynamically typed languages such as Python if compared to well-defined statically typed programming languages such as C++ and Java. """
https://quickstarts.snowflake.com/guide/getting-started-with-cortex-aisql/index.html
I'm glad I'm retired. Its primary point is that TIOBE is based on the *number* of search results on a weighted list of search engines, not actual usage on GitHub, search volume, job listings, or any of the other signals you'd expect a popularity index to use.
It could easily be indicating that Python articles are being generated by LLMs more than any other class of articles.
1) Google, on nine different TLDs
2) Amazon, on seven TLDs
3) eBay, on two TLDs
4) wikipedia.org (which ends up defaulting to the English Wikipedia)
5) microsoft.com (which only searches Microsoft documentation)
6) sharepoint.com (similarly, Microsoft 365 documentation)
7) rakuten.co.jp
8) walmart.com
Only one of these is actually a web search engine; there are more shopping web sites included than search engines. Bing and its various mirrors were apparently all excluded because they don't display the number of hits on the results page. And yes, this only adds up to 23. The TIOBE web site doesn't explain the discrepancy.
I suppose this could be done now for all the existing languages that target LLVM and unify the training set across languages.
DSLs look great if they let you write the code you already know how to write faster. DSLs look like noise to everyone else, including Gemini and Claude.
I used to be a big DSL booster in my youth. No longer. Once you need to stop what you're doing and figure out your ninth or eleventh oddball syntax, you realize that (as per the article) Everything is Easier in Python.
Exactly right. Now that we're in the era of LLMs and coding agents, it's never been more clear that DSLs should be avoided, because LLMs cannot reason about them as well as they can about popular languages, and that's just a fact. You don't need to dig any further or think about pros and cons, imo.
The fewer languages there are in the world (as a general rule), the better off everyone is. We do need a low-level language like C++ to exist and a high-level one like TypeScript, but we don't _need_ multiple of each. The fact that there are already multiple of each is a challenge to be dealt with, not a goal we reached on purpose.
To be honest I don't think this is necessarily a bad thing, but it does mean that there is a stifling effect on fresh new DSL's and frameworks. It isn't an unsolvable problem, particularly now that all the most popular coding agents have MCP support that allows you to bring in custom documentation context. However, there will always be a strong force in LLM's pushing users towards the runtimes and frameworks that have the most training data in the LLM.
I'd argue that solving this effect for DSLs might be a bit harder than for frameworks, because DSLs can have wildly different semantics (imagine, for example, a logic programming DSL a la Prolog vs. a functional DSL a la Haskell), so these maybe don't fit as nicely into the framework of MCPs. I agree that it's not unsolvable, but it definitely needs more research.
What matters most of all is whether the DSL is written in semantically meaningful tokens. Two extremes as examples:
Regex is a DSL that is not written in tokens with inherent semantic meaning. LLMs can only understand regex by virtue of the fact that it has been around for a long time and there are millions of examples for them to work from. And even then, LLMs still struggle with reading and writing regex.
Tailwind is an example of a DSL that is very semantically rich. When an LLM sees `class="text-3xl font-bold underline"`, it pretty much knows what that means out of the box, just like a human does.
Basically, a fresh new DSL can succeed much faster if it is closer to Tailwind than to regex. The other side of DSLs is that they tend to be concise, and that can actually be a great thing for LLMs: more concise equals fewer tokens, equals faster coding agents and faster responses from prompts. But too much conciseness (in the manner of regex) leads to semantically confusing syntax, and then LLMs struggle.
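As a small illustration of the semantic-tokens point, Python's standard `re` module lets you write the same pattern tersely or in `re.VERBOSE` mode, where named groups and comments give each token meaning a reader (human or LLM) can latch onto:

```python
import re

# Terse form: semantically opaque tokens.
terse = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

# The same pattern in re.VERBOSE mode: each token is named and
# explained, giving the reader far more semantic footing.
verbose = re.compile(
    r"""
    (?P<year>\d{4})   # four-digit year
    -
    (?P<month>\d{2})  # two-digit month
    -
    (?P<day>\d{2})    # two-digit day
    """,
    re.VERBOSE,
)

m = verbose.match("2024-06-01")
print(m.group("year"), m.group("month"), m.group("day"))  # 2024 06 01
```

Both compile to the same matcher; only the verbose one carries its own documentation.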
Not just frameworks, but libraries also. Interacting with some of the most expressive libraries is often akin to working with a DSL.
In fact, the paradigms of some libraries required such expressiveness that they spawned their own in-language DSLs, like JSX for React, or LINQ expressions in C#. These are arguably the most successful DSLs out there.
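A toy sketch of what such an embedded DSL looks like in practice: plain Python, overloading comparison operators to build SQL-ish conditions. The `Field`/`Cond` names are invented for illustration, not a real library:

```python
class Cond:
    """A condition node carrying its rendered SQL fragment."""
    def __init__(self, sql):
        self.sql = sql
    def __and__(self, other):
        return Cond(f"({self.sql} AND {other.sql})")

class Field:
    """A column reference whose comparisons produce Cond nodes."""
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return Cond(f"{self.name} = {other!r}")
    def __gt__(self, other):
        return Cond(f"{self.name} > {other!r}")

def select(table, where):
    return f"SELECT * FROM {table} WHERE {where.sql}"

age, city = Field("age"), Field("city")
q = select("users", (age > 21) & (city == "Berlin"))
print(q)  # SELECT * FROM users WHERE (age > 21 AND city = 'Berlin')
```

The host language's parser and type checker come for free, which is exactly the trade that makes eDSLs like LINQ attractive.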
Let's say you want to generate differently sized text here. An LLM will have ingested lots of text talking about clothing size and tailwind text sizes vaguely follow that pattern. Maybe it generates text-medium as a guess instead of the irregular text-base, or extends the numeric pattern down into text-2xs.
Is this an observation of a similar phenomenon?
Also, some of the most widely spoken languages today feature a high degree of diglossia between the spoken and written varieties, to the point where the written language has been outpaced. We could call that evolving. Examples would be Brazilian Portuguese and American English (some dialects specifically have changed English grammar).
Also, notoriously, Chinese written characters have been used for languages that evolve independently and are not mutually intelligible for millennia. Them being printed on paper instead of written doesn't make a difference.
What we do have today is higher exposure and dominance of certain dialects, with some countries even mandating a certain type of speech historically, coupled with a higher degree of connectivity in society, to the point where not being intelligible to people very far away carries a much worse penalty. That dampens the evolution much more than the printing press did, in my view.
- There likely won't be one Skynet, but rather multiple AI's, produced by various sponsors, starting out as relatively harmless autonomous agents in corporate competition with each other
- AI agents can only inference, then read and write output tokens at a limited rate based on how fast the infrastructure that powers the agent can run
In this scenario a "Skynet" AI writing code in C might lose to an AI writing code in a higher-level language, just because of the time lost writing tokens for all that verbose boilerplate and memory management C requires. The AI agent that is capable of "thinking" in a higher-level DSL is able to take shortcuts that let it implement things faster, with fewer tokens.
Plus if we assume the bottle neck on skynet is physical materials and not processing power, a system written in C can theoretically always be superior to a system written in another language if we assume infinite time can be spent building it.
This is kind of how Skynet begins in the TV series Terminator: The Sarah Connor Chronicles. It takes place after the 2nd movie in an alternate timeline from the 3rd movie (and establishes itself how that's possible without contradiction).
Specific examples I remember: A chess machine for the brain, traffic light cameras for the eyes and ears, a repurposed factory to build the terminators themselves. The series is about the Connors getting information from the future and going on the offensive to prevent Skynet from forming.
Even perhaps training a separate new neural network to translate from Python/Java/etc to your new language.
The new way would be to build a disposable jig instead of a Swiss Army Knife: The LLM can be prompted into being enough of a DSL that you can stand up some placeholder code with it, supplemented with key elements that need a senior dev's touch.
The resulting code will look primitive and behave in primitive ways, which at the outset creates myriad inconsistencies, but it is OK for maintenance over the long run: primitive code is easy to "harvest" into abstract code; the reverse is not so simple.
This article starts with "gaming" examples. Simplified to hell but "gaming".
How many games still look like they're done on a Gameboy because that's what the engine supports and it's too high level to customize?
How about the "big" engines, Unity and Unreal? Don't the games made with them kinda look similar?
But is that the direction LLM coding goes? My experience is that LLM produces code which is much more generic and boring than what skilled programmers make.
Edit to add to my sibling comment:
> But some abstractions which make standard stuff easy make non-standard stuff impossible.
I swear that at some point i could tell a game was made in Unity based on the overall look of the screenshots. I didn't know it's the fault of the default shaders, but they all looked samey.
I think this is what the comment above was lamenting about abstractions. I am all for abstraction when it comes to being productive. And I think new abstractions open new possibilities sometimes! But some abstractions which make standard stuff easy make non-standard stuff impossible.
I’m not convinced simply getting the LLM to inject documentation about the features will work well (perhaps someone has studied this?) because the reason they’re good at doing ‘well known’ things is the plethora of actual examples they’re trained on.
I've tried Gemini 2.5 Pro, o3 and Claude 4 and they have their own unique tone of confidently wrong, deception about reasoning and gaslighting but produce the same result. I wasted way too much time trying to figure out a zero shot prompting strategy before I just rewrote the whole thing by hand.
Seibel: When do you think was the last time that you programmed?
Allen: Oh, it was quite a while ago. I kind of stopped when C came out.
That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.
The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language.
So that was the excuse for C.
Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?
Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.
By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are . . . basically not taught much anymore in the colleges and universities.
I do. Would you really argue we discovered perfection in the first sixty years of computer science? In the first sixty years of chemistry we still believed in phlogiston.
But Python/Julia/Lua are by no means the most natural languages - what is natural is what people write before the LLM, the stuff the LLM translates into Python. It is hard to get a good look at these "raw prompts", since the LLM companies keep these datasets closely guarded, but from HumanEval and MBPP+ and YouTube videos of people vibe coding and such, it is clear that it is mostly English prose, with occasional formulas and code snippets thrown in; it is also not "ugly" text but generally pre-processed through an LLM. So from my perspective the next step is to switch from Python as the source language to prompts as the source language - integrating LLMs into the compilation pipeline is a logical step. But currently they are too expensive to use consistently, so this is blocked by hardware development economics.
Maybe designing new languages to be close to pseudo-code might lead to better results in terms of asking LLMs to generate them? but there's also a fear that maybe prose-like syntax might not be the most appropriate for some problem domains.
LLMs seem pretty good at figuring out these things when given a good feedback loop, and if the DSL truly makes complex programs easier to express, then LLMs could benefit from it too. Fewer lines of code can mean less context to write the program and understand it. But it has to be a good DSL and I wouldn't be surprised if many are just not worth it.
Also, domain-specific stuff can still be useful sometimes, and other stuff involved with designing a programming language.
They're riding a horse in the age of automobiles, just because they think they're more comfortable on horseback, while they've never been in a car even once.
My experience so far is that they write mediocre code which is very often correct, and is relatively easy to review and improve. Of course I work with languages like elixir, python, typescript, and SQL - all of which LLMs are very good at.
Without a doubt I've seen a significant increase in the amount of work I can produce. As far as I can tell the defect rate in my work hasn't changed. But the way I work has, I'm now reviewing and refactoring significantly more than before and hand writing a lot less.
To be honest, I'd worry about someone's ability to compete in the job market if they resisted for much longer. With the obvious exceptions of spaces where LLMs can't be used, or have very poor performance.
It'll dump you three classes and a thousand lines of code, where it should use a simple for loop to iterate.
The code Claude, Gemini and Cursor produces still is not enough to pass half-decent quality checks. If you're in "compile=ship", sure.
If you care about performance, or security, or maintainability, no. It's wasting your time, and the review team's time.
I say this from the perspective of someone who nearly became a PL researcher myself. I could easily have decided to study programming languages for my PhD. Back then I was delighted by learning about cool new languages and language features.
But I didn't study PL but rather ML, and then I went into industry and became a programming practitioner, rather than a PL researcher. I don't want a custom-designed ML programming language. I want a simple general-purpose language with good libraries that lets me quickly build the things I need to build. (coughPythoncoughcough)
Now that I have reached an age where I am aware of the finiteness of my time left in this universe, my reaction when I encounter cool new languages and language features is to wonder whether they will be worth learning. Will the promised productivity gains allow me to recoup the cost of the time spent learning? My usual assessment is "probably not" (although now and then something worthwhile does come along).
I think that there is a very real chance that the idea of specialized programming languages will indeed disappear in the LLM era, as well as the need for various "ergonomic" features of general purpose languages that exist only to make it possible to express complex things in fewer lines of code. Will any of that be needed if the LLM can just write the code with what it has?
Some deep PL stuff I doubt there is productivity gain to begin with. But many ideas in the ML language family are simple and reduce debugging pain. Time lost from one encounter with muddy JS/Python semantics is more than the time learning about sum types.
I wonder if we need a language designed to be easier for an AI to reason about, or easier for a human to see the AI's mistakes.
Maybe some boring, kind-of-consistent language like C, Python, or Go is good enough. An LLM spits out a pile of code in one or more of them that does most of what you want, and you can fix it because it's less opaque than assembly. It doesn't sound like a job I'd want, but maybe that's just the way things will go.
I also take issue with the idea that Python is simple. Python's semantics are anything but. The biggest issue the language has, performance, is a consequence of these poorly thought out semantics. If the language was actually simple it would be a lot easier to build a faster implementation.
So with LLMs making it easier to project back and forth between how the programmer sees the task at hand and the underlying dumb/straightforward code they aren't going to read anyway, maybe we'll finally get to the point of addressing the actual problem of programming language design: you cannot optimize for every task and cross-cutting concern at the same time and expect improvement across the board. We're already at the limit; we're just changing which time of day/part of the project will be more frustrating.
Can someone help me out?
The general programming languages we have now are about as good as they are ever going to get. Balance-wise. One can solve problems across a broad range of topics equally well.
Tailoring a language to make some tasks especially easy / straightforward will likely make some other problems a lot harder / more cumbersome to express.
So further improvements in abstraction and expressiveness have to come from elsewhere. Not better programming languages but partnering up with an LLM?
You understood and explained what I meant well.
> So further improvements in abstraction and expressiveness have to come from elsewhere. Not better programming languages but partnering up with an LLM?
In practice, probably yes; but the point I've been making here and in bringing this up over the years is, further improvements can come if we stop insisting on directly editing "single source of truth" code representation. Then, you no longer need to make trade-offs up front - for task A, you may view code through lens (or "in language") that makes tasks like A especially easy, then for task B you switch to a view that makes tasks like B especially easy.
Examples:
- Switching between "point free style" and explicit temporary variables;
- For a given function, inlining its entire call subtree, turning a tiny function calling tiny functions into a single block you can read top to bottom; super useful for debugging;
- Switching between "exceptions" and "sum types" styles of error handling (they're pretty much equivalent), or more generally, switching between showing all vs. hiding error handling code vs. handling everything except error handling/propagating code;
- A view of code that shows only types, or one showing async, or one entirely hiding it;
- A database-style view for code, that lets you query it and edit in bulk;
- An editable state machine diagram that corresponds to code (maybe needs a little help by manually identifying which set of classes is conceptually a state machine, which methods are transitions, etc.);
- Hide/show logging, telemetry, or any other cross-cutting concern that you don't strictly care about at the moment (superset of earlier hiding of error handling code);
And so on, and so on.
Point being, all those views/perspectives operate on the same underlying artifact - the codebase. It can be plaintext, but because no one is actually reading it directly, it doesn't have to be optimized for anything in particular (or it could just be simple and straightforward, at a price of being verbose; say Python). Meanwhile, the views/perspectives could each use syntax or format best suited for the specific task it helps with. All the trade-offs would be made at the point of use, instead of baked in up front when the project starts.
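A minimal sketch of one such view, using Python's standard `ast` module to project source code down to a read-only "signatures only" perspective (the sample source here is invented for illustration):

```python
import ast

source = '''
def fetch(url: str, retries: int = 3) -> bytes:
    data = b""           # imagine a long body here
    return data

def parse(data: bytes) -> dict:
    return {}
'''

# Walk the AST and keep only each function's name and arguments,
# hiding every body: a lens that shows structure, not implementation.
sigs = []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.FunctionDef):
        args = ", ".join(a.arg for a in node.args.args)
        sigs.append(f"def {node.name}({args}): ...")

print("\n".join(sigs))
```

An editable view is much harder, since edits must be projected back onto the underlying artifact, but the read-only direction is already cheap.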
* Not just English, substitute any other human language into the above
As someone who loves a wide diversity of actively evolving programming languages, it makes me sad to think those days of innovation may be ending. But I hope that's not going to happen.
It has always been the case that anyone designing a new language or adding features to an existing one is acutely mindful of what programming language knowledge is already in the heads of their users. The reason so many languages, say, use `;` for statement terminators is not because that syntax is particularly beautiful. It's just familiar.
At the same time, designers assume that giving users a better way to express something may be worth the cost of asking them to learn and adapt to the new way.
In theory, that should be true of LLMs as well. Yes, a new language feature may be hard to get the LLM to auto-complete. But if human users find that a feature makes their code easier to read and maintain, they will still use it, and eventually it will percolate out into the ecosystem to get picked up the next time the LLMs are trained, in the same way that human users learn new language features by stumbling onto them in code in the wild.
So I'd like to believe that we'll continue to be able to push languages forward even in a world where a large fraction of code is written by machines. I also hope that LLM training cost goes down and frequency goes up, so that the lag behind what's out there in the world and what the LLMs know gets smaller over time.
But it's definitely possible that instead of that, we'll get a feedback loop where human users don't know a language feature even exists because the LLMs never generate code using it, and the LLMs never learn the feature exists because humans aren't writing it.
I have this same fear about, well, basically everything with LLMs: an endless feedback loop where humans get their "information" from LLMs and churn out content which the LLMs train on and the whole world wanders off into a hallucinatory bubble no longer grounded in reality. I don't know how to get people and/or the LLMs to touch grass to avoid that.
I do hope I get to work on making languages great for humans first, and for LLMs second. I'm way more excited to go to work making something that actual living breathing people use than as input data for a giant soulless matrix of floats.
DSL's would be even harder for LLM's to get right in that case compared to the low-resource language itself
In the end I think mentioning Python is a red herring. You can produce an eDSL in Python that's not in LLM training data so difficult for LLMs to grok, and yet still perfectly valid Python. The deeper issue here is that even if you use Python, LLMs are restricting people to use a small subset of what Python is even capable of.
If compilers had significant non-deterministic error rates with no reliable fix, that would probably be a rather different timeline.
This mimics what you see in, say, Photoshop. You can edit pixels manually, you can use deterministic tools, and you can use AI. If you care about the final result, you're probably going to use all three together.
I don't think we'll ever get to the point where we a-priori present a spec to an LLM and then not even look at the code, i.e. "English as a higher-level coding language". The reason is, code is simply more concise and explicit than trying to explain the logic in English in totality up-front.
For some things where you truly don't care about the details and have lots of flexibility, maybe English-as-code could be used like that, similar to image generation from a description. But I expect for most business-related use cases, the world is going to revolve around actual code for a long time.
I haven’t seen technology move this fast before, so I wouldn’t make any hard predictions about how long actual code written by humans survives. We don’t really need AGI at this point to have opaque coding solutions, even if the LLMs should still be better.
But maybe there's a future "best of both worlds" that intermingles structured code with unstructured verbal instructions, so that you can ensure the important aspects of your requirements are explicit and deterministic, and the filler parts can be in English descriptions. It'd compile, and you'd get red squigglies in your editor if something doesn't make sense, maybe even type hints on the English somehow. I think that'd be a pretty good "best of both worlds" because it'd really let you separate the signal from the noise, which is the problem you have when using either English or programming languages alone.
Hmm, I don't think LLMs are quite there yet, but I could see this being a potential way to do things, that may fit better with their strengths, and allow more to be done without digging into actual code.
Now I just need to dust off my old Prolog books and figure out how to use coding assistants in a similar fashion.
So, yeah, I think what you say is the only logical conclusion. Waterfall development doesn't work well for humans, and it's probably not going to work for AI either. In fact, spending a ton of time formalizing a huge requirements doc is going to be even more of an antipattern with AI. When you can just have a conversation, hammer out details, identify nuances, plan for the future, all while seeing the feature in real time, that's going to be a million times more productive. The requirements doc can come with a full demo and a green button to launch it into production immediately after (or during) the review meeting.
So yeah, we'll probably never have "English as code", but rather an iterative process that leads to both English and code descriptions of the feature.
As far as DSLs, I think they still fit. If you see a pattern that makes sense to encode in a DSL, the LLM will happily do that, and doing so could help the AI optimize its limited context and make the invariants of the domain much more explicit.
Fun times.
Of course it can. There's no reason for modern extensions to keep pumping out instructions named things like "VCVTTPS2DQ" other than adherence to cryptic tradition and a confidence that the people who read assembly code are poor saps who don't matter in the grand scheme of the industry, which is precisely my point. And even if x86 was set in stone centuries ago, there's no excuse for modern ISAs to follow suit other than complete apathy over the DX of assembly, and who can blame them?
> You can however improve ergonomics greatly via macros and everyone does this.
Yes, and surely you see how the existence of macro assemblers strengthens my argument?
Someone is mad. Just because you don't have the patience for it doesn't mean everyone has the same preferences as you do.
Anyway, one could argue macros in assembly are part and parcel of the process if you develop things of significant complexity in assembly. If you don't like a particular instruction name, you can always relabel it with a macro to something you like. "Redesigning assembly" for an existing target otherwise makes no sense as a concept, as assembly languages are usually specified by ISA designers as a target for compiler developers to meet.
You can of course write your own assembler if you really want. That's the beauty of asm: you don't have to meet ideological targets for this or that PL school. You just need to emit the right byte codes, and that's sufficient. It doesn't guarantee things will easily work how you want in a higher-level sense, though, which is why most people use programming languages.
> Someone is mad
You're completely misunderstanding me. This isn't my personal opinion; it's simply an observation of the utter indifference of ISA designers to any consideration of ergonomics or developer experience. The rest of your comment aptly explains ways to work around this fact while failing to understand that the whole point of this thread is that it didn't have to be this way. For example, there's no reason an assembly language couldn't have basic syntactical affordances like namespaced identifiers or pseudo-structural elements like basic if, while, and switch. Hell, in the 80s we had chips that executed Lisp, and today we might still have some brave souls making chips that execute Forth. There's no law of the universe that says assembly has to be jank and primitive.
What humans look at and what an AI looks at right now are similar only by circumstance, and what I sort of expect is that you start seeing something more like a "structure editor" that expresses underlying "dumb" code in a more abstract way such that humans can refactor it effectively, but what the human sees/edits isn't literally what the code "is".
IDK it's not written yet but when it is it will be here: https://kylekukshtel.com/llms-programming-language-design
In an initial experiment, I found that LLMs could translate familiar shell scripting concepts into Hypershell syntax reasonably well. More interestingly, they were able to fix common issues like type mismatches, especially when given light guidance or examples. That’s a big deal, because, like many embedded DSLs, Hypershell produces verbose and noisy compiler errors. Surprisingly, the LLM could often identify the underlying cause hidden in that mess and make the right correction.
This opens up a compelling possibility: LLMs could help bridge the usability gap that often prevents embedded DSLs from being more widely adopted. Debuggability is often the Achilles' heel of such languages, and LLMs seem capable of mitigating that, at least in simple cases.
More broadly, I think DSLs are poised to play a much larger role in AI-assisted development. They can be designed to sit closer to natural language while retaining strong domain-specific semantics. And LLMs appear to pick them up quickly, as long as they're given the right examples or docs to work with.
I don't blame anyone in the picture, I don't disagree that time saved with LLMs can be well worth it, but it still is a topic I think we in the PL community need to wrestle more with.
LLMs are surprisingly bad at bash and apparently very bad at PowerShell.
Pythonic shell scripting is well suited to their language biases right now
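For what it's worth, "Pythonic shell scripting" usually means replacing shell pipelines with `subprocess` calls that pass argv lists instead of strings. A small sketch, equivalent to `wc -l notes.txt` (assumes a Unix environment with `wc` on the PATH; the file content is made up):

```python
import os
import subprocess
import tempfile

# Write a small file to count lines in, standing in for notes.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("alpha\nbeta\ngamma\n")
    path = f.name

# Argv list instead of a shell string: no quoting or injection pitfalls,
# and the exit status and output are explicit Python values.
result = subprocess.run(["wc", "-l", path], capture_output=True, text=True)
line_count = int(result.stdout.split()[0])
print(line_count)  # 3
os.unlink(path)
```

Each step is explicit and inspectable, which is plausibly why this style sits well with LLM biases toward Python.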
I'm taking a lot of inspiration from Python in the syntax, and I'm actually worried it will trip up any LLM I train on my language, since I think it will risk probabilistically just reverting to Python mid-answer. Time will tell.
> Suddenly the opportunity cost for a DSL has just doubled: in the land of LLMs, a DSL requires not only the investment of build and design the language and tooling itself, but the end users will have to sacrifice the use of LLMs to generate any code for your DSL.
I don't think they will. Provide a concise description + examples for your DSL and the LLM will excel at writing within your DSL. Agents even moreso if you can provide errors. I mean, I guess the article kinda goes in that direction.
But also authoring DSLs is something LLMs can assist with better than most programming tasks. LLMs are pretty great at producing code that's largely just a data pipeline.
Examples of domains that might be more challenging to design DSLs for: languages for knitting, non-deterministic languages to represent streaming etc. (i.e https://pldi25.sigplan.org/details/pldi-2025-papers/50/Funct... )
My main concern is that LLMs might excel at the mundane tasks but struggle with the more exciting advances, so the activation energy for coming up with advanced DSLs is going to increase and, as a result, the field might stagnate.
So it's not just a question of the semantics matching existing programming languages; the question is whether your semantics are intelligible given the vast array of semantic constructs encoded in the model's weights.
To add to that... One limitation of LLMs with a new DSL is that they may be less likely to directly plagiarize from open source code. That could be a feature.
Another feature could be users doing their own work, and doing a better job of it, instead of "cheating on their homework" with AI slop and plagiarism, whether for school or in the workplace.
Here's the cursor rules file we give folks: gist.github.com/aaronvg/b4f590f59b13dcfd79721239128ec208
If something is useful people will use it. Just because it seems like llms are everywhere, not everyone cares. I wouldn't want vibe coders to be my target audience anyway.
What the graph shows is that LLMs struggle with "hard" languages (Rust, Go, C#) with the exception of Ruby.
As AI systems improve, and especially as they add more 'self-play' in training, they might become really good at working in any language you can throw at them.
(To expand on the self-play aspect: when training you might want to create extra training data by randomly creating new 'fake' programming languages and letting it solve problems in them. It's just another way to add more training data.)
In any case, if you use an embedded DSL, as is already commonly done in Haskell, the LLMs should still give you good performance. In some sense, an 'embedded DSL' is just a fancy name for a library in a specific style.
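To make "a library in a specific style" concrete, here is a hedged sketch of a tiny combinator-style embedded DSL in Python (all names invented for this illustration). It is only a library, yet the combinators read like a small grammar language:

```python
# A miniature embedded DSL for matching token sequences.
# Each combinator returns a parser: a function (tokens, i) -> (value, next_i) or None.

def lit(expected):
    """Match one exact token."""
    def parse(tokens, i):
        if i < len(tokens) and tokens[i] == expected:
            return tokens[i], i + 1
        return None
    return parse

def seq(*parsers):
    """Match each parser in order, collecting results."""
    def parse(tokens, i):
        out = []
        for p in parsers:
            r = p(tokens, i)
            if r is None:
                return None
            value, i = r
            out.append(value)
        return out, i
    return parse

def alt(*parsers):
    """Try alternatives left to right, returning the first match."""
    def parse(tokens, i):
        for p in parsers:
            r = p(tokens, i)
            if r is not None:
                return r
        return None
    return parse

# The "grammar" is written directly in host-language syntax:
greeting = seq(alt(lit("hello"), lit("hi")), lit("world"))
```

Because the DSL is just Python, an LLM (or a human) gets the host language's tooling and its training data for free.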
At the time, I had given in to Claude 3.5's preference for python when spinning up my first substantive vibe-coded app. I'd never written a line of python before or since, but I just let the waves carry me. Claude and I vibed ourselves into a corner, and given my ignorance, I gave up on fixing things and declared the software done as-is. I'm now the proud owner of a tiny monstrosity that I completely depend on - my own local whisper dictation app with a system tray.
I've continued to think about stack ossification since. Still feels possible, given my recent frustration trying to use animejs v4 via an LLM. There's a substantial API change between animejs v3 and v4, and no amount of direction or documentation placed in context could stop models from writing against the v3 API.
I see two ways out of the ossification attractor.
The obvious, passive way out: frontier models cross a chasm with respect to 'putting aside' internalized knowledge (from the training data) in favor of in-context directions or some documentation-RAG solution. I'm not terribly optimistic here: these models are hip-shooters by nature, and as they get smarter, that reflex seems to get stronger rather than weaker. Though: Sonnet 4 is generally a better instruction-follower than 3.7, so maybe.
The less obvious way out, which I hope someone is working on, is something like massive model-merging based on many cached micro fine-tunes against specific dependency versions, so that each workspace context can call out to modestly customized LLMs (LoRA style) where usage of incorrect versions of your dependencies has specifically been fine-tuned out.
This is what I've been focused on for the last few years, with a bit of Direction 3 via:
python -> smt2 -> z3 -> verified rust
Perhaps a diffusion model for programming can be thought of as: requirements -> design -> design by contract -> subset of python -> gc capable language (a fork of golang with ML features?) -> low level compiled language (rust, zig or C++)
As you go from left to right, there is an increasing level of detail the programmer has to worry about. The trick is to pick the right level of detail for a task.
Previous writing: https://adsharma.github.io/agentic-transpilers/
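To sketch the "design by contract" stage of that pipeline (function and contracts invented for illustration, not taken from the linked writing): the contracts are plain assertions in a Python subset, which a transpiler could emit as SMT2 obligations for z3 to discharge statically instead of checking at runtime:

```python
# Sketch: design-by-contract annotations in a Python subset.
# Each assert marks a fact a verifier (e.g. z3 via smt2) could prove,
# letting the final compiled-language stage drop the runtime checks.

def clamp(x: int, lo: int, hi: int) -> int:
    # Precondition: would become an SMT assumption.
    assert lo <= hi
    result = max(lo, min(x, hi))
    # Postcondition: would become an SMT proof obligation.
    assert lo <= result <= hi
    return result
```

The point of picking the right level of detail is that the same assertions serve as documentation at the left end of the pipeline and as proof obligations at the right end.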
Until LLMs stop making up language features, methods, and operators out of convenience, DSLs are here to stay.
However, LLMs are actually quite useful for translating concepts in DSLs that you don't understand. They don't do it error-free, of course, but they allow one to ask enough questions to work out why your attempt to translate concepts into this new fucking stupid ontological pile of wank isn't working.
This is so true.
A couple months ago I was trying to use LLMs to come up with code to parse some semi-structured textual data based on a brief description from the user.
I didn't want to just ask the LLM to extract the information in a structured format, as this would make it extremely slow when there's a lot of data to parse.
My idea was: why not ask the LLM to come up with a script that does the job, kind of "compiling" what the user asks into a deterministic piece of code that will also be efficient. The LLM just has to figure out the structure and write some code to exploit it.
I also had the bright idea to define a DSL for parsing, instead of asking the LLM to write a Python script. A simple DSL for a very specific task should be better than something like Python in terms of generating correct scripts.
I defined the DSL, created the grammar and an interpreter and I started feeding the grammar definition to the LLM when I was prompting it to do the work I needed.
The result was underwhelming and at times hilarious. When I decided to build a feedback loop, feeding the model the errors and asking it to correct the script, I sometimes ended up with the model returning Python scripts, ignoring the instructions completely.
As the author said, everything is easier in Python, especially if you are a large language model!
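For concreteness, here is a minimal sketch of the kind of parsing DSL the comment above describes; the directive names and semantics are invented here, not the commenter's actual grammar. Each DSL line is either `skip N` or `capture NAME REGEX`, applied to the input line by line:

```python
import re

def run_dsl(program, text):
    """Interpret a tiny line-oriented parsing DSL (directives invented
    for this sketch):
      skip N              -- ignore the next N input lines
      capture NAME REGEX  -- match REGEX against the next line, store group 1
    """
    lines = text.splitlines()
    pos, record = 0, {}
    for directive in program.splitlines():
        directive = directive.strip()
        if not directive:
            continue
        op, _, arg = directive.partition(" ")
        if op == "skip":
            pos += int(arg)
        elif op == "capture":
            name, _, pattern = arg.partition(" ")
            m = re.search(pattern, lines[pos])
            if m:
                record[name] = m.group(1)
            pos += 1
        else:
            raise ValueError(f"unknown directive: {op}")
    return record
```

The irony the comment points at: the LLM has seen millions of Python scripts and zero programs in this grammar, so under error pressure it falls back to what its weights know.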
Not necessarily true. There are two kinds of DSLs: external and internal.
An external DSL has its own tooling, parser, etc. The nix language, for example.
An internal DSL is like a small parasite that lives inside an existing language, reusing some of its syntax and tools. It's almost like intentional pareidolia. Like jQuery, for example.
Internal DSLs reduce the cognitive load, and in my opinion, they're the best kind of DSL.
LLMs just add another reason to this list.
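As a hedged illustration of an internal DSL (class and field names invented for this sketch), here is a jQuery-style fluent query living entirely inside Python, reusing its syntax and tooling:

```python
# Sketch of an internal DSL: a fluent query over plain dicts.
# Nothing here is a new language; it parasitizes Python's own syntax.

class Query:
    def __init__(self, rows):
        self.rows = list(rows)

    def where(self, **conditions):
        self.rows = [r for r in self.rows
                     if all(r.get(k) == v for k, v in conditions.items())]
        return self  # returning self is what makes chaining work

    def select(self, *fields):
        return [{f: r[f] for f in fields} for r in self.rows]

people = [
    {"name": "Ada", "role": "engineer"},
    {"name": "Bob", "role": "manager"},
]
engineers = Query(people).where(role="engineer").select("name")
```

Because the host language's parser, debugger, and editor support all keep working, the cognitive load stays low, which is the argument for internal DSLs above.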
DSL proliferation is a problem. I know this is not something many people care to hear, and I sympathize with that. Smart people are drawn to complexity and elegance, smart people like building solutions, and DSLs are complex and elegant solutions. I get it.
Problem is: Too many solutions create complexity, and complexity is the eternal enemy of [Grug][1]
Not every other problem domain needs its own language, and existing languages are designed to be adapted for many different problem domains. If LLMs help to stifle the wild growth of at least some DSLs that would otherwise be, then I am reasonably okay with that.
Would you say the same about a parallel universe where LLMs were introduced in 1960?
I think no one ever called Python or C or Rust, or Java or Go a "domain specific language".
> We just respect them now as established, because they've been around for so long.
No, we "respect" them because they are general purpose languages that you can do anything with.
> But for every one thousand awkward DSLs that didn't make it, one new tool emerged which lifts software development to a new level.
Please, do list some DSLs that managed to "lift software development to a new level". And again: A General Purpose Language is not a DSL.
- Makefiles
- regular expressions
- m4
- awk
- sed commands
- jinja templates
- jq
- nix
I have a love-hate relationship with many of these but I sure am happy that they tried. Whatever sucks about them is improved by an even better language, not by regressing to just using general purpose languages everywhere.
PS I put awk in there because you’re not the boss of me. It’s a dsl for text manipulation which happens to be Turing complete. If you write any serious program in awk that isn’t some form of text manipulation, it will make the front page of HN, that’s how rare that is. And to TFA’s point: awk excels at the niche for which it was developed—in a world where LLMs came before awk, it would be very hard to get awk off the ground. That is a net loss. That is the point of TFA.
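The same density argument applies to regular expressions from the list above. A hedged Python sketch (example line invented) comparing the regex DSL to equivalent hand-rolled scanning:

```python
import re

line = "ERROR 2024-01-15 disk full"

# With the regex DSL: one declarative line describes the structure.
m = re.match(r"(\w+) (\d{4}-\d{2}-\d{2}) (.+)", line)
level, date, message = m.groups()

# Without it: imperative index arithmetic for the same result.
first_space = line.index(" ")
second_space = line.index(" ", first_space + 1)
level2 = line[:first_space]
date2 = line[first_space + 1:second_space]
message2 = line[second_space + 1:]
```

Both versions extract the same fields; the DSL version also states the expected shape of the input, which the manual version only implies.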
You’re not wrong about the cost. But you’re wrong about the benefit.
And not to put too fine a point on it, but I have seen many awk scripts, elaborate sed-constructs (I refuse to call them scripts) and jq expressions, that would have been a lot simpler, easier to understand, and easier to maintain, if someone had just sat down and rewrote them in Python, or even in Go. They have their uses, but when people start developing what can be called a small application in a DSL that happens to be turing complete, they should just use a real programming language.
The same goes for many Makefiles, btw., some of which resemble an excerpt from the Necronomicon more than they do build instructions.
And btw., "survivorship bias" doesn't fit what's happening here at all, because the only reason we still use sed or awk is that they proved useful for a long time, where many other DSLs did not.
It is significant that LLMs in coding are being promoted based on a set of promises (and assumptions) that are getting instantly and completely reversed the moment the technology gets an iota of social adoption in some space.
"Everyone can code now!" -> "Everyone must learn a highly specialized set of techniques to prompt, test generated code, etc."
"LLMs are smart and can effortlessly interface with pre-existing technologies" -> "You must adopt these agent protocols, now"
"LLMs are great at 0-shot learning" -> "I will not use this language/library/version of tool, because my model isn't trained on its examples"
"LLMs effortlessly understand existing code" -> "You must change your code specifically to be understood by LLMs"
This is getting rather ridiculous.
https://upload.wikimedia.org/wikipedia/commons/9/94/Gartner_...
Maybe DSLs are “write-only” languages for humans.
I don’t wish ill or sadness on anyone but it doesn’t bother me at all if LLMs drive DSLs into extinction.
The beauty of picking an existing language as the base is you often get an expansive standard library from the get-go. That means your job as a "DSL" writer is more based on making sure you provide the value adds that make sense for the writers of that DSL.
It's worked particularly well for us because we have a data intake pipeline that has to parse and handle all sorts of random garbage (emails, excel docs, csv files, pdfs, etc).
Languages like Groovy, Ruby, and Kotlin all work well because it's trivial to add extensions to the syntax in a way that makes sense for your domain problem. TypeScript also wouldn't be a bad choice for similar reasons; the only reason I wouldn't consider it is that we run a JVM backend, and parsing TypeScript for the JVM is somewhat of a PITA.
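As a hedged illustration of that embedding approach, sketched in Python rather than a JVM language (the formats and handler names are hypothetical, not the commenter's actual pipeline): decorator registration gives a similar host-language "DSL" for routing intake formats, with the full standard library still available inside each handler:

```python
# Sketch: a host-language "DSL" for a data intake pipeline.
# Handlers register themselves per file extension via a decorator.

HANDLERS = {}

def handles(extension):
    """Register a parser for one input format."""
    def register(fn):
        HANDLERS[extension] = fn
        return fn
    return register

@handles(".csv")
def parse_csv(raw):
    return [row.split(",") for row in raw.splitlines()]

@handles(".txt")
def parse_txt(raw):
    return raw.splitlines()

def intake(filename, raw):
    ext = filename[filename.rfind("."):]
    return HANDLERS[ext](raw)
```

Adding a new "random garbage" format is one decorated function, which is the value-add the comment describes without building a parser or tooling from scratch.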
TimTheTinker•7mo ago
Consider MiniZinc. This DSL is super cool and useful for writing constraint-solving problems once and running them through any number of different backend solvers.
A lot of intermediate languages and bytecode (including LLVM itself) are very useful DSLs for representing low-level operations using a well-defined set of primitives.
Codegen DSLs are also amazing for some applications, especially for creating custom boilerplate -- write what's unique to the scenario at hand in the DSL and have the template-based codegen use the provided data to generate code in the target language. This can be a highly flexible approach, and is just one of several types of language-oriented programming (LOP).
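A minimal sketch of that template-based codegen idea in Python (template and class shape invented for illustration): the scenario-specific data is just a name and field list, and the template expands it into target-language boilerplate:

```python
from string import Template

# Sketch of template-based codegen: the unique input is a small data
# description; the template produces the repetitive target code.

DATACLASS_TEMPLATE = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assigns"
)

def gen_class(name, fields):
    args = ", ".join(fields)
    assigns = "".join(f"        self.{f} = {f}\n" for f in fields)
    return DATACLASS_TEMPLATE.substitute(name=name, args=args, assigns=assigns)

print(gen_class("Point", ["x", "y"]))
```

Here the "DSL" is barely a language at all, just structured data plus a template, which is what makes this flavor of language-oriented programming so cheap to adopt.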
Lerc•7mo ago
Put differently, the languages people actually use had people who decided to use them; they picked the best ones. Making something new, you compete against the best, not the average. That's not to say it can't be done, but it's not easy.