1) Frees us from maintaining a DSL (parsing, language server, etc.).
2) Uses something familiar to most web developers.
3) Actually expands our configuration features. E.g. with TypeScript we can change the configuration depending on env vars (see the sketch below).
Though we want to keep the same level of abstraction as in the DSL. Doing that with great DX is what we want to tackle.
Still, parts of the docs have to be handmade, and those parts are usually not supported directly by the ecosystem, so you have to build your own solutions.
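For illustration, a minimal sketch of what point 3 could look like in practice; the Config shape and the defineConfig helper are hypothetical, not an API from the comment above:

    // Hypothetical sketch: a TypeScript config module that varies with env vars,
    // while keeping a declarative, DSL-like shape. All names are illustrative.
    interface Config {
      apiUrl: string;
      enableTracing: boolean;
      cacheTtlSeconds: number;
    }

    // Tiny helper that validates the declarative object, roughly playing the
    // role a DSL parser/checker would.
    function defineConfig(config: Config): Config {
      if (config.cacheTtlSeconds < 0) {
        throw new Error("cacheTtlSeconds must be non-negative");
      }
      return config;
    }

    const isProd = process.env.NODE_ENV === "production";

    export default defineConfig({
      apiUrl:
        process.env.API_URL ??
        (isProd ? "https://api.example.com" : "http://localhost:3000"),
      enableTracing: !isProd,
      cacheTtlSeconds: isProd ? 300 : 0,
    });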
densh•4mo ago
If the hypothesis is correct, it sets an incredibly high bar for starting a new programming language today. Not only does one need to develop a compiler, runtime, libraries, and IDE support (which is a tall order by itself), but one must also provide enough data for LLMs to be trained on, or even provide a custom fine-tuned snapshot of one of the open models for the new language.
NitpickLawyer•4mo ago
CC (Claude Code) can do that by itself in a loop, apparently in ~3 months. https://cursed-lang.org/
I know it's a meme project, but it's still impressive. And CC is at the point where you can take the repo of that language, ask it to "make it support emoji variables", and $5 later it works. So yeah... pretty impressive that we're already there.
DonaldPShimoda•4mo ago
I don't work in this area (I have a very unfavorable view of LLMs broadly), but I have colleagues working on various aspects of what you ask about: developing testing frameworks to help ensure output is valid, having the LLMs generate easily-checkable tests for their own generated code, developing alternate means of constraining output (think of something like a special kind of type system), using LLMs in a way similar to program synthesis, etc. If there is fruit to be borne from this, I would expect to start seeing more publications about it at high-profile venues in the next year or two (or next week, when ICFP and SPLASH and their colocated workshops convene this year, though I haven't seen the publications list to know whether there's anything LLM-related yet).
Twisol•3mo ago
(I have a pretty unfavorable view of LLMs myself, but) a quick search for "LLM" does find four sessions of the colocated LMPL workshop that are explicitly about LLMs and AI agents, plus a spread of other work across the schedule. ("LMPL" stands for "Language Models and Programming Languages", so I guess that's no surprise.)
fragmede•3mo ago
https://news.ycombinator.com/item?id=26998308
manx•4mo ago
I see that difference in LLM-generated code when switching languages. Generated Rust code has much higher quality than Python code, for example.
danpalmer•3mo ago
As much as I dislike Go as a language, LLMs are very good at it. Java too, somewhat; Python a fair amount, though less so (and they write Python I don't like). Swift, however, I love programming in, but LLMs are pretty bad at it. We also have an internal config language that our LLMs are trained on, but it's complex and not very ergonomic, and LLMs aren't good at it.
ozgrakkurt•3mo ago
And 99% of the time, the tooling isn't built by the same person who builds the language compiler.