That said, it's a lot of words to say not a lot of things. Still a cool post, though!
I believe we're at a point where it's not possible to accurately decide whether text is completely written by human, by computer, or something in between.
If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.
This is such an interesting statement to me in the context of leftpad.
Black Mirror did it first https://en.wikipedia.org/wiki/Hang_the_DJ
This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).
The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).
This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken.
These "guardrails" are made of silly putty.
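To make the "silly putty" point concrete, here is a hypothetical sketch (not from the post) of how a buggy implementation sails past an equally narrow generated test:

```rust
// Hypothetical example: a buggy implementation and a test that is too
// narrow to catch it. `abs_diff` is supposed to return |a - b|, but only
// subtracts in one order.
fn abs_diff(a: i32, b: i32) -> i32 {
    a - b // bug: negative when a < b
}

fn main() {
    // A generated test that only exercises the happy path "passes"
    // and reports a misleading green light.
    assert_eq!(abs_diff(5, 3), 2);
    // The bug is only visible with arguments the test never tries:
    assert_eq!(abs_diff(3, 5), -2); // should be 2
    println!("all tests passed");
}
```

If the test itself were also generated against the buggy output, even the second assertion would be written to match the wrong value, and nothing would ever fail.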
Who the hell is going to use it then? You certainly won't, because you're dependent on AI.
fn read_float_literal(&mut self) -> &'a str {
    let start = self.pos;
    while let Some(ch) = self.peek_char() {
        if ch.is_ascii_alphanumeric() || ch == '.' || ch == '+' || ch == '-' {
            self.advance_char();
        } else {
            break;
        }
    }
    &self.source[start..self.pos]
}
Admittedly, I do have a very idiosyncratic definition of floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of a float literal.

At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that I could easily extend to the other kinds of operators I knew I would need in the future.
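For contrast, a more conventional (and still simplified) float-literal scanner might look like the sketch below. The struct and grammar here (digits, optional fraction, optional exponent) are assumptions for illustration, not the commenter's actual NaN-payload syntax:

```rust
// Sketch only: a conventional float-literal scanner over ASCII input.
// The grammar (digits, optional ".digits", optional exponent) is an
// assumed simplification, not the language described in the comment.
struct Lexer<'a> {
    source: &'a str,
    pos: usize,
}

impl<'a> Lexer<'a> {
    fn peek(&self) -> Option<u8> {
        self.source.as_bytes().get(self.pos).copied()
    }

    fn eat_digits(&mut self) -> bool {
        let start = self.pos;
        while matches!(self.peek(), Some(b'0'..=b'9')) {
            self.pos += 1;
        }
        self.pos > start
    }

    fn read_float_literal(&mut self) -> Option<&'a str> {
        let start = self.pos;
        if !self.eat_digits() {
            return None; // must start with at least one digit
        }
        if self.peek() == Some(b'.') {
            self.pos += 1;
            self.eat_digits(); // optional fractional part
        }
        if matches!(self.peek(), Some(b'e' | b'E')) {
            let save = self.pos;
            self.pos += 1;
            if matches!(self.peek(), Some(b'+' | b'-')) {
                self.pos += 1;
            }
            if !self.eat_digits() {
                self.pos = save; // exponent marker without digits: back off
            }
        }
        Some(&self.source[start..self.pos])
    }
}

fn main() {
    let mut lx = Lexer { source: "3.25e-4+rest", pos: 0 };
    assert_eq!(lx.read_float_literal(), Some("3.25e-4"));
    // Unlike the generated version, this stops before '+': it does not
    // swallow "3.25e-4+rest" as a single "literal".
}
```

The key difference is that sign characters are only legal immediately after an exponent marker, rather than anywhere in the token.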
andsoitis•53m ago
Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.
Programming languages are, after all, the interface a human uses to give instructions to a computer. If you're not writing or reading it, the language, by definition, doesn't matter.
johnfn•51m ago
By what definition? It still matters whether I write my app in Rust vs., say, Python, because the Rust version still has better performance characteristics.
phn•34m ago
On a different but related note, it's almost the same as pairing Django or Rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.
marssaxman•25m ago
There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
UncleOxidant•13m ago
That's assuming that your new, very unknown language gets slurped up in the next training run, which seems unlikely. Couldn't you use RAG, or have an LLM read the docs for your language?
onlyrealcuzzo•13m ago
I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:
1) It maximizes local reasoning and minimizes global complexity
2) It makes the vast majority of bugs / illegal states impossible to represent
3) It makes writing correct, concurrent code as expressive as possible (where LLMs excel)
4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)
The idea is that it should be as easy as possible for an LLM to write it (especially convert other languages to), and as easy as possible for you to understand it, while being almost as fast as absolutely perfect C code, and by virtue of the design of the language - at the human review phase you have minimal concerns of hidden gotcha bugs.
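Point 2 echoes the familiar "make illegal states unrepresentable" idea. Since the language itself isn't released yet, here is a rough illustration of that principle in Rust (my example, not the commenter's): a connection can never be simultaneously "connected" and "missing an address", because each state only carries the fields that are valid for it.

```rust
// Illustration of "illegal states unrepresentable": each variant carries
// exactly the data that is valid in that state, so a Connected value
// without an address simply cannot be constructed.
enum Connection {
    Disconnected,
    Connecting { addr: String },
    Connected { addr: String, session_id: u64 },
}

fn describe(c: &Connection) -> String {
    match c {
        Connection::Disconnected => "idle".to_string(),
        Connection::Connecting { addr } => format!("dialing {addr}"),
        Connection::Connected { addr, session_id } => {
            format!("session {session_id} to {addr}")
        }
    }
}

fn main() {
    let c = Connection::Connected {
        addr: "10.0.0.1:443".into(),
        session_id: 7,
    };
    assert_eq!(describe(&c), "session 7 to 10.0.0.1:443");
}
```

A reviewer (human or LLM) never has to check "is `addr` set when we're connected?" because the type system has already ruled the bad combination out.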