Seems like a fair argument to raise, although not too insightful.
As a nitpick, it really is not the case that most programming languages are context-free, though I can sympathize with what the author is getting at. Most programming languages have a context-free superset that is useful as one of several stages in the parsing pipeline, before type checking, name resolution, and so on. But apart from perhaps some dynamically typed languages, the vast majority of programming languages are context-sensitive, not context-free.
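To make that concrete, here is a minimal sketch in C (hypothetical identifiers, deliberately not compilable): both assignments below have the same shape and are derivable from the context-free part of C's grammar, but only the first survives the later context-sensitive stages such as name resolution.

    /* Toy example (hypothetical identifiers): the context-free grammar
       accepts both assignments below, but the context-sensitive passes
       do not, so "valid program" is not a context-free property. */
    int main(void) {
        int declared = 0;
        declared = declared + 1;  /* identifier in scope: accepted          */
        missing  = missing  + 1;  /* same parse shape: undeclared, rejected */
        return 0;
    }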
I'm not sure that was intended entirely seriously. After all, humans always terminate too...
Although to be fair, nothing above regular (that I'm aware of, it's been a while) requires infinite space, just unlimited space... you can always make a real Turing machine as long as you keep adding more tape whenever it needs it.
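A minimal sketch of the "keep adding more tape" idea, in C with a made-up trivial machine (nothing from the thread): the tape is a heap buffer that grows whenever the head walks off its end, so the space in use is always finite even though no bound is fixed in advance.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t cap = 4, head = 0;                /* start with a tiny tape   */
        unsigned char *tape = calloc(cap, 1);
        if (!tape) return 1;

        for (int step = 0; step < 100; step++) { /* a dummy machine that halts */
            if (head == cap) {                   /* head ran off the tape    */
                unsigned char *bigger = realloc(tape, cap * 2);
                if (!bigger) { free(tape); return 1; }
                memset(bigger + cap, 0, cap);    /* new cells start blank    */
                tape = bigger;
                cap *= 2;
            }
            tape[head++] = 1;                    /* write a 1, move right    */
        }
        printf("used %zu of %zu cells\n", head, cap);
        free(tape);
        return 0;
    }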
Some approximate snippets that come to mind are that decoder-only transformers recognize languages in AC^0 and think in TC^0, that encoder-decoders are strictly more powerful than decoder-only models, etc.
Someone with the last name Miller, IIRC, if you poke around on arXiv, plus a few others; it's been a while since this was top of mind, so YMMV on the exact correctness of the above snippets.
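For reference, and not something the comment above claims, the standard containments among the classes mentioned are

    AC^0 ⊊ TC^0 ⊆ NC^1 ⊆ P

where the first inclusion is known to be strict (PARITY is in TC^0 but not in AC^0).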
GPT is purely about semantics, about truth. So much so that I think it can be said to be fundamentally informal in its dedication to truth: it can easily be prompted to produce syntactically incorrect output, with respect to the syntax of whatever language whose semantics determine the truth GPT is asked to reflect.
So I think this is a categorically incorrect question.
I mean, what is the fundamental syntax of the output of an LLM? It would have to be that any sequence of utf8 chars is valid.
PaulHoule•1h ago
In the Kuhnian sense, Syntactic Structures was the vanguard of a paradigm shift that let linguists do "normal science" in terms of positing a range of problems and writing papers about them. It was also useful in computer science for formally defining computer languages that are close enough to natural languages that people find them usable.
On the other hand, attempts to use the Chomsky theory to develop language technology failed miserably outside of very narrow domains, and in the real world Markov models and conditional random fields often outperformed it when the task was "understanding", "segmentation", and the like.
From an engineering standpoint, the function central to the theory, the one that tells you whether a production is in the grammar, is not so interesting for language technology. I mean, if LLMs were grammar-centric, an LLM would go on strike when you got the spelling or grammar wrong, or correct you in a schoolmarmish William Safire way; instead it is more useful for LLMs to silently correct obvious mistakes the same way real language users do.
The other interesting thing is the mixing of syntax and semantics. For a long time I believed the "Language Instinct" idea of Pinker, in which the ability to serialize and deserialize language is a peripheral attached to an animal, and you need the rest of the animal, and even the surrounding world, to have "human" competence. Now LLMs come around and manage to show a lot of linguistic competence without the rest of the animal, and boy, do we have egg on our faces; coming out of the shadow of the Chomsky theory, structuralism will rear its head again. (e.g. "X is structured like a language" got hung up on the unspoken truth that "language is not structured like a language".)
FeepingCreature•21m ago
(Left-recursion? Use a while loop! Mutable state is a pathway to many abilities functional programmers consider unnatural.)
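A minimal sketch in C of that trick, on a made-up toy grammar (nothing from the thread): the left-recursive rule expr -> expr '-' num | num would recurse forever in naive recursive descent, but a while loop that folds the value leftward gives the same left-associative result.

    #include <ctype.h>
    #include <stdio.h>

    static const char *p;                        /* cursor into the input    */

    static void skip_spaces(void) {
        while (isspace((unsigned char)*p)) p++;
    }

    static int num(void) {                       /* num -> a single digit    */
        skip_spaces();
        return *p ? *p++ - '0' : 0;
    }

    static int expr(void) {                      /* expr -> num ('-' num)*   */
        int value = num();
        skip_spaces();
        while (*p == '-') {                      /* loop instead of recursing */
            p++;                                 /* consume '-'              */
            value -= num();                      /* fold to the left         */
            skip_spaces();
        }
        return value;
    }

    int main(void) {
        p = "1 - 2 - 3";
        printf("%d\n", expr());                  /* prints -4, i.e. (1-2)-3  */
        return 0;
    }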