Flow-sensitive type inference with static type checks is, IMHO, a massively underrated niche. Doubly so in a compiled language. I find it crazy that Python managed to get so popular when even variable-name typos are runtime errors, and when the performance is so dreadful.
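To make the point concrete, here's a minimal Python sketch (names are mine): the function with the typo imports and defines just fine, and nothing complains until the broken line actually executes.

```python
# A variable-name typo in Python is not caught at definition time.
def greet(name):
    message = "Hello, " + name
    return mesage  # typo: 'mesage' instead of 'message'

# The module loads without complaint; the NameError only
# surfaces at runtime, when greet() is actually called.
try:
    greet("world")
except NameError as e:
    print("caught only at runtime:", e)
```

A statically checked language with type inference rejects the equivalent program at compile time, before it can ship.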
All the anonymous blocks lend themselves to a very clean and simple syntax. The rule that 'return' refers to the closest named function is a cute solution to a problem I've been struggling with for a long time.
The stdlib has several gems:
- `compile_run_code`: "compiles and runs lobster source, sandboxed from the current program (in its own VM)."
- `parse_data`: "parses a string containing a data structure in lobster syntax (what you get if you convert an arbitrary data structure to a string) back into a data structure."
- All the graphics, sound, VR (!), and physics (!!) stuff.
- imgui and Steamworks.
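The `parse_data` round-trip described above has a rough Python analogue (my own illustration, not Lobster code): printing a data structure in the language's own literal syntax and then parsing that string back into a structure.

```python
import ast

# A data structure rendered in the language's own literal syntax...
data = {"name": "lobster", "tags": ["vr", "physics"], "version": (2, 0)}
text = repr(data)              # structure -> source-syntax string

# ...can be parsed straight back into an equal structure.
back = ast.literal_eval(text)  # string -> structure again
assert back == data
```

Having this round-trip built into the stdlib makes cheap, human-readable serialization essentially free.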
I'll definitely be watching it, and most likely stealing an idea or two.
Tangential point, but I think this might be one of the reasons Python did catch on. Compile-time checks etc. are great for production systems, but a minor downside is that they introduce friction for learners. You can have huge errors in Python code and it'll still happily try to run it, which is helpful if you're just starting out.
As long as warnings are clear I’d rather find out early about mistakes.
Speak for yourself.
Based on what I observe as an occasional tutor, compiler warnings & errors seem scary to newcomers. Maybe it's because they share the thing that made math unpopular for most people: a cold, non-negotiable set of logical rules. In turn, some people treat warnings & errors as a "you just made a dumb mistake, you're stupid" sign rather than a helpful guide.
Weirdly enough, runtime errors don't seem to trigger the same response in newcomers.
Perhaps nicer messages explaining what to do to fix things would help?
C++/C IDE support is famously horrible owing to macros/templates. I think the expectation that you can fire up VS Code and get reliable TypeScript type hints has only been a thing for a decade or so; for most of modern history, a lot of people had to make do without.
I am guessing that Python, like Ruby, is dynamic enough that it's impossible to detect all typos with a trivial double-pass interpreter, but still.
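For what it's worth, a crude two-pass checker of the kind described can be sketched in Python itself with the `ast` module (the heuristics here are my own; genuinely dynamic code, e.g. `globals()` writes, `exec`, or star imports, defeats it, which is the commenter's point):

```python
import ast
import builtins

def find_likely_typos(source):
    """Pass 1: collect every name that is ever bound (assignments,
    defs, classes, imports, parameters). Pass 2: flag name *reads*
    that match nothing bound and no builtin."""
    tree = ast.parse(source)
    bound = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and isinstance(node.ctx, (ast.Store, ast.Del)):
            bound.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            bound.add(node.name)
            if not isinstance(node, ast.ClassDef):
                for arg in node.args.args + node.args.kwonlyargs:
                    bound.add(arg.arg)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                bound.add((alias.asname or alias.name).split(".")[0])
    return sorted({n.id for n in ast.walk(tree)
                   if isinstance(n, ast.Name)
                   and isinstance(n.ctx, ast.Load)
                   and n.id not in bound})

print(find_likely_typos("total = 1\nprint(totl)"))  # flags 'totl'
```

It catches the easy cases, but it is scope-blind and dynamism-blind, which is roughly why tools like pyflakes exist and why even they can't promise to catch every typo.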
Wonder if there was ever a language that made the distinction between library code (meant to be used by others; mandates type checking [or other ways of ensuring API robustness]), and executables: go nuts, you're the leaf node on this compilation/evaluation graph; the only one you can hurt is you.
Maybe they will not be called programming languages anymore, because that name was mostly incidental to the fact they were used to write programs. But formal, constructed languages are very much still needed: with nothing but natural language you will struggle with abstraction, specification, setting constraints and boundaries, etc. We invented these things for a reason!
Also the AI will have to communicate output to you, and you'll have to be sure of what it means, which is hard to do with natural language. Thus you'll still have to know how to read some form of code -- unless you're willing to be fooled by the AI through its imprecise use of natural language.
It's a Profile Guided Optimization language - with memory safety like Rust.
It's extremely easy to optimize assuming you either 1) profile it in production (obviously has costs) or 2) can generate realistic workloads to test against.
It's like Rust in that it makes expressing common illegal states outright impossible, though it goes much further than Rust.
And it's easier to read than Swift or Go.
There's a lot of magic that happens with defaults, which languages like Zig or Rust don't want, because they want every cost to be as visible as possible, so you can understand the cost of a line and a function.
LLMs with tests can - I hope - do this without that noise.
We shall see.
I'm almost ready to launch v0.1 - but the documentation is especially a mess right now, so I don't want to share yet.
I'll update this comment in a week or so [=
And if you're not that confident, shouldn't you still be optimising for humans, because humans have to check the LLM's output?
Not sure if this is a good example, but I used ChatGPT (not even Codex) to fix some Common Lisp code for me, and it absolutely nailed it. Sure, Common Lisp has been around for a long time, but there's not that much Common Lisp code around for LLMs to train on... but OTOH it has the HyperSpec, which defines the language and much of the standard library, so I believe the LLM can produce perfect Common Lisp based mostly on that.
In fact, LLMs have shown that we really, really need new programming languages.
1. They have shown that the information density in our existing languages is extremely low: small prompts can generate very large programs.
2. But the only way to get that high information density now (with LLMs) is to give up any hope of predictability. I want both.
"Write a book about a small person who happens upon a magical ring which turns out to be the repository of an evil entity's power. The small person needs to destroy the ring somehow, probably by the same means it was created."
...wait a few minutes...
THE LORD OF THE RINGS
I don't think LLMs have solved the problem of wanting code that's concise and also performant.
And while Lobster is a niche language LLMs don't know as well, they do surprisingly well coding in it, especially when given a larger codebase as context. They occasionally slip in Python-isms, but nothing that can't be fixed easily.
Not suitable for larger autonomous coding projects, though.
> Dynamic code loading
This is what good language design looks like.