The idea behind having the Red/System language, a regular scripting language, a cross-platform GUI, and native executables was really cool though. I remember being interested back in ~2015, so my question is: what's going on, now that it's been a decade? I know the project is crazy ambitious of course, but how close are we to a stage where most would consider it production worthy?
Then the roadmap slipped, and was never mentioned again.
But I haven't looked at the language or discussions around it for a long while now.
Edit: found some old discussion here. In 2018 they were at version 0.6.4 https://news.ycombinator.com/item?id=18864840
In 2025 they are at version 0.6.6: https://github.com/red/red/releases
Btw, I met him in Beijing while he was there.
Sassenrath wrote Amiga Logo before starting REBOL.
send friend@rebol.com read http://www.cnn.com
`read` knows that it takes one argument, and `send` knows that it takes two, so this ends up being grouped like (send friend@rebol.com (read http://www.cnn.com))
(which I think is valid syntax; that AST node is called a 'paren'). Weirdly, the language also has some infix operators, which seem a bit out-of-place to me. I have no idea how the 'parser'[1] works.
[1] 'parsing' happens so late that it feels funny to call it that; I mean the thing that knows how to treat an array as a representation of an evaluatable expression and evaluate it.
Even things that are normally keywords and statements in other languages (like conditionals and loops) are actually just functions that conform to the exact same parsing rules.
There are no keywords or statements, only expressions. Square brackets ("blocks") are used for both code and data, similar to a Lisp list. The main language (called the "'do' dialect") is entirely Polish notation with a single exception for infix operators: whenever a token is consumed, check the following token for an infix operator. If it is one, immediately consume the token after it as well and evaluate the infix operator.
This results in a few oddities / small pitfalls, but it's very consistent:
* "2 + 2 * 2" = 8 because there is no order of operations, infix operators are simply evaluated as they're seen
* "length? name < 10" errors (if "name" isn't a number) because the infix operator "<" is evaluated first to create the argument to "length?"
I made an infix parser in which certain prefix operators (named math functions) have a low precedence. This allows for things like
1> log10 5 + 5 ;; i.e. log10 10
1.0
But a different prefix operator, like unary minus, binds tighter:
2> - 5 + 5
0
I invented a dynamic precedence extension to Shunting Yard which allows this parse:
3> log10 5 + 5 + log10 5 + 5 ;; i.e. (log10 5 + 5) + (log10 5 + 5)
2.0
Functions not registered with the parser are subject to a phony infix treatment if their arguments look like they might be infix, and thus something similar happens to your Red example:
4> len "123" - 2
** -: invalid operands "123" 2
"123" - 2 turns into a single argument to len, which does not participate in the infix parsing at all. log10 does participate because it is formally registered as a prefix operator.The following are also the result of the "phony infix" hack:
4> 1 cons 2
(1 . 2)
5> 1 cons 2 + 3
(1 . 5)
A non-function in first place with a function in second place leads to a swap, plus the arguments are analyzed for infix:
print tostring 5 + cos pi
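For reference, here is a minimal Pratt-style sketch in Python of the low-precedence prefix idea from the comment above. It is not the actual Shunting Yard implementation described there, it does not reproduce the dynamic-precedence grouping in example 3 or the "phony infix" hack, and the names and binding powers are invented:

    import math

    # log10 gets a very low binding power, so it swallows the whole following
    # sum; unary "-" binds tightly, so it only takes the next value.
    # ("-" is unary-only in this toy.)
    PREFIX = {"log10": (1, math.log10), "-": (90, lambda x: -x)}
    INFIX = {"+": (50, lambda a, b: a + b), "*": (60, lambda a, b: a * b)}

    def parse(tokens):
        pos = 0
        def expr(min_bp):
            nonlocal pos
            tok = tokens[pos]; pos += 1
            if tok in PREFIX:
                bp, fn = PREFIX[tok]
                left = fn(expr(bp))    # low bp => argument extends far to the right
            else:
                left = tok             # a literal number
            while pos < len(tokens) and tokens[pos] in INFIX:
                bp, fn = INFIX[tokens[pos]]
                if bp <= min_bp:
                    break
                pos += 1
                left = fn(left, expr(bp))
            return left
        return expr(0)

    print(parse(["log10", 5, "+", 5]))   # 1.0 -- i.e. log10 (5 + 5)
    print(parse(["-", 5, "+", 5]))       # 0   -- i.e. (-5) + 5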
Works a bit like a shift/reduce parser, with heavy use of fexprs (blocks in Rebol parlance). I know you enjoy Lisps, so you might like this toy Rebol evaluator written in Scheme: http://ll1.ai.mit.edu/marshall.html
3> log10 5 + 5 + log10 5 + 5 ;; i.e. (log10 5 + 5) + (log10 5 + 5)
2.0
In Rebol this would be equivalent to (log10 (5 + (5 + (log10 (5 + 5))))). I think parsing there depends on the actual value of the current token. So if you assign send to another variable and use that, the "parser" will still recognize that it takes 2 parameters.
It's an interesting design, definitely not something one sees frequently.
Wait till you hear of Urbit and see this: https://developers.urbit.org/overview/nock
The website has posts from 2025:
https://www.red-lang.org/2025/04/multiple-monitors-support.h...
Unless you mean the theme, that's probably just a standard Google Blogger/Blogspot theme, which has been around for 25 years.
I don't take any new language seriously unless it's memory safe, free of UB, able to interoperate with what already exists (including optional shared libraries, because statically linking the world every time in everything is memory- and disk-wasteful), and assists formal proofs of correctness. Otherwise, what already exists seems preferable for serious use, and hobbies can remain fun distractions.
In my eyes it's more important that FFI be easy, automatic, and as efficient as possible. Go imposes a significant cost on FFI, for example, and many languages have type systems that are very unfriendly to the C ABI or basically require SWIG.
One thing I really appreciate about Lua is that I can write Lua interfaces for my classes and methods quite easily, and even use Lua as a garbage-collected allocator for native types. Automating the generation of Lua interfaces can easily be done natively with metaprogramming, without involving dependencies like SWIG. It's so good that instead of feeling like you're figuring things out on the Lua side, it's as if Lua puts the host language first; Lua doesn't even need to be the owner of the process. It is almost ideal, other than the clumsy interface between the stack-like C side and tables, and the dynamic parameter lists.
For so many languages FFI is an afterthought at best.
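As a point of comparison from a different ecosystem (not the Lua setup described above), here is roughly how little ceremony C-ABI interop can take when a language treats FFI as a first-class feature. This uses Python's stdlib ctypes and assumes a POSIX system where libm can be found:

    import ctypes
    import ctypes.util

    # Locate and load the C math library; no bindings generator involved.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C ABI for one function by hand...
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    # ...and call it like any other function.
    print(libm.cos(0.0))   # 1.0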
I thought maybe someone had put the DoD's Red language spec online.
And yes, someone has: https://iment.com/maida/computer/redref/
Sites which do this well (just off the top of my head):

https://odin-lang.org/
* immediate code sample visible
* "See the Full Demo"
* "See More Examples"

https://ziglang.org/
* immediate code sample
* scroll down a bit, "More Code Samples"

Here on red-lang.org... I can barely find a consecutive meaningful chunk of code...?
* "Getting Started": nope
* "Documentation": nope
* "Official Documentation": a link to GitHub, https://github.com/red/docs/blob/master/en/SUMMARY.adoc
* "Home": merely a chronologically sorted blog; the newest entry happens to link to a 50-line "script" showing off multi-monitor support (doesn't seem like a super helpful sample)
But no one has bothered to write a complete manual like Carl did for Rebol, and the language is a partial implementation in Rebol with a hybrid Rebol/Red syntax that must ultimately be bootstrapped in Red. In short, all you have is the scaffolding around it, and if you are not a total fan or a dev of the project it is not even worth it.
There are at best two people working on the language, and neither of them has the time; they also have a very weird approach to docs (like posting extensive Google Docs or pastebin explanations, but never actually producing any proper documentation).
A few years ago I revisited Racket after a long hiatus, and that was maybe the biggest thing I noticed. I really don't like syntax macros as much as I did back in the day. Once I decide to use `define-syntax`, I've then got to decide whether I also want to wade into implementing a syntax colorer or an indenter as part of my #lang. And if I do decide to do that, then I've got a bunch more work, and am also probably committing to working in DrRacket (otherwise I'd rather stay in emacs) because that's the only editor that supports those features, and it just turns into a whole quagmire.
And it's arguably even worse outside of Racket, where I might have to implement a whole language server and editor plugin to accomplish the same.
Versus, if I can do what I need to do with a reasonably tidy API, then I can get those quality of life things without all the extra maintenance burden.
None of this was a big deal 20 years ago. My expectations were different back then, because I hadn't been spoiled by things like the language server protocol and everyone (finally) agreeing that autoformatting is a Good Thing.
user_part = re.repeat(re.alnum | re.chars(".-_+"))
domain_segment = re.repeat(re.alnum)
domain = re.list(domain_segment,separator=".",minimum=2)
email_address = user_part + "@" + domain
Where, in a real program, `domain` would be defined in a "standard library of constructions" that you can just import and re-use in more complicated regexes.

Something like this can be implemented in any language with operator overloading, no DSL required. Without operator overloading, the syntax would be a bit more awkward, but still nicer than the current regexp madness.
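For what it's worth, here is a minimal sketch of the operator-overloading version in Python; the `Rx` class and the `chars`, `repeat`, and `rx_list` names are invented for illustration (a real library such as pyparsing is far more complete):

    import re

    class Rx:
        """A tiny regex-fragment wrapper: + is sequencing, | is alternation."""
        def __init__(self, pattern):
            self.pattern = pattern
        def __add__(self, other):
            return Rx(self.pattern + _rx(other).pattern)
        def __radd__(self, other):
            return _rx(other) + self
        def __or__(self, other):
            return Rx(f"(?:{self.pattern}|{_rx(other).pattern})")
        def repeat(self):
            return Rx(f"(?:{self.pattern})+")
        def compile(self):
            return re.compile(self.pattern)

    def _rx(value):
        # Bare strings are treated as literal text.
        return value if isinstance(value, Rx) else Rx(re.escape(value))

    def chars(s):
        return Rx("[" + re.escape(s) + "]")

    def rx_list(item, separator, minimum=2):
        # item, then (separator item) at least (minimum - 1) more times.
        sep = _rx(separator).pattern
        return Rx(f"(?:{item.pattern})(?:{sep}(?:{item.pattern})){{{minimum - 1},}}")

    alnum = Rx(r"[A-Za-z0-9]")

    user_part = (alnum | chars(".-_+")).repeat()
    domain_segment = alnum.repeat()
    domain = rx_list(domain_segment, separator=".", minimum=2)
    email_address = user_part + "@" + domain

    print(bool(email_address.compile().fullmatch("someone@example.com")))  # True

The named pieces (`user_part`, `domain`, etc.) can then be reused and composed into larger patterns, which is exactly the decomposition that raw regex syntax makes hard.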
I get that the meaning of the operators is not clear unless you're already familiar with regex, but neither is the meaning of !, ?, %, &, |, ^, ~, &&, ||, <<, >>, *, //, &, ++ (prefix), ++ (postfix), and so on. You learn these because you need them once, and then they're burned into your mind forever. Regex was similar for me.
You can't decompose it into parts, you can't give those parts human-friendly names, you can't re-use parts in other regexes, you can't (easily) write functions that return or manipulate regexes (like that "list with separator" function shown above).
Standard math syntax is a DSL. I understand math a lot more quickly than I understand the same thing written in 20 lines of code.
I think the language we use to express ourselves influences the quality of the product. If your language encapsulates complexity, then you can build more complicated things.
I’m not arguing in favor of specific (“pointless”) DSLs, but there’s a nice paper about making a video editing language in Racket [1] that makes a DSL seem pretty convincing.
gRPC and I are not friends and I've thankfully never needed to interact with WCF
Many of them do use macros. But that's not about creating a special language; that's about moving expensive computations and checks that can be done statically to compile time where they belong.
Netlists.
Makefiles.
And so on.
You really don't see the value of DSLs?
The more orthogonal or flexible the language is, the less there tends to be a distinction between redefining syntactical elements and defining functions or methods.
Internal DSLs are always (by definition) valid code in the host language; external DSLs, where parsing and interpretation or compilation to executable code for raw text in the DSL are implemented in the host language, are all new syntax. But "languages that encourage making DSLs" are usually ones that are favored for internal, not external, DSLs.
I think the internal/external terminology might be kind of outdated, anyway? I think this might be the first time I've encountered it in the wild in over 10 years.
When people talk about REBOL/Red (or Ruby or Lisps) encouraging making DSLs, they are referring to internal DSLs. In Ruby, these are just APIs consisting of normal objects and methods, in Lisps they often involve macro calls, which may or may not correspond to what people mean by an API, and in REBOL/Red the design of the language is such that “normal” functions can do things that would take macros in Lisp.
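As a concrete illustration of an internal DSL that is really just an API (in Python rather than Ruby, with invented names): ordinary objects and chained method calls, no new syntax involved:

    class Query:
        """A toy SQL builder: each method returns self so calls chain."""
        def __init__(self, table):
            self.table, self.filters, self.order = table, [], None
        def where(self, cond):
            self.filters.append(cond)
            return self
        def order_by(self, col):
            self.order = col
            return self
        def sql(self):
            q = f"SELECT * FROM {self.table}"
            if self.filters:
                q += " WHERE " + " AND ".join(self.filters)
            if self.order:
                q += f" ORDER BY {self.order}"
            return q

    print(Query("users").where("age > 21").where("active = 1").order_by("name").sql())
    # SELECT * FROM users WHERE age > 21 AND active = 1 ORDER BY name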
GP is right. Don't make DSLs, make APIs, which are:
* More composable
* More reusable
* Simpler to reason about
* More natively supported
* More portable
* More readable
* More maintainable
Those are things that spring to mind that I think are unequivocally DSLs, but if you’re willing to consider markup languages as DSLs, the list could get a lot longer.
https://martinfowler.com/books/dsl.html
https://martinfowler.com/dsl.html
Also see:
https://en.m.wikipedia.org/wiki/Domain-specific_language
including the References section.
This might be one of the rare times it's worth it. The C# team already has the experience and tooling to maintain a language. Maintaining a DSL might be a reasonable choice for them.
It's rarely a good idea for app or library devs to make a similar decision.
In languages where the grammar is sufficiently flexible, the distinction all but disappears, but even in languages where the grammar is rigid and APIs stand out like a sore thumb, the API itself still creates a new rule-set that you need to learn.
You can choose to not call that a new language all you want, but the cognitive load is still there.
Just grouping things together in a particular order and giving the group a name does not make a DSL.
The point is that however you want to group it, the effect is the same: You introduce a whole new vocabulary that you need to learn the rules of.
An API can be so complex as to be thought of as a DSL but that makes it bad.
Not long ago, I had to work with a coworker's mini language and function-runner engine, which was basically a mini programming language. Except without a debugger or type checker or stack traces or any of the other million niceties we'd have had if we just used the host language to execute things 'natively.'
That said, while the level of tooling for big languages goes up, the bar for creating yesteryear’s tooling is going down, with all the LSP tooling we have now, for example. Maybe someday we’ll get languages with tools where libraries have nice tooling without crazy dev effort, and then we’ll change our tune on DSLs.
Metaprogramming comes in handy when ordinary programming results in a lot of complex, hence unmaintainable, code or can't achieve something due to how the language is implemented. It can vastly simplify how you express application logic, e.g. when interfacing with foreign programming languages such as SQL or generating code from data.
Any sane metaprogramming regime allows you to do something like macro expansion, i.e. look at the code the metaprogramming is compiled into in the current context.
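A toy Python illustration of both points (generating code from data, and being able to look at what the metaprogramming produced before it runs); the record-class example and names are invented, and a real system would use proper macro or AST facilities rather than exec:

    FIELDS = ["name", "email", "age"]

    def make_record_class(name, fields):
        # Generate ordinary source code from data (fields must be valid identifiers)...
        src = f"class {name}:\n"
        src += f"    def __init__(self, {', '.join(fields)}):\n"
        for f in fields:
            src += f"        self.{f} = {f}\n"
        # ...and show it, the way macroexpand shows a macro's output.
        print(src)
        namespace = {}
        exec(src, namespace)
        return namespace[name]

    User = make_record_class("User", FIELDS)
    u = User("Ada", "ada@example.com", 36)
    print(u.email)   # ada@example.com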
And at least the last time I used it in anger (which was, granted, perhaps double digit years ago), the docs started out with and loudly proclaimed "here's how you write your own tasks!" Wait, what? I want to have a build system do at least the 80% case with minimal/zero work; why am I jumping right in to my bespoke needs?
Not sure if this was just bad doc, or the tool actually didn't do the 80% very well then or what, but it's stuck with me.
That's mostly my experience with ruby, too: you can type anything, don't worry about it!
I wanted to mention that Gradle, since I think about version 7, mostly supports static typing via Kotlin, but regrettably it's lipstick on a pig, since almost inevitably one needs to interact with the Groovy side of the house and then all bets are off.
That's usually what people complain about (besides the parenthesis) in that it's easy to not be very disciplined.
Lisp is homoiconic, so there isn't really a distinction between programs and data. For example, a snippet of code like a for-loop iterating through a list is also a list that can be inspected and modified. Or something along those lines (there's an XKCD comic that captures the spirit where the person says "it's all CARs", as in Lisp you can build everything from CAR, CDR, and CONS, I think). You'll have to dig into that on your own. The terms are historically relevant, but seem antiquated now. I've never really understood macros (compile or runtime ones) all that well though, so hopefully someone else in the comments can clarify my mumbo jumbo.
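A rough Python analogue of the code-is-data idea (nested lists standing in for Lisp lists; the tiny evaluator and operator table are invented for illustration):

    # The "program" is just nested lists, which can be inspected and rewritten
    # like any other data before being evaluated.
    OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

    def ev(e):
        if isinstance(e, list):
            op, *args = e
            return OPS[op](*map(ev, args))
        return e

    expr = ["+", 1, ["*", 2, 3]]   # (+ 1 (* 2 3))
    print(ev(expr))                # 7
    expr[2][0] = "+"               # rewrite the code as data
    print(ev(expr))                # 6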
It's impossible to write objective fact. Everything we write is subject to the context it is expressed in, including the grammar that we use to write it. A DSL accommodates this by letting you make a new grammar on the fly. The trouble is, this doesn't help us get out of the context we are already writing in: it only lets us enter a new one that is nested inside.
So what if we could actually get out? What if we could write from outside of the context we are writing? That's the idea I'm working on. I think it's possible, but it's such an abstract idea that it's tricky to get a handle on.
I also got to know about Red early, followed it and tried it out for a bit.
But as others have said, that move to crypto, to fund the dev work and make the devs money, put me off for good. Nothing wrong with making money, let them make plenty; I just didn't jive with crypto as a way of doing it.
Sad about it going that route.
red-lang.org is blocked!
Phantom believes this website is malicious and unsafe to use.
This site has been flagged as part of a community-maintained database of known phishing websites and scams. If you believe the site has been flagged in error, please file an issue.
Ignore this warning, take me to https://www.red-lang.org/p/about.html anyway.
Seems the last release (alpha) was in 2015.
It's very elegant! I can't fully grasp everything that's happening but the visual appearance of the syntax alone is interesting.
[0] https://github.com/red/code/blob/master/Scripts/clock.red
I would assume it does, because I assume I'd be able to know these things in a comparable JS or Python example. But if that assumption is correct, I really like the 'look' of Red.
If it's robust it seems rather neat.
Pretty nonsensical statement. We've had that for 50 years. Common Lisp, for example.
Under 'platforms', for x86_64, it notes "linux" and other operating systems.
Is there a compiler option for this thing to make it spit out a 'freestanding' binary for the architectures it supports?
Red is more different than you may think, just by looking at it. It is designed such that things that look familiar may work very differently under the hood. That's good for making people comfortable, but also means you can't judge a book completely by its cover.
Red is a data format first. That's very Lisp-like, but Red goes further with the large number of datatypes that have a lexical form. e.g. email, url, pair, point, file, date, time, money, etc. Where Lisp* says code is data and data is code, we tend to say "Everything is data until it is evaluated." Rebol was only interpreted, but Red (not all Red however, as some things are too dynamic and require JiT, which we don't have yet) can be compiled.
Red compiles to Red/System (R/S) code. R/S is a static dialect (DSL) of Red, which compiles directly to machine code. No external compiler or C code gen. So you can write DSLs in Red, and those DSLs can be higher or lower level. We call this Metal to Meta programming. Compile a small R/S program, and you will see it's fast, and fully standalone. Compile Red in Dev mode, where the runtime isn't rebuilt, and it's also fast (after the first time). Compile in encap mode and...more to explain. Compile for release and it takes time, but gives you a standalone EXE. It's slow for a number of reasons. Just the current state of things. Compilation speed has not been a priority.
On APIs vs DSLs, a key distinction for me is that APIs don't have a natural way to enforce the order of operations. That's where a grammar adds value. And because Red is often self-consuming data, the ability to write grammars (`parse` rules) that are very BNF/PEG-like makes data handling quite powerful. I also think it's easier than most other systems, but that's me, and I've been in the Redbol (Red+Rebol) world for a long time. Two related notes on that. 1) `parse` is, itself, a dialect of Red. 2) You can parse not only at the character/string level, but at the value and datatype level, including literal values and typesets. Typesets are a way to express datatypes that are related, e.g. the `number!` typeset matches `[integer! float! percent!]` types. All that said, Red is a multi-paradigm language, including functional (though not pure functional), so you can absolutely build things in an OOP/lib/API manner if you prefer.
Infix came up, and the model is simple. Infix ops have a higher precedence than func calls, but there is no other operator precedence. Strictly left to right for ops. And, yes, operators are a datatype and you can make your own from most 2-arity funcs.
Func args are not enclosed in parens or brackets. This is a fundamental aspect that takes some getting used to. Once you do, at least from what I've seen through the years, it feels natural. We call this "free ranging evaluation" and it's a powerful aspect of Red. It also plays into dialect design. Red is sufficiently flexible that you could hack around this if you want, but then you're fighting the language, rather than working with it.
Red is high level and garbage collected, but it is not "safe" by some standards. Mutability is the default, values are strongly typed but variables are not, you can mix Red and Red/System pretty much however you want, and R/S is basically a C-level language. We talk about these tradeoffs a lot, and how to find a balance. Nothing comes for free.
One of the main dialects in Red, along with `parse`, is `VID`, the Visual Interface Dialect. This is how you describe GUIs for Red's cross-platform GUI system. You could also build a tree of faces manually, or write your own GUI dialect or API.
Another cross-platform note. Yes, we are 32-bit only at the moment. It hurts us as much as it hurts you. But Red can cross compile from and to any system it runs on. No other software or compilers needed; just a command line switch.
One of our primary goals is to "fight software complexity". That doesn't mean Red will look like C, or JS, or Python. It doesn't mean any one thing. It means a lot of things working in concert. We also hope to keep Red small and easy to set up. Today you can still just drop the EXE somewhere and go. The toolchain (interpreter+compiler) is ~1.5M and the REPLs (text mode and GUI mode, separately) are just over ~2M. We may offer more options at some point, ideas like using LLVM come up a lot. While they solve some problems, they create others. So far, the costs have been deemed unacceptable, and we don't have any showstoppers (other than time). But since Red is open source, with a permissive license...
Happy Reducing!
I've looked at it a few times over the years. It's neat. I've never written a single line of it, though.
[0] https://en.wikipedia.org/wiki/Rebol
[1] https://en.wikipedia.org/wiki/Carl_Sassenrath
What a legend!
I have yet to try converting anything to Red.