In a lot of little ways, Nim is a lot like a statically typed Lisp with a vaguely Python-ish surface syntax, although this really doesn't give enough credit to all the choice one has writing Nim code.
Would using the C output and using emcc on it solve this problem?
FWIW, people have been doing that for about as long as there's even been an emscripten, but the article is pointing out the lack of tighter integration with the stdlib/standard compilation toolchains. I would say evolving/growing the stdlib in general is a pain point. Both the language and compiler are more flexible than most, though, so this matters less in Nim than it might otherwise.
Instead, you will need to write your own bindings. Here's an example repository of mine using emscripten with Nim: https://github.com/miguelmartin75/nim-wasm-experiments and here are bindings to emscripten's C API for websockets: https://github.com/miguelmartin75/nim-wasm-experiments/blob/...
With emscripten, you can create bindings to JavaScript using EM_JS macros, which you can emit in Nim. Here is an example of how to do so: https://github.com/treeform/windy/blob/bc98d4642c700f0277551...
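For a feel of what those bindings look like, here's a minimal sketch using one of emscripten's plain C API calls (the same `importc` mechanics extend to EM_JS shims you emit yourself via `{.emit.}`):

    # sketch: call into the page's JavaScript via emscripten's C API;
    # the generated C output must be built with emcc for this to link
    proc emscripten_run_script(script: cstring)
      {.importc, header: "<emscripten.h>".}

    emscripten_run_script("console.log('hello from Nim')")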
Well, that really depends on what the reason for one's interest in Carbon is, which is slightly hinted at by the last sentence. From what I understand, the big goal is to be able to do automated migration of large C++ codebases at Google to a saner language. Mond had a nice blogpost musing about it[0]. Nim is not that.
Of course, neither is Carbon yet, and we'll have to wait and see if it reaches that point or if it ends up on killedbygoogle.com. I'm rooting for Carbon though, it's a cool idea.
Anyway, that is a different ambition than looking for a successor language that lets you use existing C++ code without requiring that the latter is changed, which is what Nim is suggested to be good at here.
[0] https://herecomesthemoon.net/2025/02/carbon-is-not-a-languag...
C++ code calling Nim code is also not usually as straightforward. So, "fantastic" here may apply only in one call direction.
This will require manual work, but you can use macros or a code generation script to help if your API is large.
> Rust is being developed for LLVM, a compiler that's also shared with C++.
Not at all relevant (and LLVM is a backend target, not a compiler).
LLVM is a backend compiler which compiles to backend targets like x86, which have their own compilers. It's hard to say exactly what a compiler is and isn't, because it's compilers all the way down; compilers are made of compilers.
My working definition is a compiler is a program that turns code from one form into another, whether that's machine code, byte code, IR, or some other high level source. It's a broad definition but it covers all the things we call "compilers".
https://en.wikipedia.org/wiki/Compiler
"In computing, a compiler is software that translates computer code written in one programming language (the source language) into another language (the target language)."
https://en.wikipedia.org/wiki/LLVM
"LLVM, also called LLVM Core, is a target-independent optimizer and code generator.[5] It can be used to develop a frontend for any programming language and a backend for any instruction set architecture. LLVM is designed around a language-independent intermediate representation (IR) that serves as a portable, high-level assembly language that can be optimized with a variety of transformations over multiple passes.[6]""
As I correctly said, LLVM is not a compiler, it is a backend target of Rust, C++, etc. compilers. They read source language and generate backend output ... anything from assembly language to C++. Even when Nim uses the C++ backend, C++ is the backend target, even though the generated C++ is fed to a C++ compiler.
End of story and of my participation.
My definition is supported by your links, so I don't think it's idiosyncratic at all.
> "In computing, a compiler is software that translates computer code written in one programming language (the source language) into another language (the target language)."
That's exactly what I said:
"My working definition is a compiler is a program that turns code from one form into another"
> LLVM... is a target-independent optimizer and code generator.
Code generators are a kind of compiler. The input language is IR, and the output language is machine code. Thus it fits the definition of a compiler you proffered.
> As I correctly said, LLVM is not a compiler, it is a backend target of Rust, C++, etc. compilers.
These things are not mutually exclusive. It can be a target of Rust, C++, etc., but that doesn't make it not a compiler. LLVM being a compiler is supported by both of your wiki links. Your first link lists LLVM under "Notable Compilers and Toolchains". In the second link in the LLVM infobox it reads "Type: Compiler". Nuff said.
Last year I asked around in the Nim community if "the C++ interop" will allow me to easily link-to-and-import in Nim a C++ lib (in this case, a 3D engine called WickedEngine) and thus make a game using its surface API from Nim instead of writing it all in C++.
There seemed to be no straightforward way to do so whatsoever. Sure you can import old-school C APIs. Sure maybe you can have Nim transpile to C++ code. But "fantastic interoperability" didn't have my fantasy here in mind: something like `@importcpp "../libwickedengine/compilecommands.json"` and boom, done, including LSP auto-complete =)
It would be the same for other major C++ libs then: think LLVM, Dear Imgui, Qt, OpenCV, libtorrent, FLTK, wxWidgets, bgfx, assimp, SFML......
Sure, I get it, "unlike C, C++ doesn't have an ABI. These C++ libs should maintain and expose a basic C API". I agree! But still..
Mentally I view Nim as a better, safer, easier C++ now. Anything I wanted to do in C/C++ I can do in Nim, but far easier. Not exactly a Carbon competitor, but still an alt C++ 2.0 with C++ interop.
Theoretically, if you want to import a large C++ API (e.g. if you were importing Google's C++ codebase*), you could do so with libclang. I was working on an alternative to c2nim that supported Objective-C and C++, which used libclang, but it's currently in my project graveyard. If you're expecting a @cImport from Zig, then the closest you have to that are the tools mentioned above to help with the process.
* to be clear, I understand why Google does not want to use Nim instead of creating a new language (Carbon). i.e. readable code output, full control of the compiler and language design, etc.
The amount of manual work to write bindings is minimal, i.e. you can do so simply by declaring a prototype for a procedure and then appending `{.importcpp, header: "<path>".}`, the same for types. And then compile with `nim cpp`.
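For example, a minimal sketch of such a wrapper (the header, class, and methods here are invented for illustration; fields elided):

    # wrap a hypothetical C++ class Counter declared in "counter.hpp"
    type
      Counter {.importcpp: "Counter", header: "counter.hpp".} = object

    proc initCounter(start: cint): Counter
      {.importcpp: "Counter(@)", constructor, header: "counter.hpp".}
    proc increment(c: var Counter) {.importcpp: "#.increment()", header: "counter.hpp".}
    proc value(c: Counter): cint {.importcpp: "#.value()", header: "counter.hpp".}

    var c = initCounter(0)
    c.increment()
    echo c.value()        # build/run with: nim cpp -r example.nim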
Compared to the way you wrap libraries in Python, Nim requires much less work - and yet the Python community wraps every C/C++ library you can think of. Again, in my opinion, if you really want a library: wrap it yourself (ideally just the subset you need) or rewrite it in Nim.
Also, most of the libraries you have listed have bindings already:
- Qt: https://github.com/jerous86/nimqt or [seaqt](https://forum.nim-lang.org/t/12709)
- ImGUI: https://github.com/nimgl/imgui
- SFML: https://github.com/oprypin/nim-csfml
- SDL: https://github.com/nim-lang/sdl2
- sokol: https://github.com/floooh/sokol-nim
If there's a popular enough C or C++ library, it's probably already wrapped, especially within the gamedev community.
> The LSP could be faster, and it sometimes crashes due to syntax errors or produces zombie processes.
I just set it up on Neovim with Mason and it was pretty quick and easy.
That being said, my preferred environment is JetBrains stuff, and I'd very much enjoy an up-to-date plugin there.
Can somebody provide a reference explaining/demonstrating the ergonomics of ORC/ARC and in particular .cyclic? This is with a view toward imagining how developers who have never written anything in a non-garbage-collected language would adapt to Nimony.
So, for ergonomics: reference counting is not a complete system. It's memory safe, but it can't handle reference cycles well on its own -- if two objects retain a reference to each other, there'll always be a reference to both of them and they'll never be freed, even if nothing else depends on them. The usual way to handle this is to ship a "cycle breaker" -- a mini tracing collector -- alongside your reference counting system, which, while a little nondeterministic, works reasonably well.
But it's a little nondeterministic. Garbage collectors that trace references, and especially tracing systems with the fast heap ("nursery" or "minor heap") / slow heap ("major heap") generational distinction, are really good. There's a reason tracing collectors are used by most languages -- ORC/ARC and similar systems have put reference counting back in close competition with tracing, but it's still somewhat slower. Reference counting offers something in return, though: the performance is deterministic. You have particular points in the code where destructors are injected, sometimes without a reference-count check (if the ORC/ARC optimization is good) and sometimes with one, but you know your program will deallocate only at those points. This isn't the case for tracing GCs, where the garbage collector is more along the lines of a totally separate program that barges in and performs collections whenever it so desires. Reference counting offers an advantage here. (Also in interop.)
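To make "particular points" concrete, a rough sketch of what that looks like under --mm:arc/--mm:orc:

    proc work() =
      let s = @[1, 2, 3]    # heap allocation owned by `s`
      echo s.len
      # with --mm:arc or --mm:orc the compiler injects `=destroy`(s) here,
      # at scope exit -- a fixed, known point rather than whenever a
      # collector decides to run

    work()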
So, while you do need a cycle breaker to not potentially leak memory, Nim tries to get it to do as little as possible. One of the tools provided to the user is the .acyclic pragma. If you have a data structure that looks like it could be cyclic but you know is not cyclic -- for example, a tree -- you can annotate it with the .acyclic pragma to tell the compiler not to worry about it. The compiler has its own (straightforward) heuristics, too, so if you don't have any cyclic data in your program and let the compiler know that... it just won't include the cycle collector at all, leaving you with a program with predictable memory patterns and behavior.
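Roughly like this (the type is invented for illustration):

    type
      TreeNode {.acyclic.} = ref object   # a tree can't form cycles, so the
        kids: seq[TreeNode]               # cycle collector may skip scanning
        data: int                         # objects of this type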
What these .cyclic annotations will do in Nim 3.0, reading the design documentation, is replace the .acyclic annotations. The compiler will assume all data is acyclic, and only include the cycle breaker if the user tells it to by annotating some cyclic data structure as such. This means if the user messes up they'll get memory leaks, but in the usual case they'll get access to this predictable performance. Seems like a good tradeoff for the target audience of Nim and seems like a reasonable worst-case -- memory leaks sure aren't the same thing as memory unsafety and I'm interested to see design decisions that strike a balance between burden on the programmer vs. burden on performance, w/o being terribly unsafe in the C or C++ fashion.
("The same" being a bit relative, here. Nim's sum types are quite a bit worse than those of an ML. Better than Go's, at least.)
The performance is impressive. I've done some exercises on the side to compare Nim's performance to C++ building large collections along with sequential and random access, and -d:release from Nim puts out results that are neck-and-neck with -O3 for C++. No special memory tricks or anything, just writing very Pythonic, clear code.
Feel free to ask me anything.
I'm probably going to sit down and give Atlas a try soon, and migrate my dependencies.
Note, I fixed up Atlas a few months ago for Araq (Nim's BDFL). It uses a simpler design where pkgs are put in a local `deps` folder. It works fantastically and has replaced Nimble and its magic for me. Plus, the local deps folder is easy for LLM CLI tools to grep.
Just make sure to install the latest version!
P.S. @netbioserror I'm working on a sensors project. Shoot me an email if you want to talk sensors/iot/nim! Emails on my GH
> Nimony is case sensitive like most other modern programming languages. The reason for this is implementation simplicity. This might also be changed in the future.
[1] https://nim-lang.org/docs/manual.html#lexical-analysis-ident...
[2] https://nim-lang.github.io/nimony-website/index.html#lexical...
The real, correct reason for this is that it facilitates grep/global search-and-replace/LLMs.
I recall seeing a comparison of "transpile to JS" languages, and it noted Kotlin and Nim as the two that were outputting MBs of JS, compared to the tens or low hundreds of KBs that other languages were outputting.
When you have a page with many alpine/nim components like this, how does the size increase relative to the # of components added (roughly of course)?
    echo 'echo 1' > j.nim
    nim js j.nim
    node j.js
    >>> 1 <<<
    ls -l j.js
    >>> 36636 Sep 1 12:54 j.js <<<
    nim js -d:release j.nim
    ls -l j.js
    >>> 11369 Sep 1 12:56 j.js <<<
So, with -d:release stripping away a lot of debugging logic, it's not so bad. Even with -d:release, probably ~50% of the text of that j.js is just C-style comments which could be trivially stripped away. E.g., `cpp < j.js | wc -c` gives 6350 for that very same 11369-byte file. There are JS minification tools one could also run on the output. People do complain about this, but people complain a lot. It's probably not so uncompetitive for less trivial programs that do a little bit more work, both minified, apples-to-apples care & all that.

Which brings us to the "just use Linux, bro" argument. Windows could be interesting, but imagine making a commercial piece of software with it and then begging each and every antivirus author to please not quarantine your .exe on sight.
And this is quite similar to the old "just use Windows, bro" argument that was extremely popular some years ago whenever someone asked how to make a piece of hardware work under Linux (it still is — whenever anyone tries to extend battery life on a Thinkpad).
To solve your woes, fork and patch the code you need in order to support pragmas for MSVC. Then, in Atlas, use your fork's git path and, if you choose to, submit a PR to the project. You can edit the project locally until something works.
[0] https://en.wikipedia.org/wiki/Eiffel_(programming_language)
The stable compiler repo has a fork: https://github.com/nim-works/nimskull/. It's unfortunate that developers have different opinions.
cb321•2d ago
E.g., because the feature is so rare (controversial?) it doesn't get mentioned much, but you can also define your own operators in Nim. So, if you miss bitwise `|=` from C-like PLangs, you can just say:
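    # a minimal sketch of one such definition:
    proc `|=`*[T: SomeInteger](a: var T; b: T) {.inline.} = a = a or b

    var flags = 0'u8
    flags |= 0b0000_0100    # now works just like the C compound operator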
Of course, Nim has a built-in `set[T]` for managing bit sets in a nicer fashion, with traditionally named set-theoretic operators for intersection, union, etc. (a tiny example is sketched below). https://github.com/c-blake/procs makes a lot of use of set[enum] to do its dependency analysis of what Linux /proc files to load and what fields to parse, for example (and is generally much faster than C procps alternatives).

This same user-defined operator notation for calls can be used for templates and macros as well, which makes a number of customized notations/domain-specific languages (DSLs) very easy. And pragma macros make it easy to tag definitions with special compile-time behaviors. And so on.
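The set[T] sketch, with an enum invented purely for illustration:

    type ProcField = enum pfPid, pfName, pfRss    # hypothetical /proc fields

    var wanted: set[ProcField] = {pfPid, pfName}
    wanted.incl pfRss                                    # add one element
    assert wanted * {pfName, pfRss} == {pfName, pfRss}   # `*` is intersection
    assert wanted + {pfPid} == wanted                    # `+` is union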
miguel_martin•2d ago
Also, I did mention operator overloading in my first bullet point on language design, but perhaps I should have highlighted it further.