Parallelism/threads is a whole other can of worms, of course. It is unfortunate that the stdlib is weak, both here and for numerics and other things, and that people are as dependency-allergic as in C culture.
Anyway, "easier to optimize" is often subjective, and I don't mean to discourage you.
Multithreading in Nim is, to say the least, in a bad state, and has been for a while. The standard library option for multithreading is deprecated, and the alternatives are either unmaintained (Weave) or severely limited (taskpools, malebolgia, etc.). There's no straightforward, idiomatic way to write data-parallel or task-parallel code in Nim today.
The idea of the project is to make shared-memory parallelism as simple as writing a `parallel:` block, without macros or unsafe hacks, and with synchronization handled automatically by the compiler.
Of course, performance can be dragged out of Nim with effort, but there's a need for a language where fast, concurrent, GC-optional code is the default, not something one has to wrestle into existence.
I'm curious if you have looked at Chapel: https://chapel-lang.org/
If not, you might find something there inspiring.
death_eternal•6mo ago
Because of the relatively poor state of multithreading in Nim, the reliance on external libraries like Arraymancer for heavy numerical workloads, and the performance issues with boxed values due to `ref object` everywhere, I started writing a language from scratch, with built-in support for concurrency via parallel blocks (without macros) and a C backend, similar to Nim.
GC is optional and the stdlib will work with or without the GC.
Example:
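(Roughly along these lines; this is an illustrative sketch in Nim-like syntax, not final syntax, and `task:` is just a placeholder for however tasks end up being spelled.)

```
# Illustrative sketch, not final syntax.
# Two tasks increment a shared counter inside a parallel block; the
# compiler generates the synchronization for `counter` (a lock or atomic)
# instead of the programmer writing it by hand.
var counter = 0

parallel:
  task:
    for i in 0 ..< 1_000_000:
      counter += 1
  task:
    for i in 0 ..< 1_000_000:
      counter += 1

echo counter  # expected 2_000_000, with no manual locking
```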
The idea is to make things like concurrent access to shared memory trivial by automating the generation of thread-synchronization code. There are also parallel fors, like so:
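(Again an illustrative sketch rather than final syntax.)

```
# Illustrative sketch of a data-parallel loop: the iteration space is
# split across worker threads by the compiler; each element is written
# by exactly one iteration, so no synchronization is required here.
var squares = newSeq[int](1_000_000)

parallel for i in 0 ..< squares.len:
  squares[i] = i * i
```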
It is not ready for use at all currently, though it will likely see further development until it is. The compiler is implemented in Go, originally with Participle, following a recursive-descent approach. All examples in the examples directory compile.