Races with mutexes can indicate the author either doesn't understand or refuses to engage with Go's message-based concurrency model. You can use mutexes, but I believe a lot of these races can be properly avoided using some of the techniques discussed in The Go Programming Language book.
He complains that the language design offers no way of avoiding it (in this particular case) and relies only on humans or the IDE. Humans are not perfect and should not be a requirement for writing good code.
I feel like Java's IDE support is best in class. I feel like go is firmly below average.
Like, Java has great tooling for attaching a debugger, including to running processes, and stepping through code, adding conditional breakpoints, poking through the stack at any given moment.
Most Go developers seem to still be stuck in println debugging land, akin to what you get in C.
The gopls language server generally takes noticeably more memory and cpu than my IDE requires for a similarly sized java project, and Go has various IDE features that work way slower (like "find implementations of this interface").
The JVM has all sorts of great knobs and features to help you understand memory usage and tune performance. Go doesn't even have a "go build -debug" vs "go build -release" switch to turn optimizations on and off, so even in your fast iteration loop Go is making production builds (since that's the only option), and the Go team also can't add any slow optimizations because that would slow down everyone's default build times. All the other sane compilers I know let you do a slower release build to get more performance.
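(The closest thing I'm aware of is manually disabling optimizations and inlining via internal compiler flags, which is the usual incantation when attaching Delve, but that's something you have to know about rather than a supported debug mode:

    go build -gcflags=all="-N -l" ./...

)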
The Go compiler doesn't emit warnings, insisting that you instead run a separate tool (go vet), but since it's a separate tool you now have to effectively compile the code twice just to get your compiler warnings, making it slower than if the compiler just emitted warnings.
Go's cgo tooling is also far from best in class, with even nodejs and ruby having better support for linking to C libraries in my opinion.
Like, it's incredibly impressive that Go managed to re-invent so many wheels so well, but they managed to reach the point where things are bearable, not "best in class".
I think the only two languages that achieved actually good IDE tooling are elisp and smalltalk, kinda a shame that they're both unserious languages.
Okay, come on now :D Absolutely everything around Java consumes gigabytes of memory. The culture of wastefulness is real.
The Go vs Java plugins for VSCode are no comparison in terms of RAM usage.
I don't know how much the Go plugin uses, which is how it should be for all software — means usage is low enough I never had to worry about it.
Meanwhile, my small Java projects get OOM killed all the time during what I assume is the compilation the plugin does in the background? We're talking several gigabytes of RAM being used for... ??? I'm not exactly surprised, I've yet to see Java software that didn't demand gigabytes as a baseline. IntelliJ is no different btw, asinine startup times during which RAM usage balloons.
Nonetheless, it's absolutely trivial to set a single parameter to limit memory usage and Java's GCs being absolute beasts, they will have no problem operating more often.
Also, IntelliJ is a whole IDE that caches all your code in AST form for fast lookup and stuff like that; it has to use some extra memory by definition (though it's also configurable if you really want, but it's a classic space vs time tradeoff again).
Also, I don't know how it's relevant to Go which uses a tracing GC.
Cute argument, but there's no way for a program to know how much memory is available. And there is such a thing as "appropriate amount of memory for this task". Hint: "4GB+" / "unbounded" is not the right answer for an LSP on a <10k line project.
> Nonetheless, it's absolutely trivial to set a single parameter to limit memory usage and Java's GCs being absolute beasts, they will have no problem operating more often.
Cool, then the issues I mentioned should never have existed in the first place. But they did, and probably still do today, I can't test easily. So clearly, they're not so easy to fix for some reason.
Also, this is such programmer-speak. It's trivial to set a single parameter? Absolutely, you just need to know that's what's needed, what the parameter is, how to set it, and what to set it to. Trivial! And how exactly would I, as a user, know any of this?
I'm a decent example here, since I could tell the software is written in Java (guess how), I know about its GC tuning parameters and I could probably figure out what parameters to set. So what exact value should I set? 500MB? 1GB? 2GB? How long should I spend doing trial-and-error?
Now consider you'd be burdening every single user of the program with the above. How about we, as engineers, choose the right program parameters, so that the user doesn't have to do or worry about anything? How about that.
> a classic space vs time tradeoff again
Most of the time, just like here, "it's a tradeoff" is a weasel phrase used to excuse poor engineering. Everything is a tradeoff, so pointing that out says nothing. Tradeoffs have to be chosen wisely too, you know.
It usually knows the total system RAM available, and within containers it knows the resource limits. So your statement is false.
> And there is such a thing as "appropriate amount of memory for this task"
Appropriate for whom? Who decides that? How do I say that I want this app to have better throughput and don't care how much memory it costs, or that I'm fine with slightly lower throughput but want the least amount of memory used?
> Now consider you'd be burdening every single user of the program with the above
Or you know, just set one as the developer? There isn't even such a thing as a JRE anymore; the prevalent way to ship a Java app is with a "JRE" made up from only the modules the app needs, started with a shell script or so. As the developer you can trivially set your own flags and parameters there.
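For example, the launcher script can just pin the heap itself (a sketch; the runtime path, jar name and the 512m figure are placeholders, not recommendations):

    #!/bin/sh
    # Cap the JVM heap so the app cannot balloon past 512 MB of heap,
    # regardless of how much RAM the machine has.
    exec ./runtime/bin/java -Xmx512m -jar myapp.jar "$@"

The user never sees or touches the flag.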
Please show us how to write that cleanly with channels, since clearly you understand channels better than the author.
I think the golang stdlib authors could use some help too, since they prefer mutexes for basically everything (look at sync.Map, it doesn't spin off a goroutine to handle read/write requests on channels, it uses a mutex).
In fact, almost every major go project seems to end up tending towards mutexes because channels are both incredibly slow, and worse for modeling some types of problems.
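To make that concrete, here is a hedged sketch of the mutex style the stdlib favours (an illustration, not the actual sync.Map code); the channel version of the same thing needs a dedicated owner goroutine plus request/response channels, i.e. more code and an extra hop per operation:

    package main

    import (
        "fmt"
        "sync"
    )

    // counters is a minimal mutex-guarded map, written in the style the
    // stdlib favours. Illustration only, not the real sync.Map.
    type counters struct {
        mu sync.Mutex
        m  map[string]int
    }

    func (c *counters) Inc(key string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[key]++
    }

    func main() {
        c := &counters{m: make(map[string]int)}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Inc("hits")
            }()
        }
        wg.Wait()
        fmt.Println(c.m["hits"]) // always 100
    }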
... I'll also point out that channels don't save you from data-races necessarily. In rust, passing a value over a channel moves ownership, so the writer can no longer access it. In go, it's incredibly easy to write data-races still, like for example the following is likely to be a data-race:
    handleItemChannel <- item
    slog.Debug("wrote item", "item", item) // <-- probably races, because ownership of 'item' should have been passed along

Developers have a bad habit of adding mutable fields to plain old data objects in Go though, so even if it's immutable now, it's easy for a developer to create a race down the line. There's no way to indicate that something must be immutable at compile-time, so the compiler won't help you there.
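To spell that out as a self-contained program (hypothetical names; `go run -race` will typically flag it once the receiver mutates the item):

    package main

    import (
        "log/slog"
        "sync"
    )

    type item struct{ Price int }

    func main() {
        ch := make(chan *item)
        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            for it := range ch {
                it.Price++ // the receiver treats the item as its own and mutates it
            }
        }()

        it := &item{Price: 10}
        ch <- it
        // Races with the mutation above: nothing stops the sender from still
        // reading (or writing) the value it conceptually handed off.
        slog.Debug("wrote item", "price", it.Price)

        close(ch)
        wg.Wait()
    }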
I wonder if Go could easily add some features regarding that. There are different ways to go about it. 'final' in Java is different from 'const' in C++, for example, and Rust has borrow checking and 'const'. I think the OCaml language developers have experimented with something inspired by Rust regarding concurrency.
This results in things like being able to "cast away" C++ const and modify that variable anyway, whereas obviously you can't modify an actual constant, because that's not what the word constant means.
In both languages 5 += 3 is nonsense, it can't mean anything to modify 5. But in Rust we can write `const FIVE: i32 = 5;` and now FIVE is also a constant and FIVE += 3 is also nonsense and won't compile. In contrast, in C++ altering an immutable "const" variable you've named FIVE is merely forbidden; if we actually do it anyway, it compiles, and on many platforms FIVE is now eight...
C++ 'constexpr' and Rust 'const' are more about compile-time execution than about marking something immutable.
In Rust, it is probably also possible to do a cast like &T to *mut T. Though that might require unsafe and might cause UB if not used properly. I recall some people hoping for better ergonomics when doing casting in unsafe Rust, since it might be easy to end up with UB.
Last I heard, C++ is better regarding 'constexpr' than Rust regarding 'const', and Zig is better than both on that subject.
Yes, the C++ compile time execution could certainly be considered more powerful than Rust's and Zig's even more powerful than that. It is expected that Rust will some day ship compile time constant trait evaluations, which will mean you don't have to write awkward code that avoids e.g. iterators -- so with that change it's probably in the same ballpark as C++ 17 (maybe a little more powerful). However C++ 20 does compile-time dynamic allocation†, and I don't think that's on the horizon for Rust.
† In C++ 20 you must free these allocations inside the same compile-time expression, but that's still a lot of power compared to not being allowed to allocate. It is definitely possible that a future C++ language will find a way to sort of "grandfather in" these allocations so that somehow they can survive to runtime rather than needing to free them.
Rust does give you the option to break out the big guns by writing "procedural" aka "proc" macros which are essentially Rust that is run inside your compiler. Obviously these are arbitrarily powerful, but far too dangerous - there's a (serious) proc macro to run Python from inside your Rust program and (joke, in that you shouldn't use it even though it would work) proc macro which will try out different syntax until it finds which of several options results in a valid program...
A lot of developers without much (or any) Rust experience get the impression that the Rust borrow checker is there to prevent memory leaks without requiring garbage collection, but that's only 10% of what it does. Most of the actual pain of dealing with borrow checker errors comes from its other job: preventing data races.
And it's not only Rust. The first two examples are far less likely even in modern Java or Kotlin for instance. Modern Java HTTP clients (including the standard library one) are immutable, so you cannot run into the (admittedly obvious) issue you see in the second example. And the error-prone workgroup (where a single typo can get you caught in a data race) is highly unlikely if you're using structured concurrency instead.
These languages are obviously not safe against data races like Rust is, but my main gripe about Go is that it's often touted as THE language that "Gets concurrency right", while parts of its concurrency story (essentially things related to synchronization, structured concurrency and data races) are well behind other languages. It has some amazing features (like a highly optimized preemptive scheduler), but it's not the perfect language for concurrent applications it claims to be.
As for Java, there are fibers/virtual threads now, but I know too little of them to comment on them. Go's green thread story is presumably still good, also relative to most other programming languages. Not that concurrency in Java is bad, it has some good aspects to it.
[0]: An example is https://news.ycombinator.com/item?id=45898923 https://news.ycombinator.com/item?id=45903586 , both for the same article.
You might have a mechanism for scheduling other stuff whilst waiting for the interrupt (like Tokio's runtime), but even that might be strictly serial.
Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.
In any case, I think green threads and async/await are completely orthogonal to data race safety. You can have data race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race footguns than Go but it's still generally unsafe).
That's not the case with Go, so these are significantly worse than both Rust and Java/C#, etc.
Of course you can have memory corruption in Java. The easiest way is to spawn 2 threads that write to the same ByteBuffer without write locks.
Meanwhile a memory issue in C/Rust and even Go will immediately drop every assumption out the window, the whole runtime is corrupted from that point on. If we are lucky, it soon ends in a segfault, if we are less lucky it can silently cause much bigger problems.
So there are objective distinctions to have here, e.g. Rust guarantees that the source of such a corruption can only be an incorrect `unsafe` block, and Java flat out has no platform-native unsafe operations, even under data races. Go can segfault with data races on fat pointers.
Of course every language capable of FFI calls can corrupt its runtime, Java is no exception.
In C, yes. In Rust, I have no real experience. In Go, as you pointed out, it should segfault, which is not great, but still better than in C, i.e., fail early. So I don't understand what your next comment means. What is a "less lucky" example in Go?
> If we are lucky, it soon ends in a segfault, if we are less lucky it can silently cause much bigger problems.
I would love to see an example of this, if you don't mind. My understanding is that the GC in Go actively prevents against what you write. There is no pointer arithmetic in the language. The worst that can happen is a segfault or data corruption due to faulty locking like the Java example I gave above.
Here is a thread discussing it, but there are multiple posts/comment threads on the topic. In short, slices are fat pointers in the language, and data races over them can cause other threads to observe the slice in an invalid state, which can be used to access memory it shouldn't be able to.
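A rough sketch of the shape of that problem (hypothetical code, not from the linked thread): run it with `go run -race` to see it flagged; without the race detector it can panic, read out-of-bounds memory, or appear to work.

    package main

    import "fmt"

    func main() {
        short := make([]byte, 1)
        long := make([]byte, 4096)
        s := short

        go func() {
            for {
                // A slice assignment writes the (ptr, len, cap) header
                // non-atomically.
                s = short
                s = long
            }
        }()

        for i := 0; i < 1e6; i++ {
            v := s
            // A racing reader can observe a mixed header, e.g. short's data
            // pointer with long's length, and then index memory it shouldn't.
            _ = v[len(v)-1]
        }
        fmt.Println("done")
    }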
Then, of course, there's the languages that are still so deeply single-threaded that they simply can't write concurrency bugs in the first place, or you have to go way out of your way to get to them, not because they're better than Go but because they don't even play the game.
However, it is true the list is short, and likely a lot of people taking the opportunity to complain about Go are working in languages where everything they are so excited to complain about is still entirely possible in their own favorite language (with varying affordances and details around the issues), or they are working in a language that, as mentioned, simply isn't playing the game at all, which doesn't really count as being any better.
However, are Go programs not supposed to typically avoid sharing mutable data across goroutines in the first place? If only immutable messages are shared between goroutines, it should be way easier to avoid many of these issues. That is of course not always viable, for instance due to performance concerns, but in theory can be done a lot of the time.
I have heard others call for making it easier to track mutability and immutability in Go, similar to what the author writes here.
As for closures having explicit capture lists like in C++, I have heard some Rust developers saying they would also have liked that in Rust. It is more verbose, but can be handy.
I recently worked with a 'senior' Go engineer. I asked him why he never used pointer receivers, and after explaining what that meant, he said he didn't really understand when to use asterisks or not. But hey, immutability by default is something I guess.
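For anyone following along, the distinction he couldn't articulate is roughly this (toy example):

    package main

    import "fmt"

    type counter struct{ n int }

    // Value receiver: the method gets a copy, so the caller's counter is untouched.
    func (c counter) IncByValue() { c.n++ }

    // Pointer receiver: the method gets the original, so the mutation sticks.
    func (c *counter) IncByPointer() { c.n++ }

    func main() {
        var c counter
        c.IncByValue()
        fmt.Println(c.n) // 0
        c.IncByPointer()
        fmt.Println(c.n) // 1
    }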
C programmers aren’t supposed to access pointers after freeing them, either.
“Easy to do, even in clean-looking code, but you shouldn’t do it” more or less is the definition of a pitfall.
https://www.reddit.com/r/rust/comments/1odrf9s/explicit_capt...
That said, I think just about all languages have their own quirks and footguns. I think people sometimes forget that tools are just that, tools. Go is remarkably easy to be productive in, which is what the label on the tin can claims.
It isn't "fearless concurrency" but "get shit done before 5 pm because traffic's a bitch on Wednesdays".
To feel productive in.
The closure compiler flag trick looks interesting though, will give this a spin on some projects.
Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.
Which PL do you use, then? Because even Rust makes "subtle linguistic distinctions" in a lot of places, including in concurrency.
Please explain
Anyways, the article author lacks basic reading skills, since he forgot to mention that the Go http doc states that only the http client transport is safe for concurrent modification. There is no "subtlety" about it. It directly says so. Concurrent "use" is not concurrent "modification" in Go. The Go stdlib docs use this distinction consistently everywhere.
Where are the “subtle linguistic distinctions”? These types do two completely different things. And neither are even capable of being used in a multithreaded context due to `!Sync` (and `!Send` for Rc and refguards)
https://play.rust-lang.org/?version=stable&mode=debug&editio...
You don't need different threads. I said concurrency not multi-threading. Interleaving tasks within the same thread (in an event loop for example) can cause panics.
https://doc.rust-lang.org/stable/std/cell/struct.RefCell.htm...
If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do heavily rely on unsafe blocks, but this still heavily limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.
I like Rust fine, but it’s got plenty of subtle distinctions.
It depends on the platform though (e.g. in Java it is guaranteed that there is no tearing [1]).
[1] In OpenJDK. The JVM spec itself only guarantees it for 32-bit primitives and references, but given that 64-bit CPUs can cheaply/freely write a 64-bit value atomically, that's how it's implemented.
this only works when the language defines a memory model where bools are guaranteed to have atomic reads and writes
so you can't make a claim like "setting a field to true from ... multiple threads ... can be a meaningful operation e.g. if you only care about if ANY of the threads have finished execution"
as that claim only holds when the memory model allows it
which is not true in general, and definitely not true in go
assumptions everywhere!!
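in go, the race-free way to express that "any of the threads have finished" flag is an atomic, e.g. (a sketch):

    package main

    import (
        "fmt"
        "runtime"
        "sync/atomic"
    )

    func main() {
        var anyDone atomic.Bool // a plain bool here would be a data race
        for i := 0; i < 4; i++ {
            go func() {
                // ... some work ...
                anyDone.Store(true)
            }()
        }
        // busy-waiting purely for illustration; real code would use a channel
        for !anyDone.Load() {
            runtime.Gosched()
        }
        fmt.Println("at least one goroutine finished")
    }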
Then I give an example of a language where it's safe
I don't get your point. The negation of all is a single example where it doesn't apply.
there is this whole demographic of folks, including the OP author, who seem to believe that they can start writing go programs without reading and understanding the language spec, the memory model, or any core docs, and that if the program compiles and runs that any error is the fault of the language rather than the programmer. this just ain't how it works. you have to understand the thing before you can use the thing. all of the bugs in the code in this blog post are immediately obvious to anyone who has even a basic understanding of the rules of the language. this stuff just isn't interesting.
If, let's say, http.Client was functionally immutable (with all fields being private), and you'd need to have to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.
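A hypothetical sketch of what that could look like (this is not the real net/http API):

    package main

    import (
        "fmt"
        "time"
    )

    // Hypothetical immutable client plus a mutable-but-inert builder.
    type Client struct {
        timeout time.Duration // unexported: no way to mutate it after Build
    }

    type ClientBuilder struct {
        Timeout time.Duration // freely mutable, but inert until Build is called
    }

    func (b ClientBuilder) Build() *Client {
        return &Client{timeout: b.Timeout}
    }

    // Any number of goroutines can share this; there is no exported field
    // left to race on.
    var sharedClient = ClientBuilder{Timeout: 5 * time.Second}.Build()

    func main() {
        fmt.Println(sharedClient.timeout)
    }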
From all the listed cases, only the first one is easy to get caught by, even as an experienced developer. There, the IDE and syntax highlighting are of tremendous help for general prevention. The rest is just understanding the language and having some practice.
As to whether it's a common pattern, I see closures on WaitGroups or ErrGroups quite often:
    workerCount := 5
    var wg sync.WaitGroup
    wg.Add(workerCount)
    for range workerCount {
        go func() {
            // Do work
            wg.Done()
        }()
    }
    wg.Wait()
You can avoid the closure by making the worker func take a *sync.WaitGroup and passing in &wg, but it doesn't really have any benefit over just using the closure for convenience.

Likewise for "Trainings". Looks weird to Murrcan eyes but maybe it's a Britishism.
action/action, learnings/learning, trainings/training, asks/ask, strategising/strategy
Not that I particularly like it, but compared to all the other stuff it at least seems tolerable. The penchant for deflecting questions and not answering directly, the weasel wording done to cover your ass, the use of words to mean something totally other than the word (e.g. "I take full responsibility" meaning "I will have no personal or professional repercussions"), etc. Some of it seems like it comes out of executive coaching, some of it definitely comes out of fear of lawsuits.
Mind you there are so many expressions like this and we British are masters of them, like "with the greatest of respect,", which conveys meaning slightly more severe than "you are a total fucking idiot and".
I'm not sure if the people who use this word think it's proper English. They rarely seem to care what words mean anyway.
"What are the asks" and "what's the offer" are turning up much more than I'd like, and they annoy me. But not as much as other Americanisms: "concerning" meaning "a cause for concern", "addicting" when the word they are looking for is "addictive", and the rather whiny-sounding "cheater" when the word "cheat" works fine. These things can meet the proverbial fiery end, along with "performant" and "revert back" (the latter of which which is an Americanism sourced from Indian English that is perhaps the only intrusion from Indian English I dislike; generally I think ISE is warm and fun and joyful.)
The BBC still put "concerning" in quotes, because the UK has not yet given up the fight, and because people like me used to write in to ask "concerning what?" I had a very fun reply from a BBC person about this, once. So I assume they are still there, forcing journalists to encase this abuse in quotation marks.
Ultimately all our bugbears are personal, though, because English is the ultimate living language, and I don't think Americans have any particular standing to complain about any of them! :-)
ETA: Lest anyone think I am complaining more about Americanisms than other isms, I would just like to say that one of my favourite proofs of the extraordinary flexibility of English is the line from Mean Girls: "She doesn't even go here!"
In Australia, a "lolly" is more or less any non-chocolate-based sweet (candy).
British people find this confusing in Australia, but this is a great example of a word whose meaning was refined in the UK long after we started transporting people to Australia. Before that, a "lollipop" was simply a boiled treacle sweet that might or might not have been on a stick; some time after transportation started, as the industrialised confectionary industry really kicked off, the British English meaning of the word slowly congealed around the stick, and the Australian meaning did not.
That isn’t a core misunderstanding of Go, that’s a core misunderstanding of programming.
Go was designed from the beginning to use Tony Hoare's idea of communicating sequential processes for designing concurrent programs.
However, like any professional tool, Go allows you to do the dangerous thing when you absolutely need to, but it's disappointing when people insist on using the dangerous way and then blame it on the language.
Each have their own pros and cons. You can see some of the legends who invented different methods of concurrency here: https://www.youtube.com/watch?v=37wFVVVZlVU
There's also a nice talk Rob Pike gave that illustrated some very useful concurrency patterns that can be built using the CSP model: https://www.youtube.com/watch?v=f6kdp27TYZs
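A tiny example of the style those talks advocate, where one goroutine owns the state and everyone else talks to it over channels (my own sketch, not taken from the talks):

    package main

    import "fmt"

    // counterServer owns n outright; no other goroutine can touch it, so no
    // mutex is needed. Increments arrive on one channel, reads on another.
    func counterServer(inc <-chan struct{}, read chan<- int) {
        n := 0
        for {
            select {
            case <-inc:
                n++
            case read <- n:
            }
        }
    }

    func main() {
        inc := make(chan struct{})
        read := make(chan int)
        go counterServer(inc, read) // leaked at exit; fine for a sketch

        for i := 0; i < 10; i++ {
            inc <- struct{}{}
        }
        fmt.Println(<-read) // 10: only the server goroutine ever mutates n
    }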
For example, in Erlang, `receive` _is_ a blocking operation that you have to attach a timeout to if you want to unblock it.
You're correct about identity/names: the "queue" part of processes (the part that is most analogous to a channel) is their mailbox, which cannot be interacted with except via message sends to a known pid. However, you can again mimic some of the channel-like functionality by sending around pids, as they are first class values, and can be sent, stored, etc.
I agree with all of your points, just adding a little additional color.
Can you blame them when the dangerous way uses 0 syntax while the safe way uses non-0 syntax? I think it's fine to criticize unsafe defaults, though of course it would not be fair to treat it like it's the only option
If you're looking for a language that makes "sharing by communicating" the default for almost every kind of use case, that's Erlang. Yes, it's built around the actor model rather than CSP, but the end result is the same, and with Erlang it's the real deal. Go, on the other hand, is not "built around CSP" and does not "encourage sharing by communicating" any more than Rust or Kotlin are. In fact, Rust and Kotlin are probably a little bit more "CSP-centric", since their channel interface is far less error-prone.
[1] https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-s...
Elixir (and anything that runs on the BEAM) takes an entirely different perspective on concurrency than almost everything else out there. It still has concurrency gotchas, but at worst they result in logic bugs, not violations of the memory model.
Stuff like:
- forgetting to update a state return value in a genserver
- reusing an old conn value and/or not using the latest conn value in Plug/Phoenix
- in ETS, making the assumption nothing else writes to your key after doing a read (I wrote a library to do this safely with compare-and-swap: https://github.com/ckampfe/cas)
- same as the ETS example, but in a process: but doing a write after doing a read and assuming nothing else has altered the process state in the interim
- leaking processes (and things like sockets/ports), either by not supervising them, monitoring them, or forgetting to shut them down, etc. This can lead to things like OOMs, etc.
- deadlocking processes by getting them into a state where they each expect a reply from the other process (OTP timeouts fix this, try to always use OTP)
- logical race conditions in a genserver init callback, where the process performs some action in the init that cannot complete until the init has returned, but the init has not returned yet, so you end up with a race or an invalid state
- your classic resource exhaustion issues, where you have a ton of processes attempting to use some resource and that resource not being designed to be accessed by 1,000,000 things concurrently
- OOMing the VM by overfilling the mailbox of a process that can't process messages fast enough
Elixir doesn't really have locks in the same sense as a C-like language, so you don't really have lock lifetime issues, and Elixir datastructures cannot be modified at all (you can only return new, updated instances of them) so you can't modify them concurrently. Elixir has closures that can capture values from their environment, but since all values in Elixir are immutable, the closure can't modify values that it closes over.

Elixir really is designed for this stuff down to its core, and (in my opinion) it's evident how much better Elixir's design is for this problem space than Go's is if you spend an hour with each. The tradeoff Elixir makes is that Elixir isn't really what I'd call a general purpose language. It's not amazing for CLIs, not amazing for number crunching code, not amazing for throughput-bound problems. But it is a tremendous fit for the stuff most of us are doing: web services, job pipelines, etc. Basically anything where the primary interface is a network boundary.
Edited for formatting.
In my career I've found that if languages don't allow developers to shoot themselves (and everyone else) in the foot they're labelled toy languages or at the very least "too restrictive". But the moment you're given real power someone pulls the metaphorical trigger, blows their metaphorical foot off and then starts writing blog posts about how dangerous it is.
One must keep in mind that devs manage to implement flawed logic even when it is directly reflected in the code. I'd rather not give them a non-thread-safe language that provides a two-letter keyword to start a concurrent thread in the same address space. Insane language design.
sorry, what?
https://gaultier.github.io/blog/a_million_ways_to_data_race_...
this code is obviously wrong, fractally wrong
why would you create a new PricingService for every request? what makes you think a mutex in each of those (obviously unique) PricingService values would somehow protect the (inexplicably shared) PricingInfo value??
> the fix
https://gaultier.github.io/blog/a_million_ways_to_data_race_...
what? this is in no way a fix to the problem.
it's impossible to believe the author's claims about their experience in the language, this is just absolute beginner stuff..
Meanwhile, with the 4th item, this whole example is gross; repeatedly polling a buffer every 100ms is a massive red flag. And as for the data race in that item, the idiomatic fix is to just use io.Pipe, which solves the entire problem far more cleanly than inventing a SyncWriter.
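For reference, a sketch of that approach: io.Pipe gives you a synchronized in-memory reader/writer pair, so there is no shared buffer to poll and nothing to wrap in a SyncWriter.

    package main

    import (
        "fmt"
        "io"
    )

    func main() {
        r, w := io.Pipe()

        go func() {
            // Each write blocks until the reader consumes it, so the two
            // goroutines are synchronized by the pipe itself.
            fmt.Fprintln(w, "hello from the producer")
            w.Close() // the reader then sees EOF
        }()

        out, _ := io.ReadAll(r) // error ignored for brevity
        fmt.Print(string(out))
    }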
The author's last comment regarding "It would also be nice if more types have a 'sync' version, e.g. SyncWriter, SyncReader, etc" probably indicates there's some fundamental confusion here about idiomatic Go.
About the only code example I saw in here and thought “yeah it sucks when that happens” is the accidental closure example. Accidentally shadowing something you’re trying to assign to in a branch because you need to handle an error or accidentally reassigning something can be subtle. But it’s pretty 101 go.
The rest is… questionable at best.
bilbo-b-baggins•2mo ago
speedgoose•2mo ago
landr0id•2mo ago
>I have been writing production applications in Go for a few years now. I like some aspects of Go. One aspect I do not like is how easy it is to create data races in Go.
Their examples don't seem terribly convoluted to me. In fact, Uber's blog post is quite similar: https://www.uber.com/blog/data-race-patterns-in-go/
kryptiskt•2mo ago
Like, rightly or wrongly, Go chose pervasive mutability and shared memory, it inevitably comes with drawbacks. Pretending they don't exist doesn't make them go away.
bayindirh•2mo ago
It's generally assumed that people who defend their favorite programming language are oblivious to the problems the language has or choose to ignore these problems to cope with the language.
There's another possibility: Knowing the footguns and how to avoid them well. This is generally prevalent in (Go/C/C++) vs. Rust discussions. I for one know the footguns, I know how bad it can be, and I know how to avoid them.
Liking a programming language as is, operating within its safe-envelope and pushing this envelope with intent and care is not a bad thing. It's akin to saying that using a katana is bad because you can cut yourself.
We know, we accept, we like the operating envelope of the languages we use. These are tools, and no tool is perfect. Using a tool knowing its modus operandi is not "pretending the problems don't exist".
bloppe•2mo ago
I'll always love a greenfield C project, though!
kryptiskt•2mo ago
I said that in response to the hostility ("crap on Go") towards the article. If such articles aren't written, how will newbies learn about the pitfalls in the first place?
bloppe•2mo ago
> Don't communicate by sharing memory; share memory by communicating.
dontlaugh•2mo ago
bayindirh•2mo ago
Moreover, threads are arguably useless without shared memory anyway. A thread is invoked to work on the same data structure with multiple "affectors". Coordination of these affectors is up to you. Atomics, locks, queues... The tools are many.
In fact, processes are just threads which are isolated from each other, and this isolation is enforced by the processor.
dontlaugh•2mo ago
bloppe•2mo ago
Anyway, I would stop short of saying "Go chose shared memory". They've always been clear that that's plan B.
dontlaugh•2mo ago
It's not like it's a disaster, but it's certainly inconsistent.
bloppe•2mo ago
dontlaugh•2mo ago
The opposite default encourages the opposite behaviour.
yvdriess•2mo ago
marhee•2mo ago
gf000•2mo ago
Also, these are minimal reproducers, the exact same mistakes can trivially happen in larger codebases across multiple files, where you wouldn't notice them immediately.
LtWorf•2mo ago
littlestymaar•2mo ago